WashingtonExec
    OpEd

    Who Should Own the Risk of Bringing AI to Mission? Hint: Not the User

By Rebecca Fair | December 29, 2025
    Rebecca Fair, Two Six Technologies

    Rebecca Fair is the senior vice president of information advantage at Two Six Technologies, where she leads strategic initiatives to optimize data utilization and information systems for maximum impact. 

    When people learn where I work and what I do, they often ask me what I think of some new piece of defense tech, especially an AI tool or feature. There are several different frameworks I use to evaluate if new tech is fit for mission, but here is a big one:

If an agentic AI asks you, the user, to pick which model or parameters it should use, RUN AWAY.

Agentic AI is a powerful new architecture that changes what is fundamentally possible in software. But ultimately, it's just a software architecture. The novelty of AI is letting software providers pass along to users the risk of bringing non-deterministic outcomes into the software.

    But users should demand more.

    Integrating AI into software is risky business. This is true in any context, but it’s especially true when that software is part of our national security infrastructure. The risk can be worth it, and the gains can be immeasurable, but the AI integration risk should belong to the developers, not the end users. To win in this space, companies must infuse deep mission expertise into the entire AI experience. We shouldn’t just test if the code runs; we should test if the mission was accomplished.

Most software providers operating in a SaaS data operating system model today just give you a "pipe" to an AI model, maybe a splashy UI, and call it a day. They open access to powerful large language models, provide basic features for interrogating the results, and call that trust and transparency. But they leave the hardest part, deciding whether the AI's answer is actually right, entirely up to the user.

    We take a different approach.

Agent design at Two Six Technologies is focused where it should be: does the tool reliably and consistently deliver the mission outcomes the user needs? Users shouldn't need to worry about which model to use or which agents to call. They shouldn't need to ask themselves (or the model) whether the prompt they just wrote accounts for specific edge cases.

    If users are focused on the tool—and what it can and can’t do—they aren’t focused on their mission.
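As a rough illustration of this design principle (a minimal sketch with hypothetical names, not Two Six Technologies code), an agent API can own model and parameter selection internally, so the caller states a mission task rather than an LLM configuration:

```python
from dataclasses import dataclass

# Hypothetical sketch: the agent owns model/parameter selection,
# so the user never chooses a model; the choice is recorded for audit.

@dataclass
class Result:
    answer: str
    model_used: str  # logged by the developer, not picked by the user

class MissionAgent:
    # Internal routing table: task type -> (model, parameters).
    # Maintained and tested by the developer, invisible to the user.
    _ROUTES = {
        "summarize": ("small-fast-model", {"temperature": 0.0}),
        "analyze":   ("large-reasoning-model", {"temperature": 0.2}),
    }

    def run(self, task: str, text: str) -> Result:
        model, params = self._ROUTES.get(task, self._ROUTES["analyze"])
        answer = self._call_model(model, params, text)
        return Result(answer=answer, model_used=model)

    def _call_model(self, model: str, params: dict, text: str) -> str:
        # Placeholder for a real model call; a production agent would
        # also validate the output against mission-specific checks here.
        return f"[{model}] processed {len(text)} chars"

agent = MissionAgent()
result = agent.run("summarize", "long report text ...")
```

The point of the sketch is the interface boundary: the user supplies a task and data, while model choice, parameters, and output validation live on the developer's side of the line.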

The foundational models on which most mission applications are being built are designed to please, because one of their measures of success is user growth. I recommend you require your AI mission partners to ensure the right model is delivering the output you need to act on. A VC in early 2024 noted AI is like water. It's necessary, ubiquitous, and the same thing in every bottle. It's the packaging for a specific use case that matters. Look for partners who are packaging the AI for your needs, not forcing you to figure out whether you need Fiji or Liquid Death.

    Our commitment to our mission partners has always been that we vet “just now” technology that might make a difference for mission. If we assess the tech is legit, we adapt it for mission and deploy it for customers through our products.

As we enter 2026, we are energized by the magic that the mainstreaming of AI agents is bringing to active operations. We are engaged with mission partners on building trusted AI through frequent, consistent use to understand its strengths and weaknesses before going operational. And we are taking responsibility for ensuring that the new tech we deploy is mission ready.

