RTX's BBN division received a contract award to support DARPA's "In The Moment" (ITM) program. ITM aims to develop the foundations for algorithms that can be trusted to make decisions independently in scenarios such as mass casualty triage and disaster relief, where complex, rapid decisions are needed in dynamic situations with no human consensus and often no clear right answer.
“ITM is about more than getting AI to provide the correct answer in very controlled scenarios. We’re not talking about training AI on labeled data to help identify a cancerous tumor on an X-ray,” said Alice Leung, Raytheon BBN principal investigator. “What we’re trying to accomplish instead is the ability to create AI systems that humans would allow to make decisions independently in uncontrolled environments. To accomplish this, we need to determine how human experts make really difficult decisions and assess whether to trust the decisions of others. We’ll be conducting both decision-making research and trust research.”
The Raytheon BBN-led team, which includes Kairos Research, MacroCognition, and Valkyries Austere Medical Solutions, will use a cognitive interviewing technique to understand how experts—in this case, medical professionals and first responders—evaluate information and make tough trade-offs to act decisively at critical decision points. This qualitative information will be used to design scenario-based experiments that study how differences in an individual’s decision-making attributes explain their choices, and how the alignment of attributes between two people affects the willingness to delegate decisions to another. This will enable AI to be tuned to match an expert population, or even an individual expert.
“Because the way we make decisions varies from person to person, it’s unlikely that a one-size-fits-all trusted AI model exists,” said Leung. “Instead, in theory, we should be able to create AI systems that adapt to the user and domain. Decisions are difficult because of uncertainty and trade-offs between competing goals. We want to be able to tune an AI’s attributes such as risk tolerance, process focus, or willingness to change plans to better match a user or a group of users.”
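As a rough illustration of that attribute-tuning idea, a decision-maker's profile could be represented as a small set of numeric attributes and nudged toward a target expert. The attribute names, scales, alignment score, and tuning rule below are illustrative assumptions for this sketch, not details of the ITM program or BBN's approach.

```python
from dataclasses import dataclass

# Hypothetical attribute profile; the program's actual attribute set and
# representation are not described in this article.
@dataclass
class DecisionAttributes:
    risk_tolerance: float    # 0 = risk-averse, 1 = risk-seeking
    process_focus: float     # 0 = outcome-driven, 1 = protocol-driven
    plan_flexibility: float  # 0 = sticks to the plan, 1 = readily revises it

    def as_vector(self) -> list[float]:
        return [self.risk_tolerance, self.process_focus, self.plan_flexibility]

def alignment(a: DecisionAttributes, b: DecisionAttributes) -> float:
    """Toy alignment score in [0, 1]; 1.0 means identical attribute profiles."""
    diffs = [abs(x - y) for x, y in zip(a.as_vector(), b.as_vector())]
    return 1.0 - sum(diffs) / len(diffs)

def tune_toward(agent: DecisionAttributes, target: DecisionAttributes,
                step: float = 0.5) -> DecisionAttributes:
    """Move the agent's attributes a fraction of the way toward the target expert."""
    blended = [a + step * (t - a)
               for a, t in zip(agent.as_vector(), target.as_vector())]
    return DecisionAttributes(*blended)

expert = DecisionAttributes(risk_tolerance=0.3, process_focus=0.8, plan_flexibility=0.6)
baseline_ai = DecisionAttributes(risk_tolerance=0.5, process_focus=0.5, plan_flexibility=0.5)

print(f"Alignment before tuning: {alignment(baseline_ai, expert):.2f}")
tuned_ai = tune_toward(baseline_ai, expert)
print(f"Alignment after tuning:  {alignment(tuned_ai, expert):.2f}")
```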
DARPA is bringing together multiple teams to collaborate on this program. Other teams will focus on developing prototype AI decision-makers that start with baseline knowledge and can then be tuned to match a set of target attributes. The research products from the program will be integrated and evaluated to determine how well the algorithmic agents make decisions consistent with the target human attributes when faced with difficult scenarios. The program will also test whether human experts trust these aligned agents more than the baseline agents or actual humans. In these evaluations of trust, the human experts will be shown a record of decisions made in difficult scenarios without knowing whether the decision-maker was an AI or a human.
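The sketch below illustrates what such a blinded trust evaluation could look like in code: decision records are presented with the decision-maker's identity hidden, ratings are collected, and the groups are compared afterward. The record format, rating scale, and the placeholder rating function are assumptions for illustration only; the program's actual evaluation protocol is not specified in this article.

```python
import random

# Illustrative decision records; "maker" is hidden from evaluators during rating.
decision_records = [
    {"maker": "aligned_ai",  "scenario": "mass-casualty triage A", "decision": "treat patient 2 first"},
    {"maker": "baseline_ai", "scenario": "mass-casualty triage A", "decision": "treat patient 1 first"},
    {"maker": "human",       "scenario": "mass-casualty triage A", "decision": "treat patient 2 first"},
]

def collect_trust_rating(blinded_record: dict) -> int:
    """Stand-in for asking an expert 'Would you delegate to this decision-maker?' on a 1-5 scale."""
    return random.randint(1, 5)  # placeholder; real ratings would come from human experts

ratings: dict[str, list[int]] = {}
for record in random.sample(decision_records, len(decision_records)):  # blind the order
    blinded_view = {k: v for k, v in record.items() if k != "maker"}    # hide who decided
    score = collect_trust_rating(blinded_view)
    ratings.setdefault(record["maker"], []).append(score)

for maker, scores in ratings.items():
    print(f"{maker}: mean trust rating {sum(scores) / len(scores):.1f}")
```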