Preventing Bias in AI: Q&A with Michael Armstrong

Feature Article

The CTO of Authenticx discusses preventing bias from becoming part of AI models.

Michael Armstrong
CTO
Authenticx

Michael Armstrong is CTO at Authenticx, a company focused on building and training AI models for pharmacovigilance. He recently spoke with Pharmaceutical Executive about the complicated processes that go into building these programs to promote reliability, trust, and accountability.

PE: Can you discuss your overall view of AI and how it should be used?
Michael Armstrong: We take conversations between HCPs, pharma companies, patients, and others and analyze those conversations. We use a lot of AI for that, but we also believe in keeping humans in the loop. That’s critical. We put a lot of resources and thought into how to make AI and humans play nice together. We view AI as a tool to make life better, and not something that should replace humans.

PE: How important is it to build these AI models so they’re able to limit bias?
Armstrong: That’s a big question that people ask all of the time. First off, we have to actually put this into practice. The data that we analyze and build from doesn’t have any demographics. Everything is de-identified to start with. Step one for us is to eliminate demographics from data sets. That’s a pretty good way to start addressing that problem.
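
To make that first step concrete, here is a minimal sketch of stripping demographic fields from a conversation record before it enters a training set. The field names and the strip_demographics helper are hypothetical illustrations, not Authenticx’s actual pipeline.

```python
# Minimal sketch: remove demographic fields from a conversation record
# before it enters a training set. Field names are hypothetical.
from typing import Any

DEMOGRAPHIC_FIELDS = {"age", "gender", "race", "ethnicity", "zip_code", "caller_name"}

def strip_demographics(record: dict[str, Any]) -> dict[str, Any]:
    """Return a copy of the record with demographic keys removed."""
    return {k: v for k, v in record.items() if k not in DEMOGRAPHIC_FIELDS}

raw = {
    "call_id": "c-1024",
    "transcript": "Patient reports a headache after the second dose...",
    "age": 54,
    "gender": "F",
    "zip_code": "46204",
}

training_record = strip_demographics(raw)
# {'call_id': 'c-1024', 'transcript': 'Patient reports a headache after the second dose...'}
```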

Bias comes from the training data. It gets interpreted and trained into models based on the data that is fed into them. I use this analogy all the time: it’s like we’re teaching a toddler. They’re kind of smart and know things, but they also get confused very quickly. Whatever the inputs are, they take them, synthesize them into a mental map of the world, and then respond based on those inputs.

For us, we’re looking at what those inputs are. Another big part of training AI is labeling, which produces the data set the model learns from. This is why we have humans coming in, listening to the conversations, and then labeling each conversation with various things. Maybe there’s a friction point or a safety event. We start with humans listening to the conversation and identifying adverse events, negative sentiments, those kinds of things.
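
To picture what that labeling step produces, the sketch below shows one way a human reviewer’s annotations could be structured. The categories mirror the ones mentioned above (adverse event, negative sentiment, friction point), but the schema itself is illustrative, not the company’s actual format.

```python
# Illustrative schema for a human-reviewed conversation label.
# Categories mirror those named in the interview; the structure is hypothetical.
from dataclasses import dataclass, field
from enum import Enum

class LabelType(Enum):
    ADVERSE_EVENT = "adverse_event"
    NEGATIVE_SENTIMENT = "negative_sentiment"
    FRICTION_POINT = "friction_point"

@dataclass
class ConversationLabel:
    call_id: str
    reviewer_id: str
    labels: list[LabelType] = field(default_factory=list)
    notes: str = ""

example = ConversationLabel(
    call_id="c-1024",
    reviewer_id="annotator-07",
    labels=[LabelType.ADVERSE_EVENT, LabelType.NEGATIVE_SENTIMENT],
    notes="Caller describes a headache after the second dose and frustration with hold times.",
)
```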

Those labels are another opportunity for bias to make its way in. We work very hard to review the labels to ensure that what we’re doing is objective. We have checks and balances built into the data collection process. About 90% of our creation effort is focused on not letting the bias in.
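
One common check and balance on human labels is to have multiple reviewers label the same conversations and measure how often they agree, routing disagreements for adjudication. The sketch below computes simple pairwise agreement (Cohen’s kappa is the usual refinement); it is an assumption about how such a review could work, not a description of Authenticx’s process, and the 0.8 threshold is arbitrary.

```python
# Sketch of a label-review check: two annotators label the same calls,
# and low agreement flags labels for a second look. Data is hypothetical.
def agreement_rate(labels_a: dict[str, str], labels_b: dict[str, str]) -> float:
    """Fraction of shared call IDs on which two annotators assigned the same label."""
    shared = labels_a.keys() & labels_b.keys()
    if not shared:
        return 0.0
    matches = sum(1 for call_id in shared if labels_a[call_id] == labels_b[call_id])
    return matches / len(shared)

annotator_1 = {"c-1": "adverse_event", "c-2": "friction_point", "c-3": "negative_sentiment"}
annotator_2 = {"c-1": "adverse_event", "c-2": "negative_sentiment", "c-3": "negative_sentiment"}

rate = agreement_rate(annotator_1, annotator_2)
if rate < 0.8:  # threshold chosen for illustration only
    print(f"Agreement {rate:.0%} is below threshold; route disputed labels for adjudication.")
```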

On the backend, we track everything our AI says. We’re looking for patterns or signs of bias that may sneak in there. That’s how we do it.
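
The backend tracking described here could take many forms; one simple version is logging every model output and periodically comparing flag rates across segments of calls to surface skew worth investigating. The log fields and segments below are hypothetical and the sketch is not the company’s monitoring system.

```python
# Sketch of output monitoring: log every prediction, then compare
# flag rates across call segments to surface skew worth a closer look.
from collections import defaultdict

output_log = [  # hypothetical logged predictions
    {"call_id": "c-1", "segment": "provider_line", "flagged_adverse_event": True},
    {"call_id": "c-2", "segment": "patient_line", "flagged_adverse_event": False},
    {"call_id": "c-3", "segment": "patient_line", "flagged_adverse_event": True},
]

def flag_rates_by_segment(log):
    counts, flags = defaultdict(int), defaultdict(int)
    for row in log:
        counts[row["segment"]] += 1
        flags[row["segment"]] += row["flagged_adverse_event"]
    return {seg: flags[seg] / counts[seg] for seg in counts}

rates = flag_rates_by_segment(output_log)
print(rates)  # a large gap between segments is a signal to review, not proof of bias
```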

This matters because we want our AI to be as objective as possible. We want people to be able to rely on the output. That’s what makes it a good tool, and bias degrades that reliability and trust.

PE: How do you ensure that the human element isn’t lost from AI?
Armstrong: Hopefully, leaders understand that it’s a tool to be leveraged. We try to push that philosophy at conferences, speaking engagements, and what we publish. We sell software into healthcare, and this is a big part of our philosophy that we share with customers from day one. We say that they’re buying the software, which has AI, but we recommend that it’s utilized with a human. So, we try to push this through thought leadership.

You may see some companies that aren’t smart about it and think that they can just throw some AI at a process based entirely on cost savings. That’s going to impact people negatively, and it’s also going to impact those businesses. I think it’s important that market forces work correctly and if that does happen, those businesses should feel the pinch. To my mind, we drive understanding of this as best we can, but if somebody does something stupid, they need to feel the effect of that.

If you don’t treat customers correctly because of AI, you should feel that.

PE: How do you combat the issue of AI hallucinations?
Armstrong: That’s a challenge across the board. First off, we need to ask what a hallucination is. I previously used the analogy that the current state of AI is like a toddler. It’s getting smarter, but the problem is that it doesn’t know how to say “I don’t know.” If it doesn’t know the answer or if it doesn’t have enough inputs, it tries to answer anyway. Maybe it shouldn’t. Maybe it should say “I don’t know the answer to this.”

It's ironic that it’s smart in some ways, but not so much in other ways. For us, it’s tricky because the definition of responsible AI is still evolving. There are ideas around self-harm and things like that, but if you’re in healthcare, self-harm is something I want my AI to identify and flag. We’ve worked really hard on identifying indicators of hallucinations. This includes both pre- and post-processing steps. We’re trying to ensure that what’s fed into the AI makes a lot of sense. We also check to see that what comes out makes sense. It’s a lot of work and responsibility.
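
The pre- and post-processing checks mentioned here can be pictured as a thin wrapper around the model call: validate the input before it goes in, and require the output to clear a confidence bar and show some grounding in the transcript before it is surfaced, answering “I don’t know” otherwise. Everything in the sketch below, including the function names, the word-count cutoff, and the 0.7 threshold, is an assumption for illustration rather than Authenticx’s implementation.

```python
# Sketch of pre-/post-processing guards around a model call, with an
# explicit "I don't know" path. All names and thresholds are hypothetical.

def precheck(transcript: str) -> bool:
    """Reject inputs that are too short or empty to support a reliable answer."""
    return bool(transcript) and len(transcript.split()) >= 20

def postcheck(answer: str, confidence: float, transcript: str) -> bool:
    """Require a confidence floor and some grounding of the answer in the source text."""
    grounded = any(token.lower() in transcript.lower() for token in answer.split())
    return confidence >= 0.7 and grounded

def answer_with_guardrails(model, transcript: str, question: str) -> str:
    """Wrap a model call (hypothetical interface returning answer and confidence)."""
    if not precheck(transcript):
        return "I don't know: not enough input to answer reliably."
    answer, confidence = model(transcript, question)
    if not postcheck(answer, confidence, transcript):
        return "I don't know: the answer could not be verified against the conversation."
    return answer
```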
