Generative AI is just one part of the artificial intelligence and machine learning being used by life science organizations, but it has emerged as a major area of interest, and one in which costs and ROI are still largely unknown.
In 2025, artificial intelligence (AI) will be at the center of business strategy, backed by huge investments, especially in life sciences. Over time, AI will reduce cycle length and failure rates in scientific research and critical drug discovery areas, thus significantly lowering the dollars spent per approved drug.
One report projects that the market for AI in life sciences will reach nearly $10 billion before 2032.1 Bain & Company reports that 40% of pharma companies are including anticipated savings from generative AI (GenAI) in their 2024 budgets.2
At the same time, growing costs are one of the primary threats to its success. GenAI specifically is rapidly being integrated into the life science value chain. While GenAI is just one part of the AI and machine learning (ML) work being done by life science organizations, it is certainly one of the major areas of interest, and one in which costs and ROI are still largely unknown.
Gartner surveyed life science executives in June 2024 to assess sentiment and activity.3 Seventy-two percent of respondents have at least one GenAI use case in production and 30% are deploying six or more, while 92% have at least one use case currently in pilot. However, Gartner reports that more than half of organizations abandon their efforts due to cost-related missteps.
Because the AI space is changing rapidly, it's essential that companies budget correctly and use their investments strategically. Here's how to plan your AI budget to be successful in the year ahead.
From elementary science classes to graduate studies, the "Make-Test-Decide" cycle is taught and understood logically. Today, the injection of AI and scientific intelligence platforms is speeding up this Lab-in-a-Loop concept: a beautiful interplay between the scientific wet lab, in which experiments are physically performed, and the dry lab, in which experiments can be simulated and modeled and where informed AI engines can recommend the next experiment to test in the wet lab.
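As a rough illustration, here is a minimal sketch of that Make-Test-Decide loop in Python. The function names and the scoring heuristic are hypothetical placeholders, not any vendor's platform API; the point is simply the cycle of an AI-recommended experiment, a physical wet-lab test, and results fed back into the dry-lab model.

```python
import random

def run_wet_lab_experiment(candidate):
    # Stand-in for the physical assay: in reality this step is performed by
    # scientists and instruments; here we simulate a noisy measurement.
    return candidate["predicted_activity"] + random.gauss(0, 0.1)

def suggest_next_experiment(observations, candidates):
    # "Dry lab" step: rank untested candidates and recommend one. A trivial
    # highest-predicted-activity heuristic stands in for an AI engine that
    # would normally be retrained on the accumulated observations.
    return max(candidates, key=lambda c: c["predicted_activity"])

def lab_in_a_loop(candidates, budget):
    observations = []
    for _ in range(budget):
        pick = suggest_next_experiment(observations, candidates)  # Decide
        measured = run_wet_lab_experiment(pick)                   # Make / Test
        observations.append((pick["name"], measured))             # feed data back
        candidates.remove(pick)
    return observations

candidates = [{"name": f"compound-{i}", "predicted_activity": random.random()} for i in range(20)]
print(lab_in_a_loop(candidates, budget=5))
```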
First and foremost, AI investments should make scientists' lives easier by supporting that Lab-in-a-Loop lifecycle, which means:
We know most companies are spending money on AI—but are you spending it efficiently? Companies need a spend strategy or else costs will escalate without control.
At its core, AI has a few large cost drivers: data, compute, and people (i.e., the scientists). You can't simply dump data in a data lake. "Garbage in, garbage out" is a tale as old as time.
GenAI can't deliver expected results unless the proper data architecture and infrastructure are in place. You want to optimize spend across storage, compute, security, GPUs, and training. That means seeking out smart, integrated data strategies that enhance an organization's existing software spend.
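As a back-of-the-envelope illustration of how those cost drivers can be tracked together, the sketch below rolls up monthly spend by category and computes spend per production use case. Every figure is a hypothetical placeholder chosen only to show the arithmetic, not a benchmark from any survey or report.

```python
# Illustrative AI spend roll-up; all dollar figures are hypothetical placeholders.
monthly_spend = {
    "data_acquisition_and_licensing": 120_000,
    "storage": 40_000,
    "compute_and_gpu_training": 250_000,
    "security_and_governance": 30_000,
    "people_and_training": 180_000,
}

use_cases_in_production = 6  # hypothetical count

total = sum(monthly_spend.values())
print(f"Total monthly AI spend: ${total:,}")
print(f"Spend per production use case: ${total / use_cases_in_production:,.0f}")
for category, cost in sorted(monthly_spend.items(), key=lambda kv: -kv[1]):
    print(f"  {category}: {cost / total:.0%} of spend")
```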
Data is itself a large cost driver, and often the most volatile one, because the quality, availability, and governance challenges grow with the number of data sources and the amount of data within each of them. While some of those data can be acquired for free, such as from public sources, running experiments that utilize a company's proprietary data is often prohibitively expensive.
These data come from years of experiments, experience, and knowledge created in the discovery process, and they are what separate a company from its competition. More data doesn't always translate to a better model.
However, using those data effectively at scale is a costly virtuous cycle. Companies perform wet-lab work to create more data, then bring in public data sources that generate still more data, plug those data back into models to get better models, and so on. We talk frequently about a Lab-in-a-Loop; think of this as "Data-in-a-Loop." Supporting such a process is immensely expensive.
Plus, cost considerations are not limited to buying, creating, and modeling data; maintenance is also a major factor. Just because you buy data doesn't mean it's easy to model and cleanse. You must label data and follow an ontology structure that aligns with what you're modeling. The easier it is to add more data into the model, the better the outcomes.
That's a time-consuming and complex task that requires teams to understand how models are trained and how they can easily be reused with more experiments and data to create still more data. Add in instrumentation challenges and the general difficulty and cost of maintaining on-premises or client-cloud solutions, and the result is significant IT overhead in cost and time to manage, drastically increasing the overall time and cost of R&D.
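As a small, hypothetical illustration of what "following the ontology structure" can look like in practice, the sketch below maps raw instrument exports onto shared field names and a controlled vocabulary before they are added to a training set. The field names and vocabulary are invented for this example, not a standard ontology.

```python
# Hypothetical mapping of raw instrument exports onto a shared ontology
# so new records can be added to a model's training set consistently.

FIELD_MAP = {"cmpd": "compound_id", "conc_um": "concentration_uM", "resp": "response_pct"}
ALLOWED_ASSAY_TYPES = {"binding", "functional", "cytotoxicity"}  # controlled vocabulary

def to_ontology(raw_record, assay_type):
    if assay_type not in ALLOWED_ASSAY_TYPES:
        raise ValueError(f"Unknown assay type: {assay_type}")
    record = {FIELD_MAP[k]: v for k, v in raw_record.items() if k in FIELD_MAP}
    record["assay_type"] = assay_type
    return record

raw = {"cmpd": "CMPD-0042", "conc_um": 1.0, "resp": 57.3, "operator": "JS"}
print(to_ontology(raw, "binding"))
# {'compound_id': 'CMPD-0042', 'concentration_uM': 1.0, 'response_pct': 57.3, 'assay_type': 'binding'}
```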
It's no surprise that the same Gartner report shows a group of early adopters distinctly outpacing the rest of life sciences organizations.4 Nearly 15% of respondents have already deployed 11 or more use cases in production, with an additional 15% having deployed between six and 10. These leaders are actively defining the pathways for applying AI to accelerate drug discovery.
Ultimately, leading organizations will establish budgeting and governance models that prioritize high-return use cases. Those use cases will align with their investment themes around big bets for the future of the business.
In addition, organizations often bill AI investments to IT, even though those investments typically deliver cost benefits to the respective functional budgets. Don't let these conflicts hinder adoption; find ways to incentivize business unit leaders to invest in disruptive, value-generating AI initiatives.
At the highest level, spend your AI budget on science rather than on operational efficiency or other AI use cases. Some other best practices include:
In such a rapidly emerging area, it's important to get creative and think outside the box. Within the next five years, not the next 50, the cost and timeline of drug discovery will drastically change. As the landscape and complexity of AI within life sciences continue to evolve, CIOs must be mindful of their investments and of their ability to safely and effectively implement and scale these technologies within their existing landscapes.
As exciting as the promise of AI is, we must remember that organizations are built on people. While new technology can be predictive and automated, we must ensure that scientists aren’t overburdened.
That means employing software that enables FAIR data and FAIR processes, and ensuring that the cost of science doesn't get cut to fund the cost of technology. Tremendous efficiencies and scientific advancement are possible, but ultimately the role of the scientist must be at the forefront of any policies or technologies developed on your AI journey.
About the Author
Stephen Tharp, SVP Customer Operations for Dotmatics.
References
1. AI In Life Sciences Market Size, Companies and Future Analysis:
https://www.towardshealthcare.com/insights/ai-in-life-sciences-market
2. How to Successfully Scale Generative AI in Pharma:
https://www.bain.com/insights/how-to-successfully-scale-generative-ai-in-pharma/
3. Q24 LLMs and Generative AI: Life Science Manufacturer Perspective:
https://www.gartner.com/document-reader/document/5638991?ref=sendres_email&refval=79002868
4. Q24 LLMs and Generative AI: Life Science Manufacturer Perspective:
https://www.gartner.com/document-reader/document/5638991?ref=sendres_email&refval=79002868
5. Q24 LLMs and Generative AI: Life Science Manufacturer Perspective:
https://www.gartner.com/document-reader/document/5638991?ref=sendres_email&refval=79002868