The alliance’s president discusses the current state of data sharing, security concerns, and the adoption of AI.
Modern technology requires large amounts of data to function effectively. While this has vastly improved many companies’ ability to produce results, it also creates more opportunities for security breaches. Dr. Becky Upton, president of The Pistoia Alliance, spoke with Pharmaceutical Executive about this issue.
Pharmaceutical Executive: Is the increase in AI adoption expected to continue in the coming years?
Dr. Becky Upton: AI will indeed become a powerful tool in the life sciences and pharma industries, but it won’t be a panacea. While AI accelerates drug discovery, optimizes development, and reduces costs, it will be one of many tools used in R&D and healthcare.
There are some notable AI use cases we can get excited about, though. For example, AI supports precision medicine by analyzing genetic and real-world patient data for personalized treatments, and it integrates large datasets for better clinical decision-making. Additionally, AI enhances real-world evidence analysis and boosts lab efficiency through automation. AI-driven non-animal models such as digital twins have the potential to streamline drug testing, and regulatory bodies are increasingly supportive of the long-term replacement of animal models, with new legislation such as the FDA Modernization Act 2.0. Finally, AI aligns with FAIR data initiatives, improving data accessibility and collaboration across the industry.
However, with all these use cases, we must note that AI will complement, not replace, existing methods and human researchers.
PE: What are the major security and privacy concerns surrounding AI?
Upton: The risks of AI depend largely on the use case and the data being worked with. Our members have raised several concerns at various Alliance webinars and Community of Expert sessions. Firstly, regarding data privacy, AI relies on large datasets that often contain sensitive patient information. Ensuring compliance with regulations like GDPR and HIPAA is critical to protecting patient privacy and confidentiality, and patients should give informed consent before their data are used to train AI models. Data integrity and security are also critical: inaccurate or tampered data can lead to incorrect AI model training and flawed healthcare recommendations, which may endanger patient safety in clinical use cases.
Bias is a growing issue as AI use expands. Models can unintentionally perpetuate biases present in their training data, leading to inaccurate predictions or unequal treatment across different patient groups. Transparency and explainability of models will be key to combatting both bias and data integrity concerns, so that researchers and clinicians can understand how AI decisions are made.
Finally, regulatory confusion is now a widespread problem for pharma globally. Pistoia Alliance research earlier this year found that only 9% of life science professionals know EU and US AI regulations well, with more than a third (35%) having no understanding at all. The EU AI Act is causing particular confusion because its rules are based on an AI system’s potential risk and level of impact on consumers. High-risk applications such as medical devices, drug manufacturing, and diagnostic AI will require conformity assessments, while limited-risk applications such as chatbots must be clearly labelled as AI tools. These new assessments add to the long list of requirements pharma companies must already meet from other regulators such as the FDA and EMA. Of course, it’s good that these regulations are shaping an ethical AI future, but the lack of harmonization and clarity on compliance is currently creating uncertainty for companies operating across multiple jurisdictions.
PE: What is causing companies to become more open to data sharing?
Upton: Companies are becoming more open to sharing data, but progress is slow. Pre-competitive collaboration is growing as companies increasingly realize the value of sharing expertise in areas like best practices for adopting new technologies and preventing repeated experiments, which carries a huge cost and time benefit. The complexity of drug discovery, AI’s need for large datasets, and regulatory pressures are pulling companies together. An example is the MELLODDY project, which brought together ten pharmaceutical companies to create predictive models using vast datasets of small molecules without compromising the confidentiality of proprietary data. MELLODDY used a federated learning approach, meaning that each company’s data remained behind its own firewall while algorithms were trained across the distributed datasets.
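To make the federated approach concrete, here is a minimal sketch of federated averaging in Python, using NumPy and synthetic data. The three "companies," the logistic-regression model, and all parameters are hypothetical illustrations of the general technique, not MELLODDY's actual architecture.

```python
# Minimal federated-averaging (FedAvg) sketch: each site trains on its own
# private data and shares only model parameters, never the raw data.
# Sites, data, and model are hypothetical stand-ins, not MELLODDY's system.
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1):
    """One local gradient-descent step for logistic regression."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))  # sigmoid predictions
    grad = X.T @ (preds - y) / len(y)           # logistic-loss gradient
    return weights - lr * grad

# Three hypothetical companies, each holding private data behind its firewall.
n_features = 5
true_w = rng.normal(size=n_features)
sites = []
for _ in range(3):
    X = rng.normal(size=(200, n_features))
    y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)
    sites.append((X, y))

# Federated rounds: only weight vectors cross organizational boundaries.
global_w = np.zeros(n_features)
for _ in range(100):
    local_updates = [local_step(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_updates, axis=0)   # central server averages

accuracy = np.mean([
    (((1.0 / (1.0 + np.exp(-X @ global_w))) > 0.5) == y).mean()
    for X, y in sites
])
print(f"Mean on-site accuracy after federated training: {accuracy:.2f}")
```

The key property is that only the weight vectors leave each site in every round; the raw datasets never move, which is what lets each partner keep proprietary data behind its own firewall.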
Whilst there is a desire to become more open and to share data, there are still silos of data within organizations themselves. Data silos hinder collaboration and slow innovation, while concerns about security and regulatory compliance, such as with GDPR and HIPAA, pose risks to research integrity.
PE: What are the main data management challenges for pharmaceutical companies?
Upton: The main data management challenges in the pharmaceutical industry include data silos, where information is isolated across departments, preventing efficient collaboration and insights. Ensuring data security and compliance with strict regulations like GDPR and HIPAA is critical to protect sensitive data, such as patient information and proprietary research. Data quality and integrity issues can lead to inaccurate research findings, impacting the validity of clinical trials and drug development. Data integration is another challenge, as different systems and formats make it difficult to aggregate and analyze diverse data sources effectively. Finally, scalability and storage of the growing volumes of data require robust systems to ensure accessibility without escalating costs.
To successfully harness AI in the pharmaceutical industry, vision and leadership from the top are essential. Leadership must understand that AI thrives on well-structured, high-quality data. Without proper data organization and governance, AI models cannot deliver accurate insights or drive innovation. For AI to be effective, data must be structured, standardized, and interoperable across systems. This requires a strategic commitment to robust data management, ensuring that data are not only accessible but also curated for AI readiness. In addition, leaders should foster a culture that values cross-functional collaboration and data sharing, while supporting investments in advanced infrastructure and training to realize AI’s full potential.