AI allows patients, doctors, and payers to make informed decisions based on evidence at a much faster pace.
Complete information about the patient journey and their experience is needed to evaluate near-term and delayed treatment benefits and risks, as well as how treatments perform alone, in combination with, and in comparison to, other treatment options.
The 21st Century Cures Act has been a game changer, giving patients the right to access and share their electronic medical record (EMR) data. This has paved the way for companies like PicnicHealth to act as an agent for patients who consent to having their health records acquired, organized, and analyzed to support more informed decision-making.
AI can organize patient information (by medication, test, procedure, or encounter, for example) to give patients easy access to more complete data. Patients may then choose to share their de-identified health information and further contribute to studies through patient reported outcomes (PRO) surveys and, sometimes, wearable digital health technologies.
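The kind of organization described above can be illustrated with a minimal sketch. The record fields and category names here are hypothetical, chosen only to show grouping a patient's mixed event stream by record type; a production pipeline would work from standardized clinical vocabularies, not these toy labels.

```python
from collections import defaultdict

# Hypothetical raw events for one patient, as they might arrive
# from multiple sources in no particular order.
events = [
    {"type": "medication", "name": "metformin", "date": "2024-03-01"},
    {"type": "lab_test", "name": "HbA1c", "date": "2024-03-05"},
    {"type": "encounter", "name": "endocrinology visit", "date": "2024-03-05"},
    {"type": "medication", "name": "lisinopril", "date": "2024-04-02"},
]

def organize_by_category(events):
    """Group a patient's events by record type, sorted by date within each."""
    grouped = defaultdict(list)
    for event in events:
        grouped[event["type"]].append(event)
    for records in grouped.values():
        records.sort(key=lambda e: e["date"])  # ISO dates sort lexicographically
    return dict(grouped)

timeline = organize_by_category(events)
print(sorted(timeline))  # → ['encounter', 'lab_test', 'medication']
```

The point of the grouping is simply that a patient (or researcher) can scan one category at a time, rather than wading through an undifferentiated chronological record.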
AI acquires and organizes information faster than ever before to help patients, clinicians, and payers make more informed, evidence-based decisions. For life sciences companies, this supports better understanding about the safety and effectiveness of their treatments as used in real-world settings and among diverse subgroups.
How does AI track data and insights across the patient journey?
AI radically simplifies the review and coding of electronic medical records (EMRs), including processing and analyzing unstructured data, which is critical to understanding areas such as the extent and severity of a patient's condition, treatment tolerability, and the burden of illness.
As a simple example, AI can assemble, organize, and cross-check health information to see whether a medication prescribed by a clinician was filled by the patient, or whether a lab test was performed and, if so, what the results were. Well-trained AI continually reviews and audits data, producing information that is as accurate as, or better than, what trained clinicians produce when interpreting medical data.
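The prescription-fill cross-check described above can be sketched as a simple reconciliation between two sources. The record layouts, field names, and 30-day fill window below are illustrative assumptions, not any vendor's schema.

```python
from datetime import date

# Hypothetical records from two sources: EMR prescription entries
# and a pharmacy claims feed. Field names are illustrative only.
prescriptions = [
    {"patient_id": "P001", "drug": "metformin", "prescribed": date(2024, 3, 1)},
    {"patient_id": "P001", "drug": "lisinopril", "prescribed": date(2024, 3, 1)},
]
pharmacy_fills = [
    {"patient_id": "P001", "drug": "metformin", "filled": date(2024, 3, 3)},
]

def unfilled_prescriptions(prescriptions, fills, window_days=30):
    """Return prescriptions with no matching fill within the window."""
    unfilled = []
    for rx in prescriptions:
        matched = any(
            fill["patient_id"] == rx["patient_id"]
            and fill["drug"] == rx["drug"]
            and 0 <= (fill["filled"] - rx["prescribed"]).days <= window_days
            for fill in fills
        )
        if not matched:
            unfilled.append(rx)
    return unfilled

gaps = unfilled_prescriptions(prescriptions, pharmacy_fills)
print([rx["drug"] for rx in gaps])  # → ['lisinopril']
```

A flagged gap like this is not itself a conclusion; it is a prompt for review, since the fill may simply have occurred at a pharmacy whose data was never captured.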
For instance, we are starting to see great results from AI-based diagnostic tools used to interpret complex medical imaging such as computed tomography, magnetic resonance imaging, and positron emission tomography.
How does AI avoid certain pitfalls, like tokens or information gaps?
Information gaps are the norm in observational research. The key question is whether data is missing systematically, randomly, or because the expected test or measurement was never performed, perhaps because of access or cost barriers. As patients use a growing number of independent healthcare and testing facilities, disconnected data is becoming an even bigger problem.
Tokens use unique, encrypted codes to link patient records without revealing personally identifiable information. In theory, tokenization is a great advance since it uses protected health information to generate a hashed identification code behind firewalled data, and then uses that encrypted code to link data from disparate sources.
But in practice, we are still awaiting evidence on the accuracy of these token-based data linkages, since we do not yet know how often the linkages are true matches rather than near (incorrect) matches.
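A minimal sketch of the tokenization idea follows, using keyed hashing over normalized identifiers. The key, the normalization rules, and the choice of identifier fields are all assumptions for illustration; real tokenization vendors use their own proprietary schemes.

```python
import hashlib
import hmac

# Hypothetical secret key, held behind the data holder's firewall.
SECRET_KEY = b"example-secret-key"

def make_token(first_name, last_name, dob):
    """Derive a linkage token from normalized identifiers.

    Keyed hashing (HMAC-SHA256) means the token cannot be reversed or
    recomputed without the key, so it can accompany de-identified data.
    """
    normalized = f"{first_name.strip().lower()}|{last_name.strip().lower()}|{dob}"
    return hmac.new(SECRET_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# Records for the same person from two sources yield the same token,
# provided the identifiers normalize identically...
token_a = make_token("Ana", "Diaz", "1980-05-17")
token_b = make_token("ana", " DIAZ ", "1980-05-17")
print(token_a == token_b)  # → True

# ...but even a one-character discrepancy (a typo in the date of birth)
# produces an entirely different token. This is why near matches become
# silent misses or mismatches rather than recognizable close calls.
token_c = make_token("Ana", "Diaz", "1980-05-18")
print(token_a == token_c)  # → False
```

This brittleness is the core of the accuracy concern: a hash offers no notion of "almost equal," so linkage quality depends entirely on how consistently the underlying identifiers were recorded at each source.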
AI removes much of this guesswork about assessing the patient experience and their full journey through the healthcare system by organizing patient information, enabling better information sharing, and giving researchers access to data they may never have had, all without reliance on tokens.
What should life sciences professionals be careful of when using AI in these processes?
Professionals should recognize that AI is a great tool to support linking patient records and organizing real-world health information, but not sufficiently powerful on its own to make sense of the data. Real-world data needs real-world data scientists to transform data into evidence.
There is no substitute for rigorous scientific approaches to evaluate treatment benefits and risks, including fit-for-purpose study designs and analytics. I often refer to Karl Popper's work on "Conjectures and Refutations," where we develop a hypothesis and test it in a population. A true cause-and-effect relationship should be evident in more than one study population.
Interestingly, this concept is being put to new tests as we face results from randomized clinical trials that may differ from real-world evidence, raising questions about whether the differences were due to the characteristics of study populations, healthcare settings, and other factors.
In my experience, the best AI tools are used to organize and code data, with supplementary human review as appropriate and according to the intended purpose of the research.
How does engaging patients in studies enable researchers to collect more information directly?
Once enrolled, a patient who cares about the research is more likely to keep participating in the study. When patients get value from their participation, such as having easy-to-use health data at their fingertips, study engagement and retention improve significantly.
With better retention, studies are completed more quickly and at lower cost, and sponsors can collect more patient-centric information, including the burden of illness and impact on activities of daily living and quality of life.
All of this will enhance our ability to understand how well treatments work, which patients respond best, and conversely, which patients are unlikely to benefit and are potentially at greater risk of harm.