Automation is an essential aspect of data management and processing today in the pharma industry. At the same time, it’s important for companies to strike a balance between humans and machines, write Emily Eller and Kevin Frymire.
What’s the right amount of automation? It’s a question the airline industry is grappling with today in the wake of several tragic incidents. It’s also a question the pharmaceutical industry needs to address.
In jetliners, automation can mean smoother rides for passengers and fewer opportunities for human error. As evidenced by a 2008 incident in which a Qantas flight suddenly plummeted toward the Indian Ocean, and by two recent crashes involving the Boeing 737 MAX, however, there are perils associated with automation – especially automation that is unaccompanied by sufficient human oversight and intervention. In all three of those incidents, the pilots struggled to overrule the computers.
For pharma companies, automation of data processes can increase efficiencies, enable close-to-real-time reporting and equip executives to make faster decisions. As vendors release data more frequently and pharma companies collect more data, commercial teams can produce more precise reports, more quickly, for the sales team, the C-suite and other stakeholders. Amid this push for near-real-time reporting, however, maintaining data integrity becomes increasingly difficult. If not addressed promptly, small errors in data that flow into automated processes can wreak havoc – e.g. misaligned territories, inaccurate incentive compensation payouts, and bad forecasts.
To harness the efficiencies and benefits associated with automation and minimize the inherent risks, companies must recognize the importance of human intervention and involvement in automated processes. No automation tool today can account for every contingency or potential data error. Without data professionals closely monitoring data and automation processes, a company puts itself at risk of executing error-ridden commercial plans, which can hamper sales success and deplete an organization’s trust in the commercial team’s work.
It’s crucial that pharma companies carve out a major role for data professionals as they expand the use of automation. Below are five keys to avoiding crises generated by bad data and fueled by unchecked automation.
The starting point for any pharmaceutical data team is to be skeptical about the accuracy of all data. Data vendors vary in terms of the quality and consistency of their deliverables, and it’s not rare to see formatting errors or duplicate entries. These problems especially come into play when a company receives specialty pharmacy data, since these vendors often input much of their data manually. Skepticism about data is a crucial first line of defense against errors that can infiltrate analytics and reporting – and generate significant downstream consequences.
Pharma commercial teams must make their processes resilient and proactively scour new data assets for errors, so they can catch and correct issues with data promptly. Teams should be on the lookout for out-of-the-ordinary changes in data. For example, if a new data set shows a large increase or decrease in prescription volume for a physician, it’s worth a closer look. Importantly, this effort to check data should include a combination of automated checks and manual reviews by data professionals.
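As a rough sketch of what such an automated check might look like, the Python example below (using pandas) flags physicians whose prescription volume swung sharply between two data deliveries so that a data professional can review them before reports go out. The column names and the 50% threshold are hypothetical placeholders that a team would tune to its own data feeds.

```python
import pandas as pd

def flag_volume_anomalies(prev: pd.DataFrame, new: pd.DataFrame,
                          threshold: float = 0.5) -> pd.DataFrame:
    """Flag physicians whose prescription volume changed by more than
    `threshold` (0.5 = 50%) between two deliveries, for manual review.
    Column names ('physician_id', 'rx_count') are hypothetical."""
    merged = prev.merge(new, on="physician_id", suffixes=("_prev", "_new"))
    # Relative change in prescription volume per physician
    merged["pct_change"] = (
        merged["rx_count_new"] - merged["rx_count_prev"]
    ) / merged["rx_count_prev"]
    # Out-of-the-ordinary swings go to a data professional, not straight into reports
    return merged[merged["pct_change"].abs() > threshold]

# Toy example: physician 2's volume more than doubles, so that record is flagged
prev = pd.DataFrame({"physician_id": [1, 2, 3], "rx_count": [100, 40, 75]})
new = pd.DataFrame({"physician_id": [1, 2, 3], "rx_count": [102, 95, 70]})
print(flag_volume_anomalies(prev, new))
```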
It’s easier to identify flaws in data if a company has integrated teams and strong knowledge-sharing processes.
In most organizations, multiple teams have their hands in data. If these teams aren’t coordinating efforts and cross-communicating, they can introduce errors. For example, say a company’s data team decides to tweak how it organizes shipment data from two years ago. It doesn’t expect the change to affect automated reporting processes, so it doesn’t provide details to the commercial team. However, the change ends up removing shipment data from 2017, which alters the inventory count – a number that informs earnings guidance. Suddenly, a small change in data management processes has turned into a potential crisis.
It’s also important to share and document knowledge within a data team. A team can’t afford to have too much knowledge reside with a couple of data professionals. Instead, teams must work collaboratively and develop repeatable processes that new team members can learn and execute quickly. Without a focus on knowledge sharing across departments and within teams, a company runs the risk of repeating errors.
Because small errors can quickly infiltrate multiple reports and cascade into larger problems, a data team must build in automated checks (or “circuit breakers”) that stop data processes if tripped. For instance, if a duplicate entry exists, the circuit breaker would stop the entire reporting process. At that point, the data team would investigate the problem before turning the automated process back on.
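One minimal way to build such a circuit breaker is to raise an exception the moment a duplicate entry appears, so that no downstream reporting step ever runs on the flawed data. The Python sketch below is illustrative rather than a description of any particular pipeline tool, and the key columns shown are hypothetical.

```python
import pandas as pd

class CircuitBreakerTripped(Exception):
    """Raised to halt the pipeline until a data professional investigates."""

def check_no_duplicates(df: pd.DataFrame, key_columns: list) -> pd.DataFrame:
    """Stop the reporting process if any rows share the same key columns."""
    dupes = df[df.duplicated(subset=key_columns, keep=False)]
    if not dupes.empty:
        raise CircuitBreakerTripped(
            f"{len(dupes)} duplicate rows found on {key_columns}; reporting halted."
        )
    return df  # data is safe to pass to downstream reporting steps

# In a pipeline, the check would sit between ingestion and reporting, e.g.:
# clean_rx = check_no_duplicates(raw_rx, ["claim_id", "fill_date"])
```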
Data teams should set up these circuit breakers to catch anomalies in data. In one situation, a company’s data capture system experienced a half-day outage. The team that manages the data capture system didn’t alert the commercial team about the outage. So, when the commercial team received prescription data a day later, the volume was significantly lower than it should have been. An automated check that stops all data processing if overall prescription volume rises or drops by more than a set amount would have kept this error from finding its way into other areas of the company’s commercial operations.
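A similar breaker can guard against outages like the one described above by comparing total prescription volume in a new delivery against the prior one and halting processing if the swing exceeds a tolerance level. In the sketch below, the 20% tolerance and the toy volumes are placeholders rather than recommended values.

```python
class CircuitBreakerTripped(Exception):
    """Raised to halt all data processing until the cause is investigated."""

def check_total_volume(prev_total: int, new_total: int,
                       tolerance: float = 0.2) -> None:
    """Trip the breaker if overall prescription volume rises or drops by more
    than `tolerance` versus the prior delivery (0.2 = 20%, a placeholder)."""
    swing = abs(new_total - prev_total) / prev_total
    if swing > tolerance:
        raise CircuitBreakerTripped(
            f"Total Rx volume moved {swing:.0%} versus the prior delivery; "
            "possible upstream outage, investigate before reprocessing."
        )

# The half-day outage described above would trip this breaker:
try:
    check_total_volume(prev_total=10_000, new_total=6_200)
except CircuitBreakerTripped as err:
    print(err)  # the pipeline stops here until a data professional signs off
```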
When creating a new automated reporting process, data teams should implement more checks than needed and test for several weeks before using the process to create stakeholder-facing reports. But, once a process is up and running, it’s important to be careful about implementing too many strict circuit breakers without appropriate tolerance levels. After all, these checks delay reports to key parties. Data teams should regularly evaluate the circuit breakers and tolerance levels they have in place to confirm that each still plays a meaningful role in preventing significant reporting errors.
After every airline disaster, the National Transportation Safety Board pieces together all data points from the flight data recorder, listens to the cockpit voice recorder, and attempts to recreate every action by the flight crew, before producing a final report that outlines various fixes. With every report, the NTSB seeks to ensure the issues that caused a specific incident never happen again. Pharmaceutical commercial teams should similarly investigate every error that occurs and implement an automated fix that addresses not only the specific error, but also potential future related errors. A data team should figure out why upfront manual checks didn’t catch the error and then assess the circuit breakers it has in place to identify the gap. Then, the team should either tweak or add to these automated checks to ensure it catches the same (or similar) errors in the future.
Fixing an error and any associated ramifications is important. But, a data team can’t just patch the error. Instead, the team owes it to stakeholders to thoroughly investigate and implement processes to ensure it doesn’t face the same problem twice.
It’s easy to become complacent with, and too trusting of, automation. Some commercial teams may feel that, with automation, there’s little need for a robust data science operation. But, as we’ve established, having good “pilots” is crucial to identifying errors and limiting the fallout from those errors. And not just any data professional will do. A pharmaceutical company needs access to data experts who have – to steal a line from pop culture – a “very particular set of skills.”
Those skills include deep data analytics expertise – including experience building and managing automation tools – as well as life sciences industry knowledge. Pharmaceutical industry data contains complexities and nuances that only the most seasoned data practitioners can recognize and address.
For example, if a company’s product is prescribed for off-label uses, the company needs to separate those off-label prescriptions from on-label prescriptions for incentive compensation reporting. This effort is crucial to ensuring field sales representatives are only compensated for and motivated to pursue on-label promotion. Data professionals must understand how the drug is being prescribed and build automated processes that identify prescription records tied to off-label uses of the product. Data elements such as HCP specialty or ICD-10 code can be used to filter out off-label prescriptions, but, often, data professionals need to be more creative when dealing with messy data sources (e.g. using keywords or string searches to filter out problematic account types). If off-label data were lumped in with on-label data, a data professional without deep industry knowledge might not catch the error.
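To make that concrete, a heavily simplified version of such a filter might look like the sketch below. The column names, the specialty list, the ICD-10 prefix and the keyword list are all hypothetical stand-ins for the business rules a team would derive from its own product label and data sources.

```python
import pandas as pd

# Hypothetical business rules; real ones would come from the product's label
ON_LABEL_SPECIALTIES = {"ONCOLOGY", "HEMATOLOGY"}
ON_LABEL_ICD10_PREFIX = "C50"                      # placeholder indication codes
OFF_LABEL_KEYWORDS = ["cosmetic", "weight loss"]   # free-text signals of off-label use

def split_on_label(rx: pd.DataFrame) -> tuple:
    """Return (on_label, off_label) prescription records for IC reporting.
    Assumes hypothetical columns 'hcp_specialty', 'icd10_code', 'account_notes'."""
    specialty_ok = rx["hcp_specialty"].str.upper().isin(ON_LABEL_SPECIALTIES)
    diagnosis_ok = rx["icd10_code"].str.startswith(ON_LABEL_ICD10_PREFIX, na=False)
    # Keyword/string searches catch off-label signals that structured codes miss
    keyword_hit = rx["account_notes"].str.contains(
        "|".join(OFF_LABEL_KEYWORDS), case=False, na=False
    )
    on_label = specialty_ok & diagnosis_ok & ~keyword_hit
    return rx[on_label], rx[~on_label]
```

Paying representatives only on the on-label slice then becomes a matter of feeding the first returned data set into incentive compensation reporting, while the off-label slice is set aside for review.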
If a company’s data team has this comprehensive skillset (either in-house or through a consulting partner), the company can more effectively identify errors in data and implement the automated checks needed to prevent serious problems with reporting.
Automation is an essential aspect of data management and processing today in the pharmaceutical industry. At the same time, it’s important for a pharma company to strike a balance between humans and machines. Those that do will reap the many benefits of automation and “fly right,” while also avoiding the catastrophic errors that can result when automation goes unchecked.
Emily Eller and Kevin Frymire are Associate Partners at Beghou Consulting.