Ricki Chase

AI Takes Center Stage at FDLI's Annual Meeting

Updated: May 21


The Food and Drug Law Institute's (FDLI) annual meeting on May 15-16, 2024, featured artificial intelligence (AI) throughout many of its presentations. As most can appreciate, fast-moving AI development presents significant promise and great challenges in the healthcare environment, and the FDA, like many other government agencies, is working to keep pace.


On the first day of the conference, a panel of AI stakeholders, comprising the General Counsel for the U.S. Senate Committee on Health, Education, Labor, & Pensions and representatives of the law firm Hogan Lovells, the Coalition for Health AI, Emergent BioSolutions, and Verily Life Sciences, offered their perspectives on where AI stands now in the life sciences and health management space and where they see it heading.


Coleen Gessner, Executive Vice President of Quality, Ethics, and Compliance at Emergent, discussed how more than 1,800 Emergent employees use ten generative AI tools focused on improving the biologics manufacturer's productivity. She stated that Emergent is not currently using AI for deep learning. AI promises to create efficiencies within regulated industries, and productivity intelligence can lead to innovative production practices and improved operations management.


Ms. Gessner went on to imagine the factory of the future, where AI monitors each manufacturing line, providing real-time feedback and integration of process performance, operational control, and a continuous stream of quality data. Yet without qualified humans to respond to signals and effectively detect and prevent quality problems, the technology is of little benefit. Additionally, she warned of a financial bias: wealthier members of the industry can implement such advanced strategies while smaller players are left behind. The implication is that consolidating manufacturing into fewer and fewer hands may only exacerbate an already tenuous supply chain complicated by a lack of diversity among providers.


Joe Franklin, Head of Strategic Affairs at Verily, pointed out that the various types of AI each have their own intended use and focused deliverable. He emphasized that these are early days for companies dealing with AI and that it is critical first to identify the type of AI being used, or proposed for use, in order to best determine the potential risks involved. He encouraged AI risk mitigation strategies, including working across organizations and involving legal teams and subject matter consultants early. Compliance and quality regimes for dealing with this new technology are still lacking, and he encouraged an early start on understanding what those regimes will need to be in order to leverage the power of AI while controlling the risks it presents.


Brian Anderson, CEO of the Coalition for Health AI (CHAI), brought the discussion down to the most basic level: how does the community align and agree on the technical definitions of what good AI is, particularly in a consequential space such as healthcare? With over twenty government agencies collaborating with CHAI and private sector innovators, there is a drive to define a common language.


Beyond defining a common language to build understanding and develop a necessary regulatory framework, the power of AI must be understood to prevent it from contributing to healthcare inequity. This is of specific concern with the use of generative AI and model algorithms in clinical studies, where failure to identify potential biases in the analysis of specific patient populations can lead to unintended discrimination against those populations when therapies are derived from those studies. Mr. Anderson emphasized the need for qualified third parties to independently validate AI models and identify missed biases before those tools are submitted to the FDA for use in studies. The training of an AI model, and the data sets on which that training is based, will be vital in preventing bias, which may, at best, disregard some patient populations and, at worst, lead to adverse clinical outcomes.


Within the FDA, the use of AI is currently limited. While most can imagine opportunities to use AI to create internal Agency efficiencies, others can imagine the FDA using AI to more readily digest post-market data signals and adopt a preventive rather than a reactive role. There is also some discussion of using AI to allow for real-time studies and the leveraging of real-world data, specifically in drug approvals for rare diseases, where clinical studies are routinely few and enroll small patient populations. Regardless, the message is that the FDA has not yet identified its regulatory construct for AI across all potential modalities and has not yet defined its internal practices and procedures.


Barrett Tenbarge, General Counsel for the Senate HELP Committee, stated, and former FDA Commissioner Dr. Scott Gottlieb reiterated on Thursday during his speech at the Wiley Awards luncheon, that the FDA does not currently have any policy proposals or bills pending before Congress regarding its own AI policies and controls, or any proposing a novel or revised framework for industry's use of AI. Mr. Tenbarge encouraged anyone with a stakeholder interest to discuss both the good and the bad associated with new AI technologies in the life sciences and healthcare sector, as it will take experience and communication to drive policy development.




