
The rise of agentic artificial intelligence (AI) is likely to open up multiple use cases for healthcare in patient-facing areas including remote monitoring, but it will also bring new legal considerations with it, an expert says.
Generative AI (genAI) has recently been rolled out at the US Food and Drug Administration (FDA) to streamline the regulatory submissions process. But unlike genAI, which is reactive and task-specific, agentic AI holds the potential to take further, proactive steps towards a set of goals with minimal human intervention.
According to Meghan O’Connor, attorney at law firm Quarles & Brady, while agentic AI is still in its early development stages, its proactive nature will eventually allow for interesting use cases in the healthcare industry due to its autonomous functionality and advanced, ‘chain-of-thought’ reasoning capacity.
O’Connor told Medical Device Network: “We are likely to see key impacts in patient-facing settings with personalised treatment, remote monitoring, and workflow automation such as patient scheduling, intake, and prior authorisations.”
The application of agentic AI exists at macro and micro levels, O’Connor said, meaning it holds the potential to influence patient-specific healthcare delivery models that rely on real-time decision-making and that could benefit from adaptive learning, including diagnostics and decision support.
O’Connor continued: “There are also operational and financial opportunities: the potential to free up staff and reduce provider burnout with reliable workflow automation, to chase outstanding balances, and to reduce claims denials with better data validation. In addition, on the macro level, opportunities exist for application in areas such as public health management, for instance in identifying viral outbreaks in real time.”
The applications of agentic AI appear vast across the broader healthcare space, but O’Connor cautioned that as the technology’s capabilities increase, the health and life science industries must be prepared to invest “time and resources in educating the public and providers in order to build trust and comfort in interacting with and relying on AI software.”
The likely rise of new legal considerations
As agentic AI continues to advance and be applied across various healthcare domains, O’Connor foresees new legal questions arising, particularly around distinguishing between medical devices and hardware, AI software, and human healthcare providers when apportioning negligence.
“Traditionally, product liability case law has looked to a standard of care as the level of care that a reasonably prudent person would exercise in similar circumstances,” O’Connor said.
“In the context of AI, would the standard of care need to be higher because the AI software should be held to a higher standard than a reasonably prudent person? And how can we distinguish between the hardware, software, and human healthcare provider when AI is integrated into care delivery models?
“Time will tell when it comes to product liability implications, but we’ll have a front-row seat as this new body of case law develops.”