The healthcare industry is on the cusp of a technological revolution, with artificial intelligence (AI) poised to reshape everything from diagnostics to surgical procedures. But the path to widespread AI adoption is fraught with challenges, from evolving regulatory landscapes to complex intellectual property considerations. Jason Novak, an IP and data rights attorney and partner-in-charge of Norton Rose Fulbright's San Francisco office, and Thomas Kluz, a venture capitalist and Managing Director of Venture Lab, share the insights healthcare leaders need to consider on AI innovation in healthcare and the evolving regulatory environment shaping its future.
The Rise of Robotic Surgery
Robotic surgery, which was previously limited to specialized procedures, is poised for substantial growth thanks to advancements in AI, augmented reality (AR), and virtual reality (VR). Thomas Kluz envisions a future where these technologies enable surgeons to perform operations remotely, potentially transforming healthcare accessibility in underserved areas. "The ability to leverage computer vision in real time during surgical procedures is incredibly exciting," Kluz noted. "I believe this is the future of the industry."
"Robotic surgery has been around for a few years now, but the integration of AI, AR, and VR is set to revolutionize the field," said Jason Novak. "The question is, where will this lead us? In the next few years, I see the value proposition of robotic surgery expanding significantly within the US healthcare system," Novak added.
Data Rights: The Foundation of AI Innovation
Jason Novak highlights the essential role of data rights in driving AI innovation: "In the AI world, you don't have innovation without data rights. You need the rights to use the data to build, create, and train your models."
Data rights, which differ from data compliance, concern the ownership and usage rights attached to data as a valuable asset. The ability to access third-party data for commercial purposes and specific fields of use is vital to advancing AI technology. Yet diligence on data rights among investors, lawyers, and AI companies remains unsophisticated, which can lead to breached data rights agreements, or to no agreements existing at all.
Data rights are closely intertwined with innovation and intellectual property (IP) because they fuel the trained AI models that produce groundbreaking results. Companies may therefore need to prioritize data rights even ahead of IP, since those rights form the foundation of the intellectual property to come.
Intellectual Property in the AI Era
Protecting intellectual property in the age of artificial intelligence comes with its own set of unique hurdles. Jason Novak highlighted the concept of "divided infringement," stating: "The concept of divided infringement deals with situations where multiple parties are needed to carry out all the steps of a patented process, making it unlikely to attribute infringement to a single entity. In the context of remote surgical procedures, this scenario could play out with more than one party supplying the technology or software, managing the robotic tools, and performing some aspect of the procedure from a remote location."
While case law has broadened the reach of divided infringement in recent years, the multi-party nature of the doctrine can still pose obstacles to safeguarding intellectual property. Indirect infringement theories such as induced and contributory infringement should also factor into the strategy, since they can come into play in a remote-surgery scenario; however, these theories carry proof standards that are difficult to meet.
To mitigate these challenges, companies should work with sophisticated IP counsel experienced in this industry to build a comprehensive strategy. That means drafting patents to cover the complete method, its sub-methods, and the individual physical components, among other considerations, so that claims can be asserted under the straightforward burden of proof for direct infringement rather than relying on divided or indirect infringement theories.
The Hesitation Towards Dynamic AI in Healthcare
In the healthcare sector, embracing change, especially on the software side, has not always been a swift process. The trajectory of digital health took a massive turn based on need, not want, with COVID-19 acting as the catalyst. While experts had foreseen substantial growth in digital health between 2015 and 2025, the sector did not see significant expansion until the pandemic forced the industry's hand.
Healthcare has favored static models, whose predefined, unchanging parameters offer the predictability and transparency needed to uphold patient safety. Dynamic AI technologies such as deep learning models, by contrast, are often viewed as "black boxes" because of their complex internal workings. While these models can improve performance by adapting to new data, they raise concerns about unintended consequences and a loss of human control; those concerns, often rooted in a lack of education and hands-on experience, can impede adoption in the healthcare industry.
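To make the distinction concrete, here is a minimal sketch, assuming a toy single-threshold classifier; the class names (LockedModel, AdaptiveModel) and the update rule are hypothetical illustrations, not any regulated product's actual design. The locked model's parameters never move after review, while the adaptive model shifts with each new case it sees.

```python
# Hypothetical illustration of "static" vs. "dynamic" model behavior.
# The threshold rule and update logic are invented for clarity.

class LockedModel:
    """Parameters are fixed at review time and never change."""
    def __init__(self, threshold: float):
        self.threshold = threshold  # frozen after validation

    def predict(self, value: float) -> str:
        return "flag" if value > self.threshold else "normal"


class AdaptiveModel:
    """Parameters drift as new labeled cases arrive (simplified online update)."""
    def __init__(self, threshold: float, learning_rate: float = 0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def predict(self, value: float) -> str:
        return "flag" if value > self.threshold else "normal"

    def update(self, value: float, true_label: str) -> None:
        # Nudge the threshold toward fewer errors on the latest case.
        error = 1.0 if self.predict(value) != true_label else 0.0
        direction = -1.0 if true_label == "flag" else 1.0
        self.threshold += self.learning_rate * error * direction


locked = LockedModel(threshold=5.0)
adaptive = AdaptiveModel(threshold=5.0)

# A missed "flag" case nudges the adaptive model; the locked model never moves.
adaptive.update(value=4.2, true_label="flag")
print(locked.threshold, adaptive.threshold)  # 5.0 vs. 4.9
```

The locked behavior is what a conventional reviewer can fully characterize up front; the adaptive behavior is what raises the "black box" and control questions described above.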
Challenges in the Evolving Regulatory Environment
The primary issue in the changing regulatory landscape for AI, particularly at the FDA, is the agency's discomfort with the ambiguity inherent in AI. Regulators prefer static, unchanging algorithms that are easier to understand and regulate.
"The FDA is understandably cautious about new technologies, but I think there's a lack of comfort and understanding specifically around AI," said Thomas Kluz, highlighting the FDA's preference for static models.
However, AI algorithms, especially those built on deep learning, are dynamic: they evolve as they analyze more data. That adaptability, central to AI's value, challenges a regulatory framework tailored to static products. And as algorithms are updated and enhanced, it remains uncertain how the FDA will handle the approval process for each new iteration.
The Balancing Act: Innovation vs. Regulation
To balance innovation with regulatory compliance in AI-driven healthcare solutions, companies should collaborate actively with regulatory bodies to keep pace with changing expectations. They should implement strong data management protocols that make ownership and access rights clear, since those rights are key to advancing AI technology. Transparency and clarity in AI algorithms build trust and aid regulatory comprehension. And because AI is ever-evolving, version control and iterative development processes are needed to support effective regulatory monitoring.
Emphasizing clinical validation and evidence collection is vital to demonstrating the safety and effectiveness of AI solutions. Working closely with industry peers and adhering to established standards can help shape regulatory frameworks and encourage best practices. Lastly, businesses should embrace flexible business models that can navigate diverse regulatory demands and payment structures across markets.
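As a concrete illustration of the version-control and evidence-collection points above, here is a minimal sketch of model version tracking for regulatory traceability; the ModelVersion fields, the fingerprint_dataset helper, and the metric values are assumptions made for illustration, not a prescribed FDA format.

```python
# Minimal sketch of an audit trail for AI model iterations.
# Field names, the hash-based data fingerprint, and all values are
# illustrative placeholders, not a regulatory requirement.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelVersion:
    version: str                    # e.g. "2.1.0"
    released: str                   # release date, ISO format
    training_data_fingerprint: str  # hash of the training dataset manifest
    validation_metrics: dict        # results from clinical validation (placeholders here)
    change_summary: str             # what changed and why

def fingerprint_dataset(manifest: list[str]) -> str:
    """Hash a sorted list of training-file identifiers so any data change is visible."""
    return hashlib.sha256("\n".join(sorted(manifest)).encode()).hexdigest()[:16]

audit_log: list[ModelVersion] = []

audit_log.append(ModelVersion(
    version="2.1.0",
    released="2024-06-01",
    training_data_fingerprint=fingerprint_dataset(["site_a_2023.csv", "site_b_2023.csv"]),
    validation_metrics={"sensitivity": 0.94, "specificity": 0.91},  # placeholder values
    change_summary="Retrained with additional site data; decision thresholds unchanged.",
))

# An internal or regulatory reviewer can reconstruct exactly what shipped and when.
print(json.dumps([asdict(v) for v in audit_log], indent=2))
```

Keeping a record like this for every iteration is one way to give reviewers a stable artifact to examine even as the underlying model evolves.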