Published on December 18, 2023, 11:29 am

Artificial Intelligence (AI) has been making significant advancements in the healthcare sector, particularly with generative AI models. However, a group of researchers has raised concerns about the dominance of big tech companies in this field. In an article published in Nature, they argue that medical professionals should take the lead in developing and deploying generative AI to protect privacy and safety, rather than leaving it in the hands of commercial interests.

Tech giants like Google and Microsoft have been at the forefront of developing generative AI for healthcare. Google recently introduced MedLM, a family of healthcare-specific AI models available through its Vertex AI platform. The models are built on Med-PaLM 2, the second iteration of Google's large medical language models, which is designed to answer specialist-level medical questions.
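For illustration, here is a minimal sketch of how a MedLM model might be queried through the Vertex AI Python SDK. The model identifier "medlm-medium" and the client calls follow the text-model interface Vertex AI exposed at the time; treat the exact names as assumptions and consult Google's documentation for the current API.

```python
# Hedged sketch: querying a MedLM model via the Vertex AI SDK.
# The model ID "medlm-medium" and project settings are assumptions.
import vertexai
from vertexai.preview.language_models import TextGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")

model = TextGenerationModel.from_pretrained("medlm-medium")
response = model.predict(
    "Question: What are first-line treatments for type 2 diabetes?",
    temperature=0.0,        # deterministic output for clinical use
    max_output_tokens=256,
)
print(response.text)
```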

Microsoft, on the other hand, unveiled Medprompt, a prompting strategy that enables OpenAI's general-purpose GPT-4 model to outperform specialized models such as Med-PaLM 2 on medical question-answering benchmarks. Earlier this year, Microsoft had already showcased GPT-4's potential for a range of medical tasks.
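Medprompt combines dynamic few-shot example selection, self-generated chain-of-thought, and choice-shuffle ensembling, in which the answer options are reordered across several runs and the majority answer wins. The sketch below illustrates only the ensembling step; ask_model is a hypothetical stand-in for whatever LLM call is available.

```python
# Hedged sketch of choice-shuffle ensembling, one of Medprompt's
# components. `ask_model` is hypothetical: it should send a prompt to
# an LLM and return the letter of the chosen option, e.g. "B".
import random
from collections import Counter

def choice_shuffle_ensemble(ask_model, question, choices, n_votes=5):
    votes = []
    for _ in range(n_votes):
        shuffled = random.sample(choices, len(choices))
        options = "\n".join(
            f"{chr(65 + i)}. {c}" for i, c in enumerate(shuffled)
        )
        prompt = f"{question}\n{options}\nAnswer with one letter:"
        letter = ask_model(prompt).strip()[0].upper()
        # Map the letter back to the option text so that votes are
        # comparable across different shuffles.
        votes.append(shuffled[ord(letter) - 65])
    return Counter(votes).most_common(1)[0][0]
```

Because each run sees the options in a different order, an answer that wins the vote is less likely to have been chosen merely because of where it appeared in the list.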

Despite these impressive developments, the researchers highlight risks in relying solely on proprietary large language models (LLMs), such as those behind ChatGPT. One concern is becoming dependent on models that are difficult to evaluate and that can be changed or discontinued without notice, an unpredictability that could undermine patient care, privacy, and safety.

Another issue lies in the inherent limitations of LLMs themselves. These models occasionally hallucinate, producing convincingly worded but false output. And updating their knowledge when circumstances change, such as when a new virus emerges, requires costly retraining.

Moreover, training these models on medical records poses significant privacy risks: sensitive information can be memorized during training and later reconstructed through carefully crafted prompts. This is especially worrying for individuals with rare diseases or conditions, whose records are easier to re-identify.

Additionally, LLMs based on vast amounts of internet data can perpetuate biases related to various factors like gender, race, disability, and socioeconomic status. Even if external parties have access to the underlying models, evaluating their safety and accuracy remains a challenge.

To address these concerns, researchers propose a more transparent and collaborative approach. They suggest that healthcare institutions, academic researchers, physicians, patients, and technology companies should work together worldwide to develop open-source LLMs specifically for healthcare applications.

This consortium-led approach would involve creating an open-source foundation model from publicly available data. Consortium members would then contribute knowledge and best practices to refine the model further, incorporating patient-level data from their respective institutions.
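One way such a refinement step could work in practice is parameter-efficient fine-tuning: each institution trains small adapter weights on its own records and shares only those adapters, never the raw data. The sketch below uses the Hugging Face transformers and peft libraries; the base model name "open-medical-llm" is a placeholder, and this is one possible design rather than the consortium's stated method.

```python
# Hedged sketch: institution-local LoRA fine-tuning of a shared
# open-source base model. "open-medical-llm" is a placeholder name.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "open-medical-llm"  # hypothetical consortium foundation model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters touch only a small fraction of the weights,
# keeping each institution's contribution compact and auditable.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# ... train here on local, de-identified patient-level data ...

# Only the adapter weights leave the institution, not the records.
model.save_pretrained("institution_adapter/")
```

Keeping raw records on-site while pooling adapter weights is one way to reconcile collaborative model-building with the privacy requirements discussed below.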

Adopting such an open approach offers several advantages over proprietary LLMs in the healthcare domain. It promotes reliability and robustness while allowing models to be evaluated in a shared, transparent way. Furthermore, it makes compliance with privacy regulations and other requirements governing sensitive medical information easier to demonstrate.

In conclusion, while generative AI holds great promise for revolutionizing healthcare practices, it is crucial to consider the potential risks associated with allowing big tech companies to have full control over its development. By fostering collaboration among stakeholders within the healthcare sector and beyond, we can create open-source LLMs that prioritize privacy, safety, and inclusivity in the pursuit of improved healthcare outcomes for all.
