Published on December 13, 2023, 11:29 am

Google’s MedLM: Utilizing Generative AI to Assist Healthcare Workers

Google has announced its new initiative, MedLM, which aims to utilize generative AI models to assist healthcare workers in completing various tasks. The company has developed Med-PaLM 2, a model that performs at an “expert level” on medical exam questions. MedLM features two models: one larger model designed for complex tasks and a smaller, fine-tunable model ideal for scaling across tasks.

Google acknowledges the importance of finding the most effective model for each specific task, as use cases can differ significantly. For instance, summarizing conversations may require a different model than searching through medications. The company is working closely with organizations like HCA Healthcare and BenchSci to pilot these models in real-world scenarios. HCA Healthcare has been using MedLM to help physicians draft patient notes at emergency department hospital sites, while BenchSci has integrated the model into its evidence engine for biomarker identification and classification.

The competition in the healthcare AI market is heating up, with major players such as Microsoft and Amazon also vying for dominance. By 2032, this market could be worth tens of billions of dollars. Amazon recently introduced AWS HealthScribe, which employs generative AI to transcribe and analyze patient-doctor conversations. Likewise, Microsoft is piloting various AI-powered healthcare products, including medical assistant apps powered by large language models.

However, there are valid concerns surrounding the accuracy and reliability of AI in healthcare. Past experiences have highlighted issues with claims made by companies like Babylon Health that their AI systems can outperform doctors in disease diagnosis. IBM had to sell its Watson Health division at a loss due to technical problems compromising customer partnerships.

Generative models like Google’s MedLM family represent a step forward in sophistication compared to previous technologies. Nevertheless, studies have shown that despite their advancements, these models are not particularly accurate when it comes to answering healthcare-related questions, even basic ones.

In one study, researchers posed questions about eye conditions and diseases to ChatGPT and Google’s Bard chatbot, and the models’ responses were often wildly incorrect. In some cases, ChatGPT generated cancer treatment plans filled with potentially deadly errors. Additionally, models like ChatGPT and Bard have been known to propagate racist and debunked medical ideas when asked about kidney function, lung capacity, or skin-related matters.

In October, the World Health Organization (WHO) issued a warning about the risks associated with using generative AI in healthcare. It highlighted concerns such as the potential for generating harmful wrong answers, disseminating health-related disinformation, and unintentionally revealing sensitive health data. While WHO acknowledges the benefits of technology in healthcare, including generative AI systems, it cautions against hasty adoption without proper testing due to the risk of errors by healthcare workers and harm to patients.

Google reiterates its commitment to the responsible deployment of generative AI tools in healthcare and emphasizes its focus on enabling professionals to work safely with the technology. The company aims not only to advance healthcare but also to ensure that these benefits are accessible to everyone.

With ongoing collaboration between technology companies and healthcare experts, it is hoped that advancements in AI will enable significant improvements in patient care while maintaining high standards of accuracy and accountability.