WHO Cautions against AI usage within the Health Sector


You are probably guilty of taking to Google to self-diagnose. You suspect you have a certain disease or condition, run an online search for the symptoms, and boom, you start manifesting those symptoms and, seemingly, the disease itself.

Well, perhaps that is a bit dramatic. Nonetheless, most of us turn to Google before seeking the professional advice of a trained health expert. In recent years, even more powerful technologies have disrupted professions across industries, most notably generative artificial intelligence (AI).

The healthcare sector has not been spared, although it has yet to be as seriously disrupted as other industries. Where people once relied on Google and the like to self-diagnose, they are now learning to use AI to run their own diagnoses.

Given the immense power AI wields, an incorrect diagnosis could send a patient far down the wrong path in their quest to treat a condition or illness. Generative AI is a tool, and like any other tool, it can do the wrong things very efficiently when placed in the wrong hands.

World Health Organization Raises the Alarm

WHO has raised the alarm, citing grave risks posed by AI. To be precise, the organization is seriously concerned about large multi-modal models (LMMs). It says the technology is still quite new and that more long-term data is needed before its outputs can be fully relied upon.

In plain terms, an LMM is a generative AI model that can ingest vast amounts of data from several sources and generate outputs in the form of text, images, and video. This technology could be powerful within certain key areas of the healthcare industry, such as:

  1. Executing clerical tasks
  2. Drug synthesis
  3. Patient-guided use
  4. Medical training for healthcare workers

Indeed, both tech and health experts agree that LMMs’ ability to mimic human behavior and to solve problems interactively makes them an invaluable tool.

However, WHO warns that the technology is still in its infancy and may well, at some point, output inaccurate recommendations. That is especially likely if it is fed inaccurate data, or if it encounters a novel situation not captured in its training data or in prior simulations of the problem.

As cited by sections of the media, WHO warns, “As LMMs gain broader use in health care and medicine, errors, misuse, and ultimately harm to individuals are inevitable.”

WHO Puts Guardrails on AI Adoption

To mitigate the possible harm stemming from reliance on AI, WHO has laid out measures to address the associated risks, setting key policies for healthcare providers to adhere to. Some of these measures include:

  1. Guaranteeing patients’ privacy protection and the choice to opt out of AI-run healthcare services.
  2. Applying cybersecurity standards to any AI technology adopted by the healthcare industry.
  3. Fostering collaboration between healthcare experts and the technologists developing and designing AI systems for the healthcare industry. WHO also wants patients to be involved in this development.

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Jeremy Farrar, Chief Scientist at WHO.
