
WHO’s move for regulation of AI in healthcare highlights risks: Report


New Delhi, Oct 25: The World Health Organization’s (WHO) recent considerations for the regulation of artificial intelligence (AI) in healthcare highlight the potential challenges associated with using AI tools in the sector, according to a report on Wednesday.

The WHO regulatory considerations touch on the importance of establishing safety and effectiveness in AI tools, making systems available to those who need them, and fostering dialogue among those who develop and use AI tools.

The WHO recognises the potential of AI in healthcare, as it could improve existing devices or systems by strengthening clinical trials, improving diagnosis and treatment, and supplementing the knowledge and skills of healthcare professionals.

The report by GlobalData, a data and analytics company, notes that AI technologies have been, and continue to be, deployed quickly, not always with a full understanding of how they will perform in the long run, which could be harmful to healthcare professionals or patients.

“AI has already improved several devices and systems, and there are so many benefits of AI. However, there are risks too with these tools and the rapid adoption of them,” said Alexandra Murdoch, Senior Analyst at GlobalData, in a statement.

AI systems in medicine and healthcare often have access to personal and medical information, so regulatory frameworks should be in place to ensure privacy and security. There are a number of other potential challenges with AI in healthcare, such as unethical data collection, cybersecurity risks, and the amplification of biases and misinformation.

A recent example of biases in AI tools comes from a study conducted by Stanford University. The study results revealed that some AI chatbots provided responses that perpetuated false medical information about people of colour.

The study ran nine questions through four AI chatbots, including OpenAI’s ChatGPT and Google’s Bard. All four of the chatbots used debunked race-based information when asked about kidney and lung function.

“The use of false medical information is deeply concerning and could lead to a number of issues, including misdiagnoses or improper treatment for patients of colour,” Murdoch said.

The WHO has released six areas for regulation of AI for health, citing a need to manage the risks of AI amplifying biases in training data. The six areas for regulation are transparency and documentation; risk management; validating data and being clear about the intended use of AI; a commitment to data quality; privacy and data protection; and fostering collaboration.

“With these areas for regulation outlined, governments and regulatory bodies can follow them and hopefully develop some regulations to protect healthcare professionals and patients, and also use AI to its full potential in healthcare,” Murdoch said.

IANS

