
International Group of Health Experts Raise Alarm About Dangers of AI
  • Posted May 10, 2023

Artificial intelligence (AI) research and development should stop until its use and technology are properly regulated, an international group of doctors and public health experts said.

Certain types of AI pose an "existential threat to humanity," the experts wrote in the May 9 issue of the journal BMJ Global Health. The group -- led by Dr. Frederik Federspiel of the London School of Hygiene and Tropical Medicine in the United Kingdom -- included experts from the United States, Australia, Costa Rica and Malaysia.

AI has transformative potential for society, including in medicine and public health, but also can be misused and may have several negative impacts, they said.

The experts warned that AI's ability to rapidly clean, organize and analyze massive data sets, which may include personal data and images, makes it possible for the technology to be used to manipulate behavior and subvert democracy.

There are already examples, they noted. AI was used in this way in the 2016 U.S. presidential election; in the 2017 French presidential election; and in elections in Kenya in 2013 and 2017, the experts reported.

"When combined with the rapidly improving ability to distort or misrepresent reality with deep fakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts,"the authors warned.

AI-driven surveillance can be used to control and oppress people, they added. China's Social Credit System is an example, combining facial recognition software and analysis of "big data" repositories of people's financial transactions, movements, police records and social relationships, according to the report.

At least 75 countries have been expanding these types of systems, including liberal democracies, the team said.

Another area of threat is in the development of Lethal Autonomous Weapon Systems (LAWS). Attached to small mobile devices such as drones, these can locate, select and engage human targets without human supervision. This could kill people "at an industrial scale," the authors explained.

Over the next decade, widespread use of AI technology could also cost tens to hundreds of millions of jobs.

"While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behavior,"the authors stated.

Increasing automation tends to shift income and wealth to the owners, and contributes to inequitable wealth distribution across the globe, they said.

"Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health,"the authors wrote.

Self-improving, general-purpose AI, known as artificial general intelligence (AGI), poses a particularly serious threat because it could learn and perform the full range of human tasks, the experts explained in a journal news release.

"We are now seeking to create machines that are vastly more intelligent and powerful than ourselves," they wrote. "The potential for such machines to apply this intelligence and power -- whether deliberately or not -- in ways that could harm or subjugate humans -- is real and has to be considered."

If realized, the connection of AGI to the internet and the real world -- including via vehicles, robots, weapons and digital systems -- could well represent the "biggest event in human history," they said.

The window of opportunity to avoid serious and potentially existential harms is closing, the authors cautioned.

"The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimize risk and harm and maximize benefit,"they wrote.

This will require international agreement and cooperation and avoiding an AI "arms race," the team suggested.

"If AI is to ever fulfill its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances,"the experts concluded. "This includes ensuring transparency and accountability of the parts of the military--corporate industrial complex driving AI developments and the social media companies that are enabling AI-driven, targeted misinformation to undermine our democratic institutions and rights to privacy."

More information

Read about the National Artificial Intelligence Initiative.

SOURCE: BMJ Global Health, news release, May 9, 2023

HealthDay
Health News is provided as a service to Haworth Apothecary site users by HealthDay. Neither Haworth Apothecary nor its employees, agents, or contractors review, control, or take responsibility for the content of these articles. Please seek medical advice directly from your pharmacist or physician.
Copyright © 2024 HealthDay All Rights Reserved.