
EU lawmakers call for measures to limit ChatGPT, others


OpenAI and ChatGPT logos are seen in this illustration taken February 3, 2023. REUTERS/Dado Ruvic/Illustration

European Union lawmakers on Monday urged world leaders to meet and agree on measures to limit the development of advanced artificial intelligence systems such as ChatGPT.

The 12 MEPs, all of whom are working on EU technology legislation, urged US President Joe Biden and European Commission President Ursula von der Leyen to convene the conference, and said AI companies should be held more accountable.

The announcement comes just weeks after Twitter CEO Elon Musk and more than 1,000 technology leaders called for a six-month pause on the development of systems more powerful than OpenAI’s newest version of ChatGPT, which can mimic humans and generate text and graphics from prompts.

The Future of Life Institute published an open letter in March warning that AI could spread misinformation at an unprecedented rate and that, if left unchecked, machines could “outnumber, outsmart, obsolete, and replace” humans.

The MEPs said that they disagreed with some of the “more alarmist statements” in the FLI letter.

“We are nevertheless in agreement with the letter’s core message: with the rapid evolution of powerful AI, we see the need for significant political action,” they wrote.

The letter urged both democratic and “non-democratic” countries to consider future governance frameworks and to exercise restraint in their pursuit of very powerful AI.

Last week, China’s cyberspace regulator issued draft rules for overseeing generative AI services, requiring companies to submit security assessments to authorities before launching them to the public.

The Biden administration has also sought public feedback on potential accountability measures for AI systems, amid growing concerns about their impact on national security and education.

Nearly two years ago, the European Commission proposed draft rules for an AI Act, under which AI tools would be classified according to their perceived level of risk, ranging from low to unacceptable.
