Britain is cracking down on artificial intelligence companies that acquire data without permission.
Britain’s national information watchdog has warned AI firms that they might face fines if they fail to obtain consent before collecting people’s personal data.
According to the information commissioner, organisations that use generative AI technology are subject to data protection rules, which means they must obtain consent or demonstrate a legitimate interest in collecting personal information.
Regulators are said to be increasingly concerned about the privacy implications of the generative AI boom, which has been pioneered by startups such as OpenAI and its popular ChatGPT model.
The warning covers not just the personal data collected directly from users of large language models such as ChatGPT, but also the companies’ scraping of massive amounts of data from across the internet, some of it personal.
Companies such as Amazon, JPMorgan, and Accenture have barred their employees from using ChatGPT out of concern over how the information they enter might be used.
According to OpenAI CEO Sam Altman, generative AI could represent an “existential risk” to humans.
The crackdown comes after Rishi Sunak met with executives from three of the world’s largest AI firms, OpenAI, Google-backed Anthropic, and DeepMind, last week, amid growing concerns about the technology’s impact on society.
The Prime Minister stated that the technology needed “guardrails” and that he had discussed the risks of deception as well as broader “existential” threats.
The competition watchdog has already opened a review of the AI sector, including an examination of the technology’s safety implications.
In March, Italy’s data protection authority temporarily banned ChatGPT, finding there was “no legal basis that justifies the massive collection and storage of personal data.”