An Israeli venture firm has warned in a report that companies using generative artificial intelligence tools like ChatGPT may be putting personal customer information and trade secrets at risk.
According to research from Team8, which was shared with Bloomberg News prior to publication, widespread adoption of new AI chatbots and writing tools could leave organisations exposed to data leaks and lawsuits.
The concern is that hackers could use the chatbots to gain access to sensitive corporate information or take harmful actions against the firm. Concerns have also been raised that confidential information fed into chatbots now could be used by AI companies in the future.
Microsoft and Alphabet, the parent company of Google, are racing to add generative AI capabilities to chatbots and search engines, training their models on data scraped from the Internet to provide customers with a one-stop shop for their queries. According to the report, if these tools are fed confidential or private data, it will be extremely difficult to remove the information.
“Enterprise use of GenAI may result in access and processing of sensitive information, intellectual property, source code, trade secrets, and other data, through direct user input or the API, including customer or private information and confidential information,” the report said, classifying the risk as “high.”
It deemed the risks “manageable” if appropriate controls are put in place.
Contrary to earlier claims that such queries could potentially be read by others, the Team8 paper emphasises that chatbot queries are not incorporated into large language models to train the AI.
Ann Johnson, a corporate vice president at Microsoft, was involved in the report’s creation. Microsoft has invested billions of dollars in OpenAI, the company that created ChatGPT.
“Microsoft encourages transparent discussion of evolving cyber risks in the security and AI communities,” a Microsoft representative said.
The report also includes contributions from dozens of chief information security officers at U.S. corporations, and it was endorsed by Michael Rogers, the former head of the U.S. National Security Agency and U.S. Cyber Command.