ChatGPT can leak secret corporate data

Israeli cybersecurity company Team8 has warned that generative artificial intelligence (AI) tools such as the ChatGPT chatbot could put confidential client information and trade secrets at risk. According to the company, the rapid adoption of chatbots and other AI-based tools could leave businesses exposed to data leaks and lawsuits.

Big technology companies such as Microsoft and Alphabet are racing to add generative AI capabilities to their search engines, but data that users enter into these systems as prompts could end up being used to train the underlying models, exposing confidential or private information.

ChatGPT itself recently suffered a glitch that exposed some users’ requests to other users. Team8’s report flags three additional “high-risk” issues arising from the integration of generative AI tools and stresses the growing threat of confidential information flowing through third-party applications such as Bing and the Microsoft 365 tools.

The report was endorsed by Michael Rogers, former head of the United States National Security Agency and United States Cyber Command, and its contributors included dozens of chief information security officers at American companies. A Microsoft representative said that “Microsoft encourages transparent discussion of evolving cyber risks in the security and artificial intelligence communities.”

Before adopting third-party generative AI tools such as ChatGPT, companies should assess the risks involved. These tools can deliver real benefits, for example in customer service, but they can also open the door to data breaches and legal exposure.

As a result, companies must take more active steps to protect their data, starting with making sure confidential information is never passed to third-party applications. As the use of generative AI continues to grow, prioritizing such safeguards will only become more important.
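
One practical safeguard along these lines, sketched below in Python, is to redact obviously sensitive values from text before it is sent to an external chatbot API. The patterns and the redact() function here are illustrative assumptions, not anything from Team8’s report; a real deployment would rely on a dedicated data-loss-prevention or PII-detection service rather than a handful of regular expressions.

import re

# Illustrative patterns only (an assumption for this sketch); a real
# deployment would use a dedicated DLP or PII-detection service.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder so the
    original value never leaves the company's network."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarize this email from jane.doe@corp.example "
              "(service key sk-abcdefghijklmnopqrstuvwxyz).")
    print(redact(prompt))  # only the redacted text would go to the API

Running redact() on every prompt before it leaves the corporate network keeps identifiers such as email addresses and API keys out of third-party logs and, potentially, out of future model training data.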
