Samsung’s ChatGPT incident made headlines
Posted: Mon Feb 10, 2025 4:08 am
Recently, many in the technology and security communities have been raising alarm bells about the lack of understanding and sufficient regulatory guardrails around the use of AI technologies. We are already seeing concerns about the reliability of the results produced by AI tools, intellectual property and personal data leaks, and privacy and security breaches.
2023 will be remembered as the year the AI era began, driven by the technology everyone is talking about: ChatGPT.
Samsung’s ChatGPT incident made headlines after the tech giant unwittingly leaked its own secrets to an AI service. Samsung isn’t alone: a Cyberhaven study found that 4% of employees have put sensitive corporate data into a large language model (LLM). What many don’t know is that when they feed a model their corporate data, the AI provider can reuse that data elsewhere.
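To make the leak risk concrete, here is a minimal sketch of a pre-submission filter an enterprise might place in front of an external LLM. The patterns and the scrub_prompt helper are illustrative assumptions, not any vendor’s actual tooling; a real deployment would rely on a dedicated data loss prevention engine.

```python
import re

# Illustrative patterns for data that should never leave the company.
# These regexes sketch the idea; they are not production-grade detection.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Redact known-sensitive substrings before a prompt is sent to an
    external LLM, since anything submitted may be retained or reused."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize: contact alice@corp.com, key sk-abcdefghijklmnopqrstuv"
    print(scrub_prompt(raw))
    # Summarize: contact [REDACTED:email], key [REDACTED:api_key]
```

Redaction only narrows the channel, of course; it is no substitute for contractual controls over whether a provider may train on submitted data.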
“Within days of ChatGPT’s launch, we identified multiple threat actors on the dark web and special-access forums sharing not only vulnerabilities but also functional malware, social engineering tutorials, money-making schemes, and more, all made possible by using ChatGPT,” reports cybersecurity research firm Recorded Future. The revelation suggests the tool is already helping to better equip cybercriminals.
As for privacy, when a person signs up for a tool like ChatGPT, the service can access their IP address, browser settings, and browsing activity, just as modern search engines can. But the risk is higher because “without consent, a person’s political views or sexual orientation can be revealed, which can lead to embarrassing or even career-damaging information being released,” says Jose Blaya, director of engineering at Private Internet Access.
It is clear that we need better regulations and standards for implementing these new AI technologies. What is missing from the discussion, however, is data governance and management, which play a critical role in the safe adoption of AI in the enterprise.