Your Employees Are Unknowingly Exposing Company Data to Unauthorized AI
A Message from Nexigen’s Chief Information Officer
Unauthorized AI is emerging as a significant threat to data security. Legal documents, HR data, source code, and other sensitive corporate information are being fed into unlicensed, publicly available AIs at an alarming rate. This trend is creating a mounting “shadow AI” problem for IT leaders.
Recent studies have revealed that employees across many organizations are using unauthorized AI models extensively, without the knowledge of their CIOs and CISOs. They are sharing company legal documents, source code, and employee information through non-corporate accounts on tools such as ChatGPT and Google Gemini, which poses significant risks.
According to research from Cyberhaven Labs, about 74% of ChatGPT usage at work occurs through non-corporate accounts, potentially allowing the AI to use or train on that data. The Cyberhaven Q2 2024 AI Adoption and Risk Report, based on the actual AI usage patterns of 3 million workers, also found that over 94% of workplace use of Google's Gemini and Bard comes from non-corporate accounts.
The report highlights that nearly 83% of all legal documents shared with AI tools are shared through non-corporate accounts, while approximately half of all source code, R&D materials, and HR records are being fed into unauthorized AIs. The amount of data entered into all AI tools increased nearly fivefold between March 2023 and March 2024.
Where Does the Data Go?
Many users may not fully understand what happens to their company’s data once it is shared with an unlicensed AI. For instance, ChatGPT’s terms of use state that while users retain ownership of their content, OpenAI may use that content to improve its services, including to train its models. Users can opt out of this, but the default settings pose risks.
So far, there have been no high-profile leaks of major company secrets by public AIs, but security experts are concerned about the potential for data breaches once an AI ingests sensitive information. OpenAI’s recent announcement of a new Safety and Security Committee underscores the seriousness of these concerns.
Assessing the risk of sharing confidential or sensitive information with publicly available AIs is challenging. While companies like Google and OpenAI are unlikely to leak sensitive business data, there are few regulations governing what AI developers can do with the data provided by users. This lack of regulation could be exploited by second- and third-tier AI developers, who may be less scrupulous or lack robust cybersecurity measures.
Risky Behavior
Sharing company or customer data with any unauthorized AI creates significant risks. Whether the AI model trains on the data or surfaces it to other users, the information now exists outside company control. Organizations should consider signing licensing agreements with AI vendors that include data-use restrictions, so employees can experiment with AI safely.
One major problem with shadow AI is that users often don’t read the privacy policies or terms of use before uploading company data. This lack of awareness about data handling practices adds to the risk.
Training and Security
To mitigate these risks, organizations need comprehensive education programs and strict access controls on sensitive data. Establishing an AI acceptable use policy is essential. Employees should be educated about the dangers of granting wide access to sensitive documents, as even seemingly minor actions can lead to significant security breaches.
Using AI tools, even licensed ones, necessitates robust data management practices. Access controls should limit employee access to only the information necessary for their roles. Implementing longstanding security and privacy best practices is crucial in the AI age.
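To illustrate the kind of guardrail such controls enforce, here is a minimal sketch of a pre-submission check that flags obviously sensitive content before it leaves the company. The patterns and function names are hypothetical examples; a real deployment would rely on a dedicated DLP or CASB product rather than a script like this.

```python
import re

# Hypothetical patterns a data-loss-prevention (DLP) check might flag before
# text is pasted into an external AI tool. Illustrative only, not exhaustive.
SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Private key material": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Internal classification": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def flag_sensitive_content(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft_prompt = "Please summarize this CONFIDENTIAL merger agreement..."
    findings = flag_sensitive_content(draft_prompt)
    if findings:
        print("Blocked: prompt contains", ", ".join(findings))
    else:
        print("No sensitive patterns detected; prompt may proceed.")
```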
Deploying AI tools is a stress test of a company’s security and privacy protocols, and it underscores the need for meticulous planning and risk management. As AI continues to evolve, it can be either the best ally or the worst enemy of security and privacy officers, so a cautious, well-considered approach to AI deployment is crucial.
Conclusion
While there is significant pressure to deploy AI quickly, it is vital to ensure that basic controls and policies are in place to protect sensitive data. Taking the time to establish robust security measures can prevent potentially devastating breaches and preserve the integrity of company information.
Request a free consultation now to safeguard your company’s data with cutting-edge AI security solutions.
Get Started Now
Ready to integrate Nexigen into your IT and cybersecurity framework?
Schedule a 30-minute consultation with our expert team
Breathe. You’ve got IT under control.
Complete the form below, and we’ll be in touch to schedule a free assessment.