Rising Enterprise GenAI Use Could Pose Risks to Corporate Data

By Zbyněk Sopuch, CTO, Safetica

The enterprise’s love affair with third-party GenAI productivity tools is going strong, even as the increased usage adds pressure to already stretched corporate security teams. In the United States, around 75 percent of leaders and managers and 51 percent of frontline employees use GenAI for productivity.1 Employees typically access GenAI tools through a browser, using the technology for productivity, research, and content creation. Security issues arise when GenAI uses the inputted information to train itself and to answer general queries. With this in mind, corporate IT departments should review GenAI security now, especially with Gartner predicting that by 2030 more than 40 percent of global organizations will suffer security incidents due to AI tools.2

Browsers, Third-Party Tools, and Data Leaks

The browser powers much of enterprise GenAI use because it is an easy-to-use gateway to the Internet. Yet the browser lacks many enterprise-grade protections, such as endpoint security and data loss prevention controls. To understand the security issues with third-party GenAI tools, it helps to see the browser as operating in an information Wild West, outside the reach of endpoint security or data loss prevention enforcement.

The typical corporate user does not plan to expose corporate data, but does not understand where the inputted information goes. GenAI platforms are programmed to learn continuously, so any data they receive may be used for self-training and in answers to other users’ queries. GenAI does not differentiate between “safe” and sensitive data inputs, meaning customer or employee PII, intellectual property, and insider corporate knowledge are all treated as regular information.

GenAI Data Leaks in the Real World

In early 2023, Amazon’s internal legal team noticed that ChatGPT’s outputs on coding questions read the same as answers to problems given to prospective Amazon employees. The company’s employees had been pasting data snippets into ChatGPT for tasks like debugging or confirming a correct answer. Unbeknownst to Amazon, ChatGPT then trained itself on the data and included it in answers to queries, including those from outside the company. Similarly, in 2023 Samsung experienced three leaks of confidential information as a result of employees using GenAI. The exposed information included source code, detailed meeting notes, and hardware data.

As a result of the leaks, Amazon immediately instituted a policy prohibiting the sharing of confidential information with third-party GenAI. Samsung banned GenAI tools for employees, and by December 2025 only certain Samsung departments can use third-party GenAI. Both companies discovered that it is very difficult to delete information from third-party GenAI services once it has been inputted.

The Rise of Shadow GenAI

Many companies may not have the luxury of discovering and stopping information from flowing into GenAI because they are not aware that their employees are using the tools. It is estimated that workers at 90 percent of companies are using chatbots without letting their IT departments know.3 In addition to data appearing on GenAI platforms, shadow AI use can leave the company vulnerable to more intricate attacks, such as data poisoning, prompt injection, phishing, and other threats. Shadow AI also carries compliance risks, with the lack of governance leaving companies vulnerable to fines for not following data privacy protocols.

Malware and Phishing

GenAI tools can be used by threat actors to generate scripts, obfuscate code, or craft social engineering content designed to discover network vulnerabilities. There have not been many cases of malware spreading through GenAI use, but such instances are expected to rise in the coming years.

GenAI can also enable hyper-personalized phishing by generating context-aware emails, deepfakes, or even voice clones. These convincing fakes are harder for employees to spot, leading to potential unauthorized access to networks or intellectual property.

The Best Company Response to GenAI Use

There are steps a company can take to reap the benefits of third-party GenAI tools while keeping its data and compliance in check.

Establish Policies

First, the company should develop and enforce policies on GenAI platform use. The guidelines should include allowable platforms, types of data that can be input, and the departments that are allowed to use these tools. The “Rulebook” should also address regulatory compliance, with input from the legal and security teams on best practices.
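Such a rulebook is easiest to enforce when it is also captured in machine-readable form that security tooling can consume. A minimal sketch in Python, where the platform names, data classes, and departments are purely illustrative assumptions:

```python
# Hypothetical machine-readable GenAI usage policy; all values are illustrative.
GENAI_POLICY = {
    "allowed_platforms": ["ChatGPT Enterprise", "Internal LLM"],
    "allowed_data_classes": ["public", "internal-general"],
    "allowed_departments": ["marketing", "research"],
}

def is_permitted(platform: str, data_class: str, department: str) -> bool:
    """Check a proposed GenAI interaction against the policy."""
    return (
        platform in GENAI_POLICY["allowed_platforms"]
        and data_class in GENAI_POLICY["allowed_data_classes"]
        and department in GENAI_POLICY["allowed_departments"]
    )

# PII is not an allowed data class, so this interaction is refused.
print(is_permitted("ChatGPT Enterprise", "pii", "marketing"))  # False
```

Keeping the policy in one structure like this means the same source of truth can drive both employee-facing guidelines and automated enforcement.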

Build a Zero-Trust Architecture

A company should consider a zero-trust architecture to secure access to GenAI, especially browser-based tools. This step could include multi-factor authentication, least-privilege access, data encryption, and real-time monitoring to prevent data loss or unauthorized sharing. The ability to block risky interactions, such as large data uploads, should be enabled. Finally, the company should conduct regular audits of all data flows.
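The blocking of risky interactions described above can be sketched as a simple egress check. The domain list and size threshold below are illustrative assumptions, not values from any specific product:

```python
# Minimal sketch: flag oversized uploads to GenAI services at an egress
# checkpoint. The domains and the 64 KB limit are hypothetical examples.

GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
MAX_UPLOAD_BYTES = 64 * 1024  # illustrative per-request limit

def should_block(host: str, body: bytes) -> bool:
    """Return True if a request to a GenAI service exceeds the upload limit."""
    return host in GENAI_DOMAINS and len(body) > MAX_UPLOAD_BYTES

# A large paste to a GenAI chat endpoint is blocked; a short prompt is not.
print(should_block("chat.openai.com", b"x" * 100_000))   # True
print(should_block("chat.openai.com", b"short prompt"))  # False
```

In practice this check would live in a secure web gateway or browser extension, where it can also log the attempt for the audit trail mentioned above.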

Start Classifying Sensitive Data

A company should classify sensitive information and implement controls to prevent it from being included in GenAI prompts. Similarly, models, datasets, and integrations should be monitored to make sure no data poisoning has taken place, for example from flawed commands copied out of GenAI outputs.
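A prompt-screening control along these lines can be sketched with a few pattern matchers. The patterns below are illustrative assumptions; production DLP classifiers are far more thorough:

```python
import re

# Minimal sketch: scan a prompt for common PII patterns before it is sent
# to a third-party GenAI tool. Patterns are illustrative, not exhaustive.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

hits = classify_prompt("Customer jane@example.com, SSN 123-45-6789, needs a refund.")
print(hits)  # ['email', 'ssn']
```

A non-empty result could block the submission outright or prompt the user to redact the flagged fields before continuing.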

Monitor AI Outputs and Mitigate Malware/Phishing Risks

Requiring reviews of AI-generated code or content for malware or inaccurate information before it is used can help mitigate threats.
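A first-pass automated screen can route AI-generated snippets to human reviewers. The pattern list below is an illustrative assumption, not a complete malware detector:

```python
import re

# Minimal sketch: screen AI-generated Python snippets for constructs that
# warrant human review. The list is illustrative and deliberately small.
SUSPICIOUS = [
    r"\beval\s*\(",        # dynamic evaluation of strings
    r"\bexec\s*\(",        # dynamic execution of code
    r"base64\.b64decode",  # possible payload decoding
    r"subprocess\.",       # shell/process execution
    r"https?://\S+",       # unexpected remote URLs
]

def flag_snippet(code: str) -> list[str]:
    """Return the suspicious patterns found in an AI-generated snippet."""
    return [p for p in SUSPICIOUS if re.search(p, code)]

snippet = "import base64\npayload = base64.b64decode(blob)\neval(payload)"
print(flag_snippet(snippet))  # matches the eval and base64 patterns
```

Any snippet with hits would be held for manual review rather than merged or run automatically.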

Make It a Team Effort

The best GenAI protection will come from policies and practices developed across IT, security, business, and legal teams. The policies should be rolled out and updated alongside employee training.

Scale Carefully

Once a company has a good monitoring and response process in place, it can start to scale GenAI use. This increase should occur with an eye to emerging threats.

By understanding GenAI third-party tool security risks, monitoring for shadow use, and building a comprehensive usage and policy rule set, a company can be more confident in its use of GenAI. The goal is to stay aware of emerging threats and to make sure your practices are aligned with data safety and governance mandates.
