GenAI in the Enterprise: The Growing Threats We’re Not Talking About

AI-generated attacks are rising, but the bigger risk may be what employees are putting into GenAI tools every day
The 2025 Verizon Data Breach Investigations Report (DBIR) delivered a sobering data point: The volume of AI-assisted malicious emails has doubled in just two years, from 5% to 10% of all phishing attempts.
Generative AI (GenAI) has officially entered the threat landscape.
While phishing with AI-generated text is on the rise, the more immediate risk likely lies inside our own organizations. Employees are using these tools, often without any oversight.
What the Verizon DBIR Says
The report found that:
– 15% of employees are using GenAI tools on corporate devices
– Many are doing so with personal accounts, outside of SSO or MFA
– In some cases, sensitive data is being entered into tools with unclear retention policies or unknown data-sharing models
The threat isn’t just what GenAI does. It’s what we do with it.
The Real-World Risk of “Shadow AI”
Across industries, employees are turning to tools like ChatGPT, Gemini, Claude, and Copilot to help with:
– Drafting emails
– Generating code
– Writing documentation
– Summarizing reports
– Creating presentations
– Translating customer feedback
Most of these uses are benign, even helpful. But problems emerge when:
– Proprietary source code is submitted for debugging
– Confidential client details are entered into prompts
– Internal strategies or roadmaps are pasted for analysis
– Corporate logins are used in consumer-grade tools with no visibility from IT
This creates a new kind of shadow IT: shadow AI. And with it, a growing attack surface for:
– Unintentional data leakage
– Exposure of sensitive credentials
– Retention of corporate IP by third-party LLMs
– Legal ambiguity about who owns the input/output of the model
What the Platforms Are Doing
Major providers like OpenAI and Google have started monitoring their tools for misuse, particularly by nation-state actors.
In early 2025, both companies published reports documenting:
– Attempts to use LLMs to generate phishing content
– Social engineering via persona modeling
– Code generation to support known malware strains
– Disinformation crafting for influence campaigns
This platform-level monitoring is necessary. It also confirms that GenAI is now a toolset that both attackers and defenders are learning to wield.
Why Internal Enforcement Still Matters
Most organizations still lack clear policies or controls on GenAI usage, especially across departments like marketing, HR, customer service, and software development.
Some key questions every organization should be asking:
✔️ Do we allow GenAI use on corporate devices at all?
✔️ Are employees using personal email accounts to access AI tools from work machines?
✔️ What’s our guidance on sharing proprietary code, documents, or sensitive information with public LLMs?
✔️ Are browser extensions or chat tools silently capturing data?
If there’s no policy in place or no enforcement of it, there’s no audit trail. No visibility. And no way to stop an accidental breach before it happens.
So, What Can Be Done?
Here are a few practical steps to reduce GenAI-related exposure:
1. Create (and communicate) a GenAI usage policy
It should cover what can and can't be submitted to public LLMs, approved tools, and account types (a minimal pre-submission check is sketched after this list).
2. Control access
Where possible, enable GenAI access only through company-managed accounts tied to SSO and MFA.
3. Use enterprise-grade versions
If GenAI tools are business-critical, explore enterprise licenses with admin controls and data governance protections.
4. Monitor browser usage
Tools like CASB or browser isolation can help track which web-based LLMs are in use and flag risky behavior (a simple proxy-log review is sketched after this list).
5. Train for judgment
Awareness campaigns should go beyond phishing and teach employees what GenAI can (and can’t) be trusted with.
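To make step 1 enforceable rather than aspirational, some teams pair the written policy with a lightweight pre-submission check that flags obviously sensitive content before it reaches a public LLM. The sketch below is a minimal illustration of that idea, not a reference implementation: the patterns, the sample codenames, and the blocking behavior are all hypothetical placeholders, and a real deployment would plug into whatever DLP or browser control the organization already runs.

```python
import re

# Hypothetical patterns for illustration only; a real policy would tune these
# to the organization's own secrets, client identifiers, and project names.
SENSITIVE_PATTERNS = {
    "private key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "internal codename": re.compile(r"\bProject (?:Falcon|Orion)\b", re.IGNORECASE),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Can you debug this? My key is AKIAABCDEFGHIJKLMNOP"
    findings = check_prompt(prompt)
    if findings:
        print("Blocked: prompt appears to contain", ", ".join(findings))
    else:
        print("Prompt passed the basic checks")
```

A check like this will never catch everything, but it gives the policy a concrete enforcement point and an audit trail for what employees attempted to submit.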
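For step 4, the raw material often already exists in proxy or CASB log exports. The following sketch summarizes GenAI traffic per user from such an export; the CSV column names and the domain list are assumptions for illustration, not any specific product's schema, so adapt them to whatever your gateway actually produces.

```python
import csv
from collections import Counter

# Illustrative list of well-known GenAI endpoints; extend to match your environment.
GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com",
}

def flag_genai_visits(log_path: str) -> Counter:
    """Count GenAI requests per user from a proxy log export.

    Assumes a CSV with 'user' and 'host' columns as a stand-in for
    whatever schema the organization's proxy or CASB exports.
    """
    visits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                visits[row["user"]] += 1
    return visits

if __name__ == "__main__":
    for user, count in flag_genai_visits("proxy_log.csv").most_common():
        print(f"{user}: {count} GenAI requests")
```

Even a rough report like this answers the earlier questions about who is using which tools, and whether that usage is going through approved, company-managed accounts.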
What You Can Do Next
Set clear boundaries for responsible use. AI-generated attacks are rising, but the greater risk may be coming from inside the firewall. Every prompt matters. Every paste matters. Because once sensitive data leaves your system, you can't take it back.