
AI Doesn't Create New Security Problems — It Exposes the Ones You Already Had

[Hero image: a modern data center viewed through a cracked glass wall, servers glowing blue and purple behind the fractures, suggesting hidden fragility beneath polished infrastructure]

There’s a narrative taking hold in boardrooms and security conferences that goes something like this: AI is creating an entirely new category of cyber risk, and we need entirely new defenses to deal with it.

It’s an understandable reaction. But I think it’s mostly wrong — and the data supports a different, more uncomfortable story.

AI isn’t introducing new categories of risk. It’s accelerating and exposing the ones that were already there — the visibility gaps, the ownership vacuums, the governance shortcuts that organizations have been carrying for years.

Canada’s own National Cyber Threat Assessment 2025-2026, published by the Canadian Centre for Cyber Security, puts it directly: AI is the number one trend reshaping the country’s threat environment. Not because it creates novel attack vectors from scratch, but because it makes existing ones faster, more personal, and harder to detect. The assessment describes a country entering “a new era of cyber vulnerability” — driven not by a lack of tools, but by the amplification of long-standing weaknesses.

This is a pattern I’ve seen repeatedly across organizations of very different sizes and maturities. The technology changes. The underlying gaps don’t.

The Amplifier Effect

IBM’s 2025 Cost of a Data Breach report contains what I consider the most telling statistic in the current AI security landscape: 97% of organizations that experienced an AI-related breach lacked proper access controls.

That's not 97% lacking "AI-specific" controls. They lacked proper access controls, the foundational kind that has been a security requirement since long before anyone was talking about large language models.

These organizations didn’t have an AI security problem. They had an access control problem that AI finally made impossible to ignore.

The Pentera AI Security & Exposure Benchmark 2026 — a survey of 300 U.S. enterprise CISOs conducted in December 2025 — tells the same story from a different angle. Three dimensions stand out:

Visibility is incomplete everywhere. 67% of CISOs report limited visibility into where and how AI is operating across their environments. No CISO — zero — reported full visibility with no presence of Shadow AI. Even the 33% who believe they have good visibility acknowledge that unauthorized or unmanaged AI usage likely persists. You can’t secure what you can’t see, and right now, most organizations can’t see the full picture.

Ownership is diffuse. 56% of enterprise CISOs report that AI security is a shared responsibility across multiple teams — security, IT, infrastructure, application teams — rather than being assigned to a single owner. Only 20% place it solely with the security team. When I read “shared responsibility,” what I actually hear is “unclear accountability.” This is a structural gap that predates AI entirely. Most organizations have been wrestling with shared-versus-clear ownership of security responsibilities for years. AI just made it impossible to ignore.

Legacy tools are doing the heavy lifting. 75% of CISOs report relying on traditional endpoint, cloud, application, or API security tools to protect their AI systems — tools that were designed for a different attack surface. Only 11% have dedicated AI security tooling in place. This mirrors what happened with cloud adoption a decade ago: organizations extended existing controls to a fundamentally different environment, and it took years for purpose-built security to catch up.

Strip away the AI label and these are the same challenges security leaders have been reporting for a decade — just with higher stakes and less room to defer.

The Threat That Doesn’t Need a Hacker

Most of the AI security conversation focuses on external threats — attackers using AI to craft more convincing phishing emails, generate deepfakes, or automate exploitation. And those threats are real. Microsoft’s Digital Defense Report 2025 found that AI-driven phishing campaigns are three times more effective than traditional ones, and KPMG Canada reported in February 2026 that 81% of Canadian companies experienced attempted or successful AI-powered fraud over the past year.

But there’s a quieter risk that I think deserves more attention — one that doesn’t require a hacker at all.

When organizations deploy internal AI tools — a copilot over their document repositories, a RAG system connected to internal databases, an AI-powered search across their knowledge base — they create a new access channel to existing data. And that channel doesn’t respect the informal boundaries that have historically kept sensitive information contained.

I’ve seen this firsthand. In most organizations, an employee in marketing might technically have read access to a SharePoint library containing thousands of documents — including financial projections, HR reports, or strategic plans. In practice, they’d never find those documents. They wouldn’t know to look, wouldn’t know the right folder, wouldn’t stumble across them in the course of their work.

Now give that same employee an AI assistant that can search across everything they have access to. Ask it the right question — or even the wrong one — and suddenly that financial projection surfaces in a summary. The AI is doing exactly what it was designed to do: find relevant information. The problem is that the permissions were always too broad. The data classification was always incomplete. Nobody fixed it because nobody needed to — until now.
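To make that concrete, here is a minimal sketch of the retrieval step in an internal copilot or RAG pipeline. The names and data model are hypothetical, and a real deployment would use a vector index and an identity provider rather than a keyword match and a hard-coded list. The mechanics are the same either way: the assistant returns whatever the ACL allows, so over-broad permissions become over-broad answers.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set    # the ACL as actually granted, not as intended
    classification: str    # e.g. "public", "internal", "restricted"

def retrieve(query: str, user_groups: set, index: list) -> list:
    """Return top matches among documents the user is technically allowed to read."""
    candidates = [d for d in index if d.allowed_groups & user_groups]
    # Naive keyword match stands in for a real vector search; the access
    # filter, not the ranking, is the point of this sketch.
    scored = sorted(candidates, key=lambda d: query.lower() in d.text.lower(), reverse=True)
    return scored[:5]

# A marketing user whose group was added to a finance library "temporarily"
# years ago will now see finance documents in their copilot answers.
docs = [
    Document("fin-042", "FY26 financial projections and scenarios ...", {"finance", "marketing"}, "restricted"),
    Document("mkt-007", "Spring campaign brief ...", {"marketing"}, "internal"),
]
print(retrieve("financial projections", {"marketing"}, docs))
```

Nothing in that sketch is broken. Every line behaves as designed. The exposure comes entirely from the permissions the system inherited.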

The World Economic Forum’s Global Cybersecurity Outlook 2026 reflects this shift. Data leaks from generative AI are now the number one AI-related concern among cybersecurity leaders, at 34% — up from 22% the previous year, and surpassing concerns about adversarial AI capabilities. Security leaders are waking up to the fact that the most likely AI incident in their organization won’t come from a sophisticated attacker. It’ll come from an employee asking their copilot the wrong question.

IBM’s data reinforces the point: Shadow AI — unauthorized or unmanaged AI usage — was a factor in 20% of breaches and added an average of $670,000 to breach costs. Gartner found that 57% of employees use personal GenAI accounts for work, with 33% admitting to inputting sensitive information into unapproved tools.

This isn’t malice. It’s the same dynamic I described in an earlier article on Shadow AI: people trying to do their jobs well, using the tools available to them, without a framework to guide what’s appropriate. The difference with internal AI tools is that the organization itself built the channel — and often didn’t realize what it was connecting.

More Tools, Same Breaches

If the problem were simply a lack of investment, you’d expect organizations spending more on security to be getting breached less. They’re not.

Gartner projects global security spending at $213 billion in 2025, rising to $240 billion in 2026 — a 12.5% increase. The Pentera survey found that enterprises run an average of 47 security solutions, with 40% operating 51 or more. And 68% added net-new security tools in the past year.

Organizations are spending more, deploying more tools, and getting breached at roughly the same rate. At some point, the answer to “we got breached” can’t keep being “buy another tool.”

This is what I was getting at when I wrote about boards asking “what are the risks we’re not seeing?” The risks that matter most aren’t the ones addressed by adding another tool to the stack. They’re the foundational ones — data classification, access governance, clear ownership, regular validation — that have been on the to-do list for years but never reached the top.

AI didn’t put those items on the to-do list. But it’s rapidly moving them from “we’ll get to it” to “we should have gotten to it.”

What Actually Works

If the diagnosis is that AI amplifies existing gaps, then the treatment isn’t to build a separate “AI security program.” It’s to use AI adoption as the catalyst to fix what should have been fixed already.

Based on what I’m seeing in the data and in the organizations I work with, a few things separate those managing this well from those that aren’t.

The first is simply treating AI as a governed enterprise asset — not a special case, not a side experiment. AI systems get inventoried, classified, and subjected to the same deployment discipline as any other enterprise technology.

Then there’s the skills question, which I think is underappreciated. The number one barrier to AI security, cited by 50% of CISOs in the Pentera survey, is a lack of internal expertise. Limited visibility into AI usage comes second at 48%. Budget constraints? Only 17%. The bottleneck isn’t money — it’s knowledge. I keep seeing organizations try to buy their way out of a problem that’s fundamentally about capability.

The hardest one, and the one with the biggest payoff, is fixing the data and access fundamentals. AI adoption forces a question that many organizations have been deferring for years: do we actually know what data we have, who can access it, and whether those permissions are appropriate? This was always question number one for boards overseeing AI initiatives — “Is our data actually ready, or are we building on sand?” In a security context, it’s the same question with higher stakes. And the organizations that use AI deployment as the trigger to finally get this right will be more secure across the board — not just for AI.
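For teams that want a tangible starting point, a first pass can be as simple as cross-checking classification against breadth of access. Here is a minimal sketch, assuming a document inventory export with hypothetical field names; any real implementation would pull from the organization's own tooling.

```python
import csv

# Assumed export format (hypothetical): one row per document with its path,
# its classification label, and how many people can currently read it.
SENSITIVE = {"confidential", "restricted"}
BROAD_ACCESS_THRESHOLD = 50   # tune to the organization's own norms

def overexposed(inventory_path: str) -> list:
    """Flag documents whose sensitivity doesn't match how widely they're shared."""
    flagged = []
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            sensitive = row["classification"].strip().lower() in SENSITIVE
            broad = int(row["grantee_count"]) >= BROAD_ACCESS_THRESHOLD
            if sensitive and broad:
                flagged.append(row["path"])
    return flagged

# Everything this flags is exactly what an internal copilot will surface
# to far more people than the classification ever intended.
```

It isn't a data governance program, but it turns an abstract board question into a list someone can act on this quarter.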

Finally — and I’ll go deeper on this in a future article — someone needs to own this. Not “shared responsibility across stakeholders.” One team, or one leader, with clear accountability for AI security posture. Execution can be distributed. Accountability can’t.

The Real Question

The instinct in many boardrooms right now is to ask: “Do we have an AI security strategy?”

I think that’s the wrong question. The right one is: do we have a security strategy that’s mature enough to handle AI?

Because if your data governance is solid, your access controls are well-managed, your asset visibility is comprehensive, and your ownership is clear — AI doesn’t fundamentally change the equation. It adds surface area. It accelerates timelines. But it doesn’t require you to reinvent security from scratch.

And if those foundations aren’t in place? Then you don’t have an AI security problem. You have a security problem — one that AI is about to make very visible, very quickly.

That’s actually the good news. This isn’t a new discipline to master. It’s the one you already know, done properly. The organizations that use AI adoption as the forcing function — the reason to finally close the gaps they’ve been carrying — won’t just be more secure against AI threats. They’ll be more secure, period.

About the Author

André Boisvert

CIO & Strategic Consultant

CIO and strategic consultant helping organizations navigate AI, digital transformation, and IT strategy. Sharing weekly strategic perspectives on enterprise technology.

LinkedIn
