While You Decide Whether to Adopt AI, Your Employees Already Have
Let’s be honest.
Most of the executives I meet are not against AI. They want to innovate, they see the possibilities, they read the same articles as everyone else. But as soon as the conversation turns to actually deploying something, it often lands in the same place: “Yes, but security… privacy… we’re not sure.”
And then nothing moves.
Except that while we hesitate, something important is happening in their offices — without them.
The Problem You Can’t See: Shadow AI
Here’s what the data tells us clearly: banning AI doesn’t stop it. It just makes it invisible.
According to a 2025 WalkMe study of American workers, 78% of employees admit to using AI tools not approved by their employer. And 46% say they wouldn’t stop even if asked. Another study (UpGuard, 2025) goes even further: over 80% of workers use unsanctioned AI tools — including, strikingly, nearly 90% of security professionals themselves.
This isn’t malice. It’s often someone in marketing using ChatGPT to draft a proposal, an analyst pasting a report into an online tool to extract key points, a developer asking an AI assistant to review their code. People who simply want to do their job well.
The problem is that without a framework, without policies, without approved tools, these employees are using consumer platforms that can store, analyze, and potentially reuse what’s sent to them. About 38% of employees have shared sensitive data with AI tools without authorization. And one in five organizations has already experienced a breach directly linked to shadow AI.
The reality is that the risk many leaders are trying to avoid by doing nothing is materializing anyway — just outside their line of sight.
Quebec Is Falling Behind — Slowly, But Surely
Data from the Institut de la statistique du Québec (ISQ, November 2025) is clear: only 12.7% of Quebec businesses use AI in their operations. On its own, this isn't alarming. What is alarming is the relative pace: in spring 2025, Quebec's adoption rate grew by 3.3 percentage points, while Ontario's grew by 7.8. More than twice as fast.
It’s not a chasm yet. But it’s a gap that’s widening, quietly.
And according to Statistics Canada, among businesses that reject AI, security concerns rank only third. The top barrier is a lack of knowledge about what AI can actually do.
In other words: it's not the fear of risk that paralyzes organizations most. It's the fear of the unknown.
The Real Risks — No More, No Less
I don’t want to minimize the stakes. They’re real. But there’s an important difference between the risks we imagine and those that truly deserve attention.
What deserves attention: sending customer data, confidential financial information, or personal data into an unconfigured consumer tool. Not having a clear policy on what employees can and cannot do with AI. Ignoring your obligations under privacy laws — like the GDPR in Europe or Quebec’s Law 25 — which apply whether you use AI or not.
What’s often overestimated: the idea that all AI tools “steal” data or that using AI automatically means losing control. Major enterprise platforms — corporate versions of Microsoft Copilot, Claude, Google Gemini — offer clear contractual guarantees: no use of data for model training, regional hosting, access controls, audit logs. That’s not the same as pasting a confidential contract into free ChatGPT on a personal phone.
Nuance matters. Many organizations treat both situations as identical, and that’s what creates unjustified paralysis.
What Paralysis Really Costs
A KPMG Canada survey (2025) reveals that 51% of Canadian adults already use generative AI at work, and 79% of them report measurable productivity gains. Most say they save between one and five hours per week.
Your competitors who’ve made the leap aren’t waiting for you.
There’s an asymmetry that leaders don’t always see: the risk of mismanaging something is visible, concrete, nameable. The risk of doing nothing is diffuse, invisible — but just as real. Falling behind on productivity, losing talent who want to work with modern tools, leaving your employees to fend for themselves with unmanaged tools. All of this has a cost. It’s just harder to put in a board presentation.
A 5-Step Plan to Move Forward Without Rushing In
Formal frameworks exist for structuring all of this — the NIST AI Risk Management Framework, ISO 42001 — and for organizations aiming for advanced maturity or certification, they’re essential. But to get started, you don’t need to master them. The essentials boil down to five concrete steps:
Step 1 — Know What You Have (2 weeks)
Before deciding what to deploy, do a quick inventory: what data do you process? Which data is sensitive — customers, employees, finances, trade secrets? Which is public or internal with minimal consequence? This basic classification guides everything else. No need to be exhaustive: one day with the right people around a table is often enough.
Step 2 — Look at What’s Already Happening (1 week)
Ask your IT team or an external consultant: what AI tools are your employees already using, officially or not? Shadow AI is probably already present. Better to know and regain control than to keep ignoring it.
Step 3 — Choose and Approve Tools with Intent (2 to 4 weeks)
Identify two or three tools suited to your needs and risk profile. For most organizations, an enterprise version of a recognized tool — Microsoft 365 Copilot, Google Gemini for Workspace, Claude for Work, ChatGPT Enterprise — offers a solid starting point with strong contractual protections. Document what’s approved, what’s restricted, what’s prohibited. And communicate it clearly.
Step 4 — Train Teams on the Essentials (1 day)
No need to turn everyone into a cybersecurity expert. One hour of practical training on three questions is enough: What should never go into an AI tool? How do you tell an approved tool from an unapproved one? What do you do if you're not sure? Simple, concrete, memorable.
Step 5 — Update Your Privacy Policy (in parallel)
Privacy laws — like the GDPR in Europe or Quebec’s Law 25 — already require a personal information governance policy. If it hasn’t been updated to include AI, now is the time. This isn’t a six-month legal project — half a day with your compliance officer and a good template can cover the essentials.
What’s interesting about this approach is that it creates its own momentum. At first, you move cautiously because you don’t yet know your environment well. But as the framework takes shape, as teams have their first concrete, positive experiences, as reflexes develop — confidence grows. And with confidence comes velocity. Decisions are made faster, use cases multiply, the organization learns. This isn’t a transformation that’s decreed. It’s one that’s built, one step at a time.
The First Step Is the Only One Missing
The organizations that will get the most out of AI in the coming years won’t necessarily be those with the best technical resources from the start. They’ll be those that decided to move forward — with intelligence and caution, but move forward nonetheless.
Shadow AI is already here. The competition is moving. The opportunity is real.
The question is no longer whether you’ll adopt AI. It’s whether you’ll do it in an organized way, or endure what’s already happening without you.