
Hidden AI Demand in Organizations: The Silent Driver of Growth

[Representative image: corporate office with AI interface. For illustrative purposes only.]

The numbers are striking. According to Gartner research across 500 companies, 68% of employees use unauthorised AI tools at work — up from 41% in 2023. A survey by Gusto found that 45% of US workers have used AI at work without telling their employers, and more than half said their productivity would drop without it. Perhaps most telling: some 66% pay for these tools out of their own pockets.

This is not casual experimentation. It is employees making a considered economic decision that the AI tools available to them on the open market are worth their own money, precisely because they deliver results that their official corporate tools either cannot or have not yet been configured to provide.

The Scale of What’s Already Happening

The phenomenon has a name — shadow AI, the AI equivalent of the shadow IT that preceded it — but the dynamics are meaningfully different. Shadow IT usually involved unauthorised apps or cloud services: a personal Dropbox account, an unapproved project management tool. Shadow AI goes further. When an employee pastes a client’s financial model into ChatGPT to reformat a table, or uploads an internal strategy document to Claude to generate a summary, the data leaves the organisation’s control perimeter entirely. Unlike a file moved between storage locations, AI inference moves context, logic, and proprietary information into third-party systems the organisation cannot audit, monitor, or recall. According to Cisco’s 2025 study, 46% of organisations had already reported internal data leaks through generative AI tools.

The risk is real. But the HBR researchers — led by Elena Alfaro, head of global AI adoption at BBVA, alongside academics from Universidad Carlos III de Madrid — argue that leading with compliance is the wrong response. The right response is to recognise what the behaviour is actually saying.

Shadow AI Is Demand, Not Deviance

Employees who quietly open a personal laptop to run their work prompts through ChatGPT are not behaving maliciously. They are behaving rationally. They have identified a tool that makes their work faster, better, or easier. They are using it. The fact that the corporate version of the same capability either doesn’t exist, is clunky to use, or hasn’t been made available yet is an organisational failure — not a personal one.

As the HBR piece frames it: rather than treating employees’ unauthorised use of consumer AI tools as a compliance problem, companies should recognise it as a signal of untapped demand and redirect that energy into a structured enterprise-wide strategy.

This reframe matters enormously. Companies that respond to shadow AI with bans and surveillance are likely to drive the behaviour underground without eliminating it — and in doing so, they lose the intelligence that the behaviour was providing. The tools employees choose, the tasks they apply them to, the workflows they build around them: all of this is rich data about where AI can create value within the organisation. Dismissing it as a security problem discards that intelligence.

History bears this out. Samsung famously banned ChatGPT after discovering that engineers had inadvertently pasted proprietary chip-design code into the chatbot, potentially exposing trade secrets. The company later reversed course. Research consistently shows that blanket AI bans simply drive usage underground — nearly half of employees report they would continue using personal AI accounts even after an organisational ban. When, by contrast, organisations deploy enterprise-grade approved alternatives, unauthorised usage drops by up to 89%.

The BBVA Model: Follow the Energy, Then Structure It

BBVA’s approach offers a practical template for what the alternative looks like. Rather than pushing employee experimentation underground, the bank brought it into a trusted environment. The strategy was built on a deceptively simple insight: employees are already motivated. The job of leadership is not to create that motivation but to channel it safely.

“We created this atmosphere of being in a safe place to learn and to use AI,” said Elena Alfaro. “Instead of shadow AI, we gave them a platform that was safe so they could start experimenting.” The bank deployed initially to 3,000 employees, then rapidly expanded to 11,000. It provided specific AI training for 250 leaders, including the CEO and chairman. It ran an internal competition called Bot Talent, inviting employees to propose innovative and secure uses of generative AI for their daily work — and discovered a grassroots innovation ecosystem it did not know it had.

The results were concrete. ChatGPT Enterprise usage was saving BBVA employees an average of two hours of work per week. Employees across Peru, Mexico, and Spain were creating their own GPTs and sharing them across the bank. Front-line teams closest to the work were identifying use cases that no centralised AI team had imagined. “Once adoption starts,” said Antonio Bravo, BBVA’s global head of data, “it accelerates.”

What Leaders Are Missing

The deeper insight in all of this is about organisational intelligence. Most companies approach AI adoption as a technology deployment problem: select the tools, build the infrastructure, train the users, enforce the policies. But shadow AI reveals that the demand side of the equation is already solved. Employees are not waiting to be told that AI is useful. They have already decided. They are already using it.

What leaders are missing is the signal embedded in that behaviour. Which departments have the highest rates of unofficial AI use? What tasks are employees most commonly using it for? Which consumer tools are they choosing, and why? The answers to those questions map the organisation’s actual AI opportunity — not the theoretical one visible from a boardroom strategy session, but the real one visible from the work being done every day.

Consider this: according to IDC, 65% of employees already use AI tools, with 39% using free, unapproved versions and a further 17% paying for their own. More than 80% of workers report using unapproved AI tools in their jobs, with fewer than 20% relying exclusively on approved solutions. These figures are not an indictment of employees. They are an indictment of organisational strategies lagging well behind workforce reality.

The Path Forward

The HBR framework suggests three moves. First, treat unauthorised use as a demand signal — map it, understand it, and use it to prioritise where enterprise AI should be deployed first. Second, make approved tools genuinely better than the consumer alternatives, not just safer. If the sanctioned option is clunkier than ChatGPT, employees will continue to use ChatGPT. The security argument is not compelling to someone who is trying to meet a deadline. Third, build a culture where employees feel safe to surface how they are using AI rather than hiding it — because the intelligence embedded in those honest conversations is irreplaceable.

The employee sitting with two laptops, one authorised and one not, is not a compliance risk waiting to be managed. They are the organisation’s most motivated AI adopter. The question for leadership is whether to see them as a problem or an asset.

Written by Shalin Soni, CMA specializing in financial analysis, global markets, and corporate strategy, with hands-on experience in financial planning and analytical decision-making.


Source: Based on Harvard Business Review and publicly available information.
