Well Engineered Tech - Blog

Shadow IT as an AI Strategy

- Hamburg, Germany

This article is also available in German.

In university, I learned that shadow IT is the enemy¹: IT systems that business units introduce behind central IT’s back because official channels are too slow or because central IT doesn’t understand what the departments actually need. We’re talking about Excel macros in controlling, a self-hosted Trello instance in marketing, or a private Dropbox for customer data. On top of that, there are small side solutions for process automation, where teams bypass the actual workflow implementation and shortcut individual steps with a tool, maybe a no-code flow, a bot, or a script that only covers their part of the process.

For central IT, this often stays invisible until something breaks. Then an Excel file surfaces somewhere that has been quietly holding together an entire reporting process, or a bot that exactly one person knows how to operate, or a no-code flow that works with real customer data but isn’t documented anywhere. That’s exactly why shadow IT has almost always been a problem: there’s no overview and no clear owner, standards are missing, security is missing, integration is missing, and as soon as someone leaves the team or an update changes something, a process suddenly stops. Data silos emerge on the side, and what started as a practical shortcut quickly becomes a compliance issue and eventually a support nightmare.

The solution was always the same: a central IT organization that sets standards, approves systems, and keeps the zoo of applications under control. That worked well enough, at least for traditional software.

With AI, I’m no longer sure that’s true.

The difference lies in what drives the whole thing, because traditional IT projects benefit from standardization and shared platforms. It’s easier when there’s one ERP system, one CRM, and one deployment pipeline that apply to everyone, and the technical expertise sits in the IT department while the business units deliver requirements. With AI, it’s often exactly the other way around. The tools have become so accessible that many people can operate them themselves. What’s becoming scarce isn’t the technology, but knowledge about one’s own daily work.

Which tasks in recruiting actually eat up time isn’t something the central AI team knows, but the recruiter who has been reviewing applications for eight years does. And which review steps in accounting can actually be automated is something the controller knows, not the AI engineer.

A central AI team can provide infrastructure, manage licenses, and enforce security policies, but where AI actually creates value in each department, that’s something it cannot know, because it simply lacks the context. The result is generic pilot projects that look impressive but miss the reality of what departments actually do.

In job postings, I’m seeing more and more positions for Chief AI Officers², and that’s not a problem in itself. An internal AI advisory function can absolutely make sense: as a point of contact for questions around infrastructure and security, and as a way to make good examples visible.

It only tips over the moment this becomes a Center of Excellence that primarily approves, categorizes, and dictates what’s right and what’s wrong. Then it quickly becomes less about value and more about responsibilities, approvals, and the big program from above.

And if we’re being honest, reality already looks different. Whether there’s an AI policy or not, many people are already using these tools today. Sometimes officially, often quietly, sometimes even at their own expense. Not because they want to break rules, but because they want to make their daily work easier. Maybe they just want to get through the same mountain of work in less time and then call it a day earlier. That’s not wrong, that’s human. On the contrary, it’s pretty smart and completely understandable when you think about your own resources. These people are optimizing their day, just like the company wants to optimize its costs. Those who actually achieve something with AI usually do so where they know the daily routine, in their own domain, driven by their own initiative.

And that initiative thrives on being voluntary. As long as someone feels they’re allowed to experiment on their own, they invest time, play with ideas, build little shortcuts, and tell colleagues about them. As soon as it’s dictated from above which tools are allowed and how everything is supposed to work, the mood shifts quickly. Curiosity turns into a sense of duty or defiance.

Psychologist Jack Brehm called this reactance in 1966: the completely normal counter-reaction when people feel their freedom is being restricted³. Suddenly it’s no longer about doing something better, but about not doing anything wrong. And with that, what was there at the beginning often disappears: curiosity, playfulness, and that small moment when the day suddenly becomes a little bit easier.

If you take this seriously, you end up with an uncomfortable consequence, at least for management and for everyone who prefers to push change through from the top. For everyone else, it’s rather good news. Change has to happen where the work happens. And that’s exactly where it gets built. HR tackles its own processes, Finance the review steps and approvals, Sales the proposals, emails, and follow-up processes.

Teams implement their solutions with AI themselves, often from the idea all the way to the running tool. They test vendors, buy a subscription, refine prompts, and use tools like Claude Code to build small helpers, maybe a form, a flow, a bot, or a mini-app. This then connects to what’s already there: Outlook, Teams, Jira, Excel, SharePoint. And if a piece of code is needed, they have it generated and adapt it, instead of waiting weeks for a slot with IT.
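A helper like that can be genuinely small. As a rough sketch of what a sales team might have generated for its follow-up process (the CSV columns, the sample data, and the seven-day rule are all invented for illustration):

```python
import csv
import io
from datetime import date, timedelta

# Hypothetical export from the team's CRM; in reality this would be a
# file the tool downloads or that someone drops into a shared folder.
SAMPLE = """customer,last_follow_up
Acme GmbH,2025-01-02
Beta AG,2025-01-20
"""

def stale_proposals(rows, today, max_age_days=7):
    """Return customers whose last follow-up is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    stale = []
    for row in rows:
        last = date.fromisoformat(row["last_follow_up"])
        if last < cutoff:
            stale.append(row["customer"])
    return stale

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
print(stale_proposals(rows, today=date(2025, 1, 21)))  # → ['Acme GmbH']
```

The point isn’t the code itself but its shape: a few dozen generated lines, adapted by the person who actually knows how the follow-up process works.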

These aren’t just clever shortcuts, but ultimately real building blocks that actually change processes. AI makes this possible because you can suddenly do so much more yourself without having to hand off every detail to a central team.

The central team doesn’t become obsolete with this. But its job is different from what many role descriptions currently suggest. It’s less about rolling out an AI strategy from above, and more about preparing the ground so that business units can get started quickly and safely in the first place. That means access, contracts, a few clear rules about data and tools, help with security questions, and a set of standard interfaces so that not everyone has to start from zero.

But as soon as this team becomes a bottleneck and every tool, every subscription, and every idea has to go through a committee first, things get sluggish again. Then you wait for approvals, write tickets, and lose exactly the speed that makes AI so attractive right now.

And this isn’t just a big corporation problem. Especially for mid-sized companies, AI offers the first real chance in ages to build things themselves and catch up with the big players without setting up a massive program. If we kill this chance first with processes, order, and governance, all that’s left in the end is stagnation again.

Yes, this feels like shadow IT with a new label at first. Maybe it is. But today, implementation isn’t the problem. With AI, that’s suddenly fast and cheap. The bottleneck now is knowing what actually helps, and that knowledge sits with the people who run the process every day.

Whether this works out everywhere, I don’t know. Data privacy and sprawl are real risks. But a purely central AI strategy has a catch. It’s supposed to decide for others what makes sense. That often ends in slides, committees, and yet another process. And that’s exactly when the advantage is gone.


  1. Gartner, Definition of Shadow IT, Gartner Information Technology Glossary

  2. MIT Sloan Management Review, Five Trends in AI and Data Science for 2025, 2025

  3. Jack W. Brehm, A Theory of Psychological Reactance, Academic Press, 1966