Not all of it. But a tremendous amount of AI spending isn’t going toward value. It’s going toward the appearance of pursuing value.
There are echoes of the dot-com bubble here. The technology is real and incredibly impactful. Pets.com didn’t prove the internet was fake. It proved that performing digital transformation was more fundable than doing the actual work. The opportunity was real. The money just went to the wrong places for many, many companies.
The ones who got it right were transformative.
We’re watching the same movie again. The companies that will win with AI won’t be the ones with the $4 million roadmap, the quarterly steering committee or the most polished strategy deck. They’ll be the ones where someone on the team shipped something that works, measured whether it worked and then did it again.
The Trap Is Structural, Not Personal
Here’s the thing: Most of the AI theater out there isn’t happening because leaders are lazy or uninformed. It’s happening because the default playbook for enterprise technology adoption was designed for a different pace.
An executive reads a McKinsey report and announces an AI initiative. A consulting firm gets hired to assess readiness. Six months later there’s a governance framework nobody’s read and a steering committee that meets quarterly to review a roadmap that hasn’t shipped anything.
This is inertia, not incompetence. The same process that worked for ERP implementations and cloud migrations actively works against you when the technology evolves faster than your review cycle. We’ve seen this pattern before: The dot-com era, the self-service analytics wave, mobile, Big Data and many others.
Meanwhile, your best data analyst figured out how to use Claude to cut three hours off her weekly reporting workflow. She didn’t ask permission. She just did it on her phone during lunch. And she’s getting more real value from AI than the entire official initiative.
The official effort goes toward the appearance of AI adoption, the org chart, the vendor selection, the change management plan, while the actual value gets created quietly by individuals solving their own problems.
The tragedy isn’t that the top-down work is useless. Some of it matters. The tragedy is that it moves so slowly that by the time it delivers anything, the motivated people have either found their own path or given up waiting, or the technology has moved on.
The Real Pattern: Ground Up
Every successful AI adoption we’ve been part of follows the same shape. It starts with people, not platforms.
The first users aren’t the ones who were told to use AI. They’re the ones who were already experimenting. The data architect who started using Claude Code to prototype faster. The project manager who found that AI cut her proposal drafting time in half. The consultant who built a small app in an afternoon that would have taken a week of back-and-forth with a developer.
These people don’t need convincing. They need sanctioned tools, clear guardrails and an organization that gets out of their way.
When we rolled out AI across InterWorks, that’s exactly where we started. We identified the people already motivated, gave them safe tools and let adoption spread through demonstrated value rather than mandatory rollouts. A year-long trial with a test group. Progressive expansion of Claude Code access. By the time we rolled out Claude Enterprise at the start of 2026, over 25% of the organization was already in motion.
The people who weren’t interested watched from the sidelines until they saw what their colleagues were producing. Then they wanted in. That’s how adoption actually works. Not mandates. Momentum.
“Ground Up” Doesn’t Mean “Unsupported”
This is where the critique of top-down AI stops being enough. “Just let your best people build things” sounds great until you run into reality.
Your best data analyst just built a workflow that pastes client financial data into a tool with unclear data retention policies. Your developer deployed an AI-assisted app to a public URL with no security review. Someone shared a document through a connector that exposed it to a model being trained on user inputs.
Ground-up adoption without organizational support isn’t empowerment. It’s unmanaged risk.
The organizations getting real value from AI are holding two things at once: individual empowerment and institutional responsibility. They’re enabling their best people to move fast while doing the unglamorous work that makes speed safe.
Policy Before Tools
Not a 40-page document nobody reads. Specific answers to specific questions: Can I paste this client document into Claude? Can I upload this spreadsheet? Can I ask about this project by name?
“Be careful with client data” isn’t a policy. It’s a hope.
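Rules that specific can even be made machine-checkable. Here is a minimal sketch of that idea in Python; the tool names and data classifications are hypothetical, not anyone's actual policy:

```python
# Hypothetical example: a data-usage policy as explicit, checkable rules
# rather than a vague "be careful with client data" statement.
# Tool names and classification labels are illustrative only.

POLICY = {
    # tool -> data classifications approved for that tool
    "claude_enterprise": {"public", "internal", "client_confidential"},
    "personal_free_tier": {"public"},
    "unvetted_plugin": set(),  # nothing approved until reviewed
}

def may_use(tool: str, classification: str) -> bool:
    """Answer the specific question: can this data go into this tool?"""
    return classification in POLICY.get(tool, set())
```

With rules in this shape, `may_use("personal_free_tier", "client_confidential")` returns `False` and the answer is the same for everyone who asks, which is the whole point of a policy.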
Understand Your Blast Radius
When we turned on Claude Enterprise, we didn’t enable everything at once or for everyone at once. No email connector. No file storage integration. We started with capabilities where if someone made a mistake, the damage was contained. Then we expanded connector by connector, each time asking: What’s the worst that could happen if this is on?
Audit What You Already Have
Your platforms are shipping AI features constantly. Snowflake added Cortex. Databricks has Mosaic AI. Tableau, Sigma and Power BI all have AI capabilities now. Some are on by default. You’re probably paying for capabilities you’re not using, and some of what’s enabled might not meet your security requirements.
Before you add anything new, know what you already have.
Governance at the Speed of Technology
A committee that meets quarterly is reviewing decisions that are already six months stale. Ours meets twice a week now, and it’s just keeping up.
The Gap Nobody’s Talking About
There’s a specific gap in AI adoption that most organizations haven’t identified yet, and it’s one of the biggest sources of untapped value.
Most people’s experience with AI is a chat window. They type a question, get an answer, maybe use it to draft an email or summarize a document. That’s useful, but it’s the shallow end of the pool.
The deep end is tools like Claude Code, Cursor and platform-native AI development capabilities. The gap between what someone can do with a chat interface and what they can do with these tools is enormous. We watched non-developers at our company kickoff build working applications in a single session. Not toy demos. Real tools that solved real problems.
Most of your technical people don’t know these capabilities exist, let alone how to use them on their own codebase.
But with great power comes great responsibility. More people building more things faster means everything downstream must evolve. Code review processes. Deployment pipelines. Security scanning. When AI lets a data analyst build and deploy a Streamlit app in an afternoon, you need to ask: Who reviewed the code? Where is it running? What data does it access? Who maintains it after the person who built it moves on? You also need training across this entire lifecycle so people can do all of this safely.
The gap between “anyone can build it” and “anyone can safely deploy it” is where the risk lives. And it’s where most organizations have no infrastructure at all.
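Those downstream questions can become a lightweight deployment gate rather than a tribal-knowledge checklist. A minimal sketch, assuming a hypothetical metadata record (the field names are illustrative, not a real review system):

```python
# Hypothetical example: turning the "who reviewed it, where does it run,
# what data does it touch, who maintains it" questions into a simple
# pre-deployment gate. Field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class AppRecord:
    """Metadata a review gate might require before an AI-built app goes live."""
    name: str
    code_reviewed_by: str = ""
    deployment_target: str = ""
    data_accessed: list = field(default_factory=list)
    maintainer: str = ""

def blockers(app: AppRecord) -> list:
    """Return the unanswered questions that should block deployment."""
    missing = []
    if not app.code_reviewed_by:
        missing.append("Who reviewed the code?")
    if not app.deployment_target:
        missing.append("Where is it running?")
    if not app.data_accessed:
        missing.append("What data does it access?")
    if not app.maintainer:
        missing.append("Who maintains it?")
    return missing
```

An app with no reviewer, target, data inventory or maintainer on record returns all four questions; filling them in clears the gate. The design choice is that the gate blocks on missing answers, not on the answers themselves, so it adds accountability without slowing the builder down.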
What This Looks Like in Practice
If you’re waiting for the perfect data strategy before starting with AI, you’re making the same mistake as the companies in 2000 who were waiting for the perfect e-commerce platform. The technology isn’t going to slow down while you get organized.
Here’s what we’ve seen work, both internally and with our clients:
- Find your power users. They already exist. The people experimenting on their own, the ones who light up when you mention AI. Give them sanctioned tools, clear boundaries and room to run.
- Write a real policy. Not principles. Specifics. What data can go into which tools. What requires review. What’s off limits. Make it short enough that people actually read it.
- Train for actual jobs. Your data architect needs different training than your project manager. Make it hands-on, make it specific to their workflow, and show them tools they don’t know exist. Foundational AI literacy matters, but that’s not where the ROI lives.
- Think about blast radius. Start with low-risk capabilities and expand outward. Every new connector, every new integration: What’s the worst case if someone makes a mistake here?
- Stand up governance that can keep pace. Weekly, not quarterly. Empowered to make decisions, not just review them.
- Treat this as an operating model, not a project. AI adoption doesn’t have a finish line because the technology doesn’t have one. The organizations that declare victory and move on will discover six months later that everything changed and nobody updated anything.
Why We Work This Way
We built this playbook by running it ourselves. InterWorks rolled out AI across a 250-person global consulting organization, starting with a small group of trusted power users, writing a usage policy before rolling out an enterprise tool, launching with limited capabilities, learning how people used them and expanding from there.
We got things wrong. Generic training was a waste of time. We put integrations on the roadmap before discovering roadblocks that made them impossible. Our original policy didn’t account for the growing list of AI capabilities in the everyday tools we’d already onboarded. Those lessons are baked into everything we bring to client work now.
We pair fifteen years of data platform expertise across Snowflake, Databricks, Tableau and Sigma with hands-on AI rollout experience across every department and role. We’re not advising from theory. We bring the same practitioners who built our internal rollout to your organization.
We work as an embedded partner. Not a six-month assessment that produces a binder. We sit alongside your team, help them work through the practical friction that shows up when AI gets used, and help the organization learn how to enable rather than bottleneck. The tools change, the capabilities evolve, new risks emerge. We stay through all of it.
The dot-com era proved that the technology was never the problem. The organizations that treated it as a performance lost. The ones that treated it as practice won. And right now, the same pattern is playing out with AI. The best time to stop performing and start practicing was six months ago. The second-best time is today.
