This blog post is Human-Centered Content: Written by humans for humans.
It Started with a Policy, Not a Tool
Before anyone at InterWorks touched an AI tool, every employee signed an AI Usage Policy. This wasn’t bureaucratic caution. It was the foundation. The policy set expectations for what’s acceptable, what’s not and what the consequences are. When things inevitably went sideways (and they did), we had something to point to.
Why this matters for you: Policy first isn’t optional. It protects the organization if something goes wrong and it gives people clarity about what they can actually do. Most organizations skip this step and pay for it later.
We Gave Power Users Room to Run
Our approach was never top-down mandated adoption. We started by identifying the people who were already experimenting, the ones using ChatGPT on their phones during lunch, and gave them safe, sanctioned tools to work with. A year-long Perplexity trial with a test group. Claude Code access starting in July 2025, progressively expanding to over 25% of the organization.
The pattern was consistent: enable the motivated, make access and training available for everyone, and let adoption spread through demonstrated value rather than mandatory rollouts. The people who weren’t interested watched from the sidelines until they saw what their colleagues were producing. Then they wanted in.
Why this matters for you: You don’t need everyone on board on day one. You need the right people experimenting safely, and you need the results to be visible enough that others want to follow.
We Launched Small and Expanded by Blast Radius
When we rolled out Claude Enterprise at the start of 2026, we didn’t turn on everything at once. No Box connector. No Slack. No Outlook integration. We started with a version that could work with local files, Excel and PowerPoint: things where, if someone made a mistake, the damage was contained to their own machine.
Then we expanded connector by connector, each time asking the same question: What’s the worst that could happen if this is on? Email and file storage connectors are where the real risk lives. Before turning those on, we had to understand what sensitive information lived where, fix years of default sharing settings that created exposure when AI could suddenly search across everything, and make sure access controls were actually right.
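As a rough illustration of what that cleanup can look like (a sketch, not our actual tooling), a first pass can be as simple as scanning an exported sharing report and flagging anything an organization-wide connector would suddenly be able to read. The file name and column names here are hypothetical; swap in whatever your storage platform’s admin export produces:

```python
import csv

# Hypothetical export of file-sharing settings from your storage platform.
# Column names are illustrative; adjust to what your admin console produces.
REPORT_PATH = "sharing_report.csv"

# Sharing scopes that become risky once an AI connector can search everything.
OVERSHARED_SCOPES = {"anyone_with_link", "entire_company"}

def find_overshared(report_path: str) -> list[dict]:
    """Return rows whose sharing scope is broader than intended."""
    flagged = []
    with open(report_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("sharing_scope") in OVERSHARED_SCOPES:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in find_overshared(REPORT_PATH):
        print(f"{row['path']}: shared as '{row['sharing_scope']}', owner {row['owner']}")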
Why this matters for you: Think about blast radius for every capability you enable. Start with the low-risk ones and work outward. Organizations that have been more locked down may have less cleanup work. But most, especially those who’ve been loose with permissions over the years, are sitting on real exposure they don’t know about.
We Discovered AI Was Already Everywhere
Choosing Claude was only part of the story. We realized AI was already embedded across our entire toolset. Office products, Figma, Adobe, Notion, project management tools. Features quietly enabled, sometimes by default. What used to be harmless (an employee using their own todo app) could now be a vector for data leaving the organization into external models used for training.
We had to audit everything. What AI features are active? What data are they accessing? What are the training policies? Are we making intentional decisions, or are we just accepting defaults?
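One lightweight way to keep those answers in one place (a sketch, not a prescription) is an inventory that records each tool’s AI features and whether the current setting was an intentional decision or an accepted default. The tool names below are examples; the fields are the point:

```python
# Illustrative inventory of AI features across the toolset.
ai_inventory = [
    {"tool": "Figma", "feature": "AI design suggestions", "enabled": True,
     "data_accessed": "design files", "trains_on_data": False, "reviewed": True},
    {"tool": "Notion", "feature": "Notion AI", "enabled": True,
     "data_accessed": "workspace pages", "trains_on_data": None, "reviewed": False},
    {"tool": "Snowflake", "feature": "Cortex", "enabled": False,
     "data_accessed": "warehouse tables", "trains_on_data": False, "reviewed": True},
]

# Anything enabled but never reviewed is an accepted default, not a decision.
needs_review = [f for f in ai_inventory if f["enabled"] and not f["reviewed"]]
unknown_training = [f for f in ai_inventory if f["trains_on_data"] is None]

for item in needs_review:
    print(f"REVIEW: {item['tool']} / {item['feature']} is on but was never reviewed")
for item in unknown_training:
    print(f"UNKNOWN: {item['tool']} / {item['feature']} has no confirmed training policy")
```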
Why this matters for you: Your platforms are shipping AI features constantly. Snowflake added Cortex. Databricks has Mosaic AI. Tableau, Sigma and Power BI all have AI capabilities now. Some of these are on by default. You’re probably paying for capabilities you’re not using, and some of what’s enabled might not meet your security requirements. Part of any AI engagement should be helping you see what’s already there.
Security and Compliance Required Specifics, Not Principles
“Be careful with client data” doesn’t work as a policy. People needed plain answers: Can I paste this client document in? Can I upload this spreadsheet? Can I ask about this project by name?
We worked with SecOps and Legal to get specific. The answers depend on the tool, the data classification and the client. We upgraded our policies, did a thorough review and built a continual process for revisiting user needs, emerging capabilities and policy gaps. Many things that were “nice-to-haves” in the past became mandatory with technology as powerful as AI.
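To show what “specific” means in practice, here is a minimal sketch of that kind of lookup: given a tool and a data classification, it returns a plain answer instead of “be careful.” The classifications, tool names and rules are illustrative, not our actual policy:

```python
# Illustrative policy lookup: tool + data classification -> plain answer.
# Categories and rules are examples, not a real policy.
ALLOWED = {
    ("claude_enterprise", "public"): True,
    ("claude_enterprise", "internal"): True,
    ("claude_enterprise", "client_confidential"): True,   # enterprise terms, no training on inputs
    ("personal_chatgpt", "public"): True,
    ("personal_chatgpt", "internal"): False,
    ("personal_chatgpt", "client_confidential"): False,
}

def can_i_paste(tool: str, classification: str, client_opted_in: bool = True) -> str:
    """Answer the question people actually ask: can I put this data in this tool?"""
    permitted = ALLOWED.get((tool, classification), False)
    if classification == "client_confidential" and not client_opted_in:
        permitted = False
    return "Yes" if permitted else "No - ask the governance intake channel"

print(can_i_paste("claude_enterprise", "client_confidential", client_opted_in=True))
print(can_i_paste("personal_chatgpt", "internal"))
```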
AI-generated content also increases volume. More content going out the door means more opportunities for something to slip through that shouldn’t. Can your review processes handle twice the throughput? Do you know what’s being generated? Who’s reviewing it before it goes to a client?
Why this matters for you: Every customer needs this conversation, and the answers depend on your industry, your clients and what kind of data you handle. Generic guidance won’t cut it.
DevOps Became a Governance Problem
More people generating more code meant our entire development pipeline had to evolve. Code review processes, deployment strategies, security scanning. We adopted Railway for rapid deployment and implemented Cloudflare Zero Trust to secure what we deploy. We created automated scans of GitHub to make sure deployments happen in ways we can monitor, meet our requirements and stay under our governance once they go live.
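As a sketch of what one of those scans might look like (using the public GitHub REST API; the org name, token variable, approved-repo list and deploy-marker files are placeholders), this walks an organization’s repositories and flags any that carry a deployment config but aren’t on the approved list:

```python
import os
import requests

# Placeholders: set your own org, token and approved list.
ORG = "example-org"
TOKEN = os.environ["GITHUB_TOKEN"]
APPROVED_REPOS = {"internal-portal", "client-dashboards"}
DEPLOY_MARKERS = ["railway.json", "railway.toml", "Dockerfile"]

HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

def list_repos(org: str) -> list[str]:
    """List repository names in the org (first 100; add pagination for more)."""
    resp = requests.get(f"https://api.github.com/orgs/{org}/repos",
                        headers=HEADERS, params={"per_page": 100})
    resp.raise_for_status()
    return [repo["name"] for repo in resp.json()]

def has_deploy_config(org: str, repo: str) -> bool:
    """True if the repo root contains any known deployment marker file."""
    for marker in DEPLOY_MARKERS:
        resp = requests.get(f"https://api.github.com/repos/{org}/{repo}/contents/{marker}",
                            headers=HEADERS)
        if resp.status_code == 200:
            return True
    return False

for repo in list_repos(ORG):
    if has_deploy_config(ORG, repo) and repo not in APPROVED_REPOS:
        print(f"UNGOVERNED DEPLOYMENT: {repo} has a deploy config but is not approved")
```

A scheduled job like this doesn’t replace review; it just makes sure nothing deployable exists that the governance group doesn’t know about.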
This isn’t just an engineering concern. It’s an organizational one. When AI lets a data analyst build and deploy a Streamlit app in an afternoon, you need to ask: Who reviewed the code? Where is it running? What data does it access? Who maintains it after the person who built it moves on to the next thing? The gap between “anyone can build it” and “anyone can safely deploy it” is where risk lives.
Why this matters for you: If your teams are starting to build with AI-assisted coding tools, your DevOps and security practices need to keep pace. This is true whether the code is coming from your own people, contractors or partners.
Governance Had to Be a Living Process
We stood up a governance committee: Legal, SecOps, IT, HR/Training, Communications. Not a quarterly review board. A group that talks multiple times a week, because AI moves too fast for monthly meetings. They represent their domains and can make decisions on behalf of their departments.
We also created a clear intake point. Where do you go if you need access? If you have a question? If you need training? Without this, people either stop trying or find their own path around the process. Both outcomes are bad.
Why this matters for you: You need this group, and you need it empowered to make decisions. It doesn’t have to be big. The right people need to be in the room, and they need to be able to act. Without this, everything stalls or decisions get made inconsistently.
Training Had to Be About Actual Jobs
Nobody needed a lecture on how large language models work. They needed to know how to use the tools for what they do every day. A data architect needs different training than a project manager, who needs different training than a content writer.
People were also all over the map in terms of readiness. Some jumped in immediately. Others wanted nothing to do with it. You have to plan for both. Early adopters need room to run. Hesitant people need support, not pressure.
The difference between what people can do with an AI chat interface and what they can do with tools like Claude Code, Cursor or Cortex Code is massive. Most people don’t know these tools exist, let alone how to use them. Exposure and hands-on training close this gap faster than documentation ever will.
Why this matters for you: Generic AI training is a waste of time. It has to be practical and role-specific. And the education gap around coding tools specifically is huge. Your technical people need to see these tools in action on their own work to understand what’s possible.
This Isn’t a Project. It’s an Operating Model.
A year in, we’re still at it. The AI rollout committee still meets weekly. We’re still onboarding people. We’re still finding AI features in tools we thought we’d already audited. The policy we wrote continues to be revised.
That’s the point. AI adoption doesn’t have a finish line because the technology doesn’t have a finish line. New capabilities ship, new risks surface, people develop new skills and ask new questions. The organizations that treat this as a project will finish, declare victory and discover six months later that everything has changed and nobody updated anything.
The organizations that treat this as an operating model, one that evolves as fast as the technology, are the ones that will keep getting value from it.
What We’ve Learned That Applies to Every Organization
Every organization’s version of this story is different. Some are just starting and don’t know where to begin safely. Others are already deep into agentic coding and need help evolving their engineering processes to keep pace. Most are somewhere in between, with pockets of experimentation, gaps in governance and a growing sense that the current approach isn’t scaling, or that they’re falling behind the rapid pace of change.
The specifics matter more than the framework. Your industry, your data sensitivity, your existing platform stack, your team’s readiness — these shape what “doing AI well” actually looks like. A 500-person financial services firm and a 200-person manufacturing company aren’t going to run the same playbook. They shouldn’t.
What doesn’t change is the approach. You have to start with where people actually are, not where a maturity model says they should be. You have to make policy specific enough to be useful. You have to govern at the speed the technology moves. And you need people in the room who’ve done this work, not just studied it.
That’s how we operate. We’ve been through the hard parts, made the mistakes and built the scar tissue that only comes from doing it yourself. If your organization is working through any part of this, we’d like to hear what you’re seeing.
