This blog post is AI-Assisted Content: Written by humans with a helping hand.
Last year was hyped as the “Year of the Agent.” Looking back, that prediction largely played out. We saw autonomous frameworks explode, agentic tools spread everywhere and a massive shift in how the industry talked about AI. After a year of breathless demos and “look what this can do!” excitement, 2026 is shaping up to be the morning after.
Leading voices from Stanford to Microsoft are predicting a “hangover” year. They see an anti-AI backlash brewing, driven by fatigue with low-quality generative slop and companies demanding to see receipts on ROI. The era of evangelism is ending. The era of auditing is beginning.
From where I sit leading the AI Center of Excellence at InterWorks, the reality is more nuanced. While the hype cycle is crashing, the utility cycle is just hitting its stride. For us, 2026 isn’t about the magic fading. It’s about the magic becoming mundane, common and pragmatic, which is to say, genuinely useful.
Here’s what we’re seeing on the ground.
AI Is Just Technology Now
There’s a useful essay from Princeton researchers Arvind Narayanan and Sayash Kapoor, published by Columbia University’s Knight First Amendment Institute, called “AI as Normal Technology.” The core argument is simple: AI isn’t a separate species, it’s a tool. Yes, a powerful one to be sure, but one that follows the same patterns as every other transformative technology before it.
That framing matters because it changes what “progress” looks like. Progress isn’t benchmark scores or demo videos. It’s the slow, messy, human work of figuring out where AI actually helps, where it doesn’t and what it takes to integrate it.
I used to give a presentation about data science and machine learning. My conclusion was always that we needed more engineers in business to apply models than data scientists to develop new ones. Consider this: It took nearly 40 years for electricity to impact businesses. That’s 40 years of electric dynamos existing before they showed up in productivity statistics. The technology worked. What took decades was redesigning factories, retraining workers and rethinking workflows around what electricity made possible.
We’re in that same phase with AI. The models are impressive. The productivity gains? They’re real, but they require the unsexy work of integration, engineering and change management. We have to capture new data, create new processes, rethink systems and work with end users to apply solutions to the messy, real-world environment our businesses exist within.
From “Copilots” to Curators
The biggest shift we’ve witnessed isn’t in the models themselves. It’s in how we relate to them.
In summer 2025, we had a handful of GitHub Copilot licenses. Today, 25% of our entire company is using Claude Code, and most of those users aren’t even software developers. They’re using Claude Code as a general-purpose agent to automate and accelerate all kinds of tasks.
In development, we’ve moved past the era of copilots (where AI finishes your sentence) to something closer to what Simon Willison calls “vibe engineering.” Our developers aren’t just writing code. They’re curating the context, rules and structure for agents that can run autonomously for over an hour without interruption. I’ve personally seen a huge jump in Claude Code’s independence by following Anthropic’s recent two-agent strategy (where a “planner” agent coordinates a “coder” agent).
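To make that concrete, here’s a minimal sketch of the planner/coder pattern using the Anthropic Python SDK. The prompts, plan format and model name are my own illustrative assumptions, not Anthropic’s official recipe (Claude Code handles this orchestration for you), but it shows the shape of the idea:

```python
# A minimal planner/coder sketch using the Anthropic Python SDK.
# The prompts and plan format here are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # any current Claude model works here


def ask(system: str, prompt: str) -> str:
    """Send one message and return the text of the reply."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=2048,
        system=system,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


def build(task: str) -> list[str]:
    # The "planner" breaks the task into small, concrete steps.
    plan = ask(
        "You are a planner. Break the task into short, numbered steps. "
        "Output one step per line and nothing else.",
        task,
    )
    steps = [line for line in plan.splitlines() if line.strip()]

    # The "coder" executes each step, with the full plan as context.
    return [
        ask(
            "You are a coder. Implement exactly the step you are given.",
            f"Overall plan:\n{plan}\n\nCurrent step: {step}",
        )
        for step in steps
    ]
```

The specific prompts don’t matter much. What matters is separating “decide what to do” from “do it,” which keeps the coder’s context small and focused, and that’s a big part of why longer autonomous runs hold together.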
This shift from “minutes of help” to “hours of autonomy” has changed what we build and how we build it. Want to try a different UI? Set the agents off to create a few variations and come back a few hours later to test the results. We’re also tackling projects we’d previously avoided because of the tedium, or never would have started because of the time required. Even more exciting, we’re building small, bespoke tools that previously didn’t have the ROI to exist. Think: a SaaS status page, automated server log analyzers, custom domain troubleshooters for Tableau Cloud, reports that summarize support tickets by customer.
None of this makes headlines. All of it makes margins.
The Capability-Reliability Gap
Here’s the part where I’m supposed to declare victory. I won’t.
The hardest lesson of 2025 was what researchers call the “capability-reliability gap.” The path to an impressive AI demo is easy (just check out any booth at your local tech conference). The path to wide-scale deployment is something else entirely. AIs will hallucinate confidently, lose context mid-task and fail in ways that are genuinely hard to predict.
We’ve learned this the hard way. Our internal dev channels are equal parts amazement at time saved and frustration at agents doing odd things. It’s all made harder because AI’s results aren’t predictable: what works one day might not work the next. The models are getting better, but “better” isn’t the same as “ready to hand to my executives.”
This is why the human-in-the-loop isn’t going away. If anything, as AI takes on more complex tasks, the quality of human oversight matters more, not less. What changes is the nature of the work: less execution, more judgment. Less typing, more checking.
The Anti-Slop Workflow
I agree with some of the AI commentators: 2026 is likely the year we see a growing AI backlash. However, I believe it will be aimed specifically at AI slop, the lazy, low-quality content generated for volume’s sake that’s taking over more and more of our online feeds.
But remember, in business, there’s zero backlash against competent utility.
In 2026, our consultants will keep using agents to automate the boring, high-friction parts of their day, from the mundane (project documentation) to the technically impressive (programmatically replacing data sources across complex Tableau environments). Not AI for AI’s sake, but AI where it delivers practical value for our customers.
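To give a flavor of that last example, here’s roughly what a data source swap looks like with Tableau’s open-source Document API (the tableaudocumentapi Python package). The file and server names are placeholders, and a real Tableau Cloud migration would layer the REST API on top of this for published content, but the core operation is straightforward:

```python
# A rough sketch using Tableau's open-source Document API
# (pip install tableaudocumentapi). File and server names are
# placeholders; a real migration across a Tableau Cloud environment
# would also involve the REST API for published content.
from tableaudocumentapi import Workbook

OLD_SERVER = "old-db.example.com"
NEW_SERVER = "new-db.example.com"

workbook = Workbook("sales_report.twb")

# Repoint every connection that still targets the old server.
for datasource in workbook.datasources:
    for connection in datasource.connections:
        if connection.server == OLD_SERVER:
            connection.server = NEW_SERVER

workbook.save_as("sales_report_updated.twb")
```

Loop that over a directory of workbooks, and a task that used to eat an afternoon becomes a script.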
The companies that win in 2026 won’t be the ones with the flashiest demos. They’ll be the ones who’ve quietly figured out where AI fits in the messy reality of our work.
The Verdict for 2026
If 2025 was about the possibility of the agent, 2026 is about the integration of the agent.
The magic isn’t disappearing. It’s becoming infrastructure: invisible, reliable, boring. And that’s exactly what it should be.
