This blog post is Human-Centered Content: Written by humans for humans.
As I work with AI agents like Claude Code on new problems, I’ve noticed a fascinating pattern: Ask me to build a web page and, with the AI’s help, it appears like magic. Ask me to use the same AI to clean up a messy dataset? Suddenly I’m the one doing most of the heavy lifting, carefully guiding my silicon partner through every step.
There’s a growing gap in outcomes and perspectives between software engineers and data engineers that comes down to a fundamental difference in their work. This gap reveals something profound about where we’re headed with AI in the enterprise.
AI is Great at Code
AI thrives in software engineering because the work is fundamentally about translating clear intentions into code. The problems are well-defined. The patterns are established. And most importantly, the code itself is the product.
I’m not saying that there aren’t challenges around context and complexity, but think about it: When you write a sorting algorithm, the context you need is minimal. You have inputs, outputs and a clear definition of success. The AI has seen millions of these patterns. It knows the dance.
But data? Data is different. Data is messy. Data has stories to tell that aren’t visible in its schema.
AI Hasn’t Figured out Data Yet
Last week, I threw some survey data at Claude Code, hoping for the same magical experience. Its first instinct? Jump straight into counting and aggregating — technically correct, completely useless. It generated metrics like “% of team members who responded” without knowing how many people were on each team. It was like watching a chef begin cooking without knowing what ingredients were in the fridge.
Here’s what the AI didn’t ask:
“What story are you trying to tell with this data?”
“Are those open-ended responses hiding gold we should mine first?”
“What biases might be lurking in how this was collected?”
“How does this connect to your actual business problem?”
“What are useful ways to make these responses actionable?”
The AI wanted to count. I needed it to think.
The Interrogation Gap
I call this the “Interrogation Gap.” It’s the space between what AI agents can execute and what they should explore. Current AI hasn’t been trained to be suspicious of data in the right ways. It doesn’t know to poke at it, question it, turn it upside down and shake it to see what falls out.
I had to be the interrogator, so I developed a process the AI would never have suggested but executed brilliantly once directed:
First, I made it slow down and look. I asked it to write a loop in Python to read each user’s response. As output, it created several note files that it read and updated iteratively. What tools are users mentioning? What problems do they face? What are surprising or useful suggestions? Suddenly, patterns emerged and the picture of what the analysis should be became clearer.
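In code, that first pass looked something like the sketch below. Everything here is illustrative: the sample responses, the keyword lists, and the `survey_notes.json` file name are stand-ins for the real dataset and for the LLM actually reading each response and deciding what to record.

```python
# A minimal sketch of the "slow down and look" pass, assuming free-text
# survey responses. All names and the note structure are hypothetical.
import json

# Stand-in survey responses; the real data was open-ended survey answers.
responses = [
    "I use pandas daily but struggle with slow exports.",
    "Mostly Excel; wish we had better dashboards.",
    "pandas plus a homegrown script; exports are painful.",
]

# Notes accumulated across responses, re-read and updated each iteration.
notes = {"tools": {}, "problems": {}, "suggestions": []}

for response in responses:
    text = response.lower()
    # Crude keyword spotting stands in for the LLM reading each response
    # and deciding which observations belong in its note files.
    for tool in ("pandas", "excel"):
        if tool in text:
            notes["tools"][tool] = notes["tools"].get(tool, 0) + 1
    for problem in ("slow", "painful", "struggle"):
        if problem in text:
            notes["problems"][problem] = notes["problems"].get(problem, 0) + 1
    if "wish" in text:
        notes["suggestions"].append(response)

# Persisting the notes lets the next pass re-read and refine them.
with open("survey_notes.json", "w") as f:
    json.dump(notes, f, indent=2)
```

The point isn’t the keyword matching, which an LLM replaces with genuine reading; it’s the shape of the loop: look at every response, keep running notes, and only then decide what the analysis should be.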
Then, we built a model from those observations. Those themes became categories. We could loop back through the dataset and annotate each row with what tools were mentioned or issues encountered. This turned the categories into something that could be quantified.
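That second pass can be sketched as a simple annotation step. The category names and checks below are hypothetical, standing in for the themes that actually emerged from the first read-through:

```python
# Hedged sketch of the annotation pass: each category derived from the
# first read-through becomes a boolean flag on every row.
CATEGORIES = {
    "mentions_pandas": lambda r: "pandas" in r.lower(),
    "export_pain": lambda r: "export" in r.lower(),
}

# Stand-in responses; the real loop ran over the full survey dataset.
responses = [
    "I use pandas daily but struggle with slow exports.",
    "Mostly Excel; wish we had better dashboards.",
]

# Annotate each row with the category flags.
annotated = [
    {"response": r, **{name: check(r) for name, check in CATEGORIES.items()}}
    for r in responses
]

# The flags turn qualitative themes into something quantifiable.
share_export_pain = sum(row["export_pain"] for row in annotated) / len(annotated)
```

Once every row carries these flags, the themes can be counted, sliced, and compared like any other column.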
Finally, we connected it to the business context. Not just “45% of respondents mentioned X,” but “This cluster of power users has a workflow we’re not supporting, and here’s what they’re doing instead.”
There was still a speed benefit, but the bigger win wasn’t time saved: the analysis itself was better because an LLM could give real attention to every response and take notes. The result went beyond keyword matching and sentiment analysis, even if it didn’t save a huge amount of time. To get there, I had to bring the strategy. I had to be the detective.
Why This Matters for Your Organization
This gap is a critical insight for anyone implementing AI in their organization. It speaks to the type of issues we encounter applying AI to solutions and where we need to put our own efforts to get good results.
If you’re in software development, you’re probably already seeing massive productivity gains. In my teams, we’ve been able to tackle technical debt like upgrading PHP versions or improving our code coverage, work we wouldn’t have taken on previously. Your developers are probably shipping faster; your backlog is shrinking too. Software life is good. The AI can handle more of the “what” because the “why” is often embedded in the requirements. Our software experts are more focused on architecture and quality than ever before.
But if you’re in data, analytics, or any field where context is king? You need a different playbook. Your experts are more valuable than ever. AI can help them build Python scripts or bar charts faster than ever, but they are the ones who know which questions to ask, which assumptions to challenge, which threads to pull.
Right now, we see AI enhancing experts, not replacing them. It takes over the mechanical parts of analysis: the writing of SQL queries, the generation of charts, the formatting of reports. It cannot replace the human who knows that last quarter’s spike was due to a one-time event. The analyst who remembers that this dataset excludes your biggest customer. The expert who can spot when the AI is confidently analyzing the wrong thing.
Those experts are irreplaceable.
Your Next Challenge
Here’s my challenge to you: Next time you hand data to an AI agent, don’t start with “analyze this.” Start with “help me understand this.” Make it show you the data through different lenses before it starts calculating anything.
You might be surprised what stories emerge when you slow down the robot and speed up the detective work.
Because in the end, software engineering is about building things right. But data work? That’s about building the right things. It’s a distinction that makes all the difference.