Every organization has someone like this: the person who knows everything about their corner of the business, who built the systems everyone else depends on and who, when something breaks, can diagnose it in 30 seconds. They are, without question, indispensable.
Unfortunately, they also speak a different language than the rest of the team.
The expert’s messages are often dense, technical and full of shorthand. When one arrives in Slack, most recipients do the same thing: They paste it into a search engine, read half a result, get confused and send a follow-up message asking for clarification. The expert answers, an hour later there’s another message, and the cycle repeats.
By the end of the day, your high-value expert has wasted hours translating themselves for people who genuinely needed to understand something but couldn’t decode what they read.
This is what a knowledge silo looks like in practice. Often, it’s not purposefully hoarded information, but simply a language barrier.
The Problem Isn’t the Expert
The instinct when you spot this pattern is to ask why the expert isn’t communicating more clearly. But that framing misses the real issue. The expert is communicating clearly, just in their own native language. The audience simply doesn’t speak the same language.
The audience is stuck in the same pattern on their end. No one wants to ask a question that makes them look uninformed, and they often don’t know what question to ask in the first place. So they either wait, guess or break the expert’s concentration with yet another message.
Subject matter experts can spend somewhere between a quarter and nearly half of their working time translating themselves for colleagues who can’t decode their domain. That’s not a minor inefficiency. It’s hours of deep work traded for clarification threads. Worst of all, the people best positioned to solve your hardest problems are the ones stuck doing this.
Luckily, the fix for this problem follows a repeatable pattern. The same structure works whether the silo is an infrastructure team, an unwieldy codebase, a legal department or a data science team. We’ve used this pattern at InterWorks across a few different contexts, and we’ve been amazed at how well it’s worked each time.
What Generic AI Gets Wrong
When you first think about solving this with AI, the instinct is to reach for a general-purpose assistant. That helps, but only to a point. Generic AI knows a lot about most domains. Ask it about cloud infrastructure and you’ll get a reasonable answer about cloud infrastructure in general.
But “in general” is rarely what your team needs. They need to know about your specific systems, your deployment configuration, your naming conventions, your application’s behavior in a particular scenario. Generic AI answers a question nobody on your team is asking.
That’s the distinction: Generic AI knows the domain. Context-loaded AI knows your domain.
Two Tools, One Pattern
Take a real scenario: Your infrastructure engineer manages a Kubernetes environment and your application developers are PHP/Laravel people. The infrastructure engineer can read a pod’s CrashLoopBackOff status and know immediately whether it’s a misconfigured deployment manifest, a failing health check or a memory limit the app is blowing past.
Your developers see the same error and have no idea where to start. When the engineer drops a Slack update about a rolling deployment, ingress rules or a namespace conflict, most of the team pastes it into a search engine, reads half a result, and sends the infrastructure engineer a silly question.
The fix is to load an AI assistant with the right context: A jargon glossary that maps infrastructure shorthand to developer-friendly language, documentation of your actual Kubernetes setup (your specific namespaces, services and naming conventions), knowledge of how your application behaves during deployments, and a consistent output format for every response.
You could even set up a system like this to run in two modes. In the first, someone could paste in a message from the infrastructure engineer and get back plain English plus a “what does this mean for you?”
In the second mode, the developer could ask a question directly and get an answer drawn from the loaded context. When the tool doesn’t know, it could say so and then hand back the exact question to send the expert, already phrased in their vocabulary.
That escalation path matters. The goal isn’t to replace the expert. It’s to make the conversations that do reach them land better. When developers have already done some translation work themselves, the questions arrive better-formed and the conversations are shorter.
The same dynamic shows up with large codebases. A product with years of accumulated complexity creates the same bottleneck: Only a few people know it well, and everyone else has to interrupt them to get answers. New team members spend significant time hunting through files or interrupting senior developers with questions, and over time, even experienced developers aren’t immune.
You could build an AI assistant loaded with internal developer documentation, frontend user documentation, and the codebase itself. Ask the system a question in plain English, then watch as it searches a defined sequence of sources, returns an answer with source files cited and flags its confidence level. When it doesn’t find the answer, it says so and flags the gap for someone to document.
That in itself is often a useful side effect: The tool surfaces what hasn’t been documented yet, which over time improves the documentation itself.
How to Build One
Both of these can be built as Claude Code skills: Markdown files that define a role, load context and give the AI a consistent job to do. You don’t need a custom application or a dedicated AI team. You need a clear definition of what the tool should do, the right reference material, and a few hours.
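As a sketch of what that could look like, here is a minimal skill file for the infrastructure translator. Every detail in it (the skill name, the file paths, the mode names) is illustrative, not a prescribed layout:

```markdown
---
name: infra-translator
description: Translates infrastructure updates into developer-friendly language
---

# Role
You translate messages from the infrastructure team for our PHP/Laravel
application developers.

# Context
- Read `references/jargon-glossary.md` for our infrastructure shorthand.
- Read `references/k8s-setup.md` for our actual namespaces, services and
  naming conventions.

# Modes
1. **Translate:** Given a pasted Slack message, return a plain English
   summary, a "what this means for you" section, and whether action is
   required.
2. **Ask:** Given a developer question, answer from the loaded context only.
   If the answer isn't there, say so and draft the exact question to send
   the expert, phrased in their vocabulary.
```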
The output format is worth thinking about early. What does this AI produce, for whom, and in what shape? The infrastructure translator we built produces a plain English summary, a “what this means for you,” and a flag for whether action is required. The codebase navigator returns an answer, the source files it drew from, and a confidence level. Getting that format right before you build the reference material saves you from having to redo things later.
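Concretely, a response in that translator format might look something like this (the incident details are invented for illustration):

```markdown
**Plain English summary:** The checkout service was restarted with new
settings. Kubernetes briefly ran old and new copies side by side, so there
was no downtime.

**What this means for you:** If you deployed application code in the last
hour, double-check that your changes are live; the restart may have rolled
pods back to the previous image.

**Action required:** Yes — confirm your latest deploy is running.
```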
The reference material is where most of the value comes from. Don’t skip this part! For a communication translator, that means a jargon glossary, which you can build with the expert’s help or by drawing from real Slack messages. You’ll also need documentation of your actual systems (not generic K8s docs, but your namespaces, your services, your naming conventions), plus a handful of real examples showing what a good translation looks like. For a codebase navigator, load in the developer documentation, the user-facing documentation and the codebase itself. You might also reference external documentation for key dependencies, such as underlying frameworks.
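The glossary itself can be simple. A few entries in the style we mean (the plain English column is a generic example, not a description of your environment):

```markdown
| Shorthand          | Plain English                                          |
|--------------------|--------------------------------------------------------|
| CrashLoopBackOff   | The app keeps crashing on startup, so Kubernetes keeps restarting it |
| Rolling deployment | Updating the app a few copies at a time so it never fully goes down  |
| Namespace conflict | Two things tried to claim the same isolated section of the cluster   |
```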
Don’t forget to include escalation behavior. When the AI reaches the edge of what it knows, the wrong answer is “make something up.” It should say it doesn’t know, then generate the exact question to send the expert, already phrased in their vocabulary. That way even a miss is useful.
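In practice, a miss from the translator might come back looking like this (an invented example of the escalation behavior):

```markdown
I don't have documentation on how staging handles database migrations, so I
can't answer this reliably. Suggested question for the infrastructure team:

> "Does the staging namespace run migrations as a Kubernetes Job during the
> rolling deployment, or do we run them manually before rollout?"
```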
Test it with real examples before rolling it out. Use actual messages from the expert. Use actual questions the team has asked. If the output doesn’t hold up, the reference material needs more work.
Where Else This Works
This pattern fits anywhere there’s a specialist whose expertise others need but can’t access on their own. DevOps engineers are the obvious case, but data scientists translating findings for business teams hit the same wall. So do lawyers explaining regulatory tradeoffs to product managers, financial analysts trying to connect P&L mechanics to department budgets, and security engineers fielding “are we hacked?” questions from people who don’t know what a CVE is.
The vocabulary changes, the reference docs change and the output format changes. The structure doesn’t change.
A system like this might take a few hours to build. After that, though, it runs on its own. Every colleague who gets a reliable answer without pulling the expert away from their current focus means hours back in the expert’s day.
Your experts don’t have to be the bottleneck. The knowledge they’ve built can reach further than they can.
