When I started running agent shadowing sessions at Mosaic, I went in without strong assumptions. I hadn't spent my career inside support organizations. I was just watching.
Which is probably why what I found surprised me as much as it did.
Session after session, across different companies, different team sizes, and different products, we kept getting stuck at the same place. Not near the end of the workflow, at the point of resolution, but right at the beginning.
Before the agent had written a single word of a response, before they'd looked up a knowledge article, before they'd even decided what kind of problem they were dealing with—that's where the time was going. That's where the friction lived.
I started calling it the "diagnosis step". And once I saw it, I couldn't unsee it.
Most conversations about AI in support focus on resolution: faster answers, automated responses, smarter search. But if support teams are spending most of their time before they even reach resolution, that's also where the real opportunity lies. And that's where most AI implementations miss the mark.
EMBED: Clip 4 — "What the Shadowing Process Reveals"
My advice to any VP of Support: Find a way to watch
If there's one thing I'd recommend to every support leader, it's this: find a way to sit next to your agents and observe how they actually work. We all know what the dashboard shows: ticket volume, average handle time, CSAT, and resolution rates.
But the dashboard doesn't show you what happens inside a case before it gets resolved. It doesn't capture the context switching, the Slack messages to senior SMEs, the three different tools an agent checks before they can even classify what kind of problem they're looking at.
That's exactly what our shadowing sessions are designed to surface. We spend real time with agents, watching their actual workflow, not the one they report. And what we find is almost always different from what their managers expect.
Support agents are an open book. They'll share everything: what works, what doesn't, what takes longer than it should. They won't hide the friction. They've just never been asked to surface it.
It's worth adding that the goal of these sessions isn't just to observe. It's also to identify exactly where in the workflow an improvement would have the biggest impact for your agents, whether that's time to resolution, escalation rate, case wrap-up time, or the tedious work that bleeds into hours after a shift ends.
What I consistently find is that the highest-impact opportunity isn't where anyone expects it to be.
The 80% problem: Most support time is spent before resolution even begins
Across every shadowing session we've run, a pattern emerges that I now think of as one of the most underappreciated facts about B2B support: Support agents spend roughly 80% of their time on diagnosis, before resolution even begins.
That means the majority of your agents’ time is spent understanding the problem, gathering the full context of the ticket, categorizing it correctly under the appropriate product line or issue type, and determining the first troubleshooting step. Not on the resolution itself.
The main reason is that B2B support is genuinely complex, and traditional knowledge environments aren’t designed to handle it. The teams we work with manage multiple product lines, often inherited through acquisitions they didn't choose. They're supporting enterprise customers with highly specific configurations. They're handling tickets that span multiple systems and require cross-functional knowledge that no single person or knowledge hub fully holds.
In that type of environment, diagnosing a ticket correctly is real intellectual work. It requires pulling together context from multiple places, applying judgment, and making decisions that directly affect whether the case gets resolved or escalated.
And when agents don't have what they need to make those decisions, they escalate, pushing the ticket to a senior engineer or another team. Then what happens? You're doubling the effort and compounding the delay.
EMBED: Clip 5 — "Where B2B Support Actually Breaks Down"
Unpacking the diagnosis step: what's actually happening
Let me make this concrete, because the diagnosis step sounds abstract until you've watched it play out in real time.
An agent receives a ticket. A customer is reporting an issue: something isn't working the way it should. Before the agent can do anything useful, they need to answer a set of questions:
- What product or feature is this actually about?
- What does this customer's configuration look like, and how does that affect what's happening?
- Has this issue come up before, and if so, how was it resolved?
- Who is the right person or team to escalate to if I need help?
- What's the correct first step to begin troubleshooting?
Answering those questions almost always requires jumping between systems. Confluence for internal documentation. A knowledge hub for product information. Slack to see if a senior SME has tackled something similar. Possibly a CRM to check account context. Multiple browser tabs, multiple tools, multiple interruptions, all before a single line of the response has been written.
These are what I call the silent killers. Each one seems small, but they compound.
A wrong categorization at step one could mean the wrong SME gets looped in at step four.
A missing knowledge article might mean the agent has to interrupt a senior engineer who's already handling three other things.
A new product that launched without an adequate enablement plan often leads to the entire team guessing on answers for months.
Every one of these friction points adds time. Collectively, they account for most of the time support teams spend on any given case: time that never shows up in a resolution metric because the resolution hasn't happened yet.
EMBED: Clip 6 — "The Diagnosis Step"
Rethinking what AI in support should actually do
Most AI implementations in support are built around the assumption that the hard problem is finding the right answer. So they optimize for search: better knowledge retrieval, faster document surfacing, smarter chatbots.
That's not inherently wrong. But if you've watched an agent spend the first 40 minutes of a case just trying to understand what they're dealing with, you start to realize it's not the whole picture.
The opportunity that most teams are leaving on the table isn't faster answers. It's eliminating the friction that happens before the answer is even possible.
Imagine the difference it would make if an agent opened a ticket and immediately had full context, without switching a single tool. What would it mean if the system could recognize that this ticket looks like three others from last week and surface that pattern automatically? That's real intelligence.
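To make that concrete, here's a minimal sketch of what "this ticket looks like three others from last week" could mean in practice. It's an illustration under assumptions, not a product design: the ticket fields, the similarity threshold, and TF-IDF itself are all stand-ins, and a real system would use richer embeddings plus account and configuration metadata.

```python
# Minimal sketch: surface past tickets similar to a new one.
# Tickets are assumed to be dicts with hypothetical "id" and "text" fields;
# TF-IDF is a stand-in for whatever representation a real system would use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_tickets(new_text, past_tickets, threshold=0.35, top_k=3):
    corpus = [t["text"] for t in past_tickets] + [new_text]
    matrix = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    # Compare the new ticket (last row) against every past ticket.
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(past_tickets, scores), key=lambda p: p[1], reverse=True)
    return [(t["id"], round(s, 2)) for t, s in ranked[:top_k] if s >= threshold]

past = [
    {"id": "T-101", "text": "SSO login fails after SAML certificate rotation"},
    {"id": "T-102", "text": "Export to CSV times out on large reports"},
    {"id": "T-103", "text": "Users locked out after SAML cert update"},
]
print(similar_tickets("Customer cannot log in via SSO since certificate change", past))
```

Even something this crude makes the point: the pattern-matching an agent currently does from memory and Slack archaeology is exactly the kind of context a system can hand them up front.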
This is also where I've had to rethink what success looks like in support. Traditionally, success means the agent found a knowledge article, referenced a source, and resolved the ticket. But finding the information isn't the same as resolving the problem.
There's still the work of applying that information to a specific customer's specific situation, and that work is where judgment, context, and the right tooling matter most.
EMBED: Clip 10 — "The Unit of Value Is the Case Result"
How to start solving the 80% problem
If you're a support leader reading this, here's what I'd suggest as a starting point:
Run your own version of a shadowing session
You don't need a formal process. Block two hours and sit beside a senior agent. Not to evaluate them, but to observe and ask questions. Ask them to narrate what they're doing and why. Count how many tools they open before they write their first response. Notice where they pause, where they search, where they have to ask someone else.
That observation alone will tell you more about where AI can have a real impact than any vendor demo.
Map your diagnosis friction points
Once you've done that observation, list out the specific friction points you saw.
- Context switching between which tools?
- Knowledge gaps in which product areas?
- SME dependencies that could be replaced with better documentation?
- New products without enablement coverage?
These are specific, addressable, and, importantly, measurable problems. You can track how long agents spend in the diagnosis phase before and after an intervention.
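To illustrate the measurement side, here's a rough sketch of one way to estimate diagnosis-phase time from a ticketing export: take the gap between ticket creation and the first substantive action (first response draft, first categorization) as a proxy. The field names and timestamps below are hypothetical and depend entirely on your tooling.

```python
# Sketch: estimate diagnosis-phase time as the gap between ticket creation
# and the first substantive action on the case.
# "created_at" and "first_action_at" are hypothetical export fields.
from datetime import datetime
from statistics import median

def diagnosis_minutes(tickets):
    gaps = []
    for t in tickets:
        created = datetime.fromisoformat(t["created_at"])
        first_action = datetime.fromisoformat(t["first_action_at"])
        gaps.append((first_action - created).total_seconds() / 60)
    return median(gaps)

# Illustrative values only: compare the same metric before and after a change.
before = [
    {"created_at": "2024-05-01T09:00:00", "first_action_at": "2024-05-01T09:42:00"},
    {"created_at": "2024-05-01T10:10:00", "first_action_at": "2024-05-01T11:05:00"},
]
after = [
    {"created_at": "2024-06-03T09:00:00", "first_action_at": "2024-06-03T09:15:00"},
    {"created_at": "2024-06-03T10:10:00", "first_action_at": "2024-06-03T10:31:00"},
]
print(f"median diagnosis time: {diagnosis_minutes(before):.0f} min -> {diagnosis_minutes(after):.0f} min")
```

It's a proxy, not a perfect measure, but it turns "where does the time go" into a number you can watch move.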
Redefine what AI adoption means for your team
Before you run any AI pilot, agree on what success actually looks like at the ticket level. Not usage metrics. Not how many times agents opened the tool. The question to answer is: did AI meaningfully change what happened on this case?
If you can't answer that question, you don't yet have the right measurement framework. And that's worth solving before you invest in the technology.
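Purely as a sketch of what a ticket-level measurement framework could look like: keep a short, human-judged record per case, and count only the cases where AI changed something material. Every field here is an assumption about what "meaningful change" means; adapt it to your own workflow.

```python
# Sketch: a per-case record for judging whether AI changed the outcome.
# The fields are assumptions, not a standard; a human reviewer fills them in.
from dataclasses import dataclass

@dataclass
class CaseReview:
    case_id: str
    ai_used: bool
    changed_categorization: bool = False  # AI corrected the initial triage
    changed_first_step: bool = False      # AI altered the first troubleshooting step
    avoided_escalation: bool = False      # case resolved without a hand-off

    @property
    def ai_changed_case(self):
        return self.ai_used and (
            self.changed_categorization
            or self.changed_first_step
            or self.avoided_escalation
        )

def impact_rate(reviews):
    used = [r for r in reviews if r.ai_used]
    return sum(r.ai_changed_case for r in used) / len(used) if used else 0.0

reviews = [
    CaseReview("C-1", ai_used=True, changed_first_step=True),
    CaseReview("C-2", ai_used=True),
    CaseReview("C-3", ai_used=False),
]
print(f"AI changed the case on {impact_rate(reviews):.0%} of AI-touched cases")
```

Note what this deliberately ignores: logins, clicks, and queries. It measures the case result, which is the unit of value.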
Don’t let the debt compound quietly
The diagnosis step creates a compounding effect, racking up debt that accumulates in the background, invisible on any dashboard until the damage is already done.
Every ticket that takes longer than it should because context was missing, every escalation that happened because an agent couldn't find the right information fast enough, every knowledge gap that generated ten repetitive tickets before anyone wrote a document to address it—that's debt. And it will manifest as a team that's perpetually underwater, a ticket queue that never shrinks, and a group of talented, hardworking agents who still somehow can't get ahead.
The good news is that it's solvable. But solving it requires being honest about where the time actually goes and building AI for the part of the workflow where it can make the biggest difference.
And that starts with diagnosis.


