
Coaching at Scale: What Happens When You Can't Listen to Every Call

TrainBox Team
5 min read

Every sales leader knows that coaching drives performance. The research is clear, and experience confirms it: regular, specific feedback on real conversations makes reps better.

The problem is math. A manager with ten direct reports can't listen to every call. A director overseeing fifty reps has even less visibility. Coaching becomes a sampling exercise at best, and the samples are rarely representative.

So what happens to the conversations no one hears?

The visibility gap

In most sales organisations, managers review a fraction of their team's conversations. Maybe they join a few ride-alongs per quarter. Maybe they listen to a handful of recorded calls. Maybe they rely on what reps self-report.

This creates an illusion of oversight. Managers think they know how their team is performing. But they're seeing a curated slice: the conversations reps choose to highlight, the ride-alongs where everyone's on their best behaviour.

The everyday conversations, the ones that happen when no one's watching, are invisible. And that's where most of the selling actually occurs.

In life sciences, this gap has compliance implications. A rep might handle ten healthcare professional (HCP) conversations a week. The manager might observe one per month. What happens in the other thirty-nine?

Why traditional approaches don't scale

The obvious solution is to listen to more calls. But time is finite. A manager who spends their entire week reviewing recordings isn't doing the other parts of their job. And call reviews are slow: a thirty-minute conversation takes thirty minutes to review.

Some organisations try to solve this with technology. Conversation intelligence tools can flag keywords, measure talk ratios, and identify calls that might need attention. This helps prioritise, but it doesn't replace human coaching. The tool might tell you a rep talked too much, but it can't tell them how to talk less while still delivering the message.

Peer coaching is another option. Reps review each other's calls and provide feedback. This can work, but quality varies. Without training, peer feedback tends toward the superficial. And reps may not feel comfortable giving honest criticism to colleagues they work with daily.

The fundamental constraint remains: human coaching is high-value but low-scale. You can't have a manager personally develop every skill for every rep.

A different approach

What if you could separate the volume problem from the depth problem?

The volume problem is repetition. Reps need many opportunities to practise and get feedback. They need to try, fail, adjust, and try again. This is where traditional coaching breaks down. There's simply not enough manager time.

The depth problem is nuance. Some feedback requires human judgment: understanding why a conversation went wrong, helping a rep think through a complex situation, providing encouragement when someone is struggling.

AI roleplay tools address the volume problem. Reps can practise scenarios as often as they need, getting immediate feedback on their performance. They can work on specific skills, like objection handling or compliance-sensitive messaging, without waiting for a manager's calendar to open up.

This frees managers to focus on the depth problem. Instead of running basic practice sessions, they can spend their coaching time on the conversations that actually need human insight. They can review practice data to see where each rep is struggling, then provide targeted coaching on those specific areas.

The combination scales in a way that pure human coaching can't.

Making it work

Shifting to this model requires more than adding an AI tool. It requires rethinking how coaching happens.

Define what managers should focus on. If AI handles basic practice, what should managers spend their coaching time on? High-stakes situations. Complex deals. Reps who are struggling. Career development conversations. Be explicit about where human coaching adds the most value.

Connect practice to performance. Use data from AI practice sessions to inform coaching conversations. If a rep consistently struggles with pricing objections in practice, that's what the manager should focus on. If they've mastered the basics, push them toward more advanced scenarios.

Maintain quality standards. AI feedback is consistent but not always complete. Managers should periodically review AI practice sessions to ensure the feedback aligns with organisational standards. Calibrate the system based on what good actually looks like in your context.

Don't abandon observation. Managers should still observe real conversations. But the purpose shifts from basic skill assessment to deeper coaching. When you've already seen how a rep performs in practice, the ride-along becomes an opportunity to see how they translate those skills to real situations.

The opportunity

Coaching doesn't have to be a trade-off between quality and scale. With the right approach, you can give every rep the repetition they need to improve while focusing manager time on the coaching that requires human judgment.

The conversations no one hears don't have to be invisible. They just need a different kind of support.


TrainBox helps life science teams practise real conversations so they're ready when it matters.
