Coaching Practice

Progress tracking for coaching groups: what to measure and how

Most group coaches feel they should be tracking progress more rigorously but don't know what to track. The honest answer is that "client progress" is three different signals running on three different clocks, and most programmes mix them — which is why so many coaches end up with intuition instead of evidence about what is actually working in their cohorts.

Published 27 April 2026 · 7 min read

TL;DR

Group coaches need to track three distinct signals, not one. Behaviour (did members do the thing) is measured weekly with a short check-in poll. Individual goal progress is reviewed three times — intake, midpoint, and close — by having each member reread their own written goal and report against it. Group outcome — the single sentence that defines why the cohort exists — is assessed only at the end. Show each member their own data privately, and celebrate completed milestones publicly. Skip relative ranking; it accelerates drop-out from the people the programme is meant to help most.

What does "progress" actually mean in a coaching group?

The 2023 ICF Global Coaching Study reports that there are now around 109,200 certified coach practitioners worldwide, and a clear majority of professional coaching engagements include a tracking or measurement component. What the study does not report — because the answer is unsettled — is what coaches actually measure. The phrase "client progress" carries at least three meanings, and they do not move at the same pace.

The first meaning is behaviour: did the member actually do the thing this week — write the journal entry, walk the 30 minutes, send the difficult email. Behaviour is observable and high-frequency. It is the data that arrives every week regardless of how the member is feeling about the programme.

The second meaning is individual goal progress: at the start of the cohort, each member wrote down a goal in their own words. Are they closer to it now than they were? This is much slower-moving. It changes meaningfully over weeks, not days.

The third meaning is group outcome: the single sentence the manager wrote at the start of the cohort that explains why the group exists at all — "by the end of 10 weeks, every member will have a written professional development plan and one peer they trust to challenge them." This is binary at the end of the programme: yes or no.

Conflating these is the most common diagnostic problem in group coaching. A coach worried that "the group isn't progressing" usually means one of these three things specifically — and the intervention for each is different.

What four things should you measure in every coaching group?

Across coaches we work with inside Bitir, four metrics show up almost universally as the most useful. None of them require sophisticated tools. All of them can run from a weekly check-in poll plus the goal cards each member writes at intake.

1. Weekly check-in response rate. The percentage of members who answered this week's check-in. This is the single most useful number in any coaching group. It is a leading indicator of every other metric. Cohorts that hold above 80% response rate by week three almost always finish above 75% completion; cohorts that drop below 50% response rate by week three almost always finish below 45%.

2. Per-member behavioural compliance. Of the focus actions for this week (the daily walk, the journal entry, the difficult conversation), what proportion did the member complete? This is private to the member and the manager. The trend across weeks matters more than any individual week's number.

3. Self-rated movement on the individual goal. At week one, each member rates how close they are to their goal on a 1–10 scale. They re-rate at the midpoint and at the close. The shape of the trend — flat, slow climb, sudden jump — tells you more than the final number.

4. Programme completion rate. The proportion of original members who finish the cohort. This is the headline number for cohort retention. It is also the number that improves the most when the previous three are tracked openly.
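For coaches who keep their poll results in a spreadsheet or export, the first two metrics reduce to a few lines of arithmetic. A minimal sketch in Python — the member names, numbers, and data shape are illustrative assumptions, not figures from any real cohort:

```python
from statistics import mean

# Hypothetical poll data: one dict per week, mapping each member who
# responded to the fraction of that week's focus actions they completed.
roster = ["ana", "ben", "cara", "dev"]
weekly_responses = [
    {"ana": 1.0, "ben": 0.5, "cara": 1.0, "dev": 0.0},  # week 1: all four answered
    {"ana": 1.0, "ben": 1.0, "cara": 0.5},              # week 2: dev skipped
    {"ana": 0.5, "cara": 1.0, "dev": 1.0},              # week 3: ben skipped
]

def response_rate(week):
    """Metric 1: share of the roster who answered this week's check-in."""
    return len(week) / len(roster)

def member_compliance(member):
    """Metric 2: mean completion across the weeks this member responded.
    Private to the member and the manager; the trend matters, not one week."""
    done = [week[member] for week in weekly_responses if member in week]
    return mean(done) if done else 0.0

rates = [round(response_rate(w), 2) for w in weekly_responses]
print(rates)                                # → [1.0, 0.75, 0.75]
print(round(member_compliance("ana"), 2))   # → 0.83
```

The same per-member readings at intake, midpoint, and close give metric 3, and the share of the original roster still responding in the final week gives metric 4.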

How often should you check progress in a coaching group?

The cadence question is where most coaches over-engineer their measurement. Different signals need different clocks.

Behaviour and check-in response rate are weekly. They have to be. The whole point of weekly cadence — discussed in detail in our weekly check-in templates guide — is to give members and the manager fresh signal often enough to course-correct before patterns calcify.

Individual goal progress is reviewed three times only: at intake, at the structural midpoint, and at the close. The midpoint review is critical because the week-five drop-off in 8–12 week cohorts is real and well-documented; we cover its mechanics in how to structure a coaching cohort from day one. A self-rated 1–10 score at the midpoint, plus three sentences from the member on what has actually changed, is enough.

Group outcome is assessed once, at the end. The cohort either delivered on its single sentence or it didn't. Asking before the close produces noise; the answer changes weekly until the last fortnight.

One pattern to avoid: daily check-ins for every member in every group. They sound disciplined but produce check-in fatigue, and by week four you are typically getting fewer responses than a well-designed weekly cadence would have given you. There are exceptions — habit-formation programmes specifically benefit from daily logs — but the default for coaching groups is weekly.

How do you show progress to members without making it feel like surveillance?

This is the part most coaches get wrong. They track diligently for themselves, send the data to a private spreadsheet, and never close the loop with the people the data is about. Locke and Latham's 2002 American Psychologist review of 35 years of goal-setting research found that the act of seeing your own progress against your own goal is what changes behaviour — not the manager seeing it for you.

Three rules carry most of the weight.

Show each member their own data, in private, by default. A member's seven-day check-in pattern, their compliance against their own focus action, their self-rated movement on their own goal — visible to them, visible to the manager, not visible to other members. Inside Bitir this is the default behaviour of the goal card: members see their own; the manager sees everyone's.

Celebrate completed milestones publicly, without ranking. When a member hits a streak — eight weekly check-ins in a row, the assignment completed every week of the programme so far — post a celebration card visible to the group. Do not, in the same place, show who is in third place or last place. Coaches we speak to often say things like: "the moment I added a leaderboard to my fitness cohort, I lost two members who had been the slowest to start, and they were the ones I most wanted to keep." Public celebration without relative ranking is the difference between encouragement and performance pressure.

Make the midpoint review a structured conversation, not a report. At week five or six, each member rereads their original goal aloud to the group (or posts it as text), states their self-rated 1–10 progress, and names one change they will make in the second half. The data is the prompt, not the verdict.

How do you measure something soft, like confidence or wellbeing?

Soft outcomes are not unmeasurable. They are just unmeasurable in any single week. The trend across six or eight weekly self-ratings of "how confident did you feel this week, 1–10" is far more reliable than any individual reading. Three flat weeks followed by three rising weeks tells you something that a single point cannot.
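The flat-then-rising pattern is easy to detect mechanically once the weekly ratings are in a list. A minimal sketch — the 0.5-point threshold and the example rating runs are illustrative assumptions, not published cut-offs:

```python
from statistics import mean

def trend(ratings, threshold=0.5):
    """Classify a run of weekly 1-10 self-ratings by comparing the mean
    of the second half against the mean of the first half. The 0.5-point
    threshold is an arbitrary illustrative choice, not a standard."""
    half = len(ratings) // 2
    shift = mean(ratings[half:]) - mean(ratings[:half])
    if shift > threshold:
        return "rising"
    if shift < -threshold:
        return "falling"
    return "flat"

print(trend([4, 4, 5, 6, 7, 7]))  # three flat-ish weeks, then a climb → "rising"
print(trend([6, 6, 6, 6, 6, 6]))  # no movement → "flat"
```

The point is not the code but the unit of analysis: the classification runs over the whole run of weeks, never over a single reading.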

For more rigour, the Warwick-Edinburgh Mental Wellbeing Scale publishes a free seven-item short form (SWEMWBS) used widely in NHS-commissioned wellbeing programmes. Asking it at intake and again at the close gives a defensible, comparable score. Resist the temptation to ask it weekly; the instrument was not designed for that frequency and the noise overwhelms the signal.

One opinion we hold strongly: do not invent a custom psychometric instrument. Coaches sometimes write their own ten-question wellbeing scale and feel it is more bespoke. It is also unvalidated, unreliable, and not comparable across cohorts. Use a published instrument or use a single-item self-rating; do not invent a third option.

What does this look like in a real cohort?

Marianne Whitcombe is a confidence coach in Cardiff who runs 6-week cohorts for women in the first 12 months of self-employment. Her cohorts are 9 members, drawn from networking events across South Wales. Each costs £240 per place; she runs five cohorts a year.

Her tracking setup takes about 15 minutes per member at intake and roughly 20 minutes of her time each week thereafter.

At intake, each member writes a one-sentence individual goal and rates themselves 1–10 on confidence as it relates to that goal. The group has a shared one-sentence outcome — "by the end of six weeks, every member will have run one paid client conversation she would previously have avoided." Both go into Bitir as goal cards, the individual goal private to the member, the group goal pinned for everyone.

Each Monday she sends a five-question check-in poll: did you do this week's focus action, confidence rating 1–10, one win, one drag, one ask. It takes members about 90 seconds. Response rate across five cohorts has averaged 86% — well above the 80% threshold that predicts strong completion.

At week three (the midpoint of a 6-week cohort), every member rereads her own goal in a 45-minute group call, restates it, restates her current 1–10 self-rating, and names one change for the back half. Marianne logs each member's midpoint number. By the end she has three readings — week one, week three, week six — for every member on every cohort she has run since 2023.

Her completion rate across the last three cohorts is 89%. Her per-cohort confidence rating moves on average from a week-one mean of 4.6 to a week-six mean of 7.8. Neither of those numbers is a vanity metric; both come straight from the same poll members fill in for themselves.

Questions about progress tracking in coaching groups

How do coaches track client progress in a group programme?

By measuring three distinct signals on different cadences. Behaviour (did the member do the thing) is tracked weekly. Individual goal progress is reviewed at intake, midpoint, and close — each member rereads their original written goal and reports against it. Group outcome — the one-sentence reason the cohort exists — is assessed only at the end. Conflating the three is why coaches feel they don't know what's working.

What is the single most useful metric in a coaching group?

Weekly check-in response rate. It is a leading indicator of every other metric in the programme. Cohorts holding above 80% response rate by week three almost always finish above 75% completion; cohorts that drop below 50% response rate by week three almost always finish below 45%.

Should you tell members where they rank compared to the group?

Almost never. Public ranking turns a coaching group into a leaderboard, and the members who would benefit most from the programme drop out fastest under social comparison. Show each member their own progress against their own goal, in private; celebrate completed milestones publicly without numerical ranking.

How do you measure something soft like confidence or wellbeing in a coaching group?

Use a simple 1–10 self-rating asked at the same time each week — the trend over six weeks is more useful than any single reading. For more rigour, use a published validated instrument such as the Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS) at intake and end.

When is it too late to start tracking progress in a cohort?

After week one. Without a week-one baseline you can show only a current state, not change. Coaches who try to retrofit tracking from week three onwards almost always abandon it because they have nothing to compare against.

Track progress your members can actually see

Bitir gives every member a private goal card, a weekly check-in poll, and a celebrations feed. The data lives where the work happens.

Start Your Group