Scrum Master Interview Questions
25 questions — 5 easy · 11 medium · 9 hard
Agile Framework
| Aspect | Scrum | Kanban |
|---|---|---|
| Cadence | Fixed sprints (1-4 weeks) | Continuous flow |
| Roles | Scrum Master, Product Owner, Developers | No prescribed roles |
| Work limits | Sprint backlog (committed per sprint) | WIP limits per column |
| Changes | Discouraged during sprint | Allowed anytime |
| Ceremonies | Sprint Planning, Daily Scrum, Review, Retro | No required ceremonies |
| Board | Reset each sprint | Persistent, continuous |
| Metrics | Velocity, sprint burndown | Lead time, cycle time, throughput |
| Planning | Sprint-based estimation | Just-in-time, pull-based |
When to use Scrum: Teams that benefit from structure, predictable delivery cadence, and regular planning cycles.
When to use Kanban: Support/ops teams, teams with unpredictable work, or when continuous delivery is more important than sprint commitments.
Scrumban combines both: sprint cadence with WIP limits, or Kanban flow with regular retrospectives. Works well for teams transitioning between methodologies.
Follow-up: Have you ever combined elements of both? How did that work?
Key things to listen for:
- Team ownership — the team creates and maintains the DoD, not just the SM
- Living document — DoD evolves as the team matures
- Consistent enforcement — DoD is applied to every story, not selectively
What a good DoD includes:
- Code written and peer-reviewed
- Unit tests written and passing
- Integration tests passing
- No known critical bugs
- Documentation updated (if applicable)
- Deployed to staging environment
- Acceptance criteria met and verified
- Product Owner has reviewed and accepted
Creating the DoD:
- Discuss as a team what "done" means for your context
- Start with a minimal DoD and expand as the team matures
- Write it down and make it visible (posted on the board or wiki)
- Review and update it during retrospectives
Maintaining it:
- Reference the DoD during sprint planning ("Can we realistically meet DoD for all these items?")
- Use it during sprint review as the acceptance checklist
- If items frequently don't meet DoD, discuss why in retro
- Never compromise DoD under pressure — incomplete work creates hidden debt
Handling disagreements: Refer back to the written DoD. If the DoD is unclear, improve the wording in the next retro. The PO has final say on acceptance criteria; the team has final say on technical quality standards.
Follow-up: How do you handle disagreements about whether something meets the Definition of Done?
Key things to listen for:
- Practical experience — has dealt with real scaling challenges, not just theoretical knowledge
- Communication focus — understands that scaling is primarily a coordination problem
- Framework awareness — knows frameworks exist but doesn't blindly follow them
Common challenges:
- Cross-team dependencies — teams waiting on each other, integration conflicts
- Consistent practices — different teams using different standards or definitions of done
- Shared codebase — merge conflicts, architectural decisions affecting multiple teams
- Communication overhead — communication channels grow quadratically with headcount (n(n-1)/2 for n people), so more teams means disproportionately more coordination
- Prioritization — competing backlogs and conflicting priorities
- Alignment — keeping all teams moving toward the same product vision
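The communication-overhead point follows directly from the pairwise-channel formula; a minimal sketch (the six-person team size is an illustrative assumption):

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n people: n choose 2."""
    return n * (n - 1) // 2

# Illustrative: assume six-person teams and watch channels grow
for teams in (1, 2, 4, 8):
    people = teams * 6
    print(f"{teams} team(s), {people} people: {channels(people)} channels")
# channels(6) = 15, channels(48) = 1128 — quadratic, not linear
```

This is why scaling frameworks lean on representatives (Scrum of Scrums) rather than everyone-to-everyone communication.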
Scaling frameworks:
- SAFe — most structured, heavy process, works for large enterprises
- LeSS — minimal framework, extends Scrum to multiple teams
- Nexus — Scrum.org's scaling framework, focuses on integration
- Spotify Model — squads, tribes, chapters, guilds (not a framework, an organizational model)
Practical solutions:
- Scrum of Scrums — representatives from each team sync regularly
- Shared backlog refinement for cross-team items
- Common Definition of Done across teams
- Architectural guilds for cross-cutting technical decisions
- Joint sprint reviews to see the whole product increment
Follow-up: Have you worked with any scaling frameworks? What was your experience?
Team Facilitation
Key things to listen for:
- Preparation — works with PO before planning to ensure backlog is refined
- Team involvement — the team decides what they can commit to, not the SM or PO
- Clear sprint goal — focuses on outcomes, not just a list of stories
Good approach:
Before planning:
- Ensure backlog items are refined (clear acceptance criteria, estimated)
- PO prioritizes the backlog based on business value
- Review team capacity (vacations, meetings, on-call)
During planning:
- PO presents the sprint goal and top priority items
- Team discusses each item, asks questions, clarifies scope
- Team selects items they can commit to based on capacity and velocity
- Break items into tasks if needed
- Agree on a clear sprint goal
Time-box: 2 hours per sprint week (e.g., 4 hours for a 2-week sprint)
Over-committing: Review historical velocity, encourage the team to leave buffer, discuss why it keeps happening (optimism bias, unclear stories, unplanned work).
Under-committing: Stretch goals, review if stories are too large, check if the team is sandbagging or if there are hidden impediments.
Follow-up: What do you do when the team consistently over-commits or under-commits?
Key things to listen for:
- Proactive detection — doesn't wait for problems to be reported
- Servant leadership — removes blockers so the team can focus on work
- Escalation awareness — knows when to escalate vs. resolve directly
- Systemic thinking — addresses root causes, not just symptoms
Detection methods:
- Daily standup ("What's blocking you?")
- One-on-ones with team members
- Observing team dynamics and energy levels
- Monitoring board for stuck items (long cycle time)
- Retrospective feedback
Resolution approach:
- Log the impediment — make it visible (impediment board or backlog)
- Assess impact — how many people are affected? Is it blocking the sprint goal?
- Classify — can the team resolve it, or does it need external help?
- Act immediately — for high-impact blockers, drop other work
- Escalate when needed — involve management for organizational impediments
- Follow up — verify the impediment is truly resolved
Common impediment types:
- External dependencies (waiting on another team)
- Unclear requirements (needs PO clarification)
- Technical blockers (environment issues, access permissions)
- Organizational (slow approval processes, meeting overload)
- Team dynamics (conflicts, communication breakdowns)
Follow-up: What was the most difficult impediment you had to resolve?
Key things to listen for:
- Empathy — understands why the PO changes priorities (market pressure, stakeholder demands)
- Structured pushback — doesn't just say yes or no, but facilitates a conversation
- Process as a shield — uses Scrum framework to protect the team while staying flexible
- Data-driven — shows the impact of frequent changes with metrics
Good approach:
- Understand the root cause — why are priorities changing? Is the market volatile, or is there a planning problem?
- Make the cost visible — "Changing this mid-sprint means X won't get done. Is that acceptable?"
- Shorter sprints — if change is constant, reduce sprint length to 1 week for faster feedback
- Buffer capacity — reserve 10-20% of sprint capacity for unplanned work
- Backlog refinement — better refinement reduces surprises
- Sprint goal focus — "Does this change align with our sprint goal? If not, it waits."
Mid-sprint scope change:
- The Scrum Guide says no changes should be made that endanger the Sprint Goal; scope may be clarified and renegotiated with the PO as more is learned
- Facilitate a conversation: "We can add this, but what do we remove?"
- If the change invalidates the sprint goal, consider canceling the sprint (rare but valid)
- Track scope changes to show the pattern in retrospectives
- Work with the PO to improve upstream planning
Follow-up: What if the PO insists on changing the sprint scope mid-sprint?
Key things to listen for:
- Empathy first — understands and validates the resistance before trying to change it
- Lead by example — shows value through results, not by forcing compliance
- Patience — recognizes that change takes time
- Adaptability — willing to adjust the approach, not dogmatic about specific practices
Understanding resistance sources:
- Bad past experience with agile ("We tried it, it didn't work")
- Fear of losing autonomy or control
- Doesn't see the value ("This is just more meetings")
- Comfort with current way of working
- Imposed without team input ("Management decided we're doing Scrum")
Good approach:
- Listen first — understand why they're resistant (1-on-1 conversations)
- Start small — introduce one practice at a time, not everything at once
- Show value quickly — pick a practice with immediate visible benefit (e.g., daily standup to improve communication)
- Involve the team — let them shape how practices are implemented
- Focus on problems, not practices — "We have a communication gap" not "We need daily standups"
- Celebrate wins — highlight when agile practices produce positive results
- Be flexible — adapt ceremonies to fit the team's culture (e.g., async standups for distributed teams)
Handling open hostility:
- Private conversation — understand their specific concerns
- Acknowledge valid points — maybe some ceremonies do need adjustment
- Agree on a trial period — "Let's try it for 2 sprints and evaluate"
- If behavior is disruptive, address it as a team dynamic issue, not an agile issue
Follow-up: How do you handle a team member who is openly hostile toward Scrum ceremonies?
Key things to listen for:
- Realism — acknowledges the onboarding tax and plans for it honestly
- Team involvement — uses onboarding as a team activity, not just an SM task
- Stakeholder transparency — communicates impact proactively
Onboarding approach:
Before they join:
- Prepare a "day one" guide (repo access, dev environment setup, key contacts, team norms)
- Assign an onboarding buddy from the team
- Reserve capacity in the sprint plan — don't plan at full velocity
First sprint:
- Start with a well-understood, scoped story (low risk, good for learning the codebase)
- Pair programming with an experienced team member
- Walk through the full workflow: picking up a story, branching, PR process, DoD
- Include them in all ceremonies immediately — observation before participation is okay
Team ceremonies:
- Facilitate introductions in the team retrospective — let the new person share first impressions
- Include onboarding experience in retrospective: "What would have helped you get started faster?"
Handling reduced velocity:
- Adjust capacity in sprint planning: "We have a new team member starting, so we're planning at 80% capacity"
- Communicate proactively to stakeholders before the sprint: "Expect lower output while we onboard"
- Reframe: "We're investing in capacity — velocity will recover and grow"
- Avoid hiding it — a sudden velocity dip mid-sprint is more alarming than a communicated one
Follow-up: How do you handle the reduced team velocity during onboarding without alarming stakeholders?
Continuous Improvement
Key things to listen for:
- Variety — uses different formats to keep retros fresh
- Psychological safety — creates an environment where honest feedback is safe
- Actionable outcomes — retros produce concrete improvements, not just complaints
- Follow-through — tracks action items and reviews progress
Popular formats:
- Start/Stop/Continue — simple and effective for any team
- Mad/Sad/Glad — emotion-based, good for surfacing team morale
- Sailboat — wind (what helps), anchors (what holds back), rocks (risks), island (goal)
- 4Ls — Liked, Learned, Lacked, Longed for
- Timeline — plot events on a timeline and discuss highs/lows
- Dot voting — identify top issues democratically
Keeping retros engaging:
- Rotate formats every 2-3 sprints
- Use online tools (Miro, FunRetro) for remote teams
- Start with an icebreaker or energizer
- Timebox discussions to keep energy up
- Celebrate wins, not just problems
- Occasionally let a team member facilitate
Ensuring follow-through:
- Limit action items to 2-3 per retro (focus over quantity)
- Assign an owner and deadline to each action item
- Review previous action items at the start of each retro
- Add action items to the sprint backlog so they get tracked
Follow-up: How do you ensure action items from retros are actually implemented?
Key things to listen for:
- Metrics as tools, not targets — uses metrics for insight, not performance evaluation
- Team-level metrics — never measures individual developer velocity
- Trend over absolute — focuses on trends over time, not sprint-to-sprint numbers
- Transparency — shares metrics openly and explains their purpose
Good approach:
- Educate stakeholders — velocity is a planning tool, not a productivity measure
- Use ranges — "Our velocity is typically 30-40 points" rather than "We do 35 points"
- Focus on flow metrics:
- Cycle time (how long items take from start to done)
- Throughput (items completed per sprint)
- WIP (work in progress at any time)
- Never compare teams — different teams estimate differently
- Celebrate improvement — highlight reduced cycle time or fewer bugs, not higher velocity
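The "use ranges" advice can be generated straight from sprint history; a minimal sketch (the window size and sample velocities are illustrative assumptions):

```python
def velocity_range(velocities, window=6):
    """Planning range from the last few sprints instead of a single number."""
    recent = velocities[-window:]
    return min(recent), max(recent)

# Illustrative history of completed points per sprint
lo, hi = velocity_range([28, 35, 31, 40, 33, 37, 30])
print(f"Our velocity is typically {lo}-{hi} points")  # → typically 30-40 points
```

Quoting a range rather than an average also makes it harder for stakeholders to treat a single sprint's number as a target.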
When management asks to increase velocity:
- Explain that velocity is a team-specific planning number, not a productivity measure
- Inflating points doesn't deliver more value
- Focus conversation on: What outcomes do you want? More features? Faster delivery? Better quality?
- Suggest: "Instead of increasing velocity, let's reduce cycle time and remove impediments"
Warning signs of metric abuse: Teams inflating story points, avoiding complex work, rushing quality, or feeling anxious about sprint commitments.
Follow-up: How do you respond when management asks you to increase the team's velocity?
Key things to listen for:
- Pattern recognition — can identify common dysfunctions
- Root cause thinking — looks beyond symptoms to underlying issues
- Constructive approach — addresses problems without blame
Common anti-patterns:
Ceremony anti-patterns:
- Daily standup becomes a status report to the manager
- Retros produce no action items or same issues repeat
- Sprint review is a demo, not a feedback session
- Planning is dictated by PO/manager, not collaborative
Team anti-patterns:
- "Mini-waterfalls" within sprints (all dev, then all testing)
- Hero culture (one person does all the hard work)
- No real collaboration (everyone works in silos)
- Fear of raising issues or disagreeing
Process anti-patterns:
- Velocity used as a performance metric
- Stories carry over every sprint
- No working software at end of sprint
- Scrum Master assigns tasks (should be self-organizing)
- "Agile in name only" — same waterfall process with new terminology
Addressing anti-patterns:
- Observe and gather data (don't act on assumptions)
- Raise it in a retrospective as a discussion topic, not an accusation
- Use questions: "I've noticed X happening — what do you think is causing it?"
- Let the team own the solution
- Experiment and iterate — try a change for one sprint and evaluate
Follow-up: How do you address an anti-pattern without making the team feel criticized?
Scaling
Key things to listen for:
- Framework trade-offs — understands that more structure comes with more overhead
- Context-driven selection — matches the framework to the organization, not the other way around
- Experience — ideally has used at least one in practice
SAFe (Scaled Agile Framework):
- Most prescriptive and comprehensive
- Introduces Agile Release Trains (ARTs), Program Increments (PIs), and multiple roles (RTE, Product Manager)
- Strong portfolio-level governance, budgeting, and roadmaps
- Best for: Large enterprises with compliance needs, waterfall roots, or many inter-dependent teams (50+ people)
- Drawback: Heavy ceremony load, can feel bureaucratic, risk of becoming "SAFe washing"
LeSS (Large-Scale Scrum):
- Minimalist — extends standard Scrum with as little additional process as possible
- One Product Owner, one Product Backlog, multiple teams sharing the same sprint cadence
- Strong on organizational design; encourages removing specialist silos
- Best for: Organizations willing to restructure around feature teams (5-8 teams)
- Drawback: Requires significant organizational change and trust to work
Nexus:
- Scrum.org's scaling guide, sits between LeSS and SAFe in complexity
- Adds a Nexus Integration Team responsible for cross-team coordination and integration
- Keeps most of standard Scrum intact
- Best for: 3-9 teams working on a single product; pragmatic extension of Scrum
- Drawback: Less guidance at portfolio level compared to SAFe
Selection criteria:
- Organization size and number of teams
- Willingness to restructure (LeSS requires it, SAFe doesn't)
- Regulatory/compliance needs (SAFe offers the most explicit guidance for regulated environments)
- Existing Scrum maturity
- Management appetite for change
Follow-up: What organizational conditions would make you choose one over the other?
Key things to listen for:
- Proactive visibility — makes dependencies visible before they become blockers
- Cross-team relationships — builds trust between teams, not just process
- Escalation path — knows when to involve leadership or a Release Train Engineer
Dependency management approaches:
Make dependencies visible early:
- Use a dependency board or risk board visible to all teams
- Identify dependencies during backlog refinement and sprint planning
- Mark dependent stories with explicit links and expected delivery dates
Cross-team ceremonies:
- Scrum of Scrums — representative from each team meets 2-3x per week
- PO Sync — Product Owners align on shared priorities and trade-offs
- Joint sprint planning — teams planning together when major cross-team work is involved
- System demo — integrated demo at the end of each PI (in SAFe context)
API-first and interface contracts — agree on interfaces early so teams can work in parallel
"Team API" concept — each team publishes what they can deliver and when, others plan around it
When a dependency blocks a sprint goal:
- Raise it immediately in Scrum of Scrums
- Negotiate urgency with the providing team's SM and PO
- Escalate to leadership if it's a business priority conflict
- Adjust the sprint goal rather than silently miss it
- Retrospect on why the dependency wasn't caught earlier
Follow-up: What happens when a dependency blocks the sprint goal of another team?
Key things to listen for:
- PI Planning understanding — knows it's a big-room planning event, not just a longer sprint planning
- Preparation discipline — success at PI Planning starts weeks before the event
- Team-level vs. train-level thinking — the team must think beyond its own sprint into cross-team commitments
What a Program Increment (PI) is:
- A fixed timebox of 8-12 weeks consisting of 4-5 development sprints + 1 Innovation & Planning (IP) sprint
- The entire Agile Release Train (ART) aligns on the same PI objectives
- At the end of each PI, the ART delivers a working, tested system increment
PI Planning (the event):
- 2-day event (can be remote or in-person) where all ART teams plan the upcoming PI together
- Output: team PI objectives, program board with dependencies and risks, draft sprint plans for all iterations
Preparing the team:
- Feature readiness — ensure Business Owners and Product Management have prioritized features 2 weeks before
- Capacity planning — identify team availability (vacations, hiring plans) for the full PI
- Architecture briefing — system architect presents enablers and architectural runway
- Team backlog — PO refines stories for at least Sprint 1-2 before the event
- Train the team — if it's the first PI, run a workshop explaining the format, outputs expected, and how dependencies are tracked
Common first-time pitfalls:
- Over-committing in the excitement of big-room energy
- Not identifying cross-team dependencies until mid-PI
- Treating PI objectives as a waterfall contract rather than a commitment with uncertainty
- Teams not coordinating with the System Team (CI/CD pipeline, environments)
- Ignoring the IP sprint (treating it as extra development time)
Follow-up: What are the biggest pitfalls teams fall into during PI Planning for the first time?
Metrics
Key things to listen for:
- Precise definitions — doesn't conflate the two
- Practical application — uses them to diagnose bottlenecks, not just report numbers
- Stakeholder communication — can translate flow metrics into business language
Definitions:
- Lead time — total time from when a request is created (added to the backlog) to when it is delivered. Includes wait time in the backlog.
- Cycle time — time from when work actively starts (moved to In Progress) to when it is done. Excludes queue time.
Example:
- A feature is added to the backlog on Monday
- Development starts the following Monday (7 days of waiting)
- Development and review take 3 days
- Lead time = 10 days, Cycle time = 3 days
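The example above reduces to two date subtractions; a minimal sketch (the concrete 2024 dates are assumed for illustration):

```python
from datetime import date

def lead_and_cycle_time(created, started, done):
    """Lead time counts from backlog entry; cycle time from active start."""
    return (done - created).days, (done - started).days

# The feature above: created Mon Jun 3, started Mon Jun 10, done Thu Jun 13
lead, cycle = lead_and_cycle_time(date(2024, 6, 3),
                                  date(2024, 6, 10),
                                  date(2024, 6, 13))
print(f"lead={lead} days, cycle={cycle} days")  # → lead=10 days, cycle=3 days
```

The gap between the two numbers (7 days here) is pure queue time, which is exactly what the "Lead > Cycle" row in the table below diagnoses.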
How to use them:
| Metric | What it reveals | What to do |
|---|---|---|
| Long lead time | Backlog is too large, prioritization issues | Backlog grooming, WIP limits |
| Long cycle time | Stories are too large, blockers, lack of focus | Split stories, remove impediments |
| Lead > Cycle (large gap) | Work sits in queue too long | Reduce batch size, pull sooner |
For stakeholders: "Lead time tells us how long customers wait for something they've asked for. Cycle time tells us how efficiently the team works once they pick something up. We want both to shrink — but for different reasons."
Practical tools: Jira's Control Chart and Cumulative Flow Diagram (CFD) visualize both metrics over time.
Follow-up: How do you explain these metrics to non-technical stakeholders?
Key things to listen for:
- Metrics skepticism — understands that charts can be gamed or misleading
- Qualitative + quantitative — combines data with team observations
- Early warning instinct — proactively checks in rather than waiting for the chart to show problems
Why a burndown can look good but mask problems:
- Story point inflation — team estimates higher to make the chart look comfortable
- Closing stories prematurely — marking items done before they meet DoD
- Hidden technical debt — features ship but are fragile, slow, or hard to maintain
- Unplanned work absorbed quietly — team works weekends to compensate, burndown looks smooth
- Single contributor — one person carries the sprint, others are blocked but stories still close
- Late-breaking scope creep — PO quietly adds small items that inflate the total and don't show in burndown
Signs to look for beyond the chart:
- Daily standup energy and body language
- Number of items in "In Progress" simultaneously (high WIP)
- Bug rate during the sprint
- Team members skipping retro action items
- Team saying "we'll clean it up next sprint" frequently
Combining data sources:
- Burndown + cycle time per story (are stories cycling fast or sitting?)
- Burndown + daily standup notes (qualitative mood tracker)
- Burndown + DoD compliance rate
- Burndown + team satisfaction pulse survey (e.g., NPS or weekly 1-5 rating)
Follow-up: What other data sources do you combine with the burndown to get the full picture?
Key things to listen for:
- Holistic view — understands that a high-velocity team can still be burned out or unhappy
- Qualitative + quantitative — combines hard metrics with soft signals
- Trust preservation — measures without creating a surveillance culture
Team health indicators beyond velocity:
Flow metrics:
- Cycle time stability (consistent is healthy; wild variance = chaos)
- WIP level (high WIP = overloaded team)
- Escaped defects (bugs found by users, not the team)
- Technical debt ratio (story points for new features vs. debt repayment)
Team dynamics indicators:
- Retrospective participation rate and quality of conversation
- Daily standup energy (are people present and engaged, or going through the motions?)
- Sick day frequency (burnout signal)
- Turnover and tenure (high turnover = team instability)
- Pair programming or collaboration frequency
Formal tools:
- Team Health Checks (Spotify model) — quick visual vote on dimensions like "Pawns or Players", "Speed", "Fun"
- Weekly mood survey — anonymous 1-5 rating on: clarity, collaboration, confidence, energy
- NPS-style team survey — "Would you recommend this team to a colleague?"
Acting without surveillance:
- Share aggregate results with the full team, not management
- Use results to start conversations, not make decisions
- Let the team decide what to do with the data
- Never tie health data to individual performance reviews
Follow-up: How do you act on the results without making people feel surveilled?
Key things to listen for:
- Visual fluency — can read and explain a CFD without notes
- Actionable interpretation — connects what they see to concrete process changes
- Leading indicator mindset — uses CFD proactively, not just post-mortem
What a Cumulative Flow Diagram (CFD) shows:
- A stacked area chart where each band represents a workflow stage (Backlog, In Progress, Review, Done)
- The vertical distance between bands at any point in time shows the amount of work in that stage
- The horizontal distance shows the approximate cycle time for items
Healthy CFD:
- Bands are roughly parallel and grow smoothly upward
- The "Done" band rises steadily — work is completing consistently
- Bands are thin relative to total flow — no stage accumulates work
Bottleneck signals:
- Widening band — a stage is accumulating more work than it's releasing (e.g., "In Review" grows wide)
- Flat "Done" band — work is not completing; the team may be working but not finishing
- Steep step-changes — batch processing rather than continuous flow
- Narrowing upstream bands — work is entering a stage slower than it leaves, so downstream stages will soon be starved of work to pull
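The "widening band" signal can be checked numerically from daily board snapshots; a minimal sketch (the stage names, growth threshold, and three-day sample are illustrative assumptions):

```python
from collections import Counter

STAGES = ["Backlog", "In Progress", "Review", "Done"]

def band_widths(snapshots):
    """snapshots: one list per day of each item's current stage.
    Returns per-day counts per stage — the vertical band widths of a CFD."""
    return [Counter(day) for day in snapshots]

def widening_band(widths, stage, min_growth=2):
    """Flag a stage whose band grew by at least min_growth items."""
    return widths[-1][stage] - widths[0][stage] >= min_growth

# Illustrative three-day history: 'Review' accumulates work
days = [
    ["Backlog"] * 5 + ["In Progress"] * 2,
    ["Backlog"] * 4 + ["In Progress"] * 2 + ["Review"],
    ["Backlog"] * 3 + ["In Progress"] + ["Review"] * 3,
]
widths = band_widths(days)
print(widening_band(widths, "Review"))       # → True (0 → 3 items)
print(widening_band(widths, "In Progress"))  # → False
```

In practice a tool like Jira draws this for you, but the same counts underlie the chart: each band's width on a given day is just "how many items sit in that stage".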
High WIP on the CFD:
- The "In Progress" band balloons vertically
- Items take longer to complete (horizontal distance increases)
- Team members are context-switching between too many parallel tasks
Response actions:
- Introduce or tighten WIP limits for the bottlenecked stage
- Swarm on blocked items — pull team members from upstream stages to clear the bottleneck
- Look for root cause: unclear acceptance criteria, missing skills, external dependencies
Follow-up: What does it look like when WIP is too high on the CFD?
Conflict Resolution
Key things to listen for:
- Neutrality — doesn't take sides; acts as a facilitator, not a judge
- Safety preservation — keeps the environment psychologically safe for both parties
- Resolution focus — moves toward a decision without suppressing valid debate
In the moment:
- Acknowledge both perspectives — "I hear two valid points of view here. Let's make sure both are understood."
- Timebox the debate — "We have 5 minutes to explore this before we need to move on."
- Separate the people from the problem — redirect from "I think you're wrong" to "What outcome are we trying to achieve?"
- Ask clarifying questions — help both sides articulate their concerns precisely
- Look for common ground — often disagreements share the same goal but different risk tolerances
When you're not technical enough to judge:
- Don't fake expertise or pick a side
- "I'm not the right person to evaluate the technical trade-offs here — you two are. What process can we use to decide?"
- Suggest a spike (time-boxed investigation) to gather evidence before committing
- Propose majority vote or coin flip if both options are viable and the cost of delay exceeds the cost of the wrong choice
- Timebox the decision: "Let's decide in 10 minutes — what information do we need?"
After the meeting:
- Check in privately with both individuals
- Discuss at the next retro if it's a recurring pattern
- Build a team agreement on how technical decisions are made
Follow-up: What if the disagreement is about technical approach and you are not technical enough to judge?
Key things to listen for:
- Boundary clarity — knows the SM role does not include HR or formal performance management
- Coaching first — tries to understand and support before escalating
- Team protection — balances empathy for the individual with responsibility to the team
Initial approach (SM's domain):
- Private conversation first — "I've noticed you seem less engaged lately. Is everything okay?"
- Assume positive intent — there may be personal issues, unclear expectations, or skill gaps
- Listen actively — is the underperformance due to confusion, burnout, disengagement, or external factors?
- Clarify expectations — ensure the person knows what "good" looks like
- Offer support — pairing, training, reduced load temporarily, mentoring
- Set a check-in cadence — follow up weekly to see if things improve
When it becomes a management issue:
- Behavior persists after coaching conversations
- There are formal deliverable failures (repeated missed DoD, sprint commitments)
- The team formally raises it as a blocker
- The issue involves conduct, not just performance
- Legal, HR, or contractual elements appear
Transition to management:
- Loop in the line manager with context and data: "I've had three coaching conversations with X over 6 weeks. Here's what we tried and what hasn't changed."
- Do not handle it unilaterally or gossip to other team members
- Protect the team by ensuring the situation is being addressed, even if you step back
In retrospectives: Facilitate team norms conversations so expectations are co-created, reducing future conflicts.
Follow-up: At what point does this become a management issue rather than a Scrum Master issue?
Key things to listen for:
- Role clarity — can articulate where SM authority ends and PO authority begins
- Systemic approach — addresses the relationship, not just the incident
- Team protection — prioritizes the team's self-organization above the ego conflict
Authority split in Scrum:
- Product Owner: what gets built (backlog priority, product vision, acceptance criteria)
- Scrum Master: how the team works (process, ceremonies, impediment removal, coaching)
- Developers: how to build it (technical decisions, task breakdown, daily work organization)
When conflict emerges:
- Private conversation — discuss the tension one-on-one with the PO before it becomes public
- Return to first principles — open the Scrum Guide together; roles are not negotiable
- Separate the problem — "The goal we both share is successful delivery. Let's figure out where the confusion is coming from."
- Involve a coach or management — if the relationship is broken, a neutral party is needed
When the PO assigns tasks directly to developers:
- Address it immediately and privately: "When you assign tasks directly, it undermines the team's ability to self-organize. The team needs to pull work based on the sprint goal."
- In the next retrospective, facilitate a discussion about team working agreements
- Create a written team agreement: "All work enters the board through the backlog. The PO adds and prioritizes; the team pulls."
- If it continues, escalate to a shared manager — this is a governance issue, not just a process issue
Follow-up: What if the PO starts assigning tasks directly to developers, bypassing the team's self-organization?
Facilitation
Key things to listen for:
- Purpose clarity — refinement is about readiness, not just estimation
- Shared understanding — the goal is that all team members understand what needs to be built
- Right size and cadence — refinement is continuous, not a single big meeting
What makes refinement effective:
- Regular cadence — typically mid-sprint, so there's always a ready backlog for the next sprint
- Right attendees — PO, developers, SM; invite subject matter experts only when needed
- Definition of ready — each item should have: a clear user story, acceptance criteria, no open external dependencies, and a rough size estimate
- Timeboxed — keep refinement to at most 10% of sprint capacity (in a 2-week sprint that caps out around 8 hours per person; many teams need only 2-4 hours of shared sessions)
- Three horizons — refine items for next sprint in detail, look 1-2 sprints ahead at medium level, and flag anything further out
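The 10% timebox guideline above is simple arithmetic. A minimal sketch, assuming an 8-hour working day (the function name and parameters are illustrative, not a standard API):

```python
# Illustrative sketch: compute a refinement budget as a share of sprint capacity.
# The 10% ceiling follows the guideline above; names here are hypothetical.

def refinement_budget_hours(sprint_days: int, team_size: int,
                            hours_per_day: float = 8.0,
                            refinement_share: float = 0.10) -> float:
    """Upper bound on total team-hours to spend on backlog refinement."""
    capacity = sprint_days * hours_per_day * team_size
    return capacity * refinement_share

# A 2-week sprint (10 working days) with 5 developers:
# capacity = 10 * 8 * 5 = 400 team-hours; the 10% ceiling is 40 team-hours,
# i.e. up to 8 hours per person across the sprint.
print(refinement_budget_hours(10, 5))  # 40.0
```

In practice most teams stay well under this ceiling; the point is to treat the number as a cap, not a target.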
Techniques used during refinement:
- Story splitting (INVEST criteria)
- Three Amigos (developer, tester, PO discuss each story)
- Acceptance criteria workshops
- Planning poker or T-shirt sizing for estimates
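A facilitator running planning poker mostly needs to know when a round has converged enough to move on. A sketch of one common convention — votes within adjacent scale values count as converged — which is a team agreement, not an official Scrum rule:

```python
# Hypothetical helper: flag planning-poker rounds where estimates diverge.
# "Converged" here means all votes fall on at most two adjacent scale values,
# a common team convention rather than anything prescribed by Scrum.

FIB_SCALE = [1, 2, 3, 5, 8, 13, 21]

def needs_discussion(votes: list[int]) -> bool:
    """True when the spread of votes spans more than one scale step."""
    indices = [FIB_SCALE.index(v) for v in votes]
    return max(indices) - min(indices) > 1

print(needs_discussion([3, 5, 5]))   # False: adjacent values, converged
print(needs_discussion([2, 5, 13]))  # True: wide spread, talk it through
```

When a round needs discussion, the usual move is to have the highest and lowest voters explain their reasoning before re-voting.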
When a story is too vague to estimate:
- Don't force an estimate — a guess dressed up as an estimate is worse than no estimate
- Create a spike: "Let's spend 2 hours researching this and come back with a better understanding"
- Ask the PO for more context or to involve the requester
- Return the story to the backlog with a "needs clarification" flag
- This is a healthy gate — teams that estimate vague stories ship vague features
Follow-up: How do you handle a backlog item that the team refuses to estimate because it's too vague?
Key things to listen for:
- Feedback loop focus — understands the review is an inspection-adaptation event, not a sign-off meeting
- Stakeholder engagement skills — actively works to bring the right people and draw out useful input
- Outcome orientation — drives toward actionable feedback, not applause
Structuring an effective sprint review:
- Set the context — briefly recap the sprint goal and what was attempted vs. completed
- Demo working software — show real functionality in a real (staging) environment, not slides
- Focus on outcomes, not output — "This feature reduces checkout abandonment" not just "we built a new button"
- Invite interaction — let stakeholders try the software themselves when possible
- Ask targeted questions — "Does this solve the problem you described last month?" "What would make this more useful?"
- Capture feedback live — write feedback on a shared board, triage with PO after the meeting
- Close with what's next — brief preview of what's planned for next sprint to maintain engagement
When stakeholders don't show up:
- Investigate why — too long, too frequent, not relevant to them, no agenda?
- Shorten the format or make it async (recorded demo with async feedback form)
- Ensure the right stakeholders are invited, not just management
- Work with PO to demonstrate business value more explicitly
- Consider inviting end users rather than internal stakeholders
Follow-up: What do you do when stakeholders don't show up or disengage during the review?
Key things to listen for:
- Root cause focus — goes deeper than surface-level complaints
- Psychological safety — recurring complaints often mean the team doesn't feel heard
- Facilitation creativity — changes the format to break the cycle
Why the cycle happens:
- Action items from previous retros were never implemented
- The underlying problem is organizational (outside the team's control)
- The team doesn't believe change is possible
- The same vocal people dominate and others disengage
Techniques to break the cycle:
- Five Whys — take one chronic complaint and dig into the root cause together
- Fishbone diagram — map causes and effects visually for a recurring issue
- Control vs. Influence vs. Accept — sort complaints into what the team can control, influence, or must accept
- Retrospective on the retrospective — explicitly discuss: "Why do we keep having this conversation?"
- Focus on one thing — ban the "top 10 problems" list; commit to one meaningful change
- Outcome-first framing — "What does success look like in 4 weeks?" then work backward
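The Control vs. Influence vs. Accept sort above is easy to run on a shared board, and just as easy to capture in a retro tool. A minimal sketch (the complaints and bucket labels are hypothetical examples, and the assignments would come from the team's own discussion):

```python
# Illustrative sketch of the Control / Influence / Accept sort described above.
# The bucket each complaint lands in is decided by the team during the retro;
# this just groups the results so the team can pick actions from "control" first.

from collections import defaultdict

def sort_complaints(tagged: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Group complaints by the bucket the team assigned during the retro."""
    buckets: dict[str, list[str]] = defaultdict(list)
    for complaint, bucket in tagged:
        buckets[bucket].append(complaint)
    return dict(buckets)

retro = sort_complaints([
    ("Flaky CI tests slow us down", "control"),
    ("Other team's API changes break us", "influence"),
    ("Company-wide release freeze in December", "accept"),
])
print(retro["control"])  # ['Flaky CI tests slow us down']
```

Action items should come from the "control" bucket first; "influence" items become escalations, and "accept" items are acknowledged and parked so they stop dominating every retro.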
Balancing feelings and solutions:
- Start by fully acknowledging the frustration: "This has been a problem for 4 sprints. That's genuinely frustrating."
- Validate before problem-solving: "Before we discuss solutions, let's make sure everyone feels heard."
- Then shift: "We've diagnosed this well. What's one thing we could try differently this sprint?"
Follow-up: How do you balance respecting the team's feelings with moving toward solutions?
Key things to listen for:
- Gradual empowerment — doesn't flip a switch; progressively transfers decision-making
- Safety creation — team won't self-organize if mistakes are punished
- Clear boundaries — self-organization happens within constraints, not without them
Signs a team is NOT self-organizing:
- SM or PO assigns tasks at the daily standup
- Team waits for direction before starting work
- Only one person makes technical decisions
- Team blames the SM when something goes wrong
- No one speaks up in retrospectives unless asked
Coaching toward self-organization:
- Stop answering, start asking — when someone asks "What should I do?", respond with "What options do you see?"
- Pull back visibly — announce: "I'm going to stop jumping in on X. I want the team to own that."
- Celebrate team decisions — when the team decides something without you, acknowledge it explicitly
- Create safe-to-fail experiments — let the team try approaches that might not work, then reflect together
- Working agreements — facilitate the team creating its own rules and holding itself accountable
- Coach the leader — often one person dominates; coach them to ask for input before sharing their own view
Self-organizing vs. unmanaged:
- Self-organizing teams have clear goals (sprint goal, product vision), shared constraints (DoD, capacity), and mutual accountability
- Unmanaged teams have none of the above — no shared direction, no accountability, no coordination
- The SM's job is to create the conditions for self-organization, not disappear entirely
Follow-up: What's the difference between a self-organizing team and an unmanaged team?