Project Lead Interview Questions
25 questions — 4 easy · 13 medium · 8 hard
Leadership (7)
Key things to listen for:
- Early detection — addresses conflicts before they escalate, doesn't avoid them
- Private conversations first — talks to individuals separately before group discussion
- Active listening — seeks to understand each person's perspective and underlying concerns
- Focus on interests, not positions — looks for the root cause, not just the surface disagreement
- Objective criteria — uses data, team agreements, or project goals to guide resolution
Good approach:
- Acknowledge the conflict exists
- Listen to all sides privately
- Identify the underlying issue (technical disagreement, unclear ownership, personal friction)
- Facilitate a conversation focused on shared goals
- Agree on a resolution and follow up
Red flags: Avoids conflict entirely, takes sides immediately, escalates to management without trying to resolve first, or blames individuals publicly.
Follow-up: Can you describe a specific conflict you resolved and the outcome?
Key things to listen for:
- Audience awareness — adapts communication style to the audience
- Business framing — translates technical concepts into business impact
- Visual communication — uses diagrams, analogies, and examples
- Two-way conversation — invites questions and feedback
Good approach:
- Start with the business problem, not the technical solution
- Explain impact in terms stakeholders care about: time, cost, risk, user experience
- Use analogies and simple language — avoid jargon
- Present options with trade-offs, not just one recommendation
- Use visuals — diagrams, timelines, before/after comparisons
- End with a clear recommendation and next steps
Example framing:
- Instead of: "We need to refactor the authentication microservice"
- Say: "Our login system is slowing down as we grow. Investing 2 weeks now will prevent outages that could affect 10,000 users and cost us $X in lost revenue"
Handling pushback: Listen to concerns, provide data to support the recommendation, offer alternatives with clear trade-offs, and be willing to compromise on timing or scope.
Follow-up: How do you handle pushback when stakeholders disagree with a technical recommendation?
Key things to listen for:
- Structured approach — has a plan, not just ad-hoc help
- Growth mindset — believes in people's ability to improve
- Patience and empathy — remembers their own junior days
- Empowerment — guides rather than gives answers
Good approach:
- Set clear expectations — define what success looks like at their level
- Create a growth plan — identify skills to develop with milestones
- Progressive challenge — assign tasks slightly above their current level
- Regular 1-on-1s — weekly check-ins for feedback and support
- Code review as teaching — explain the why, not just the what
- Pair programming — work together on complex problems
- Safe environment — make it okay to ask questions and make mistakes
Balancing with own workload:
- Block dedicated mentoring time in calendar
- Use async communication (documented reviews, written guides)
- Invest in documentation and onboarding materials that scale
- Delegate appropriate tasks that serve as learning opportunities
- Recognize that mentoring is part of the lead's job, not extra work
Red flags: Micromanages, just tells answers without explaining, doesn't make time, or sets unrealistic expectations.
Follow-up: How do you balance mentoring with your own workload?
Key things to listen for:
- Communication structures — establishes regular sync points
- Shared understanding — ensures all teams understand the big picture
- Dependency management — proactively identifies and manages cross-team dependencies
- Conflict resolution — has strategies for when priorities collide
Good approach:
- Align on goals — ensure all teams understand the shared objective
- Map dependencies — identify where teams depend on each other and plan accordingly
- Regular syncs — cross-team standups or weekly alignment meetings (keep them short)
- Shared roadmap — visible timeline showing all teams' work and milestones
- Clear interfaces — define API contracts or integration points early
- Designated liaisons — one person per team responsible for cross-team communication
Handling conflicting priorities:
- Escalate to a shared stakeholder or product owner for prioritization
- Present the trade-offs clearly: "If Team A does X, Team B is blocked for Y days"
- Look for creative solutions — can scope be reduced? Can work be parallelized differently?
- Document decisions and communicate them to all affected teams
Tools: Shared Jira boards, Confluence pages, Slack channels, cross-team demos.
Follow-up: How do you handle situations where teams have conflicting priorities?
Key things to listen for:
- Action-orientation — retros produce specific actions, not just venting
- Psychological safety — creates an environment where people speak honestly
- Follow-through — tracks action items between retros
- Format variety — adapts the format to keep retros fresh and effective
Effective retrospective structure:
- Set the stage — brief check-in, confirm confidentiality norms
- Gather data — what happened? (facts, events, data from the sprint)
- Generate insights — what went well, what didn't, what confused us?
- Decide on actions — pick 1–3 specific, actionable improvements with an owner and deadline
- Close — appreciate contributions, confirm action items
Formats to rotate: Start/Stop/Continue, 4Ls (Liked, Learned, Lacked, Longed For), Mad/Sad/Glad, timeline retrospective, sailboat
When the same issues recur:
- Acknowledge the pattern explicitly: "This is the third retro where we've raised X"
- Revisit the action items: they were likely too vague or lacked a clear owner
- Go deeper: what's the root cause behind the symptom? (5 Whys)
- Escalate if it requires authority or resources the team doesn't have
- Consider a dedicated working session just for that problem
Red flags: Retros are cancelled when busy, action items are never reviewed, or the same person dominates while others stay silent.
Follow-up: What do you do when the same issues keep coming up in every retro?
Key things to listen for:
- Courage — willing to address difficult behavior even in a high performer
- Fairness — applies the same standards to everyone regardless of output
- Nuance — understands that impact on team morale is also a performance metric
Why this is hard: Organizations often tolerate toxic behavior from high performers because output is visible and measurable, while cultural damage is diffuse and harder to quantify. Long-term, this leads to attrition of your best team members and a degraded environment.
Good approach:
- Name the behavior specifically — "In the last three sprint reviews, you've interrupted colleagues before they finished their point. This is the pattern I want to address"
- Separate behavior from character — criticize actions, not the person
- Explain the business impact — "This is affecting how other team members participate. I'm hearing that people are self-censoring in your presence"
- Set clear expectations — "I need to see X change within the next 4 weeks"
- Follow through — if behavior doesn't improve, escalate to HR or performance management
- Don't make exceptions — if you let it slide, you signal to the whole team that results justify any behavior
Red flags: Avoids the conversation because "they're too valuable", frames cultural damage as "team members being sensitive", or handles it with a group intervention instead of a private conversation.
Key things to listen for:
- Structured plan — onboarding is not an accident, it has a defined process
- Buddy system — new hire has a designated go-to person
- Early wins — first tasks are scoped to build confidence and context
- Empathy — remembers what it's like to be new to a complex codebase
Good onboarding plan:
Week 1 — Orientation:
- Dev environment setup (should be documented and work out of the box)
- Architecture overview — how the system is structured and why
- Walk through a complete user journey in the codebase
- Meet the team and stakeholders
Week 2 — First contribution:
- First task: a small, well-scoped bug fix or documentation improvement
- Shadowing code reviews to absorb team conventions
- Guided pair programming session on a real ticket
Week 3–4 — Independence:
- Progressively more complex tasks
- First solo feature with regular check-ins
- Retrospective: what was missing from onboarding?
If struggling after month one:
- Have an honest 1-on-1 — is the struggle technical, contextual, or personal?
- Provide specific, constructive feedback with examples
- Adjust task complexity and increase pairing
- Set a 30-day improvement plan with clear milestones
- Consider whether the role is the right fit — both for them and the project
Red flags: No documentation, new hire is thrown into a complex feature alone, team has no time to support onboarding.
Follow-up: What do you do when the new developer is struggling after the first month?
Project Management (4)
Key things to listen for:
- Framework-based thinking — uses a structured method, not just gut feeling
- Stakeholder alignment — involves stakeholders in prioritization decisions
- Impact vs effort analysis — considers value delivered relative to cost
Common frameworks:
- Eisenhower Matrix: Urgent+Important (do first), Important+Not Urgent (schedule), Urgent+Not Important (delegate), Neither (drop)
- MoSCoW: Must have, Should have, Could have, Won't have
- RICE: (Reach × Impact × Confidence) / Effort
- Value vs Effort matrix: Quick wins, big bets, fill-ins, money pits
Good approach:
- List all competing priorities
- Clarify actual deadlines and consequences of delay
- Assess business impact of each item
- Communicate trade-offs transparently to stakeholders
- Make a decision, document it, and revisit if context changes
Red flags: Says everything is priority #1, cannot articulate trade-offs, makes decisions without data or stakeholder input.
Follow-up: How do you say no to stakeholders when their request cannot fit into the current sprint?
Key things to listen for:
- Proactive identification — doesn't wait for problems to surface
- Systematic approach — has a repeatable process for risk assessment
- Mitigation over reaction — plans for risks before they become issues
Good approach:
- Identify risks — technical (new technology, integration), people (key person dependency, turnover), scope (unclear requirements, scope creep), external (third-party APIs, vendor reliability)
- Assess — probability (low/medium/high) × impact (low/medium/high)
- Prioritize — focus on high-probability, high-impact risks first
- Mitigate — for each top risk, create a specific plan:
- Avoid (change approach)
- Reduce (add spike/prototype)
- Transfer (insurance, SLA)
- Accept (acknowledge and monitor)
- Monitor — review risks regularly in team meetings
Practical examples:
- New technology risk → time-boxed spike/proof of concept
- Key person dependency → knowledge sharing, documentation, pair programming
- Scope creep → clear definition of done, change request process
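The assess-and-prioritize steps above can be sketched as a simple risk register sort. This is a minimal illustration, not a prescribed tool; the risk names and ratings are hypothetical examples:

```python
# Probability x impact scoring on a 1-3 scale, as in the assessment step above.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(probability: str, impact: str) -> int:
    """Score = probability level x impact level (range 1-9)."""
    return LEVELS[probability] * LEVELS[impact]

# Hypothetical risk register entries (illustrative values only)
risks = [
    {"name": "New technology", "probability": "medium", "impact": "high"},
    {"name": "Key person dependency", "probability": "high", "impact": "high"},
    {"name": "Scope creep", "probability": "high", "impact": "medium"},
    {"name": "Vendor outage", "probability": "low", "impact": "high"},
]

# Prioritize: highest-scoring risks get a mitigation plan first
for r in sorted(risks, key=lambda r: risk_score(r["probability"], r["impact"]), reverse=True):
    print(f'{r["name"]}: {risk_score(r["probability"], r["impact"])}')
```

The point of the exercise is the conversation it forces, not the numbers themselves; the scores simply make disagreements about probability and impact explicit.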
Follow-up: What was the biggest project risk you identified early, and how did you mitigate it?
Key things to listen for:
- Collaborative estimation — involves the team, not just the lead
- Range-based thinking — gives ranges, not single numbers
- Historical data — references past similar work
- Uncertainty acknowledgment — honest about what is unknown
Common techniques:
- Story points — relative sizing (Fibonacci: 1, 2, 3, 5, 8, 13)
- T-shirt sizing — S, M, L, XL for rough categorization
- Three-point estimate — (optimistic + most likely + pessimistic) / 3
- Planning poker — team consensus through independent estimates
- Reference stories — compare to completed stories of known size
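The three-point estimate above is just an average of three scenarios; a one-line sketch with hypothetical numbers:

```python
def three_point_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Simple average of the three scenarios: (O + M + P) / 3."""
    return (optimistic + most_likely + pessimistic) / 3

# Example: a feature estimated at 3 days best case, 5 days likely, 10 days worst case
print(three_point_estimate(3, 5, 10))  # 6.0
```

A PERT-weighted variant, (O + 4M + P) / 6, is also common when the most likely value deserves more weight.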
Good approach:
- Break down the feature into small, well-defined tasks
- Identify unknowns and add a spike task for research
- Estimate as a team (multiple perspectives catch blind spots)
- Add buffer for integration, testing, and code review
- Track actual vs estimated to improve over time
When estimates are wrong:
- Communicate early — as soon as you see a deviation, inform stakeholders
- Analyze why — was the scope unclear, was there an unknown dependency, was the task bigger than expected?
- Adjust the plan — re-prioritize, cut scope, or extend timeline
- Use it as a learning opportunity for future estimates
Follow-up: How do you handle situations where estimates turn out to be significantly wrong?
Key things to listen for:
- Balanced metrics — uses multiple metrics, not just velocity or output
- Outcome over output — measures value delivered, not just stories completed
- Team-level focus — measures team performance, not individual performance
- Uses metrics to improve, not punish
Useful metrics:
Delivery:
- Velocity / throughput (trend over time, not absolute number)
- Cycle time (from start to done)
- Lead time (from request to delivery)
- Deployment frequency
Quality:
- Bug rate (bugs per feature, bugs per release)
- Change failure rate (deployments causing incidents)
- Mean time to recovery (MTTR)
- Code coverage trends (not absolute %)
Health:
- Team satisfaction (surveys, retro feedback)
- Attrition rate
- Knowledge distribution (bus factor)
- Technical debt ratio
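Cycle time and lead time from the delivery metrics above fall straight out of ticket timestamps. A minimal sketch, assuming hypothetical ISO dates pulled from a ticketing system:

```python
from datetime import datetime

def days_between(start: str, end: str) -> int:
    """Whole days between two ISO dates, e.g. cycle time (started -> done)
    or lead time (requested -> delivered)."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

# Hypothetical ticket: requested 2024-03-01, work started 2024-03-04, done 2024-03-11
print(days_between("2024-03-04", "2024-03-11"))  # cycle time: 7
print(days_between("2024-03-01", "2024-03-11"))  # lead time: 10
```

As the section notes, the trend of these numbers over time is the signal; a single value in isolation says little.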
Avoiding metric gaming:
- Never use a single metric in isolation
- Focus on trends, not absolute numbers
- Use metrics to spark conversations, not as targets
- Let the team choose which metrics to track
- Regularly review whether metrics are still useful
- Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure"
Follow-up: How do you avoid metrics becoming a source of pressure or gaming?
Technical Strategy (3)
Key things to listen for:
- Pragmatic view — understands tech debt is sometimes a valid trade-off, not always bad
- Intentional decisions — takes on debt deliberately, not accidentally
- Tracking and visibility — documents debt and makes it visible to the team and stakeholders
- Payback plan — has a strategy for paying it down
When tech debt is acceptable:
- Time-to-market pressure with a clear deadline (MVP, competitive window)
- Prototype or proof of concept that will be rewritten
- Short-term trade-off with a scheduled payback plan
- When the cost of a perfect solution outweighs the benefit
When to avoid it:
- Core infrastructure or security-critical systems
- When there is no plan to pay it back
- When the team doesn't understand the trade-off being made
Management approach:
- Document it — create tickets for all known tech debt
- Categorize — severity and impact (blocks features, causes bugs, slows development)
- Allocate capacity — reserve 15–20% of each sprint for debt reduction
- Make it visible — include tech debt in backlog alongside features
- Tie to business impact — "This debt is causing X bugs per month and slowing new feature delivery by Y%"
Follow-up: How do you convince stakeholders to allocate time for paying down technical debt?
Key things to listen for:
- Systematic approach — relies on processes and automation, not just individual discipline
- Culture building — creates an environment where quality is valued
- Balance — doesn't sacrifice delivery speed entirely for perfection
Technical practices:
- Code reviews — mandatory peer reviews with clear guidelines
- Automated testing — unit, integration, and E2E tests in CI/CD
- Linting and formatting — automated code style enforcement (ESLint, Prettier)
- CI/CD pipeline — automated checks that block merging if quality gates fail
- Definition of Done — includes tests, documentation, and code review
Cultural practices:
- Pair programming — for complex features and knowledge sharing
- Tech talks — team members share knowledge and best practices
- Blameless post-mortems — learn from mistakes without finger-pointing
- Lead by example — write clean code yourself and review thoroughly
Handling low-quality code:
- Have a private, constructive conversation
- Understand the root cause (lack of knowledge, time pressure, unclear standards)
- Provide specific feedback with examples
- Offer support — pair programming, mentoring, training
- Set clear expectations and follow up regularly
Follow-up: How do you handle a team member who consistently writes low-quality code?
Key things to listen for:
- Strategic thinking — connects the build/buy decision to core competency and competitive advantage
- Total cost of ownership — doesn't compare only license cost to development cost
- Risk awareness — considers vendor stability, lock-in, and data ownership
Build when:
- It is a core differentiator — the capability is what makes your product unique
- No existing solution fits the specific requirements
- You have the resources and expertise to build and maintain it
- Long-term total cost of ownership favors internal development
- Data control or compliance requirements prevent third-party use
Buy/use third-party when:
- The problem is solved well by the market and is not your competitive advantage
- Speed to market is critical
- The vendor has years of investment you would need to replicate
- Your team lacks domain expertise in that area
- Ongoing maintenance burden would distract from core work
Evaluating vendor lock-in:
- Can you export your data in a standard format?
- Is there an open-source alternative you could migrate to?
- What is the switching cost if the vendor raises prices or shuts down?
- Are you building deep integrations that will be hard to replace?
- What happens to your product if the vendor has an outage?
Red flags: Always builds in-house out of pride, or always buys without considering strategic implications.
Follow-up: How do you evaluate vendor lock-in risk when choosing a third-party tool?
Stakeholder Management (3)
Key things to listen for:
- Boundary setting without hostility — maintains structure while staying collaborative
- Root cause investigation — understands why requirements are changing
- Process response — uses process to absorb change rather than fighting it
Why requirements change:
- Stakeholder didn't fully think through the request before raising it
- Business conditions genuinely shifted
- Lack of upfront alignment on goals and acceptance criteria
- Trust issues — stakeholder doesn't feel heard and over-corrects
Good approach:
- Invest in discovery — thorough requirements conversations before the sprint begins reduce mid-sprint changes dramatically
- Define a change process — new requests go to the backlog and are prioritized in the next planning session
- Make trade-offs explicit — "We can swap this in if we swap that out. Here's what we'd drop"
- Sprint goal, not task list — frame the sprint around an outcome; tasks can shift if the goal is preserved
- Regular stakeholder check-ins — short demos or progress updates reduce surprise changes driven by anxiety
- Retrospective — review patterns of change together and agree on a prevention approach
Red flags: Accepts all changes without pushback, blames the stakeholder publicly, or rigidly refuses any mid-sprint change even when business need is genuine.
Follow-up: How do you protect the team from constant interruptions while keeping stakeholders satisfied?
Key things to listen for:
- Honesty over optimism — gives realistic estimates even under pressure, doesn't over-promise
- Early escalation — raises concerns before the deadline, not after
- Data-driven communication — backs timeline projections with facts, not gut feel
- Options framing — presents choices, not just bad news
Setting expectations:
- Avoid point estimates — give ranges ("3–5 weeks") and explain what would push to each end
- State assumptions explicitly — "This estimate assumes no change in scope and no critical team members leaving"
- Break milestones into checkpoints — give leadership visibility at defined intervals, not just at the end
- Document commitments — written summaries prevent misremembering
When a timeline slips:
- Raise the issue as soon as you see it coming — never wait until the deadline to report a miss
- Quantify the slip: "We're ~2 weeks behind. Root cause is X."
- Present options: cut scope, extend the timeline, or add resources
- Recommend one option with your reasoning
- Align on the new plan and reset expectations in writing
Red flags: Only surfaces bad news at the deadline, adjusts estimates to match what leadership wants to hear, or cannot explain the basis for their timeline.
Follow-up: What do you do when the timeline you committed to is no longer achievable?
Key things to listen for:
- Non-territorial response — doesn't treat this as a personal affront, focuses on team impact
- Root cause curiosity — asks why this is happening
- Process clarity — establishes clear request channels without making people feel blocked
Why this happens:
- Stakeholder feels the formal process is too slow
- Existing relationship with a developer predates the project structure
- Stakeholder doesn't understand or respect the lead role
- The lead hasn't built enough trust or rapport with the stakeholder
Good approach:
- Understand the developer's experience — are they feeling pressured? Are they agreeing to things they shouldn't?
- Talk to the stakeholder directly, not defensively — "I want to make sure the team can give you the best response. Let me explain how routing requests through me actually helps you"
- Explain the impact on the team — developers getting pulled mid-task causes context switching and delays
- Offer speed — often the workaround exists because the official channel feels slow; fix the channel
- Align with the developer — they need to feel empowered to redirect requests back to you without damaging the relationship
Red flags: Responds aggressively, blames the developer, or does nothing and lets the pattern continue.
Prioritization (4)
Key things to listen for:
- Practical application — has actually used RICE, not just memorized the formula
- Awareness of subjectivity — understands that estimates in RICE can be gamed
- Contextual judgment — knows when to override the score
RICE formula: Score = (Reach × Impact × Confidence) / Effort
- Reach — how many users affected per time period (e.g., users/month)
- Impact — how much it moves the needle (3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal)
- Confidence — how sure you are about reach and impact (100% = high, 80% = medium, 50% = low)
- Effort — person-months of work
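The RICE formula above reduces to one line of arithmetic. A minimal sketch with a hypothetical backlog item (the input values are illustrative, not from the source):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE: (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical item: 2000 users/month reached, high impact (2),
# medium confidence (0.8), 4 person-months of effort
print(rice_score(2000, 2, 0.8, 4))  # 800.0
```

The score only makes comparisons meaningful if every backlog item is scored with the same units and the same time period for reach.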
Good approach:
- Score every item in the backlog consistently
- Sort by score descending
- Sanity-check the top items — do they feel right?
- Override with judgment when needed (strategic, compliance, or dependency reasons)
- Review scores regularly as new data arrives
Limitations:
- Reach is hard to estimate accurately before launch
- Impact is subjective — different people score it differently
- Doesn't account for strategic importance, dependencies, or technical risk
- Can favor small, safe items over big bets
Alternatives: MoSCoW for deadline-driven work, Kano model for user satisfaction, opportunity scoring for product-market fit.
Follow-up: What are the limitations of RICE, and when would you choose a different framework?
Key things to listen for:
- Audience segmentation — understands that executives, developers, and customers need different views
- Outcome framing — roadmap describes goals and bets, not a feature delivery schedule
- Confidence signaling — communicates uncertainty clearly (now/next/later)
Roadmap formats by audience:
Executives / board:
- Themes and strategic bets, not tasks
- Tied to business objectives (revenue, retention, growth)
- Quarterly horizon with directional 12-month view
Development team:
- Epics and milestones with rough sizing
- 6-week near-term with more detail, fuzzy further out
- Includes technical health work alongside features
Customers / sales:
- Problem-focused language, not solution language
- High-level categories without committed dates (avoid over-promising)
- Use "exploring", "planned", "in progress" language
Handling 12-month commitments:
- Explain the difference between a plan and a commitment
- "We intend to solve X by Q3. The specific solution may evolve as we learn more"
- Use banded dates ("H2") rather than specific months
- Document the assumptions that make the date hold
Red flags: Roadmap is a list of feature names with fixed dates, never updated, or presented identically to all audiences.
Follow-up: How do you handle requests to commit to specific features on the roadmap 12 months out?
Key things to listen for:
- Strategic clarity — has a clear picture of what the product is and isn't trying to do
- Data use — uses usage data, user research, and business metrics to reject ideas
- Diplomatic confidence — can say no without being dismissive
Framework for deciding what not to build:
- Does it serve the core user need? — if it's a distraction from the primary job-to-be-done, challenge it
- Do we have evidence of demand? — a request from one customer is not the same as a pattern across many
- What is the opportunity cost? — building this means not building something else; is that trade-off worth it?
- Can it be solved another way? — integration, configuration, or a partner solution may serve the need without building it
- Would we build it if we had to maintain it forever? — maintenance cost is rarely factored into the initial request
Saying no to senior leadership:
- Never say no to the person, say no to the idea in its current form
- Acknowledge the underlying goal: "I understand we want to grow enterprise accounts"
- Present data that challenges the assumption
- Offer an alternative: "Could we achieve the same goal by improving X instead?"
- If overruled, document your concerns and implement with full effort
Red flags: Never says no, says yes then delivers late, or says no based only on personal preference.
Follow-up: How do you say no to a feature that comes from senior leadership?
Key things to listen for:
- Ruthlessness on Must Haves — keeps the Must Have list genuinely minimal
- Facilitation skill — can run a team through the classification exercise
- Business grounding — classifications are anchored to user needs and business goals, not personal preference
MoSCoW defined:
- Must Have — the product cannot ship without this. If it's missing, the release fails entirely.
- Should Have — important but not critical. Include if time allows.
- Could Have — nice to have. Easy wins only if capacity exists.
- Won't Have (this time) — explicitly out of scope for this release, but may be reconsidered later.
Applying it under deadline pressure:
- Start with the question: "What is the minimum our users need to get value from this release?"
- Challenge every Must Have: "What breaks if we remove this?"
- Estimate effort for all Must Haves — do they fit in the available time?
- If not, something labeled Must Have is actually a Should Have — keep challenging
- Build only Must Haves, test, then layer in Should Haves if time permits
Resolving stakeholder disagreement:
- Reframe the question: "If we shipped without this, would we have to pull the release?"
- Use customer data or user research to ground the conversation
- Agree that Should Haves are committed for the next release — not abandoned
- Document the decision so it isn't re-litigated
Red flags: Must Have list is 80% of the original scope, the Won't Have list is empty, or the framework is applied by the lead alone without team input.
Follow-up: What do you do when stakeholders disagree on what is a Must Have versus a Should Have?
Delivery (4)
Key things to listen for:
- Release as a process, not an event — preparation starts long before the release date
- Communication plan — stakeholders, support, and customers are notified at the right time
- Rollback plan — has a plan for what to do if things go wrong
- Calm under pressure — doesn't panic when issues arise at the last minute
Good release process:
- Feature freeze — stop merging new features 1–2 weeks before release; only bug fixes
- Release branch — create a dedicated branch and apply fixes there
- Staged rollout — release to a small percentage of users first (canary or beta)
- Monitoring and alerts — watch error rates, latency, and business metrics closely post-release
- War room — designated team on-call for first 24–48 hours after release
- Communication plan — what do you tell users, sales, and support, and when?
Critical bug one day before release:
- Assess severity — is it a blocker or can it be mitigated?
- Options: fix and re-test (if small), delay the release, or release with a known limitation and fast-follow patch
- Make a call with stakeholders, don't make it alone
- Document the decision and reasoning
- Never rush a fix without adequate testing under release pressure
Red flags: No rollback plan, no monitoring, or delays release silently without communicating to stakeholders.
Follow-up: What happens when a critical bug is found one day before a planned release?
Key things to listen for:
- Prevention mindset — good upfront scoping prevents most creep
- Change control process — has a clear way to handle requests that come in mid-project
- Collaborative, not adversarial — manages scope without damaging stakeholder relationships
Prevention:
- Invest heavily in requirements clarity before the project starts
- Define explicit acceptance criteria for each deliverable
- Get sign-off on scope in writing before development begins
- Set clear expectations: "Changes after kickoff may impact timeline and budget"
Managing incoming requests:
- Log every request — create a ticket regardless of whether it will be accepted
- Evaluate impact — how much effort is it, and what would need to shift?
- Classify — is it truly new, or was it implied in the original scope?
- Present trade-offs — "We can add this if we drop X or extend the timeline by Y"
- Get explicit approval — never silently absorb scope changes
Legitimate vs. creep:
- Legitimate: new information that changes the fundamental approach, compliance requirements, critical bug discovered during development
- Creep: nice-to-have discovered mid-project, feature requested by one stakeholder without broader agreement, perfectionism over delivery
Red flags: Says yes to everything, scope decisions are made verbally and never documented, team discovers new requirements they had never heard of.
Follow-up: How do you distinguish legitimate scope additions from scope creep?
Key things to listen for:
- Transparency — doesn't hide bad news; surfaces it early
- Appropriate format — matches reporting format and frequency to audience needs
- Signal over noise — status reports communicate what matters, not everything
Tracking mechanisms:
- Sprint burndown or burnup charts
- Milestone tracker (planned vs actual date for each milestone)
- Risk register (updated weekly)
- Team velocity trend
- Blocked items log
Good status report structure:
- RAG status — Red/Amber/Green for overall project health
- Summary sentence — one-line statement of current state
- Milestones — what was completed, what's coming next
- Risks and issues — active blockers, escalations needed
- Decisions needed — what does the reader need to act on?
Reporting when behind:
- Report amber before it turns red — don't wait until the deadline to surface a delay
- State the size of the delay clearly: "We are approximately 1 week behind plan"
- Give the root cause in one sentence
- Present the recovery plan or request a trade-off decision
- Avoid defensive language or excessive explanation — stakeholders want clarity and a path forward
Red flags: Status is always green until it suddenly goes red, reports are full of activity descriptions but no outcome assessment, or status is reported in a format no stakeholder has time to read.
Follow-up: How do you report status when the project is behind schedule?
Key things to listen for:
- Diagnosis first — investigates root cause before applying a fix
- System thinking — recognizes that missed commitments are usually a systemic issue, not individual failure
- Calibration over pressure — fixes the estimation process rather than just adding pressure
Common root causes:
- Over-estimation of velocity (planning too much per sprint)
- Unplanned work consuming capacity (bugs, incidents, interruptions)
- Hidden dependencies not accounted for in planning
- Poor story decomposition (stories too large, acceptance criteria unclear)
- Team morale or motivation issues reducing throughput
- External blockers (waiting on other teams, approvals, data)
Diagnostic approach:
- Review the last 3–5 sprints — where did time actually go?
- Compare committed vs completed — is there a consistent pattern (always 70% done)?
- Talk to individuals — what is blocking them? What is taking longer than expected?
- Check the definition of ready — are stories well-defined before sprint start?
Recovery plan:
- Reduce sprint commitment by 20–30% for 2–3 sprints to rebuild confidence
- Add buffers for unplanned work explicitly in capacity planning
- Improve story decomposition — nothing enters a sprint larger than 3 days of work
- Protect the team from mid-sprint interruptions
Rebuilding stakeholder trust:
- Make smaller, conservative commitments and deliver on them consistently
- Give stakeholders visibility into what's in the sprint before it starts
- Show a trend chart — velocity is recovering, not just a verbal assurance
Red flags: Blames individuals, adds more pressure without fixing the system, or doesn't investigate at all and just tells the team to work harder.
Follow-up: How do you rebuild stakeholder trust after repeated missed commitments?