How to Write Effective Growth Manager Performance Reviews
The best growth reviews don't ask "did the numbers go up?" They ask "did this person run experiments worth running, learn what they set out to learn, and build systems that compound over time?"
Growth manager reviews require honest attribution — and honest attribution is hard. The same improvement in activation rate can look like a win when it’s the result of a disciplined experiment sequence, or it can obscure a miss when it was driven by a one-time promotional lever that won’t repeat. Good reviews distinguish between these two realities, and doing that requires looking past the topline metric to the quality of the work that produced it.
The central challenge is that growth work has lag. An onboarding experiment shipped in Q2 may not show its full impact on retention until Q4. A referral loop built in H1 compounds quietly until it becomes a meaningful acquisition channel in the following year. Reviews that only credit observable outcomes from the current review period will systematically undervalue growth managers who invest in durable systems while rewarding those who optimize for short-term lift. Build in language that recognizes both horizon types.
Experimentation rigor is the clearest differentiator between strong and average growth managers. A strong growth manager runs fewer, better-designed experiments — clear hypotheses, appropriate sample sizes, single-variable tests wherever possible, and honest post-mortems that extract learning even from null results. An average growth manager runs many experiments and celebrates wins without studying losses. Your review should evaluate the quality of the experimental portfolio, not just the win rate.
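To make "appropriate sample sizes" concrete, here is a minimal pre-launch power calculation of the kind a rigorous growth manager should be able to produce for any test. It is a sketch in Python using statsmodels, assuming a hypothetical 20% baseline conversion rate, a two-percentage-point minimum detectable effect, 5% significance, and 80% power; every number is a placeholder, not a benchmark.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.20   # hypothetical current conversion rate
mde = 0.02        # minimum detectable effect: +2 percentage points

# Cohen's h effect size for the two proportions being compared
effect = proportion_effectsize(baseline + mde, baseline)

# Per-variant sample size for a two-sided test at alpha=0.05, power=0.80
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Required sample size per variant: {n_per_arm:,.0f}")
```

A reviewer does not need to run this; the point is that a strong growth manager can show a calculation like it for every experiment they launched, and an average one often cannot.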
Finally, strong growth reviews acknowledge the cross-functional nature of the role. Growth managers who work well with product, data engineering, and marketing produce better results not because they’re luckier, but because they can move faster and instrument correctly. Document how the growth manager built and used those relationships — it’s a leading indicator of future trajectory that the metrics don’t capture.
How to Use These Phrases
For Managers
Replace bracketed placeholders with specific metrics, tools, and experiments from the review period. For growth roles, the most credible review language always cites actual numbers — lift percentages, sample sizes, experiment runtimes — even approximate ones. Vague phrasing ("improved conversion") is nearly meaningless without context.
For Employees
Read the “Exceeds” phrases and use them as a self-assessment checklist: did I design and document rigorous experiments? Did I build compounding systems or just optimize existing ones? Did I attribute results honestly, including the ones driven by factors outside my control? Your answers reveal what to highlight and what to develop.
Rating Level Guide
| Rating | What it means for a Growth Manager |
|---|---|
| Exceeds Expectations | Designed and shipped experiments that produced durable, compounding growth; built or improved core growth loops; set a high bar for experimental rigor on the team |
| Meets Expectations | Ran a healthy experiment cadence; delivered against acquisition and retention targets; maintained growth infrastructure effectively |
| Needs Development | Experiment quality inconsistent; attribution methodology unclear; growth work primarily reactive or reliant on one-time levers |
User Acquisition Performance Review Phrases
Exceeds Expectations
- Consistently builds acquisition channels with a compounding architecture — channel investments made this year are designed to produce higher return in year two than year one, distinguishing durable channel development from promotional one-time lifts.
- Proactively aligned acquisition strategy with product positioning by working directly with the product team to ensure that the users being acquired match the user profile the product is designed to serve — reducing downstream churn from mismatched expectations.
- Consistently identifies and develops acquisition channels before they become crowded — the SEO + content partnership channel they built in H1 now contributes 22% of monthly signups at a CAC 40% below the paid average.
- Proactively designed and shipped a referral program using HubSpot and Segment that achieved a viral coefficient of 0.31 within 60 days of launch — the highest referral performance the company has recorded and a direct result of rigorous incentive structure testing (the viral coefficient calculation is worked through after this list).
- Independently rebuilt the paid acquisition measurement stack in Segment, enabling accurate multi-touch attribution for the first time — the team can now make channel investment decisions based on actual CAC data rather than last-click proxies.
- Drives acquisition strategy with genuine channel diversity — the acquisition portfolio now has five channels each contributing more than 10% of volume, reducing concentration risk from the single paid channel that dominated a year ago.
- Demonstrates exceptional ability to find acquisition leverage in product features — identified and instrumented three in-product sharing behaviors that now generate 15% of organic new user volume at zero marginal cost.
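For reviewers unfamiliar with the viral coefficient cited in the referral phrase above: it is the average number of new users each acquired user ultimately generates through invitations, i.e. invites sent per user multiplied by the invite conversion rate. A minimal sketch with hypothetical funnel numbers, chosen to land near the 0.31 in the example:

```python
# Hypothetical referral funnel for one cohort; all numbers are illustrative
new_users          = 10_000
invites_sent       = 6_200
invite_conversions = 3_100   # invitees who became new users

invites_per_user = invites_sent / new_users            # 0.62
invite_conv_rate = invite_conversions / invites_sent   # 0.50
k = invites_per_user * invite_conv_rate                # viral coefficient: 0.31

# With k < 1 the loop amplifies acquisition rather than self-sustaining:
# the geometric series k + k^2 + ... converges to k / (1 - k) extra users
# per directly acquired user.
print(f"k = {k:.2f}, amplification = {k / (1 - k):.2f}")
```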
Meets Expectations
- Manages paid and organic acquisition channels effectively, delivering against monthly signup targets with appropriate spend efficiency and CAC discipline.
- Monitors channel performance in Amplitude and Mixpanel consistently and makes timely optimization decisions when performance drifts from target.
- Maintains healthy relationships with growth infrastructure providers — Segment pipelines are reliable, Braze journeys are up to date, and data quality issues are caught and resolved promptly.
- Tests new acquisition hypotheses at a reasonable cadence; not every test produces a win, but the experiment queue is always populated and prioritized.
- Documents acquisition channel performance with sufficient rigor that channel attribution is legible to stakeholders and defensible in business reviews.
- Maintains accurate CAC tracking across channels and uses those figures to guide budget allocation decisions rather than relying on volume or raw traffic as proxies for channel health.
Needs Development
- Acquisition strategy would benefit from greater channel diversification — current reliance on a single paid channel creates meaningful risk; a structured exploration of organic and product-led acquisition vectors would reduce that dependency.
- Attribution methodology for acquisition experiments needs strengthening — several wins claimed this period lacked the statistical rigor to distinguish signal from noise, which has reduced cross-functional confidence in growth team reporting.
- Would benefit from investing more in durable acquisition infrastructure; current focus on short-cycle optimizations produces diminishing returns compared to channel-building work with longer payoff horizons.
- Acquisition reporting needs more context for business stakeholders — raw signup counts are reported without the CAC and quality context that would allow leadership to make informed budget decisions.
Conversion Optimization Performance Review Phrases
Exceeds Expectations
- Consistently runs conversion experiments with textbook rigor — clear hypotheses, pre-registered sample sizes, minimum detectable effect calculations, and honest post-mortems regardless of outcome; the team cites this leader's experiment design as the standard they aspire to.
- Proactively identified a friction point in the onboarding flow using Amplitude funnel analysis and Optimizely session data, designed a three-variant test, and shipped a change that improved D1 activation by 14 percentage points — the largest single activation improvement in two years.
- Independently developed a personalization framework that serves different onboarding experiences to different user segments using Braze — conversion lift for the top two segments is 19% and 23% respectively above the control experience.
- Drives conversion rigor across the growth team by requiring all experiments to be documented in a shared log with hypothesis, methodology, results, and learnings — the experiment archive is now a reference resource for product and engineering teams.
Meets Expectations
- Runs a steady cadence of conversion experiments across the funnel — signup, onboarding, and first-value milestones are tested regularly and the team has a clear picture of where friction exists.
- Uses Optimizely and Amplitude effectively to design, instrument, and analyze experiments; results are reported accurately and learnings are applied to future test design.
- Maintains and improves the onboarding experience over time — baseline conversion metrics improve incrementally each quarter and regressions are identified and addressed promptly.
- Documents conversion hypotheses clearly so engineering and design partners can contribute effectively to experiment design without requiring extensive rework.
- Identifies and investigates conversion regressions promptly — when metrics unexpectedly decline, root cause analysis is completed quickly and corrective experiments are prioritized without waiting for the next planning cycle.
Needs Development
- Conversion experiment quality would benefit from greater statistical rigor — several recent tests were called early or run with insufficient sample sizes; investing in proper power analysis before launch would improve the credibility of results.
- Hypothesis quality in conversion testing has been inconsistent — building a stronger habit of connecting test hypotheses to user behavioral data from Amplitude before designing experiments would improve the signal-to-noise ratio of the test portfolio.
- Would benefit from a more systematic approach to learning accumulation — successful and unsuccessful experiments are currently treated similarly, which means the team re-tests disproven hypotheses and misses compounding opportunities.
- Conversion funnel analysis needs more depth — current reporting surfaces top-level conversion rates without the step-level breakdown that would identify where specific user segments experience disproportionate drop-off; a minimal sketch of that breakdown follows this list.
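The segment-aware, step-level breakdown the last phrase calls for is cheap to produce once per-user funnel data is exported. A minimal pandas sketch, assuming a hypothetical export with one boolean column per funnel step; column names and data are invented for illustration:

```python
import pandas as pd

# Hypothetical per-user funnel export (e.g., pulled from product analytics);
# one boolean column per funnel step.
df = pd.DataFrame({
    "segment":   ["organic", "organic", "organic", "paid", "paid", "paid"],
    "signed_up": [True, True, True, True, True, True],
    "onboarded": [True, False, True, True, True, False],
    "activated": [True, False, True, True, False, False],
})

steps = ["signed_up", "onboarded", "activated"]
reached = df.groupby("segment")[steps].mean()        # share reaching each step
# Step-over-step survival: what fraction of the previous step continues
survival = reached.div(reached.shift(axis=1)).iloc[:, 1:]
print(survival)
```

Even this toy version surfaces the kind of finding the phrase describes: here the paid segment loses half its onboarded users at activation while the organic segment loses none.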
Retention & Engagement Performance Review Phrases
Exceeds Expectations
- Consistently builds retention strategy around behavioral understanding rather than messaging volume — the team's engagement programs are distinguished by the accuracy of their trigger logic and the relevance of their content, not by send frequency or channel breadth.
- Proactively developed a cohort analysis library in Amplitude that gives the team a shared view of how different acquisition cohorts behave over their first 90 days — this infrastructure now informs both acquisition targeting and onboarding design simultaneously.
- Consistently treats retention as a product problem, not a messaging problem — identified the behavioral signals that predict churn 30 days in advance using Amplitude, built a Braze intervention that addresses the root cause, and improved 90-day retention by 8 percentage points.
- Proactively developed a re-engagement program that recovered 12% of churned users over 90 days — the program was designed from scratch, tested rigorously, and is now running as an evergreen campaign at a positive LTV contribution.
- Independently identified that a specific in-product behavior correlated 3.2x more strongly with long-term retention than the company's existing success metric — presented the analysis, drove alignment on redefining the activation milestone, and redesigned onboarding around the new signal.
- Drives engagement strategy with genuine depth — the team's understanding of what "engaged" means for different user segments is now specific, behavioral, and grounded in Amplitude cohort analysis rather than generic activity metrics.
Meets Expectations
- Monitors retention and engagement metrics consistently and launches intervention programs when cohort performance drifts from benchmark; Braze journeys are kept current and lifecycle messaging is relevant to actual user behavior.
- Uses Amplitude cohort analysis to understand retention curves and identify the stages where engagement drops most significantly; findings inform both product and messaging decisions.
- Runs re-engagement experiments at an appropriate cadence — not every campaign produces lift, but the channel is actively managed and learning accumulates over time.
- Maintains accurate retention reporting and communicates trends to stakeholders with enough context for non-growth audiences to understand what is happening and why.
- Connects retention work to product feedback loops — when behavioral data reveals a retention-killing friction point, findings are shared with the product team with a concrete recommendation rather than a raw data dump.
Needs Development
- Retention strategy would benefit from a stronger behavioral foundation — current engagement interventions are primarily message-based; developing a deeper understanding of the in-product behaviors that drive long-term retention would unlock more durable improvement levers.
- Re-engagement program design has been reactive — building a proactive churn prediction model and intervening earlier in the at-risk user journey would meaningfully improve the economics of retention work (a minimal starting point is sketched after this list).
- Would benefit from investing more time in understanding why users churn rather than optimizing the win-back process — root cause analysis from churned user interviews or behavioral data would produce higher-leverage retention investments.
- Retention segmentation needs development — all users currently receive the same lifecycle journeys regardless of their behavior or profile; developing segment-specific retention strategies would improve both intervention relevance and overall retention economics.
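The proactive churn prediction suggested two phrases above does not have to begin as a machine-learning project. A minimal rule-based sketch, assuming two hypothetical behavioral columns (days_since_login and core_actions_7d) exported from product analytics; the weights and threshold are illustrative guesses, not tuned values:

```python
import pandas as pd

# Hypothetical per-user behavioral snapshot; all columns and values invented
users = pd.DataFrame({
    "user_id":          [1, 2, 3, 4],
    "days_since_login": [2, 18, 31, 5],
    "core_actions_7d":  [14, 1, 0, 6],
})

# Naive risk score: weight inactivity and low core-feature usage,
# each normalized to [0, 1], then flag the riskiest tier.
users["risk"] = (
    0.6 * (users["days_since_login"] / 30).clip(upper=1.0)
    + 0.4 * (1 - (users["core_actions_7d"] / 10).clip(upper=1.0))
)
at_risk = users[users["risk"] > 0.5]
print(at_risk[["user_id", "risk"]])
```

A rule like this is a baseline to beat, not an endpoint; once interventions on the flagged tier show lift against a holdout, a fitted model can replace the hand-set weights.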
Experimentation & Testing Performance Review Phrases
Exceeds Expectations
- Consistently treats failed experiments as high-value learning events — null results are analyzed as rigorously as wins, and the insights from disproven hypotheses have directly informed two successful experiments that would not have been designed without them.
- Proactively introduced a Growth Review meeting where experiment results, learnings, and next-cycle hypotheses are shared across product, engineering, and marketing — the format has meaningfully improved the quality of cross-functional input into the growth roadmap.
- Independently developed a holdout group methodology that has given the team its first reliable baseline for measuring the cumulative effect of growth changes — enabling attribution of long-term metric movements that had previously been unmeasurable.
- Consistently raises the experimental bar across the growth function — introduced a pre-registration process for A/B tests that has reduced false positives, improved reproducibility of results, and given the team confidence to make larger decisions based on experiment outputs.
- Proactively developed a growth experimentation framework in Notion that covers hypothesis writing, test design, instrumentation standards, and result interpretation — the document is referenced by the broader product team and has reduced onboarding time for new growth hires by an estimated 30%.
- Independently recognized that the team's experiment velocity was masking a quality problem — ran fewer experiments in H2 than H1, but the hit rate on significant lifts improved from 18% to 37%, producing more total impact with less wasted engineering time.
- Drives a culture of honest experimental reporting — negative results and null results are documented and shared with the same visibility as wins, which has materially improved the quality of the team's collective mental model of what works.
Meets Expectations
- Runs experiments with appropriate rigor for the team's stage — hypotheses are clear, sample sizes are adequate, and results are interpreted without obvious p-hacking or confirmation bias.
- Manages the experiment backlog actively — tests are prioritized by expected impact and ease of implementation, and the queue is communicated to engineering and product partners in advance.
- Documents experiment results consistently in Optimizely and in team-shared notes — the growth team's test history is accessible and organized, and learnings are applied to future test cycles.
- Recognizes the limits of A/B testing and applies other research methods — qualitative user interviews, session recordings, and cohort analysis — when controlled experiments are not appropriate or feasible.
- Communicates experiment results to non-growth stakeholders in plain language — leaders and product partners can understand what was tested, what was found, and what the team will do differently, without needing to interpret raw statistical output.
Needs Development
- Experimental rigor needs development — several tests this period would benefit from cleaner single-variable design and pre-specified stopping criteria; the current practice of evaluating results at multiple checkpoints inflates false positive rates, as the simulation after this list demonstrates.
- Experiment prioritization would benefit from a more systematic framework — the current backlog is opportunistic rather than hypothesis-driven, which reduces the expected value of each test cycle.
- Would benefit from a more disciplined approach to learning documentation — the team regularly re-explores previously tested hypotheses because results aren't systematically recorded and shared; building a shared experiment log would compound institutional knowledge.
- Experiment velocity is high but hit rate is low — prioritizing fewer, better-designed tests over a high volume of underpowered tests would produce more reliable signal and more efficient use of engineering capacity.
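The false-positive inflation from checkpoint peeking, flagged in the first phrase above, is easy to demonstrate with a simulation. The sketch below runs A/A tests in which no true effect exists, so every "significant" result is a false positive; the checkpoint counts and sample sizes are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_total, p = 2_000, 10_000, 0.10      # A/A test: both arms identical
checkpoints = [2_000, 4_000, 6_000, 8_000, 10_000]

def significant(a, b, n, alpha=0.05):
    """Two-proportion z-test on the first n observations of each arm."""
    pa, pb = a[:n].mean(), b[:n].mean()
    pooled = (a[:n].sum() + b[:n].sum()) / (2 * n)
    se = np.sqrt(2 * pooled * (1 - pooled) / n)
    z = (pa - pb) / se
    return 2 * stats.norm.sf(abs(z)) < alpha

peeking = final = 0
for _ in range(n_sims):
    a = rng.random(n_total) < p
    b = rng.random(n_total) < p
    peeking += any(significant(a, b, n) for n in checkpoints)
    final += significant(a, b, n_total)

print(f"False positives with 5 interim peeks: {peeking / n_sims:.1%}")
print(f"False positives, single final test:   {final / n_sims:.1%}")
```

Runs of this typically show the peeking strategy flagging significance in roughly two to three times as many null experiments as the single final analysis, which is exactly why pre-specified stopping criteria or sequential-testing corrections matter.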
Revenue Impact Performance Review Phrases
Exceeds Expectations
- Consistently demonstrates that growth strategy and business strategy are the same thing — every growth initiative is traced to a specific revenue or retention goal, and the growth team's roadmap is legible to the CFO without translation.
- Proactively built a growth accounting model that separates new MRR from expansion, contraction, and churn — giving leadership a granular view of which growth levers are performing and which are masking underlying retention problems (the underlying decomposition is sketched after this list).
- Consistently connects growth decisions to revenue outcomes rather than vanity metrics — every major initiative is tracked against an LTV-to-CAC model, and the growth team's budget decisions are grounded in unit economics rather than volume alone.
- Proactively identified that the team's highest-volume acquisition channel had a 40% lower 12-month LTV than a smaller channel — shifted budget allocation accordingly, producing a 22% improvement in blended LTV/CAC with flat total spend.
- Independently designed and launched an expansion revenue experiment targeting free-to-paid conversion that produced $340K in incremental ARR in the first quarter — the initiative was sourced, scoped, and shipped by the growth team without product team involvement.
- Drives revenue accountability across growth programs — attribution models, LTV projections, and payback period calculations are standard outputs of growth planning and are used to defend budget requests and prioritization decisions to leadership.
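The growth accounting model described above reduces to one identity: net new MRR equals new plus expansion minus contraction minus churn. A minimal sketch with hypothetical monthly figures:

```python
# Hypothetical monthly MRR movements, in dollars; all figures illustrative
new_mrr         = 48_000   # from brand-new customers
expansion_mrr   = 21_000   # upgrades and seat growth in existing accounts
contraction_mrr =  9_000   # downgrades
churned_mrr     = 32_000   # fully cancelled accounts

net_new_mrr = new_mrr + expansion_mrr - contraction_mrr - churned_mrr

# SaaS "quick ratio": dollars of MRR gained per dollar lost; a value of 4+
# is a commonly cited (though rough) benchmark for healthy growth
quick_ratio = (new_mrr + expansion_mrr) / (contraction_mrr + churned_mrr)

print(f"Net new MRR: ${net_new_mrr:,}")   # $28,000
print(f"Quick ratio: {quick_ratio:.2f}")  # 1.68
```

The value of the decomposition is visible even in toy numbers: headline net new MRR looks positive here, while the quick ratio reveals that churn is consuming most of the gains.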
Meets Expectations
- Tracks the revenue implications of growth work consistently — acquisition, activation, and retention metrics are connected to LTV models and the team understands how their work affects business economics.
- Makes growth investment decisions with appropriate attention to payback period and CAC — budget is not chased for volume at the expense of efficiency, and channel economics are reviewed regularly.
- Communicates revenue impact of growth initiatives clearly to business stakeholders — attribution is honest and the team's contribution to revenue goals is legible without overstating causation.
- Flags when growth metrics and revenue metrics diverge — the team does not optimize for acquisition volume when conversion or retention patterns suggest that acquired users have below-average LTV.
- Contributes to growth planning with realistic revenue forecasts grounded in historical cohort behavior rather than optimistic top-of-funnel projections.
Needs Development
- Revenue connection in growth work needs strengthening — initiatives are frequently measured in terms of funnel metrics without a clear line to LTV or revenue impact; developing a habit of attaching economic models to growth experiments would improve prioritization and stakeholder confidence.
- Would benefit from developing a stronger understanding of unit economics — growth investments are currently evaluated primarily on CAC without sufficient attention to LTV or payback period, which can make individually efficient channels look more attractive than they are in portfolio context (a worked LTV and payback example follows this list).
- Attribution practice needs development — revenue contributions from growth programs are currently reported with more certainty than the methodology supports; investing in more rigorous attribution infrastructure via Segment would improve the quality of business decisions built on growth data.
- Revenue forecasting accuracy needs improvement — projections have consistently overestimated growth program impact this period; building a more conservative, data-driven forecasting practice would improve planning reliability and leadership trust.
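To ground the unit-economics vocabulary used throughout this section (LTV, LTV:CAC, payback period), here is a deliberately simple worked sketch. It uses the naive LTV formula of margin-adjusted ARPU divided by monthly churn; real models should use cohort-based retention curves, and every input below is hypothetical:

```python
# Hypothetical channel economics; all inputs are illustrative
monthly_arpu  = 30.0    # average revenue per paying user per month
gross_margin  = 0.80
monthly_churn = 0.05    # share of paying users lost each month
cac           = 120.0   # cost to acquire one paying user

monthly_contribution = monthly_arpu * gross_margin   # $24/month
ltv = monthly_contribution / monthly_churn           # $480 (naive model)
payback_months = cac / monthly_contribution          # 5 months

print(f"LTV ${ltv:,.0f} | LTV:CAC {ltv / cac:.1f} | payback {payback_months:.0f} mo")
```

A growth manager operating at the "Exceeds" level can walk through numbers like these per channel, explain which inputs are measured versus assumed, and show how the conclusion changes when the assumptions move.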
How Prov Helps Build the Evidence Behind Every Review
Growth managers are data-rich and narrative-poor. The experiments they ran, the hypotheses they validated, the growth loops they built — all of it lives in dashboards, Notion docs, and Optimizely logs that are scattered across tools and nearly impossible to synthesize at review time. When the review cycle opens, most growth managers spend more time reconstructing what they did than articulating why it mattered.
Prov solves this by capturing wins at the moment they happen. A growth manager who logs the result of a significant experiment, a channel bet that paid off, or a cross-functional win as it happens builds a rich, timestamped record of their work over the year. Prov transforms those rough inputs into polished accomplishment statements, extracts the underlying skills and impact patterns, and keeps them organized and ready to reference. When review season arrives, the evidence is already there — no reconstruction required.
Ready to Track Your Wins?
Stop forgetting your achievements. Download Prov and start building your career story today.
Download Free on iOS. No credit card required.