Growth managers face a unique self-assessment problem: the same experiment can look like a hero story or a rounding error depending on who set the attribution model. The challenge is claiming appropriate credit for real work without pretending the causal chain is cleaner than it actually is.
Why Self-Assessments Are Hard for Growth Managers
Growth work is fundamentally attribution-dependent. A 15% improvement in trial-to-paid conversion sounds decisive until someone points out that pricing changed the same week, the sales team ran a promotion, and the onboarding email sequence was also updated. Being honest about the causal complexity while still claiming credit for your contribution is a genuinely difficult communication task — and most growth managers err too far toward either false precision or excessive hedging.
There’s also a compounding time problem. Growth experiments have lag. A retention initiative you launched in Q2 shows up in your churn data in Q4. A top-of-funnel campaign you ran in January affects revenue in April. Self-assessment deadlines rarely align with the natural feedback loop of growth work, which means your most impactful projects are often still maturing when the review form is due.
Finally, growth managers often own metrics they don’t control. You can run the perfect email re-engagement campaign and still miss your retention target because a competitor launched a better product. The self-assessment challenge is distinguishing between the quality of your work — your hypotheses, your test design, your iteration speed — and the outcomes, which are always partially out of your hands.
The goal: write phrases that are specific about your contribution and honest about attribution, and that show systematic thinking — not just a list of metrics that may or may not be yours to claim.
How to Structure Your Self-Assessment
The Three-Part Formula
What I did → Impact it had → What I learned or what’s next
In growth, “what I did” is the experiment, program, or channel work. “Impact it had” is the metric movement, with honest attribution language. “What’s next” shows that you’re building on learning rather than just reporting results.
Phrases That Signal Seniority
| Instead of this | Write this |
|---|---|
| "I ran experiments" | "I ran 14 A/B tests in the activation funnel using Optimizely, achieving a 73% test-to-decision rate and surfacing three statistically significant improvements that contributed to a 19% lift in 7-day activation" |
| "Retention improved" | "The re-engagement email sequence I designed in Braze reduced 30-day churn by 8 percentage points for the at-risk cohort I targeted, as measured in a holdout test that isolated the campaign's effect from concurrent product changes" |
| "I drove acquisition growth" | "I restructured our paid search strategy on Google Ads, shifting budget from broad brand terms to high-intent non-brand keywords, and improved CAC by 22% while growing volume by 15% — a combination our previous approach hadn't achieved in four quarters of trying" |
| "I want to improve my analytics skills" | "I am deepening my Amplitude expertise this half to move from dashboard reading to self-serve cohort analysis, targeting the ability to run funnel breakdowns independently without data team support by Q3" |
User Acquisition Self-Assessment Phrases
Paid Channels
- "I restructured our Meta paid acquisition strategy around value-based lookalike audiences built from our highest-LTV customer cohort, replacing a broad interest-targeting approach that had been our default for 18 months. The new strategy reduced CAC by 28% and improved first-30-day retention for paid-acquired users by 14 percentage points — suggesting better fit as well as better economics."
- "I built a channel attribution model in Segment that gave us first-touch, last-touch, and linear multi-touch views for the first time, replacing our reliance on platform-reported conversions. The model revealed that our podcast sponsorships were undervalued in platform attribution by approximately 40% — a finding that changed our media mix planning for the following quarter."
- "I negotiated and executed three influencer partnerships that drove 4,200 trial starts at an effective CAC of $31 — 45% below our blended paid acquisition average. I built the tracking infrastructure in Segment and ran a holdout analysis to separate the influencer effect from organic baseline, giving us a defensible number rather than a self-reported one."
- "I reduced paid search wasted spend by $18K per month by implementing a systematic negative keyword review process and tightening match types on our highest-volume campaigns. The savings were reinvested in a non-brand keyword expansion that added 800 incremental trial starts per month in the following quarter."
Organic & SEO
- "I launched a programmatic SEO initiative targeting 2,400 long-tail keyword clusters relevant to our use cases, building a templated page system in partnership with engineering. Within six months, organic traffic to these pages was contributing 1,100 trial starts per month — a channel that did not exist before the initiative and now represents 18% of our total acquisition volume."
- "I developed a content distribution playbook that extended the reach of each content piece to five channels — email, LinkedIn, partner newsletters, community posts, and SEO landing pages — without increasing content production headcount. Average monthly traffic from content grew 60% over the year while content team capacity stayed flat."
Conversion Optimization Self-Assessment Phrases
Funnel Optimization
- "I conducted a systematic funnel analysis in Amplitude that identified the sign-up-to-activation step as our highest-dropout point, with 61% of new registrations never completing the first meaningful action. I ran five sequential tests on the activation flow over 10 weeks, and the winning combination — a reduced required setup, a progress indicator, and a contextual tooltip — improved activation from 39% to 58% as measured in a controlled experiment."
- "I redesigned the pricing page based on qualitative research from 20 user interviews and quantitative analysis of scroll depth and click behavior from Hotjar. The redesign test showed a 23% increase in plan selection clicks and an 11% increase in trial-to-paid conversion for visitors who reached the page — statistically significant at 95% confidence across a 3-week test window."
- "I identified that mobile visitors were converting at 40% the rate of desktop visitors on our landing pages, despite representing 55% of traffic. I ran a mobile-specific redesign test focused on load time and form length, and improved mobile conversion by 34% — narrowing the gap to a 20% delta, which I've flagged as a remaining opportunity for next quarter."
Onboarding & Activation
- "I redesigned our new user onboarding sequence in Braze using behavioral triggers instead of time-based sends, sending messages when users hit or skipped key activation milestones rather than on a fixed schedule. The behavioral sequence achieved a 41% 7-day activation rate versus 28% for the time-based control — a 46% relative improvement across a cohort of 8,400 new users."
- "I launched an in-app onboarding checklist built with LaunchDarkly feature flags that we A/B tested across new user segments. The checklist improved feature discovery depth — measured by number of distinct features used in the first week — by 2.4x for users who engaged with it, and users who completed the checklist had 3x higher 30-day retention."
Retention & Engagement Self-Assessment Phrases
Re-engagement Programs
- "I built a churn prediction model using 90-day behavioral signals from Amplitude and used it to identify an at-risk cohort of 2,100 users. I designed a targeted re-engagement program in Braze for this cohort and ran it against a holdout control. The program retained 340 users who would otherwise have churned based on holdout comparison — representing approximately $170K in ARR protected at our average ACV."
- "I designed a win-back email sequence for lapsed users that tested three different incentive structures — feature announcement, social proof, and time-limited trial extension. The trial extension variant outperformed the others significantly, with a 12% reactivation rate versus 4% for the control send. I documented the finding and it has since been used as a template by the enterprise team for their own win-back campaigns."
- "I implemented an early warning system using Mixpanel cohort analysis that flags users who show declining engagement patterns in week two — the period our data showed was most predictive of 90-day churn. The system now triggers automated interventions two weeks earlier than our previous reactive approach, and our 90-day retention has improved by 6 percentage points in the two quarters since deployment."
Engagement Depth
- "I identified through Amplitude analysis that users who engaged with our collaboration features within the first 14 days had 4x higher 12-month retention than those who did not, despite collaboration being a secondary feature in our onboarding. I proposed and got product alignment on a redesigned onboarding that introduces collaboration as a primary step, and the in-progress A/B test is showing early positive signals."
- "I launched a weekly digest email program that surfaces personalized usage insights to active users. Open rates stabilized at 38% — well above our email program average of 22% — and users who open the digest have 25% higher 30-day feature engagement. The program now reaches 14,000 active users weekly."
Experimentation & Testing Self-Assessment Phrases
Test Design & Rigor
- "I implemented a centralized experimentation log in Notion that documents every test hypothesis, design, result, and learning across the growth team. Before this, we had tested similar ideas multiple times without knowing it. The log has improved our test velocity by reducing the setup time for new experiments and has become the institutional memory for what we've learned about our users."
- "I introduced minimum detectable effect calculations into our experiment design process, ensuring we only launch tests that our traffic volume can power to statistical significance within a reasonable timeframe. This change eliminated three low-power tests that would have run for months without reaching a conclusion, and freed that traffic for higher-confidence experiments."
- "I ran a multivariate test in Optimizely on our homepage headline, subheadline, and CTA button across three variants each — 27 combinations — and used a Bayesian statistical model to identify a winning combination with 90% confidence in three weeks rather than the eight weeks a sequential A/B approach would have required. The winning combination improved homepage trial starts by 17%."
Learning Velocity
- "I increased our team's experiment velocity from an average of 3 tests per month to 8 by building a reusable experiment template in LaunchDarkly and streamlining the review and approval process. More tests with faster iteration cycles have compounded our learning rate — our last quarter produced more statistically significant learnings than the previous two quarters combined."
- "I established a monthly experimentation review where the growth team presents learnings — including failed tests — to the broader product and marketing organization. Three insights from growth experiments have since influenced product roadmap decisions, creating a feedback loop between growth learnings and product investment."
Product-Led Growth Self-Assessment Phrases
Viral & Referral Mechanics
- "I designed and launched our referral program, including the incentive structure, referral tracking via Segment, and email delivery via HubSpot. In the first 90 days, the program generated 1,800 new trial starts with a K-factor of 0.18 — meaning roughly 1 in 5 new users is now referred by an existing user, a channel that costs 70% less than our paid average CAC."
- "I identified a natural sharing behavior in our power users — they were manually exporting and sharing their outputs with colleagues — and built a collaborative sharing feature that formalized and tracked this behavior. Shares increased 4x after the feature launched, and each share was converting to a trial at a 31% rate based on first 60 days of data."
Freemium & Trial Optimization
- "I ran a pricing experiment testing three different free tier limits to find the threshold that maximized trial-to-paid conversion without sacrificing activation. The winning configuration — more generous limits with a hard wall at a specific feature — improved trial-to-paid conversion by 19% while holding free-tier activation steady. The test added an estimated $240K ARR on an annualized basis."
- "I redesigned our upgrade prompts to be contextual — triggered by specific behaviors at the moment of value realization — rather than time-based or usage-threshold-based. The contextual prompts converted at 2.8x the rate of the previous generic prompts, with users reporting in post-upgrade surveys that they understood exactly why they were upgrading."
Revenue & Business Impact Self-Assessment Phrases
Revenue Attribution
- "I built a revenue attribution dashboard in Tableau that gives the leadership team a weekly view of pipeline and revenue by acquisition channel, controlling for cohort age and plan type. For the first time, we can see that our enterprise customers acquired through content have 40% higher 12-month expansion revenue than those acquired through paid — a finding that is reshaping our enterprise content investment."
- "I identified that our mid-market cohort acquired in Q1 was on track to expand at 130% net revenue retention — well above our 110% blended average — based on engagement signals visible in Amplitude. I flagged this to the account management team four months before renewal, allowing them to prioritize these accounts for proactive expansion conversations. Three of the five accounts I flagged have already expanded."
Growth Program Impact
- "The growth programs I owned this year — referral, re-engagement, onboarding optimization, and paid channel restructuring — contributed an estimated $1.1M in incremental ARR based on controlled holdout comparisons and conservative attribution. I document each program's contribution separately and am cautious about summing them due to overlap effects, but individually each program's causal estimate is defensible."
- "I reduced our overall blended CAC by 18% year-over-year while growing acquisition volume by 31%, improving our CAC payback period from 14 months to 9 months. This was achieved through a combination of channel mix optimization, landing page improvements, and the referral program launch rather than any single initiative."
How Prov Helps Growth Managers Track Their Wins
Growth managers run dozens of experiments per quarter, and the learning from each one — including the failed tests — is genuinely valuable evidence in a self-assessment. But growth work is fast-paced, and without a system for capturing wins as they happen, the retrospective effort required to reconstruct an experiment’s hypothesis, design, result, and business impact from memory is enormous.
Prov captures those wins in 30 seconds when you close a test, hit a metric milestone, or make a decision that changes a program direction. Over a quarter, it builds the documented record that turns a growth role’s inherent attribution complexity into a clear, credible narrative for your next review. Download Prov free on iOS.
Ready to Track Your Wins?
Stop forgetting your achievements. Download Prov and start building your career story today.
Download Free on iOS. No credit card required.