Engineering directors are measured on org health, delivery reliability, and technical quality — none of which show up in a single commit. Document the systemic changes you drove.
The Unique Challenge of Writing Engineering Director Accomplishments
The further you are from the code, the harder it is to point to your work. A senior engineer has merged commits. An engineering manager can point to team delivery. But as a Director of Engineering or VP of Engineering, your fingerprints are on the org chart, the hiring process, the culture norms, and the technical strategy — and none of that shows up in a single quarterly metric or a feature launch announcement.
The result is that many strong engineering directors undersell themselves at review time. They know they've changed something fundamental — the way the org ships, the quality of engineers it attracts, the reliability of its systems — but they struggle to translate that into language that resonates with a CEO, a board, or an executive team that thinks in revenue, risk, and headcount.
This post gives you 75+ examples organized around the six areas that define excellent Director and VP Engineering performance. The examples are written at organizational scope, not individual contribution scope. Each one points to a system you changed, a problem you diagnosed at scale, or a team outcome that wouldn't have happened without your specific decisions. That's the territory you should be documenting.
Read through, find the examples closest to your actual work, and fill in your specific numbers and context. The structure of each example — the problem, the decision, the organizational change, the outcome — is the part to keep.
What gets you promoted are documented accomplishments with measurable impact.
Director of Engineering Accomplishment Categories
| Competency | What Reviewers Look For |
|---|---|
| Engineering Delivery & Execution | Does your org ship reliably and learn when it doesn't? |
| Technical Strategy & Architecture | Do you set sustainable technical direction at organizational scale? |
| Team Building & Org Health | Do you attract, retain, and develop great engineers at scale? |
| Engineering Culture & Excellence | Do you raise the bar for the whole org, not just your teams? |
| Cross-functional & Executive Leadership | Do you lead credibly beyond engineering? |
| Business Impact & Cost Efficiency | Do you connect engineering decisions to outcomes the business cares about? |
1. Engineering Delivery & Execution
Velocity & Reliability
- When I joined, the org was consistently missing quarterly commitments — average delivery rate of 54% of committed scope. I ran a delivery retrospective across 6 teams and found 3 systemic causes: underestimated cross-team dependencies, late-breaking scope changes, and no escalation protocol for blockers. I introduced dependency mapping in planning, a scope freeze policy, and a weekly blocker review. Delivery rate improved to 79% in 2 quarters and 87% in 4 quarters.
- Sprint velocity was inconsistent across 8 teams with no org-level visibility into why. I built a lightweight delivery metrics dashboard — points committed vs. delivered, blockers logged, and reason-for-miss codes. Within 2 quarters, we had enough data to identify the top 3 systemic blockers across the org. Two were cross-team dependency issues I was able to resolve structurally. One was a resourcing gap I used the data to escalate to the CEO. All 3 were addressed within 6 months.
- Our deployment frequency was once every 3 weeks — a competitive disadvantage in a market that was moving fast. I chartered a platform team to build CI/CD improvements, removed manual approval from 4 of the 6 release gates, and introduced feature flags for safer incremental releases. Deployment frequency reached weekly within 4 months and twice weekly within 8. Time to ship a validated idea from conception dropped from 47 days to 18 days.
- The org was planning in 6-month chunks with no mid-cycle adjustment process. When priorities shifted — which they did twice in the year — teams had no framework for deciding what to drop. I introduced 6-week planning cycles with an explicit "what do we stop to make room for this" step built into every adjustment. Teams reported clearer direction. Unplanned interruptions as a share of total capacity dropped from 31% to 14%.
- Cross-team launches were consistently delayed because no one owned the coordination layer. I created a Technical Program Management function — 2 senior TPMs embedded at the org level who owned cross-team launch plans, dependency tracking, and escalation. Time-to-launch on projects with 3+ team dependencies dropped by 34% in the first year.
- Discovered that 3 of our 9 teams had no documented on-call runbooks, leading to 2–3× longer incident resolution times when engineers other than the original author were paged. I mandated runbook completion as a pre-release requirement and ran a 6-week runbook sprint for existing services. Mean time to resolve dropped from 47 minutes to 19 minutes org-wide. On-call burden, measured in hours per engineer per week, decreased by 28%.
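The delivery metrics dashboard in the velocity examples above comes down to two numbers: committed vs. delivered points, and a tally of reason-for-miss codes. A minimal sketch — the team names, point values, and miss codes here are invented for illustration:

```python
from collections import Counter

# Hypothetical sprint records: (team, points_committed, points_delivered, miss_code)
sprints = [
    ("checkout", 40, 34, "cross-team-dependency"),
    ("search",   32, 32, None),
    ("billing",  38, 21, "late-scope-change"),
    ("checkout", 36, 30, "cross-team-dependency"),
]

# Org-level delivery rate: delivered points over committed points
delivered = sum(d for _, _, d, _ in sprints)
committed = sum(c for _, c, _, _ in sprints)
print(f"org delivery rate: {delivered / committed:.0%}")

# Systemic blockers surface as the most frequent miss codes
misses = Counter(code for *_, code in sprints if code)
print("top miss reasons:", misses.most_common())
```

The point of the reason-for-miss code isn't precision — it's that after a couple of quarters, the same code appearing across multiple teams is evidence of a structural problem rather than a team-level one.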
Quality & Incident Reduction
- P0 incidents were occurring at a rate of 3–4 per month. I introduced a structured incident review process — every P0 required a post-mortem within 72 hours with root cause analysis and at least 2 action items with owners. I personally reviewed the first 8 post-mortems and identified a pattern: 5 of 8 were caused by missing observability. I funded an observability sprint. P0 incidents dropped to fewer than 1 per month within 6 months.
- Our test coverage was inconsistent — some services had 80%+ coverage, others had under 20%, and no one was accountable for the distribution. I introduced coverage thresholds as part of the CI pipeline for new code and ran a coverage improvement initiative for the 5 most critical services with the lowest coverage. Critical service coverage improved from an average of 24% to 71% over 2 quarters. Defect escape rate for those services dropped by 61%.
- We were experiencing a pattern of regressions in shared infrastructure — changes by one team breaking services owned by other teams. I introduced a contract testing framework: every service with external consumers had to define and publish a test contract, and any change that broke a contract was blocked in CI. Regression incidents caused by contract violations dropped from 6 per quarter to 1 in the 2 quarters following rollout.
- Bug backlog had grown to 400+ open issues with no prioritization system. Engineers were picking bugs based on personal interest rather than customer impact. I introduced a severity-based triage process — a 30-minute weekly session where a rotating engineer reviewed and tagged the top-volume bugs by customer impact. Top 20% of bugs by impact got SLA targets. The backlog stopped growing, and critical customer-facing bugs dropped from 47 open to 8 within 2 quarters.
- QA was a bottleneck — we had 4 QA engineers covering 9 product teams, and the handoff was causing 2-week delays in every release cycle. I worked with QA leadership to shift to a quality-embedded model: QA engineers assigned to specific feature teams rather than pooled, and engineers trained on basic testing practices through a 4-session internal curriculum. Release cycle time decreased by 1.5 weeks. QA lead time shrank from 12 days to 3 days.
- Our error budget model was theoretical — teams knew they had error budgets but there were no consequences for breaching them and no benefits for staying within them. I connected error budgets to release velocity: teams under error budget could ship on the standard 2-week cycle; teams over error budget moved to a 1-release-per-month policy until the budget was restored. Within 2 quarters, all 9 teams were within budget for the first time.
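The error-budget example above works because the policy is mechanical — budget status maps directly to release cadence, with no negotiation. A sketch of that mapping, with hypothetical team names and budget numbers:

```python
from dataclasses import dataclass

@dataclass
class TeamSLO:
    name: str
    error_budget: float    # allowed unavailability for the window, e.g. 0.001
    observed_error: float  # measured unavailability over the same window

def release_cadence(team: TeamSLO) -> str:
    """Map error-budget status to release cadence, per the policy above."""
    if team.observed_error <= team.error_budget:
        return "standard: ship on the 2-week cycle"
    return "restricted: 1 release per month until the budget is restored"

print(release_cadence(TeamSLO("payments", 0.001, 0.0004)))
print(release_cadence(TeamSLO("search", 0.001, 0.0031)))
```

The design choice that matters: the consequence is automatic and symmetric. Teams under budget get speed; teams over budget get a forced slowdown — which makes reliability a gate on velocity rather than a separate, easily deferred goal.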
2. Technical Strategy & Architecture
Architecture Direction
- The organization had 9 teams making independent architectural decisions with no shared principles and no coordination mechanism. I established an Architecture Review Board — 5 senior engineers from different teams meeting biweekly, with a mandate to set org-wide architectural standards and review major decisions. In the first year, the ARB produced 7 Architecture Decision Records that became org standards. Cross-team compatibility issues dropped by 40%.
- I identified a strategic architectural risk: our monolith was causing deployment coupling that was forcing all 9 teams to coordinate every release. I built the case for a phased decomposition strategy — not a microservices rewrite, but a domain-driven extraction of the 3 highest-traffic, most independently deployable services. Presented to the CTO and CEO, got 6 months of focused investment approved. After extraction, those 3 domains deployed independently at 3× the previous frequency.
- We had no API versioning strategy — any breaking API change required coordinating all consumers simultaneously. I led the design of an API versioning policy, got buy-in from all 9 team leads, and drove adoption over 8 weeks. The next breaking API change deployed without a single coordinated freeze. Time spent on API compatibility coordination dropped by an estimated 60%.
- Recognized that our data layer was becoming a bottleneck — every team was writing directly to a shared database with no ownership model. I facilitated a 3-month data ownership initiative: mapped every table to an owning team, introduced an API layer for cross-team data access, and deprecated direct cross-team database access. Data-related incidents attributed to schema conflicts dropped from 8 per quarter to 1 in the following 2 quarters.
- Our mobile and web products were built on divergent tech stacks maintained by separate teams with no shared component library. I sponsored a design systems initiative — a small team of 3 engineers building shared UI primitives and a shared component library used by both surfaces. Teams that adopted the library reduced front-end implementation time by an estimated 30% per feature and cut cross-surface visual inconsistencies by 70%.
- Introduced a formal RFC process for major technical decisions — any decision with cross-team impact or 3+ months of implementation work required a written proposal circulated for a 2-week comment period before approval. The first year produced 12 RFCs. 4 were materially changed by the process. 2 were withdrawn. We avoided 2 large-scale rework cycles estimated at 10+ engineering weeks each.
Technical Debt & Platform
- Technical debt was a vague concept in our org — teams would cite it in planning but there was no shared understanding of what it meant or how much of it we had. I introduced a debt inventory framework: teams catalogued their top 5 debt items by impact, scored them on a simple 3-factor model, and presented them in a quarterly debt review. I negotiated a 15% capacity allocation for debt reduction with the CPO. Feature velocity, measured by story points per engineer per sprint, improved by 18% over 6 months as the most impactful debt items were addressed.
- Our platform team was a cost center with no clear mandate — they were doing maintenance and responding to requests from product teams but had no product roadmap of their own. I rechartered the platform team with an explicit mission: reduce the time it takes a product engineer to go from merged code to production. I gave them a metric (deploy time, measured from merge to production traffic), a budget, and autonomy. Deploy time decreased from 47 minutes to 8 minutes over 3 quarters.
- Legacy systems were consuming 40% of on-call time. I ran a legacy risk assessment — probability of failure × blast radius × remediation cost — and produced a prioritized list of 12 legacy systems. Presented to the executive team with a 3-year replacement roadmap. Got 2 systems approved for replacement in the current year. After replacement, on-call time for those systems dropped to zero, recovering approximately 0.5 engineer-equivalents of capacity per year.
- We had 7 internal tools in use across the engineering org, 4 of which were overlapping in purpose and maintained by different teams. I ran a tooling consolidation initiative — interviewed 40 engineers about their usage patterns, identified the 2 winners, and sunset 2 tools entirely with clear migration support. Maintenance cost dropped by an estimated 1.5 engineer-equivalents per year, and engineer satisfaction with internal tooling improved by 14 points in our annual survey.
- Security vulnerabilities in dependencies were being handled ad-hoc — engineers updated when they noticed or when a CVE was flagged in Slack. I implemented automated dependency scanning in CI and introduced a vulnerability SLA policy: critical CVEs within 24 hours, high within 1 week, medium within 30 days. Time to remediate critical CVEs dropped from an average of 18 days to 1.2 days. We had zero unpatched critical CVEs for the last 3 quarters of the year.
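The vulnerability SLA in the last bullet is easy to operationalize once it's expressed as a severity-to-deadline table that a CI job or daily report can check. A minimal sketch — the SLA windows mirror the bullet, but the dates are invented:

```python
from datetime import datetime, timedelta

# Remediation windows from the policy above
SLA = {
    "critical": timedelta(hours=24),
    "high": timedelta(weeks=1),
    "medium": timedelta(days=30),
}

def is_overdue(severity: str, flagged_at: datetime, now: datetime) -> bool:
    """True if a flagged CVE has exceeded its remediation SLA."""
    return now - flagged_at > SLA[severity]

now = datetime(2024, 3, 10)
print(is_overdue("critical", datetime(2024, 3, 8), now))  # 2 days old, 24h SLA
print(is_overdue("medium", datetime(2024, 3, 1), now))    # 9 days old, 30d SLA
```

Publishing the overdue list daily, by team, is what turns the policy from a document into behavior — nobody wants to be the team holding the only red row.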
3. Team Building & Org Health
Hiring & Retention
- Engineering attrition was at 28% annually — significantly above the industry median for our region. I ran exit interviews for every departure for 2 quarters and identified 3 consistent themes: growth path clarity, technical challenge, and manager quality. I addressed each structurally: published a new engineering career ladder, introduced a rotation program for senior engineers, and ran manager calibration sessions. Attrition dropped to 14% in the following year.
- Our interview process was producing low offer acceptance rates — we were making 3.2 offers per hire. I audited the process: shadowed 8 interviews, reviewed candidate feedback, and surveyed declined candidates. The primary issue was a 4-hour technical screen that candidates described as disconnected from the actual work. I redesigned it: a 90-minute practical exercise, a 45-minute architectural discussion, and a 30-minute values interview. Offer acceptance rate improved from 58% to 77%. Time-to-fill dropped from 67 days to 44 days.
- We were losing senior engineer candidates to a competitor consistently offering more equity. Rather than purely counter on equity, I worked with the CEO and HR to redesign the senior engineer offer package: faster equity vesting for the first cliff, a transparent leveling conversation before offer, and a documented autonomy charter for Staff+ engineers. Senior engineer offer acceptance improved from 52% to 71% over 6 months.
- Built the engineering organization from 22 to 54 engineers over 18 months, including 6 engineering managers, 2 staff engineers, and 3 principal engineers. Designed the organizational structure, defined the career ladder for all levels, and ran the hiring pipeline for the leadership layer personally. Maintained a 90-day ramp benchmark — 78% of new hires were contributing independently within 90 days, measured by their first pull request shipped without EM review.
- New engineering hire ramp time was 60–90 days before meaningful contribution — candidates described the first month as chaotic and under-supported. I built a structured onboarding program: a 2-week technical bootcamp covering the architecture, a buddy assignment, a "first real PR" project defined before day 1, and a 30/60/90 day check-in structure for managers. Average time to first meaningful contribution dropped from 68 days to 31 days. New hire satisfaction at 90 days improved from 3.1 to 4.4 out of 5.
- Identified that we were losing engineers to management tracks at other companies because we had no clear IC leadership path. I created the Staff and Principal Engineer levels with formal scope definitions and promotion criteria. Promoted 3 engineers to Staff and 1 to Principal within 18 months. 2 engineers who had been interviewing externally cited the new ladder as the reason they stayed.
Organizational Design
- The org was structured by technical layer — a front-end team, a back-end team, a data team — causing constant handoff friction and unclear ownership. I proposed and executed a reorganization into product-aligned teams with full-stack ownership of their domain. The reorg took 3 months to plan, involved 34 engineers, and required rewriting 8 team charters. Within 2 quarters, the share of teams citing handoff-related delays in delivery retrospectives dropped from 43% to 8%.
- I inherited 6 engineering managers with spans of control ranging from 3 to 14 — one manager was clearly overwhelmed, others had too little to manage. I restructured team sizes to a target span of 6–8, promoted an IC to EM to take one overloaded manager's reports, and worked with HR to redefine the EM role scope. Manager satisfaction scores in our 360 review improved from 3.4 to 4.2 out of 5 at the next cycle.
- After an acquisition, two engineering orgs with different cultures, tools, and processes had to merge. I led the integration over 8 months: mapped technical overlaps, facilitated joint team-building sessions, established a shared engineering handbook, and created a unified career ladder that grandfathered in the acquired team's levels. Voluntary attrition from the acquired team was 11% vs. a typical post-acquisition figure of 30%+ in our industry.
- Our platform and product engineering teams were misaligned — product teams felt the platform was too slow, platform felt product teams were inconsiderate consumers. I introduced a platform-as-product model: the platform team ran a monthly "office hours" session, published a roadmap, and established an SLA for infrastructure requests. Platform team satisfaction scores from product engineers improved from 2.6 to 4.1 out of 5 within 3 quarters.
- I identified that the engineering org had insufficient senior IC mentorship capacity — junior and mid-level engineers had strong managers but limited access to technical mentorship above their level. I created a formal technical mentorship program: 8 senior engineers each paired with 1–2 mentees on a quarterly basis, with defined goals and monthly check-ins. Engineers in the program reported higher growth trajectory confidence. 4 of the first 12 mentees were promoted within 18 months.
4. Engineering Culture & Excellence
Engineering Practices
- Code review culture was inconsistent — some teams had thorough, educational reviews; others had rubber-stamp approvals that were missing obvious bugs. I introduced a code review standard: minimum 2 reviewers on any change over 50 lines, documented review checklist, and a monthly review quality retrospective. PR comment quality, measured by percentage of PRs with substantive feedback, improved from 38% to 71%. Defect escape rate dropped 29% over the following 2 quarters.
- Documentation was an afterthought — teams would write docs when asked but there was no system for keeping them current. I introduced a "doc owner" model: every service had a named owner responsible for documentation accuracy, and doc freshness was a check in our quarterly team health reviews. Stale documentation (docs not updated in 6+ months for an actively changing service) dropped from 64% of services to 18%.
- Post-incident learning was shallow — teams wrote post-mortems but the findings weren't circulated and the action items weren't tracked. I introduced an Engineering Learning Newsletter: biweekly digest of post-mortem highlights and action item status, sent to all 54 engineers. Within 3 months, patterns across teams started showing up — 2 org-wide improvements were initiated based on signals that previously would have been siloed in a single team's retro.
- Our on-call culture was creating burnout — engineers were being paged 5–8 times per week and there was no expectation of recovery time. I introduced an on-call health policy: maximum 5 pages per week before an automatic week of reduced-on-call rotation; post-on-call half-day recovery for any week with 10+ pages; and quarterly on-call load reviews with team leads. On-call-related attrition mentions dropped from 4 in the prior year's exit interviews to 0. Average pages per engineer per week decreased from 6.2 to 2.8.
- Engineers were making technology choices without organizational visibility — we discovered 3 teams had independently adopted 3 different logging frameworks. I introduced a "recommended stack" document — not a mandate, but a documented set of org-preferred tools in each category with the reasoning. Any deviation required a short written justification. New service tech divergence decreased by 60%. Onboarding time for engineers switching teams dropped because tooling was more familiar.
- Our performance review process had no engineering-specific criteria — engineers were being evaluated on generic competencies that didn't reflect what great engineering actually looked like at our org. I wrote an engineering-specific competency framework for all 5 levels, with concrete behavioral examples for each level. Managers reported feeling significantly more confident in review calibration. The percentage of engineers who said their performance feedback was actionable improved from 41% to 68%.
Diversity, Inclusion & Psychological Safety
- Our engineering candidate pipeline was 91% male — not because of intentional filtering, but because of where we were sourcing. I partnered with recruiting to diversify sourcing: added 4 specialized job boards, partnered with 2 coding bootcamps with diverse graduate profiles, and rewrote job descriptions to remove gender-coded language. Within 2 hiring cycles, our pipeline representation improved from 9% to 26% women-identified candidates. First-year retention of underrepresented hires was 92%, above org average.
- Anonymous engagement surveys showed that 34% of engineers were uncomfortable raising concerns about technical decisions in team meetings. I restructured team rituals to create more psychological safety: introduced async-first RFC comments before any in-meeting discussion, added an explicit "minority opinion" section to every architecture decision record, and made "I changed my mind based on this feedback" moments publicly visible and praised. The safety metric improved to 61% comfortable within 2 quarters.
- Our senior engineering voice was dominated by a small number of long-tenured employees — newer engineers, especially those from underrepresented groups, were rarely contributing to architectural discussions. I introduced structured turn-taking in design reviews, a pre-read comment period before any in-person meeting, and a "most interesting challenge I faced this week" segment in team standup. Participation diversity in architecture discussions increased measurably. 3 significant design improvements were proposed by engineers who had previously never spoken in those forums.
- Identified a pattern in promotion rates: engineers from underrepresented groups were being promoted at 70% the rate of majority-group peers despite comparable performance ratings. I ran a promotion process audit with HR, found the gap was in visibility — certain engineers were not being nominated for promotion tracks because their managers weren't aware of their work. I introduced a nomination-based model where any senior engineer could nominate a colleague for promotion review. The promotion rate gap closed to within 5 points in the following cycle.
- Feedback mechanisms in engineering were sparse — engineers were getting formal feedback twice a year at review time. I introduced a monthly lightweight feedback ritual for all engineers: a 15-minute structured 1:1 add-on where managers shared one strength and one development area with concrete examples. Within 2 cycles, engineers reported feeling more informed about their performance. Surprise ratings in annual reviews (engineers rating their final score as "unexpected") dropped from 28% to 7%.
5. Cross-functional & Executive Leadership
Exec & Board Communication
- Engineering was absent from the board's mental model of company progress — the board deck had no engineering representation beyond headcount. I proposed and delivered a quarterly Engineering Health Report: delivery rate, system reliability, key technical risk, and 3-month forward look. The board chair asked for it to be a standing agenda item. Two board members used it to refer strategic engineering partnerships — creating pipeline I could not have generated independently.
- The CEO had limited visibility into technical risk and was making strategic commitments without understanding engineering implications. I introduced a monthly "technical risk briefing" — a 30-minute executive session with 3 slides: the risk, the probability, and the mitigation option. After 3 sessions, the CEO started routing major commercial discussions through me before close. We avoided 2 commitments that Engineering could not have met in the proposed timeline.
- I needed to make the case for a $2.1M platform investment to a board that was skeptical of infrastructure spending. I built a "cost of inaction" model: current incident frequency × customer-impact-cost + estimated feature velocity loss from the current architecture × 3-year projection. The model showed the platform investment had an NPV of $4.8M at our current scale. Approved in the first presentation.
- Engineering was consistently late to strategic conversations — by the time we were involved, commercial and product decisions had already been made that constrained our options. I established a formal "engineering input checkpoint" for any initiative with more than 3 months of implementation scope: a required 1-hour technical feasibility session before any commitment was made externally. We prevented 2 commitments with unrealistic timelines in the first 6 months.
- The company was considering an acquisition target. I was asked to lead the technical due diligence. I ran a 3-week assessment: architecture review, codebase quality audit, infrastructure cost analysis, team interviews, and a 60-page technical findings report. My assessment identified a significant technical debt obligation not reflected in the deal model. The acquisition price was renegotiated down by $3M based on my findings.
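The "cost of inaction" model in the platform investment example above is worth showing as arithmetic, because the structure is reusable: discount the annual cost of keeping the status quo over a projection window, then subtract the investment. All inputs below are illustrative placeholders, not the figures from the example:

```python
def cost_of_inaction(incidents_per_year, cost_per_incident,
                     annual_velocity_loss, years=3, discount_rate=0.10):
    """Discounted total cost of keeping the current architecture."""
    annual_cost = incidents_per_year * cost_per_incident + annual_velocity_loss
    return sum(annual_cost / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Illustrative inputs only
avoided = cost_of_inaction(incidents_per_year=36,
                           cost_per_incident=25_000,
                           annual_velocity_loss=1_400_000)
npv = avoided - 2_100_000  # subtract the proposed platform investment
print(f"NPV of investing: ${npv:,.0f}")
```

The persuasive move is framing: a skeptical board isn't evaluating "infrastructure spend," it's comparing two costs — and the status quo has a cost too, it's just not a line item yet.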
Product & Business Partnership
- The relationship between Engineering and Product was characterized by mutual frustration — Engineering felt Product changed priorities without understanding the cost; Product felt Engineering was opaque about what was hard and why. I proposed a joint planning process: Engineering and Product co-owned the quarterly capacity allocation, with explicit "this is what we're not building in order to build this" decisions made together. Mutual satisfaction in cross-functional surveys improved from 3.1 to 4.2 out of 5 within 2 quarters.
- Product was consistently surprised by late technical feasibility blockers — features would be designed and specced before Engineering was consulted, and then reworked. I introduced an engineering feasibility review as a required step in the discovery phase: a 45-minute session with PM, design, and a senior engineer before any spec was written. Late-stage scope changes due to feasibility issues dropped from 3–4 per quarter to 0 in the 2 quarters following adoption.
- Sales was making feature promises on roadmap items Engineering had not committed to. I partnered with the VP of Sales to build a "sales-to-engineering translation" process: Sales flagged any prospect requests as part of the deal-close discussion; I reviewed weekly and provided a 1-sentence confidence rating (committed, considering, not planned) within 48 hours. Deal-related escalations that surprised Engineering dropped from 6 per quarter to 1.
- We had a major enterprise customer threatening to leave over perceived product stagnation. I joined the account review with the Customer Success and Sales leads. I prepared a technical roadmap narrative — not features, but the capabilities we were building and why they would solve the customer's stated problems. The customer extended their contract by 2 years. CS leadership asked me to join their top 5 enterprise reviews going forward.
- The Finance team was modeling engineering costs without accurate data — their models overestimated infrastructure costs and underestimated labor costs in ways that distorted hiring decisions. I worked with our Finance BP for a quarter to build an accurate engineering cost model: headcount by level, infra spend by team, fully-loaded cost per engineer-week. The model became the standard for all engineering-related financial planning. It surfaced a $400K annual infrastructure overspend that we addressed over the following year.
6. Business Impact & Cost Efficiency
Cost & Infrastructure Efficiency
- Cloud infrastructure costs had grown 68% year-over-year with no corresponding growth in users or features. I ran a cost attribution exercise — broke down spend by team and service for the first time. Found that 3 underutilized services were consuming 22% of total spend. Two were deprecated. One was right-sized. Annual infrastructure savings: $340K. I introduced monthly cost reviews as an ongoing practice, and infrastructure cost growth rate is now tracked as a team metric.
- We had no culture of cost consciousness in engineering — decisions about infrastructure were made for performance or convenience without cost consideration. I introduced infrastructure cost as a visible metric: each team's monthly cloud spend was published on the engineering dashboard and reviewed in monthly team health checks. Within 6 months, engineers were proactively proposing cost optimizations. Total org spend on cloud infrastructure decreased by 19% year-over-year despite user growth of 34%.
- We were paying for 3 separate monitoring tools that had overlapping capability. I ran a consolidation project — interviewed 20 engineers on usage patterns, assessed feature overlap, and migrated the org to a single primary tool with a defined exception process for specialized use cases. Annual SaaS savings: $180K. Monitoring setup time for new services dropped from 4 hours to 45 minutes because there was now one standard process.
- Data warehouse costs were growing at 120% per year due to unoptimized queries and unlimited retention policies. I partnered with the Data team to introduce a tiered retention policy and a query cost budget for each team. Teams over budget received an alert and were required to optimize before adding new queries. Data warehouse costs grew 18% in the following year despite a 3× increase in data volume.
- Our disaster recovery setup had never been tested and was providing false assurance — a theoretical RTO of 4 hours that had never been validated. I ran a DR drill: failed over the production environment to the backup region during a low-traffic window, measured the actual recovery time (14 hours), and documented every gap. Built a remediation plan, executed it over 2 months, and ran a second drill. Actual RTO dropped from 14 hours to 3.5 hours. I presented both drills to the board as evidence of operational maturity.
- We were spending 22% of engineering capacity on a legacy billing system that was also creating customer-facing errors at a rate of 0.8% of transactions. I built the business case for a rewrite — showed the ongoing maintenance cost, the revenue impact of billing errors, and the 3-month investment required. Got approval, led the project, and shipped the replacement on schedule. Billing error rate dropped to 0.02%. Maintenance capacity recovered: approximately 1 engineer-equivalent per year ongoing.
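The cost attribution exercise that opens this section is often the highest-leverage first step, and it is mostly just tagging and grouping: map every service to an owning team, sum spend per team, and rank. A sketch with invented service names and dollar amounts:

```python
from collections import defaultdict

# Hypothetical per-service monthly cloud spend, tagged by owning team
spend = [
    ("search",     "team-a", 41_000),
    ("billing",    "team-b", 18_500),
    ("legacy-etl", "team-c", 62_000),
    ("api-gw",     "team-a", 12_000),
]

by_team = defaultdict(int)
for _service, team, usd in spend:
    by_team[team] += usd

total = sum(by_team.values())
for team, usd in sorted(by_team.items(), key=lambda kv: -kv[1]):
    print(f"{team}: ${usd:,} ({usd / total:.0%} of spend)")
```

In practice the hard part is the tagging, not the math — untagged resources are exactly where the underutilized 22% tends to hide, so "every resource has an owner tag" is usually the real deliverable.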
Revenue-Enabling Engineering
- Engineering was invisible in sales cycles — technical buyers would ask detailed scalability and security questions, and Sales had no credible way to answer them. I created a sales engineering support model: a rotating senior engineer available for technical deep-dives in late-stage deals, plus a security and architecture FAQ document I authored with the team. We ran the model for 2 quarters. Sales-attributed technical wins increased by 18%. One deal team cited the technical session as "the deciding factor."
- Our API was undocumented, which was blocking enterprise customers from building integrations — a requirement for 60% of our target segment. I ran a 6-week documentation sprint: assigned each PM-owned API surface to an engineer owner, built a documentation review process, and published a complete API reference. Within 2 quarters of publication, API-enabled deals increased from 11% to 34% of enterprise pipeline.
- The engineering org had no formal security posture for enterprise sales — large customers were asking about SOC 2 compliance and we couldn't answer. I led the SOC 2 Type II certification project: scoped the controls, assigned owners, ran the audit, and responded to auditor findings. Certification achieved in 9 months. Post-certification, 4 enterprise deals that had been blocked on security review were closed, representing $1.8M in ARR.
- Performance was a recurring objection in enterprise deals — customers wanted SLA guarantees we couldn't credibly make. I ran a performance baseline project: established benchmarks for all critical user flows, identified the 3 worst-performing operations, and ran a performance sprint. P95 response time for the critical path improved from 1,400ms to 340ms. We published a performance commitment in our enterprise terms for the first time. Deal velocity in the enterprise segment improved by 22%.
- Our mobile app had a 2.8-star rating in the App Store — primarily driven by 3 recurring crash types. I ran a crash analysis, assigned engineering owners to each type, and set a 6-week deadline to ship fixes. All 3 crash types were resolved. Rating improved from 2.8 to 4.2 over the following 3 months. App Store conversion for organic downloads increased by 31%, representing an estimated $120K in additional annualized revenue at our conversion rates.
- Identified that our largest customers were hitting performance limits at 80% of stated product capacity, creating expansion risk. I ran a scalability initiative: profiled the bottlenecks, implemented horizontal scaling for the affected services, and load-tested to 3× current peak load. We raised our stated capacity limit by 150%. Expansion ARR from enterprise accounts that had been capacity-constrained increased by $640K in the 2 quarters following the announcement.
How to Adapt These Examples
Plug In Your Numbers
Every example above follows the same structure: [Action] + [Specific work] + [Measurable result]. Replace the numbers with yours. If you don't have the exact metric, use the data you do have — even "estimated savings of approximately $X" or "reported by X of Y engineers in the survey" is far more credible than no number at all.
Don't Have Numbers?
Engineering directors often struggle with attribution because their work is structural and the causal chain from decision to outcome is long. Start with what you can measure directly: attrition rate, offer acceptance rate, deployment frequency, incident frequency, on-call load, time-to-hire. These are all within your control to track retroactively if you have access to historical HR and ops data. If you genuinely have no numbers, use relative language: "reduced significantly," "more than halved," "the first time the org had ever X." Absence of measurement is itself a finding worth noting — and worth fixing before the next review cycle.
Match the Level
At Director and VP level, the scope of impact should be organizational, not team-level. The difference is not just scale — it's the type of change. An engineering manager improves one team's delivery. A Director of Engineering changes how delivery works across many teams. If your accomplishments sound like a strong EM's work, push the scope: instead of "my team's deployment frequency improved," write "I established the CI/CD standard that improved deployment frequency across 6 teams." The latter is a Director-level accomplishment. Focus on the systems, processes, frameworks, and cultural norms you changed — not just the outcomes one team achieved.
Start Capturing Wins Before Next Review
The hardest part of performance reviews is remembering what you did 11 months ago. Prov captures your wins in 30 seconds — voice or text — then transforms them into polished statements like the ones above. Download Prov free on iOS.