Technical PM · Roadmapping 📋 TPM Lead Professional 🔒 Anonymized
Data Readiness
Scoping, Roadmap
& Execution
Led the full project lifecycle for a data infrastructure readiness initiative — audit facilitation, gap analysis, V1/V2 scoping, roadmap build, and sprint execution — delivering a clean room–ready dataset across five sprints with a 90% average burndown and every MVP workstream delivered on time.
🔒 Anonymized from a real professional engagement
This case study is drawn directly from professional TPM work at an enterprise data platform serving collegiate sports properties. Workstream names, system references, and client details have been genericized to protect proprietary infrastructure. The scope, story point scale, sprint cadence, burndown metrics, and prioritization framework shown here reflect the actual project.
Technical PM · Roadmapping · Agile / Scrum · Data Infrastructure · Identity Resolution · Clean Room Readiness · V1/V2 Prioritization · Sprint Execution
The Problem

The data existed.
The trust in it didn't.

Integrating fan data into a clean room environment — where identity resolution, audience matching, and attribute modeling happen at scale — requires the upstream data to meet a standard that most organizations haven't formally defined, let alone achieved.

The team had data. Ticketing records, customer attributes, digital identifiers, third-party appends. But the data lived across pipelines with inconsistent quality standards, unresolved match logic, fragmented attribute governance, and no shared definition of what “ready” even meant for a clean room integration.

Nobody had audited the current state end to end. Engineering knew individual systems were imperfect. But there was no cross-workstream view of what needed to change, in what order, and what could wait for a v2 refactor without blocking the business deadline.

The project started before there was a project. The first job was facilitating an engineering audit, translating those findings into a scoped backlog, and then building the roadmap that turned a business deadline into a sequenced, deliverable plan.

Gap 01 · No Audit Baseline
No cross-workstream assessment of current data quality existed. Each engineering team had local knowledge of their domain’s issues. No one had the full map.
Gap 02 · Undefined “Ready”
Clean room readiness had no formal definition. Without a clear acceptance standard, engineering couldn't scope toward it and stakeholders couldn't validate against it.
Gap 03 · No V1/V2 Framework
Every identified issue was treated as equally urgent. There was no framework for distinguishing what had to be fixed before the business deadline vs what should be refactored properly in a future sprint.
Gap 04 · No Delivery Ownership
The engineering team had the technical skills to execute the work. What was missing was a PM owner to sequence it, track it, manage dependencies, and hold the burndown accountable across sprints.
My Role

TPM lead —
not the engineer.

📋
What I Led
  • Facilitated the engineering audit — structured the audit process across all identified workstreams, gathered findings from engineering, and consolidated into a cross-workstream gap analysis
  • Translated audit findings into a scoped backlog — converted raw engineering observations into properly defined tickets with acceptance criteria, effort estimates, and dependency flags
  • Built the V1/V2 prioritization framework — defined the decision criteria for what made MVP (business deadline, clean room dependency) vs what was deferred to a v2 refactor
  • Built the phased roadmap — sequenced all V1 workstreams across five sprints on a custom story point scale that normalized team capacity while accommodating all other DSA team work running in parallel
  • Led sprint execution across all five sprints — ran standups, managed blockers, tracked burndown, and adjusted sequencing as dependencies resolved or shifted
  • Delivered all V1 workstreams on time against the business deadline with a 90% average sprint burndown across the project lifecycle
🔬
Scope Boundary

This is the most important clarification about this project: I was the TPM, not the engineer or analyst. The distinction matters because the value I added was organizational and structural, not technical.

  • I did not write the SQL, Python, or pipeline code — engineering executed all technical workstreams. My job was to make sure they were building the right things in the right order.
  • I did not conduct the data audits — I facilitated them. I defined the audit framework, asked the questions, and translated the answers into actionable scope.
  • I did not set the business deadline — I scoped to it. The clean room integration timeline was a business constraint. My job was to fit the necessary work inside it without sacrificing correctness.
  • I did own the V1/V2 call — the decision about what was MVP vs deferred was a product and PM judgment, not a technical one. That’s where project management crossed into product thinking.
Not in scope for this role
Technical development · Data analysis · System architecture design · Clean room platform configuration
The Approach

Audit first.
Then scope. Then ship.

Five stages — the sequence mattered as much as the work. Starting execution before the audit was complete would have meant building the wrong things.

01
Audit Facilitation
Structured a cross-workstream engineering audit covering identity matching, data appends, attribute governance, and pipeline health.
02
Gap Analysis
Consolidated audit findings into a cross-workstream gap map — graded by severity, business impact, and clean room dependency.
03
V1/V2 Scoping
Applied the V1/V2 prioritization framework — separating what had to ship before the deadline from what should be refactored properly in a future cycle.
04
Roadmap Build
Sequenced all V1 workstreams across five sprints on a normalized story point scale — respecting team capacity, dependencies, and parallel DSA work.
05
Sprint Execution
Led five delivery sprints — standups, blocker management, burndown tracking, and sequencing adjustments as the project progressed.
Workstream Decomposition

What was scoped.
What made V1.

Every identified workstream from the audit — classified by layer, story point estimate, and V1 vs V2 designation. Orange top bar = V1 MVP. Gray = deferred to V2.

Customer Analytics Update
V1
Identity · Analytics Layer
Update existing customer analytics SQL logic to align with new identity standards and clean room acceptance criteria. Targeted fix — not a full refactor — to meet deadline while preserving existing structure.
~30 pts · Sprint 1–2
RampID Backfill
V1
Identity Resolution · BigQuery
Backfill RampID values across the customer dataset in BigQuery to ensure clean room identity matching has full coverage. Hard dependency for LiveRamp integration — nothing ships without this.
~40 pts · Sprint 1–3
Match Quality Validation
V1
Identity · Match Logic
Identify and exclude records flagged as low-confidence or erroneous matches from the customer match process. Ensures the clean room dataset reflects genuine identity resolution rather than noise from bad match candidates.
~25 pts · Sprint 2
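The match-quality gate described in this workstream can be sketched in a few lines. This is an illustrative sketch only: the field names (`match_confidence`, `match_status`) and the 0.85 threshold are hypothetical, not the engagement's actual values, and the real exclusion logic lived in engineering-owned match pipelines.

```python
# Hypothetical sketch of a match-quality gate: split match candidates into
# accepted records and excluded noise. Field names and threshold are
# illustrative assumptions, not the project's actual schema.

def filter_matches(candidates, min_confidence=0.85):
    """Separate genuine identity matches from low-confidence or erroneous ones."""
    accepted, excluded = [], []
    for rec in candidates:
        bad_status = rec.get("match_status") == "error"
        low_conf = rec.get("match_confidence", 0.0) < min_confidence
        if bad_status or low_conf:
            excluded.append(rec)   # flagged: noise from bad match candidates
        else:
            accepted.append(rec)   # clean room eligible
    return accepted, excluded

candidates = [
    {"customer_id": "c1", "match_confidence": 0.97, "match_status": "ok"},
    {"customer_id": "c2", "match_confidence": 0.41, "match_status": "ok"},
    {"customer_id": "c3", "match_confidence": 0.92, "match_status": "error"},
]
accepted, excluded = filter_matches(candidates)
```

The point of the workstream was exactly this separation: the clean room dataset should reflect only records that pass the acceptance bar, with everything else excluded rather than silently included.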
Golden Record Attribute Rollup
V1
Attribute Modeling · Single Source
Consolidate multi-source customer attributes into a single authoritative record per customer ID. Resolve conflicts between source systems using a defined precedence hierarchy — the foundation of consistent attribute modeling downstream.
~35 pts · Sprint 2–3
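The precedence-hierarchy idea behind the golden record rollup can be shown with a minimal sketch. The source names, attributes, and precedence order below are hypothetical placeholders; the real hierarchy and schema were defined by engineering during the project.

```python
# Illustrative sketch of a precedence-based golden record rollup.
# Source names, attributes, and precedence order are assumptions for
# demonstration, not the engagement's actual configuration.

SOURCE_PRECEDENCE = ["crm", "ticketing", "third_party"]  # highest precedence first

def golden_record(records):
    """Collapse multi-source rows for one customer ID into a single record,
    resolving attribute conflicts by source precedence."""
    merged = {}
    # Walk sources from lowest to highest precedence so higher-precedence
    # non-null values overwrite lower-precedence ones.
    for source in reversed(SOURCE_PRECEDENCE):
        for rec in records:
            if rec["source"] != source:
                continue
            for attr, value in rec["attributes"].items():
                if value is not None:
                    merged[attr] = value
    return merged

rows = [
    {"source": "third_party", "attributes": {"email": "a@x.com", "zip": "10001"}},
    {"source": "crm",         "attributes": {"email": "b@x.com", "zip": None}},
]
record = golden_record(rows)
```

Note the two conflict cases: where a higher-precedence source has a value, it wins; where it is null, the lower-precedence value survives, so the rollup fills gaps instead of erasing data.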
Data Ingestion Pipeline Rebuild
V1
Pipeline · Ingestion Layer
Rebuild the core data ingestion pipelines identified as fragile or undocumented during the audit. V1 scope targets the pipelines with direct clean room data flow dependency — not a full platform rebuild.
~45 pts · Sprint 3–4
Data Append Pipeline Rebuild
V1
Pipeline · Third-Party Appends
Rebuild the third-party data append pipelines to resolve identified fragility and ensure append quality standards meet clean room input requirements. Scoped to append sources that directly affect identity resolution coverage.
~40 pts · Sprint 4–5
Customer Analytics Refactor
V2
Analytics Layer · Full Refactor
Complete architectural refactor of customer analytics SQL — not a targeted update but a ground-up rebuild to the new standard. Deferred because the V1 update delivers clean room readiness without requiring the full rebuild on the business deadline timeline.
~50+ pts · Future cycle
Digital Notebook Refactor
V2
Analytics · Engineering Tooling
Refactor data science notebooks used for digital analytics to align with updated pipeline outputs and attribute standards. Deferred because notebooks are analytical tooling, not clean room input — can be updated in V2 without blocking the MVP release.
~35+ pts · Future cycle
Additional V2 Workstreams
V2
Platform · Infrastructure
Additional platform improvements, pipeline optimizations, and attribute modeling enhancements identified during the audit as improvements rather than MVP requirements. Captured in the backlog for V2 planning.
~80+ pts · Future cycle
V1 MVP — delivered on deadline
V2 Deferred — backlogged for future cycle
~250 total story points · V1 scope only
Prioritization Framework

The V1 vs V2
decision matrix

The product thinking behind the PM execution. Not every identified issue was equally urgent — the V1/V2 call was made on two dimensions: how critical the workstream was to clean room readiness, and how deep a fix was technically required.

V1 vs V2 Prioritization Matrix
Clean Room Dependency vs Technical Refactor Depth
TPM Decision Framework
Axes: x = technical refactor depth (low → high) · y = clean room dependency (low → high)
Quadrants: V1 must-ship (high dependency · low refactor depth) · V1 targeted fix (high dependency · high refactor depth) · V2 deferred, high refactor (low dependency · high refactor depth) · V2 nice-to-have (low dependency · low refactor depth)
V1 — MVP scope: RampID Backfill (~40 pts) · Match Validation (~25 pts) · Golden Record (~35 pts) · Ingestion Pipeline Rebuild (~45 pts) · Append Pipeline Rebuild (~40 pts)
V1 — targeted fix: Customer Analytics Update (~30 pts)
V2 — deferred: Customer Analytics Refactor (~50+ pts) · Digital Notebook Refactor (~35+ pts) · Additional V2 Workstreams (~80+ pts)
Circle size ∝ story points · Anonymized workstream names
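The two-axis decision rule can be expressed as a tiny function. The boolean inputs are a simplification of the actual framework, which also graded severity and business impact; this is a sketch of the core logic, not the full rubric.

```python
# Minimal sketch of the two-axis V1/V2 decision rule. Boolean inputs are a
# simplification: the real framework also weighed severity and business impact.

def classify(clean_room_dependency: bool, high_refactor_depth: bool) -> str:
    if clean_room_dependency:
        # Anything the clean room integration depends on ships in V1 --
        # as a targeted fix when a full refactor is too deep for the deadline.
        return "V1 targeted fix" if high_refactor_depth else "V1 must-ship"
    # No clean room dependency: defer, flagging deep refactors for V2 planning.
    return "V2 deferred (high refactor)" if high_refactor_depth else "V2 nice-to-have"
```

For example, the customer analytics workstream was both clean room dependent and deep-refactor territory, which is exactly why it landed as a V1 targeted fix with the full refactor deferred to V2.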
Sprint Execution

Five sprints.
All V1 workstreams delivered.

Workstream sequencing across five two-week sprints — story point distribution, primary focus per sprint, and the 90% average burndown across the full project lifecycle.

Sprint Roadmap & Burndown
5 Sprints · ~250 Story Points · 90% Avg Burndown · All V1 Delivered
Includes All DSA Team Work
SPRINT 1: Customer Analytics Update (~30 pts · kickoff) · RampID Backfill, start (~15 pts · BigQuery) · Sprint total ~45 pts · Burndown ~88%
SPRINT 2: RampID Backfill, cont. (~15 pts · BigQuery) · Match Quality Validation (~25 pts · match logic) · Golden Record, start (~15 pts · attr. model) · Sprint total ~55 pts · Burndown ~92%
SPRINT 3: RampID Backfill, close (~10 pts · validation) · Golden Record, close (~20 pts · attr. model) · Ingestion Pipeline, start (~20 pts · pipeline) · Sprint total ~50 pts · Burndown ~90%
SPRINT 4: Ingestion Pipeline, close (~25 pts · pipeline) · Append Pipeline, start (~20 pts · pipeline) · Sprint total ~45 pts · Burndown ~91%
SPRINT 5: Append Pipeline, close (~20 pts · pipeline) · V1 Integration QA & Validation (~15 pts · acceptance testing) · CLEAN ROOM READY: V1 dataset delivered on deadline · Sprint total ~35 pts · Burndown ~89%
Sprint burndown overview: 88% (Sprint 1, ~45 pts) · 92% (Sprint 2, ~55 pts) · 90% (Sprint 3, ~50 pts) · 91% (Sprint 4, ~45 pts) · 89% (Sprint 5, ~35 pts) · 90% avg
Burndown reflects all DSA team sprint work, not data readiness workstreams alone · Story points on custom normalized scale
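The per-sprint figures above roll up to the headline average. A quick sketch, with the per-sprint percentages taken from the case study; the simple (unweighted) mean is an assumption about how the average was computed.

```python
# Roll-up of the per-sprint burndown figures from the case study.
# Using an unweighted mean is an assumption; points per sprint are the
# approximate ("~") values shown in the sprint roadmap.

sprints = {
    "Sprint 1": {"points": 45, "burndown_pct": 88},
    "Sprint 2": {"points": 55, "burndown_pct": 92},
    "Sprint 3": {"points": 50, "burndown_pct": 90},
    "Sprint 4": {"points": 45, "burndown_pct": 91},
    "Sprint 5": {"points": 35, "burndown_pct": 89},
}

avg_burndown = sum(s["burndown_pct"] for s in sprints.values()) / len(sprints)
total_points = sum(s["points"] for s in sprints.values())
```

The unweighted mean lands at 90%, matching the headline; a points-weighted mean would come out almost identically given how evenly loaded the sprints were.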
Results

The stat sheet

5 sprints
Full Delivery Lifecycle
Five two-week sprints from audit facilitation to final V1 clean room–ready dataset delivery against the business deadline.
250+ pts
Story Points Delivered
Across all V1 workstreams on a custom normalized story point scale designed to fit the DSA team’s capacity model and two-week sprint cadence.
90%
Avg Sprint Burndown
90% average burndown across five sprints — including all other DSA team work running in parallel, not just data readiness workstreams.
100%
V1 Scope On Time
All MVP workstreams delivered on the business deadline. V2 workstreams captured in the backlog for future cycle planning as designed.
TPM + Product Lens

Key decisions & tradeoffs

Where project management and product thinking overlapped — and where the hardest calls were made.

Decision · Rationale · Tradeoff

Decision: Update in V1, refactor in V2
Rationale: The single most consequential product decision in the project. Customer analytics could be updated to meet clean room acceptance criteria in V1, or it could be refactored properly. A full refactor was the right long-term answer, but it was a 50+ point workstream that would have missed the business deadline. The V1 update was 30 points, delivered the same business outcome, and preserved the refactor for a future sprint where it could be done correctly.
Tradeoff: Technical debt accepted. The V1 update leaves the underlying architecture in a state that still needs a full refactor. This was an explicit, documented tradeoff, not an oversight; the V2 backlog item was written with the context of why it was deferred, not just that it was.

Decision: Audit before scoping
Rationale: It would have been faster to start scoping and building immediately; several workstreams felt obvious before the audit. But without a structured cross-workstream audit, the scope would have been incomplete. Some issues wouldn't have surfaced until mid-execution, which is the most expensive time to discover them. The audit phase added two weeks upfront and saved more than that downstream.
Tradeoff: Cleaner scope, fewer surprises. The audit surfaced dependencies and sequencing constraints that wouldn't have been visible from the top. The RampID backfill, for example, turned out to be a dependency for multiple other workstreams; knowing that before Sprint 1 changed the entire sequence.

Decision: Normalize story points to team capacity
Rationale: The DSA team ran this project in parallel with all other sprint work. Using a custom normalized story point scale, rather than absolute hours or standard Fibonacci points, let the roadmap reflect realistic team velocity without artificially separating data readiness work from the rest of the sprint. A 90% burndown across mixed-workload sprints is a more honest metric than a 100% burndown on an isolated project.
Tradeoff: Harder to explain externally. A custom scale is less immediately legible to stakeholders unfamiliar with the team's velocity model. Tradeoff accepted in exchange for a planning model that actually reflected how the team worked, not one that looked cleaner but misrepresented capacity.

Decision: Facilitate the audit, don't own it
Rationale: The engineering team had the technical depth to assess their own systems. My role was to structure that assessment, ask the right questions, and ensure findings were complete and comparable across workstreams, not to conduct the audits myself. Attempting to own the technical audits would have slowed them down and produced less accurate findings; facilitating them produced better results faster.
Tradeoff: Higher-quality findings. Domain experts auditing their own systems find more than a PM auditing systems they don't own. The TPM value was in the structure of the audit, the consolidation of findings, and the translation into scope, not in having the most technical knowledge in the room.