Integrating fan data into a clean room environment — where identity resolution, audience matching, and attribute modeling happen at scale — requires the upstream data to meet a standard that most organizations haven't formally defined, let alone achieved.
The team had data. Ticketing records, customer attributes, digital identifiers, third-party appends. But the data lived across pipelines with inconsistent quality standards, unresolved match logic, fragmented attribute governance, and no shared definition of what “ready” even meant for a clean room integration.
Nobody had audited the current state end to end. Engineering knew individual systems were imperfect. But there was no cross-workstream view of what needed to change, in what order, and what could wait for a V2 refactor without blocking the business deadline.
The project started before there was a project. The first job was facilitating an engineering audit, translating those findings into a scoped backlog, and then building the roadmap that turned a business deadline into a sequenced, deliverable plan.
This is the most important thing to be clear about on this project: I was the TPM, not the engineer or analyst. The distinction matters because the value I added was organizational and structural, not technical.
Five stages — the sequence mattered as much as the work. Starting with execution before the audit was complete would have built the wrong things.
Every identified workstream from the audit — classified by layer, story point estimate, and V1 vs V2 designation. Orange top bar = V1 MVP. Gray = deferred to V2.
The product thinking behind the PM execution. Not every identified issue was equally urgent: the V1/V2 call was made on two dimensions, how critical the workstream was to clean room readiness and how deep a technical fix was required.
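To make that two-dimension call concrete, here is a minimal sketch of how such a rubric could be expressed. The 1-5 scales, thresholds, and bucket rule are illustrative assumptions; the project made these calls by judgment, not a formal scoring function.

```python
# Illustrative sketch only: the scales, thresholds, and bucket rule
# below are assumptions, not the project's actual rubric.
from dataclasses import dataclass

@dataclass
class Workstream:
    name: str
    criticality: int  # 1-5: how blocking it is for clean room readiness
    fix_depth: int    # 1-5: how deep a technical fix is required

def release_bucket(w: Workstream) -> str:
    """Map the two dimensions to a V1/V2 call: critical work ships in V1,
    but deep fixes get a minimal V1 update with the refactor deferred."""
    if w.criticality >= 4:
        return "V1" if w.fix_depth <= 3 else "V1 update, V2 refactor"
    return "V2"

# The customer analytics case from the decision table below: critical
# to readiness, but a full fix ran too deep for the deadline.
print(release_bucket(Workstream("customer analytics", 5, 5)))
# -> "V1 update, V2 refactor"
```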
Workstream sequencing across five two-week sprints — story point distribution, primary focus per sprint, and the 90% average burndown across the full project lifecycle.
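For clarity on the headline metric, a minimal sketch of how a per-sprint burndown average like the 90% figure is computed, assuming burndown here means completed story points over committed story points. The sprint values are placeholders, not the project's real numbers.

```python
# Placeholder data: (completed, committed) story points per sprint.
# These values are illustrative, not the project's actual numbers.
sprints = [(18, 20), (22, 24), (19, 21), (25, 28), (20, 22)]

def average_burndown(sprints: list[tuple[int, int]]) -> float:
    """Mean of per-sprint completion ratios (completed / committed)."""
    return sum(done / planned for done, planned in sprints) / len(sprints)

print(f"Average burndown: {average_burndown(sprints):.0%}")  # -> 90%
```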
Where project management and product thinking overlapped — and where the hardest calls were made.
| Decision | Rationale | Tradeoff |
|---|---|---|
| Update in V1, refactor in V2 | The single most consequential product decision in the project. Customer analytics could be updated to meet clean room acceptance criteria in V1 — or it could be refactored properly. A full refactor was the right long-term answer. But it was a 50+ point workstream that would have missed the business deadline. The V1 update was 30 points, delivered the same business outcome, and preserved the refactor for a future sprint where it could be done correctly. | Technical debt accepted — the V1 update leaves the underlying architecture in a state that still needs a full refactor. This was an explicit, documented tradeoff, not an oversight. The V2 backlog item was written with the context of why it was deferred, not just that it was. |
| Audit before scoping | It would have been faster to start scoping and building immediately; several workstreams felt obvious before the audit. But without a structured cross-workstream audit, the scope would have been incomplete, and some issues wouldn't have surfaced until mid-execution, which is the most expensive time to discover them. The audit phase added two weeks upfront and saved more than that downstream. | Two weeks of upfront delay, accepted in exchange for cleaner scope and fewer surprises. The audit surfaced dependencies and sequencing constraints that wouldn't have been visible from the top. The RampID backfill, for example, turned out to be a dependency for multiple other workstreams; knowing that before Sprint 1 changed the entire sequence. |
| Normalize story points to team capacity | The DSA team ran this project in parallel with all other sprint work. Using a custom normalized story point scale, rather than absolute hours or standard Fibonacci points, let the roadmap reflect realistic team velocity without artificially separating data readiness work from the rest of the sprint (one possible shape of such a scale is sketched after this table). A 90% burndown across mixed-workload sprints is a more honest metric than a 100% burndown on an isolated project. | Harder to explain externally: a custom scale is less immediately legible to stakeholders unfamiliar with the team's velocity model. Tradeoff accepted in exchange for a planning model that actually reflected how the team worked, not one that looked cleaner but misrepresented capacity. |
| Facilitate the audit, don't own it | The engineering team had the technical depth to assess their own systems. My role was to structure that assessment, ask the right questions, and ensure findings were complete and comparable across workstreams, not to conduct the audits myself. Attempting to own the technical audits would have slowed them down and produced less accurate findings. Facilitating them produced better results faster. | Less direct control over the technical findings; I relied on domain experts' own assessments rather than verifying systems firsthand. That reliance paid off: domain experts auditing their own systems find more than a PM auditing systems they don't own. The TPM value was in the structure of the audit, the consolidation of findings, and the translation into scope, not in having the most technical knowledge in the room. |
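As a footnote to the story point decision above, here is one possible shape of a capacity-normalized scale. The formula, parameter names, and numbers are assumptions for illustration; the team's actual scale isn't documented here.

```python
# Illustrative sketch: one way to anchor a story point to a fixed
# fraction of the team's *available* capacity in a mixed-workload
# sprint. All names and numbers are assumptions, not the real scale.

def normalized_points(effort_hours: float,
                      sprint_hours: float,
                      parallel_load: float,
                      points_per_sprint: int = 10) -> float:
    """Convert an effort estimate into points, where `points_per_sprint`
    points equals one sprint of capacity left after parallel work.

    parallel_load: fraction of the sprint consumed by other commitments.
    """
    available = sprint_hours * (1.0 - parallel_load)
    return effort_hours / available * points_per_sprint

# Placeholder: an 80-hour sprint with half the capacity on other work;
# a 24-hour task lands at 6 points on this team-specific scale.
print(normalized_points(24.0, sprint_hours=80.0, parallel_load=0.5))  # 6.0
```

The design point is that the scale bakes the team's real, mixed-workload capacity into every estimate, which is why a 90% burndown on this scale reads as honest rather than inflated.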