One does not simply rebuild a product

Framing the Stakes: From Acquisition to an Urgent Need for Change

Prakriti sets the stage by tracing Culture Amp’s Performance product from its 2019 acquisition to serving thousands of customers and massive review cycles, while eyeing a $3B market. She explains how front- and back-end monoliths, outdated domain and data models, shared platform concerns, and compounding tech debt created high coupling, low ownership, and a fearful engineering culture. These systemic issues stalled innovation and blocked adoption of shared tooling, making the status quo untenable. The segment positions the talk’s central theme: a high-risk, high-reward rebuild to unlock scale, speed, and market leadership.

Exhausting Alternatives: Diagnosing Why Incremental Fixes Weren’t Enough

Before proposing a rebuild, Prakriti details how the team classified all tech debt, pursued incremental remediation, attempted piecemeal re-architecture inside the monoliths, and ran a tiger-team extraction of a sample domain. Despite creating patterns and documentation, deep entanglement, missing platform foundations, and slow progress made these approaches insufficient to meet market timelines. She reflects on what helped—transparent tech-debt visibility and cross-functional advocacy—and what she’d change, like broaching a rebuild earlier and building trust by acting first and asking forgiveness later. This segment clarifies why a bold reset was necessary to advance the overall strategy.

Securing Buy-In: Aligning Teams, Leadership, and the Board

Prakriti walks through how she earned organization-wide support: mapping a product opportunity radar, running blue-sky architecture sessions, and converging on a plausible target architecture to validate the need for a rebuild. She bolstered credibility with an external advisor, crafted a narrative with a product VP centered on customer value, and proposed a balanced delivery plan that continued critical work while proving the new approach. Lessons include keeping stakeholders close to avoid surprises, slowing down early to go faster later, and avoiding tech-choice whiplash (e.g., pivoting from Hanami to Rails) by creating earlier, active debate forums. This builds the coalition required for a high-visibility, high-stakes rebuild.

Proving the Pattern: Designing and Shipping the First Slice

With buy-in secured, Prakriti proposes shipping one end-to-end “first slice” to validate architecture, domain model, tooling, execution capability, and schedule. She sets clear goals, rebuilds team structures and ways of working, and enforces principles: like-for-like scope (no big UX or new features), freeze the monoliths after the first slice, and invest deliberately in high engineering and design standards. The team selects “Self Reflections” as a high-impact yet isolated domain, establishes a Goldilocks timeframe, assembles an A-team, adopts internal standards, runs internal PR, and ships with low incident rates. A decision-making framework, disciplined trade-offs, strong dependency management, and an early cross-functional domain model paid off, though stress, late estimation, and immature cross-team planning underscored the need for new program frameworks.

Quantifying Progress Without New Features: Metrics That Matter

Because the rebuild was like-for-like, Prakriti defines success through engineering and customer experience metrics instead of typical product KPIs. She highlights LCP as a CX proxy, isolates heavy features for targeted performance tracking, and reports dramatic deploy-time reductions (FE to under 10 minutes, BE to under 5) alongside improved design system adoption, accessibility, and closure of hundreds of UX issues. She notes that “everyone loves metrics” for sustaining buy-in, and wishes they had instrumented key trade-offs and legacy comparisons earlier. This segment shows how disciplined measurement anchored the rebuild’s value to the broader strategy.

Scaling the Rebuild: Planning and Parallelizing the Remaining Slices

Prakriti returns to the opportunity radar and re-runs architecture optioning—this time for the remaining five slices—then freezes the monoliths and aligns four teams to go all-in. She overlays product opportunities, architecture choices, and team structures to surface trade-offs and stakeholders’ optimization criteria, building an execution plan that supports fast iteration and domain ownership. Heavy parallelization enables delivery in three chunks, aided by clear cultural expectations for change and a rare focus across all teams. Despite rising pressure and a relentless timeline, holding the line on priorities gets them to a pre-holiday finish and sets up the final rollout phase.

Executing the Rollout: Migration Realities and Hard-Won Lessons

Prakriti outlines a standard rollout plan (early access to GA by segment with buffers) and then candidly describes how “everything that could go wrong” did, from late-breaking issues to unforeseen environment drift. What worked was automating migration and orchestration, adhering to roll-forward-only principles with no back-communication to V1, and staying bold rather than timid. She advises assuming problems until the last customer is migrated, partnering early with support and account teams, and recognizing that data transformation bugs—unlike product bugs—trigger costly re-migrations. The talk closes with a successful outcome, new foundations enabling faster innovation (including AI), and pragmatic criteria for when a rebuild is the right strategic bet.

I'll try and live up to that overwhelming introduction. No pressure to be funny or entertaining. Thanks, Gretchen.

Okay, one does not simply rebuild a product. I need to start my timer, otherwise I'll just keep going. So, Emma Jackson said to me, a rebuild is never finished, only started. Thanks, friend.

Culture Amp is an employee experience platform. It does a whole bunch of things that you don't care about. I specifically work on the performance product, which helps customers to build high performing teams, eventually driving better business outcomes.

You don't care about that either. Let's get to the good stuff. So this product came into Culture Amp through an acquisition of a series A startup in 2019. That was six years ago.

It was very small then, but since has grown to thousands of customers. In total, these customers launch anywhere between 850 to about 1500 performance review cycles every month, and our largest performance review cycle has about 45,000 employees participating just to give you an idea of the scale of this product.

However, the overall global potential performance market is a whopping $3 billion, and that's where we want to go: towards a performance product that can lead that $3 billion market.

There were many compounding, layered technical obstacles in our way of getting there. The product was two monoliths, a front-end monolith and a back-end monolith. The underlying domain model, data model, back-end models, and architecture were all outdated, and customer needs had evolved a lot. There were also other products sharing the same monolithic code base. Shared concerns lived in the monoliths as well, which might sound familiar: things like notifications, tasks, authorizations, platform-type concerns.

There was an enormous amount of tech debt from trying to scale and iterate the product very rapidly post-acquisition, as you have to do. And it contained many, many early-stage decisions.

Those decisions were right at the time they were made, but they were made in 2019. By the time I started putting this talk together, they were not right anymore. There were also eight teams trying to work in these monoliths with terrible, non-existent domain separation. Too few people understood how the entire system worked, how the code base worked. Teams would invest weeks in starting a feature and be unable to finish it because they would get blocked by other teams working in the same monolith, which was unanticipated. There was high coupling and low ownership. This is a real photo, by the way, that I took at an art gallery in Adelaide. Strongly recommend.

The monoliths also could not benefit from any shared tooling or infrastructure at Culture Amp. All of this led to a really fearful engineering culture, very low engagement scores, which obviously matters to us, very low pride in our work, and our teams moving very, very slowly. Basically, what I'm describing is not a high performing culture.

These challenges would have seen us grind to a halt, unable to innovate.

We needed to invest now to ensure that the product would continue to grow in scale and serve our customers for many more years. To cut a long story short, we were stuck in a hole, a hole that maybe looks familiar, hopefully to many of you.

So that's me. I'm Prakriti, Director of Engineering.

I'm here today to talk about a spicy decision and take you through our journey of getting out of this hole via a full product rebuild.

I'm gonna take you through these steps today, and then I'll pause at every step and share some reflections, which I'm hoping will make you feel a bit less alone the next time that you're in a hole going on a similar journey.

Hopefully you'll never be in that hole, but let's be real, you're all going to be in that hole. So the first step in our journey was getting to the point where we even knew that we needed to rebuild. Before any talk of rebuilding, obviously we tried everything else. There was enormous tech debt, like this vague, unnamed burden that our stakeholders didn't appreciate.

We identified and classified every last bit of tech debt, ensuring that senior leadership, exec, cross-functional product partners could all become advocates.

We began prioritizing and fixing the tech debt incrementally, as you do, but it would have taken forever. And even if we had fixed it, it wouldn't have solved our fundamental inability to iterate on the domain and data models. Next, we tried piecemeal re-architecture. So within the monoliths, we identified domains that we could re-architect while keeping the monolith around them still functional. That worked, but it was very, very slow. We needed a bolder and faster move.

In parallel to that, we tried extracting domains out of the monoliths.

Everything was deeply, deeply intertwined. We lacked quick microservice foundations and shared solutions for platform concerns like notifications, emails, authentication, authorization, account data, employee data. We set up a small dedicated Tiger team, which extracted one example domain successfully. They documented it.

They created a pattern for other teams to follow. That's also a real photo.

I took that at Uluru, and I thought it was the perfect metaphor because they were giant Jenga bricks, very, very poor quality, and very high friction. So doing those three things helped us realize that, in reality, nothing was going to get us where we needed to be in time to lead the market. We needed a big reset to achieve that.

So as I said, I'll go through the steps and I'll share a few key reflections at each step. For step one, a few things that went well for us. We had this amorphous statement of, oh no, we have too much tech debt. That turned into a clear classification with prioritization, rough difficulty levels for all of our major tech debt items.

Then we used that to create visibility among leaders - building awareness, but also empathy and understanding. We worked cross-functionally, which meant that even if I wasn't in the room or there was no engineering representation, our peers could still advocate for tech debt.

And finally, exhausting all alternatives before proposing the rebuild, doing it with a high degree of transparency, made our stakeholders more receptive to hearing that we have to start from scratch.

A couple of things I've learned for next time. Firstly, suggest it sooner. That's a real tricky one. If you suggest it too soon, then leaders will assume engineers always want a fresh start, always want a clean slate, want everything to be perfect. But if you leave it too long, then you're wasting valuable time on repairs that you could instead be spending rebuilding.

Secondly, establishing trust and asking for forgiveness rather than permission. We did a lot of the groundwork without asking for explicit permission. Now that we knew what had to be done, we had to get buy-in from everybody else, from the teams who would do the work, from design and product peers, from directors and VPs, from exec and from the board.

That's step two. To get buy-in, first we had to show where we want this product to go. This is an opportunity radar showing everything that a rebuild could unlock for our customers. It contains opportunities that were not possible without doing a rebuild.

We started with some blue sky architecture brainstorming.

If there were no constraints, looking at the opportunity radar and knowing where we want the product to go, how would we architect it today? We went really wide in this step, and people came up with many different architecture ideas. Then we converged those into one potential architecture for our rebuilt product, aiming to validate the need rather than finalize every little detail. We only wanted to see how different an ideal architecture might look. If it looks very different, it makes our case for the rebuild stronger. So the product opportunity radar and the architecture sessions helped us to get buy-in from the team who would actually do this work. Everything else we tried before any talk of a rebuild, the three things I shared before, helped us get support from product, design, and senior leadership. Plus, keeping them close to all this work throughout helped build trust as well.

Exec buy-in, not that easy. To get that, first we consulted an external advisor who had successfully completed a rebuild but, more importantly, had seen several rebuild failure modes.

Then we built a comprehensive exec presentation telling the holistic product story. This last step is probably what got us through to that final 20% of buy-in. We made it very clear that we are not going to halt all customer value for many years.

While we rebuilt the first component, the remaining teams would handle the most critical work on performance. Then we would pause everybody for about a year-ish and go all in on the rebuild.

The broader performance offering would continue in other ways. So this balanced approach helped us to get exec and board buy-in.

A few things we did well in this step, if I may say so myself. Talking to an external advisor with a lot of experience under their belt brought a lot more credibility to our wild idea.

Then a product VP crafted that holistic narrative. He put our customers first in that story rather than over-indexing on engineering, even though this was an engineering initiative.

Keeping stakeholders close was great. Being deliberate about who is included in every conversation, ensuring that they're coming with us on the journey so that our final ask is not a surprise.

Instead, by the time we got to the ask, they were super invested. They kind of wanted it as well.

And finally, slowing down at this stage helped us to go much faster later. A couple of mistakes, though. I think slowing down also cost us. We worked away from the teams for too long, which created some uncertainty, as they were still shipping the regular roadmap. And I wish we had created more active spaces for collectively debating tech choices. Failing to get true buy-in on these choices early led to us pivoting away from Hanami to Rails after we had already written some code, which shook some confidence in this very huge bet. Now we had buy-in, but this was still a big, risky ask.

We proposed, "Let's ship one complete slice to customers, prove out the entire architecture, domain model, tech choices, our capability to execute, and our timeliness end to end." That means the first slice carried immense pressure and visibility.

It was a company-wide bet, so failure of the first slice would probably have killed the entire initiative permanently. And good luck to the next person who tried suggesting a rebuild after that. So these high stakes made it very critical that we get a strong start on the first slice, which brings us to step three.

We began by setting goals. That's the magic word, isn't it? You have to set goals and then everyone joins you on your plan. On the product side, we wanted to produce a domain model that we could iterate on, an architecture that supports future use cases, and clear ownership for teams during and after the rebuild.

That sounded really easy, and I like to live dangerously. I shared earlier some challenges that our teams were facing. We thought, fuck it, let's also rebuild the teams. We're going to have teams that move really fast, a very strong engineering culture, strong ways of working, high engagement, and opportunities for everyone for learning, growth, and development. And we're gonna do all of this simultaneously in an impossible time frame. Absolute madness. Who came up with this plan? Someone needs to have a talk with her.

To begin, we laid down some ground rules or principles.

We called our first principle 'like for like'. No massive UX uplift allowed. No new features allowed. We will only stand up our existing product on a future-proof code base and domain model. We will use the latest tooling foundations, design system, and high engineering standards. This reduced customer risk and prevented continuously changing goal posts from dragging out the rebuild with a long tail.

The second principle was 'freeze the monoliths after the first slice'. No drift was allowed between the old and the new rebuilt product, and no communication was allowed back from the new to the old product. We wanted a quick customer migration, so no letting customers languish on the old product for too long. And decommissioning the old one is part of the definition of done. Finally, we resolved to be deliberate in every step we take, investing in setting high engineering standards, high design standards, and building high performing teams as we go. I must have been high because I just keep using the word high all the time. Also, this plan was definitely devised by someone who was high. Don't tell anyone.

So we divided the rebuild into six rough domains, and we chose one domain for the first slice called 'Self Reflections'. The name doesn't matter. We picked it for some important reasons, though. Firstly, it is well used by customers, which gives us real customer traffic, which is going to help validate our choices.

Secondly, it was in the worst shape of all the domains because it hadn't been touched in the longest. Third reason is that it was relatively isolated. So it allowed one team to focus on rebuilding it while the other teams continued the critical roadmap work that I mentioned earlier, which was essential to get buy-in before we could freeze all the monoliths.

And then finally, 'self-reflections' had just the right amount of complexity.

So it was a great candidate to help us prove that if we can rebuild this, then next we can rebuild the remaining five domains as well.

We then took a few targeted steps to set up this first slice for success. As I said, the first slice was very critical. It had to succeed.

We created a 'Goldilocks' timeframe - so long enough that we do it right, but also tight enough that some smart trade-offs are forced. Otherwise, we'll just be gold-plating the first slice forever.

We also started with a small A-team of sorts. We took a mix of strong performers, domain experts, and specialists from across the org, like design system, front-end tooling, and back-end tooling, and we put them together. I feel like this GIF reveals my age. I might drop it next time. What do the young people use these days? I was gonna go with The A-Team, but that's even older. Bad choice.

Then we were firm on adopting all internal standards and establishing informal metrics where standards didn't exist, while still maintaining speed. And finally, we ran a bit of an internal PR campaign because we had to build some excitement. This project could have felt very risky, very ambitious, very ambiguous, and we needed to, I don't know, just get people behind it, light a bit of a fire.

Most of that worked, and we managed to ship the first slice.

We didn't ship anywhere close to the initial target, but we did ship very close to the estimate, the first estimate that actually came from the team. We scaled up as we rolled out to larger and riskier customers with very few incidents.

Now, this was the most crucial step, so I'll share a few more reflections on this one, on top of the goals, principles, and team setup that all worked really well.

First, we created a clear decision-making framework.

It helped us to get clarity on what the decision is, who the decision maker is, who they need to consult, and what the engagement model for senior leadership stakeholders is. This prevented bottlenecks. It prevented seeking approval for everything, and it empowered the team to move fast with bold choices.
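To make that concrete, a decision record under this kind of framework might look something like the minimal sketch below; the field names and the example values are my own illustration, not Culture Amp's actual template.

```typescript
// Hypothetical sketch of a lightweight decision record implied by the framework.
// Field names and example values are illustrative, not Culture Amp's template.
type DecisionRecord = {
  decision: string;        // what exactly is being decided
  decisionMaker: string;   // the single owner empowered to make the call
  consulted: string[];     // who must be consulted before the call is made
  informed: string[];      // senior leadership engagement: kept informed, not approving
  decidedOn?: string;      // ISO date, filled in once the call is made
};

// Example record for a bold choice the team can make without escalating for approval.
const schemaDecision: DecisionRecord = {
  decision: "Event schema for the Self Reflections write path",
  decisionMaker: "Tech lead, Self Reflections team",
  consulted: ["Principal engineer", "Data platform team"],
  informed: ["Director of Engineering", "Product VP"],
};
```

The point of writing it down is less the record itself and more that the decision maker is named up front, so the team is not waiting on approvals.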

We had firm trade-off conversations, aggressively protecting the scope, while still taking on informed risks.

That Goldilocks timeframe forced us to think really hard about what's good enough.

Strong exec buy-in led to clear prioritization, which helped us get dedicated capacity across other teams as well, because we had 18 dependencies on other teams outside our own.

And we invested early in building a new domain model for the entire rebuilt product, with cross-functional input from the start, including key leadership people.

First slice was also a very tough ride, so a lot of lessons learned.

The initial target date was aggressive, and that high visibility of the project created a lot of stress for the team. Establishing momentum was hard for them. They were just thrown into the deep end, with new people, no forming time, no planning time. Refinement and estimation were prioritized too late, which further impacted their momentum and engagement.

And this was the business's biggest cross-team initiative ever at that point. So we lacked maturity in dependency planning and in managing complex, year-long, multi-team projects. The good news is that, thanks to this initiative, we actually have frameworks for all of those things now.

So with this rebuild being like for like and being fairly invisible to customers, how do you measure success? That brings us to our fourth step. We couldn't measure success in the usual ways like validating hypotheses, product analytics, customer adoption, revenue. None of that was valid for us. We had to define what metrics were important and how we'd measure them to ensure ongoing buy-in.

We decided instead to go with engineering and customer experience metrics. Think back to these goals I shared earlier and the 'like for like' principle. We wanted the new experience to feel similar but better. We also wanted teams to feel pride in their work, to feel like they're high performing, they're delivering high quality products at good speed.

We used Largest Contentful Paint time as a simple proxy for the customer experience. There were a few heavy features that dealt with a lot of data, so for those, we measured performance separately, assuming that if those are improved, we don't need to check every feature individually. On the developer experience side, the front-end deploy time used to be 20 to 40 minutes; now it's under 10. The back-end deploy time was 40 minutes to an hour, and now it's under five.
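As a rough illustration of how LCP can be captured as that customer-experience proxy, here is a minimal browser-side sketch using the standard PerformanceObserver API; the /metrics endpoint, the page label, and the sendBeacon transport are my assumptions, not Culture Amp's actual instrumentation.

```typescript
// Minimal sketch: observe Largest Contentful Paint in the browser and report it.
// The '/metrics' endpoint and the page label are hypothetical.
const lcpObserver = new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lcp = entries[entries.length - 1]; // the most recent candidate is the current LCP
  navigator.sendBeacon(
    "/metrics",
    JSON.stringify({ metric: "lcp", page: "self-reflections", valueMs: lcp.startTime })
  );
});
// buffered: true replays LCP entries that happened before the observer was created
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
```

Heavy features like filtering or exports would still need their own timing, which is why those were tracked separately.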

We also made huge improvements to our design system adoption and accessibility scores. The old product had 560 user experience issues just sitting in the backlog. And through the rebuild, even though it's like for like, we did fix most of those. This is a very easy one. Everyone loves metrics. That's the lesson.

We showed that even though we were building like for like, we were still delivering measurable customer value and business value. Anything to make that ongoing buy-in smooth and easy.

Still learned a couple of things though. We struggled to measure some of these.

The old product was really hard to work with. That's why we're rebuilding it, obviously. But I wish that I had invested more in thinking about the metrics up front. And I wish that I had set up specific metrics to instrument the key, really difficult trade-off decisions.

It was a really hard journey with the first slice, but we got there.

We gained confidence that the project was on the right track. We had a massive retro. We paused for celebration and recognition. Not too long of a break, though. We immediately had to pivot to what's next.

One down, five to go. Project's basically done, right?

Everyone's celebrating the success. No, wrong. We had to take everything we learned from the first slice and turn that into a plan now for the remaining five slices.

Plan? What plan? I didn't have a plan.

We had done just enough big picture thinking to validate the need for the rebuild, to get buy-in, and to ship one out of six slices.

Remember when I showed you the product opportunity radar? We went back to that one. We extracted from it potential future opportunities.

It was important to never lose sight of why we are doing this rebuild.

It's very easy to get lost in technical details, but we constantly reminded ourselves of the value the rebuild needs to unlock to be considered a success. More brainstorming time.

We considered multiple potential architecture options again to deliver those opportunities.

But this time, the scope was all five remaining slices. Once again, we included key senior leadership stakeholders. Now we could freeze the monoliths.

The first slice was done, and we could put all four teams on the rebuild.

We overlaid potential team structure options on top of each architecture option. Those are the crude circles in this screenshot.

Yes, I'm an engineer. I am not known for my design skills. Please don't judge me. We considered multiple stakeholders' perspectives on what we should optimize for. That helped to surface unspoken priorities and ensure that we were aligned at least 80% directionally.

Finally, we pulled it all together. I present to you the plan.

Think of it in layers. The bottom layer is the anticipated product roadmap.

The middle layer is architecture options, and the top layer is ideal team structures for fast iteration, domain ownership, and high performance. I only have two hands, but we had three layers. We needed a plan that was going to work across all three layers.

Let's fast forward a few months now. We ended up completing the entire rebuild, all the remaining five slices. We did it in three chunks, with heavy parallelization after finishing the first slice, because we were able to go all in with four teams. That's real footage of me during this initiative.

So, some things that served us well in this stage. The overlay method of looking at all success factors simultaneously, product opportunities, architecture, team composition, was a bit unusual, but it allowed us to see trade-offs clearly and come up with the most acceptable plan.

Having clarity and a shared understanding of what each leadership stakeholder was optimizing for helped us find an option that worked across all attributes. We set cultural expectations of frequent change and asked teams for high agility as we went through it. And finally, having all teams focused on one initiative was a rare gift, with very few distractions.

So we leveraged that gift as much as possible.

We struggled with a few things, as always. Despite setting expectations, a high-change environment is always hard, no matter how much you explain to people why it's necessary. Not everyone can be equally resilient. The visibility and high pressure that we expected to decrease after the first slice somehow, I don't know how it's possible, actually increased. And the ambitious timeline set by exec and the board required a level of laser focus that I have never seen before. We were constantly holding the line and constantly saying no, including to some very senior people.

As they say, the last 20% is the hardest. With an ambitious end-of-year target, we actually finished right before Christmas, and then we came back in the new year, in January, to roll out to all remaining customers. That brings me to my last step, rolling out. We had a plan. We were going to go from early access to general availability, rolling out by customer segment. It's pretty standard: small business first, then medium business, and finally enterprise.

A short break between each segment as a buffer in case any issues come up.

It sounds perfect. We've all done rollout plans like this, and I'm sure they've all worked perfectly all the time. Not. So many things happened. Oh my God, everything that could go wrong possibly went wrong.

Some of those things we anticipated; many of them we didn't.

So what did we learn? A few things that went well: Investing in automating the migration and orchestration.

Huge, huge benefits. Strongly recommend. We'll do that again.

The second one is sticking firm to our principles despite a lot of pressure.

So principles like we will only roll forward, there'll be no communication back from V2 to V1, we really had to stick to those. And finally, being bold. Things going wrong and plans needing to change was much better than if we had come up with an original timid plan, because that would have failed to even get off the ground.

Learned many lessons here. Assume things will go wrong until the very end, even when migrating the last account. When you think you're in the clear, no, shit's gonna go wrong.

Build a very deep shared understanding and empathy with support, account execs, and coaches early, rather than doing it at crisis time. Fixing product bugs was actually very easy and very fast, who knew? But data transformation bugs were hard. Every time we fixed one, we had to re-migrate all the accounts that had been migrated up until that point, delaying the next migration. And finally, the environment when the last customer was being migrated was quite different from the environment when the first customer was being migrated, which nobody considered.
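To make that re-migration cost concrete, here is a minimal sketch of a roll-forward-only migration loop; the account shape, the transform version, and the function names are my own illustration, not Culture Amp's actual tooling. The idea is that a data-transformation fix bumps a version number, which invalidates every account migrated under the old transform.

```typescript
// Hypothetical sketch of a roll-forward-only migration orchestrator.
// Names are illustrative; nothing is ever written back to V1.

type Account = { id: string; migratedWithTransform?: number };

// Bumped every time a data-transformation bug is fixed.
const TRANSFORM_VERSION = 3;

async function migrateAccount(account: Account): Promise<void> {
  // In a real system: extract V1 data, apply the current transform, load into V2, verify.
  // Product bugs are fixed in V2 code and need no re-migration; transform bugs land here.
  account.migratedWithTransform = TRANSFORM_VERSION;
}

async function migrateSegment(accounts: Account[]): Promise<void> {
  for (const account of accounts) {
    // Checkpoint: skip accounts already migrated with the current transform,
    // but re-run anything migrated under an older (buggy) transform version.
    if (account.migratedWithTransform === TRANSFORM_VERSION) continue;
    await migrateAccount(account);
  }
}
```

The upshot is that a product bug costs a deploy, while a transform bug costs a version bump plus a re-run over every account migrated so far, which is exactly what kept delaying the next segment.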

More real footage of me going through the rollout. So similar to the Obama administration. No, that's a joke. I'm not going there.

All the gray hair was totally worth it. We did succeed.

This massive, ambitious bet eventually succeeded.

We rebuilt both the product and the teams to be high performing as planned.

We are now solving the most important customer problems and are able to do it at very high speed and quality. We're also leveraging the foundations that we created to invest in innovation that previously was not even a part of the opportunity radar. The AI landscape has changed a lot since we conceived of this rebuild, but it has unlocked all these opportunities that we didn't even know we had at the time. So people say that rebuilds are always a bad idea.

I disagree. I say that if you have a domain model that can't take your product forward, you're stuck with Series A decisions that have been band-aided over and over again, your teams are grinding to a halt, you have tight coupling, you have an underperforming product in a huge potential market, and you have already tried everything else, then a rebuild could be your best bet. You could do it with steps that look something like this. You could incorporate the lessons that I shared and the things that we did to set it up for success.

However, if these conditions are not true for you, then I can't help you. You're on your own, and you've wasted your time.

No, I'm just kidding. Thank you.

No, if it's not true, it doesn't necessarily mean that a rebuild is not for you. But you might need to look at the steps that I shared, the things that worked well and didn't work well, and figure out how you need to tweak those for success in your specific conditions. But I mean, yes, it could also mean that a rebuild is not the right step for you. I don't know. I just work here. Thank you.

“A rebuild is never finished, only started.”

A colleague

An employee experience that people love

Screenshot of the Culture Amp website homepage hero section featuring illustrated scenes of people working and collaborating, along with customer brand logos and ratings badges.

Employee performance management platform

Empower your people to drive performance at scale

Make performance reviews fair and effortless. Culture Amp’s performance management solution enables managers and teams to continuously align on expectations, goals, and feedback.

  • Peer feedback
  • Manager feedback
  • Self-reflections

Illustration of three overlapping UI cards representing review inputs: “Peer feedback,” “Manager feedback,” and “Self-reflections,” each with placeholder text and small icons to depict sources of employee performance insights.

2019

Acquired Series A 6 years ago

Thousands

Of customers today

850-1500

Performance review cycles launched each month

$3 billion

Global potential performance market

Technical Obstacles

  1. Underlying domain model and architecture
  2. Other products in the same codebase
  3. Common shared concerns
  4. Enormous tech debt
  5. Early stage decisions and bandaids
Diagram of five concentric circles labeled from 1 at the center to 5 at the outermost ring, indicating layered or nested obstacles corresponding to the numbered list.

Team Obstacles

  • No domain separation
    • High coupling
    • Poor ownership
Photo of a dense web of tangled red threads suspended over a gallery floor, evoking complexity and entanglement as a metaphor for tightly coupled systems.

Team Obstacles

  • No domain separation
  • High coupling
  • Teams moving slow
  • Poor ownership
  • Not on shared tooling and infrastructure or it was not fit for purpose
  • Not a high performance culture
Six simple line icons accompany the bullets: interlocking puzzle pieces, two gears, a sad face, a person holding a flag on a hill, a server rack, and an ascending bar chart with an arrow.
- NOW I'LL PULL MY ARMS OUT WITH MY FACE.

The Simpsons (1989 – current)

Animated still from The Simpsons: Homer is stuck head-first in a muddy hole while Marge, Lisa, and Bart stand nearby reacting; a large elephant stands behind them at night.

— FIRST, I'LL JUST REACH IN AND PULL MY LEGS OUT. The Simpsons (1989–current)
Frame from the animated series The Simpsons: Homer gestures while Marge, Lisa, and Bart react nearby, with a large elephant behind them at night.
  1. Deciding to rebuild
  2. Getting buy in
  3. Kicking off strong
  4. Measuring as you go
  5. Finishing the rest
  6. Rolling out
Roadmap diagram showing a single winding path with six numbered milestones connected in sequence: 1 Deciding to rebuild; 2 Getting buy in; 3 Kicking off strong; 4 Measuring as you go; 5 Finishing the rest; 6 Rolling out.

I'm going on an adventure!

The Hobbit: An Unexpected Journey (2012)
Film still from The Hobbit: An Unexpected Journey (2012) showing a hobbit running through a green hillside village, holding a paper, setting off on a journey.

1

Deciding to Rebuild

Knowing when to burn it all down and avoid sunk cost

We tried 3 approaches first

  • Removing tech debt

    Identify, prioritise, and remove tech debt one step at a time

  • Rearchitecting in places

    Identify and re-architect one domain at a time inside the monoliths

  • Extracting from the monoliths

    Extract domains out from the monoliths one at a time

Three small icons accompany the points: a currency-and-gear icon for removing tech debt, a jigsaw-puzzle icon for rearchitecting in places, and a multi-window/module icon for extracting from the monoliths.

We tried 3 approaches first

Extracting from the monoliths

Extract domains out from the monoliths one at a time.

Photo of a person removing a block from a tall, unstable Jenga tower, representing the risk of extracting pieces from a monolithic system.

SO IT BEGINS

Meme image showing the character Elmo with arms raised in front of large flames, conveying a dramatic or chaotic beginning.

Step 1: Deciding to rebuild

  1. Deciding to rebuild
  2. Getting buy in
  3. Kicking off strong
  4. Measuring as you go
  5. Finishing the rest
  6. Rolling out
Diagram of a winding roadmap showing six numbered steps in sequence, with step 1 emphasized and the remaining steps shown along the path.

Deciding to rebuild

  • Tech debt — clear classification and prioritisation
  • Created visibility among leaders across the org
  • Worked cross-functionally with high transparency
  • Exhausting alternatives transparently before suggesting rebuild
Process roadmap diagram for step 1, with four callout ribbons each accompanied by a small rocket icon. Additional later steps appear in the background as part of the broader roadmap.

Deciding to rebuild

  • Tech debt - clear classification and prioritisation
  • Created visibility among leaders across the org
  • Worked cross functionally with high transparency
  • Exhausting alternatives transparently before suggesting rebuild
  • Don't try for too long
  • Seek forgiveness rather than permission
Process step slide showing callouts connected to the theme. Rocket icons denote action items; lightbulb icons denote lessons or takeaways.

2

Getting buy in

Validating our decision and getting it funded

Composite image of a digital whiteboard with four panels containing numerous small artifacts: flowcharts, block-and-arrow system diagrams, decision trees, a Venn diagram, entity/relationship boxes, user-flow maps, and clusters of sticky notes. The slide conveys a broad brainstorming exploration of architectures and opportunities; individual labels are not legible.

Converging on an architecture and tech

Collage of system architecture diagrams and flowcharts. A large central architecture map shows domain components and data flows; dotted connector lines point to several smaller expanded workflow diagrams labeled as triggers and processes, illustrating how different parts of the system interact.

Getting Exec on board

  • Consulting an external advisor

    Increasing confidence in our decision and seeking advice on direction

  • Exec presentation

    Telling the holistic story of the product including what we can offer our customers in the meantime

Two small icons: a group with a speech bubble symbolizing consultation, and a presenter pointing to an upward-trending chart symbolizing an executive presentation.

Getting buy in

  • Consulting an advisor who has seen rebuild failures and successes
  • Crafting a holistic narrative beyond product / engineering
  • Senior leadership brought on the journey and invested early
  • Slow now to be fast later
Process diagram showing a looping pathway with step 2 highlighted as “Getting buy in,” accompanied by four callout boxes marked with rocket icons that enumerate key actions.

Getting buy in

Highlights

  • Consulting an advisor who has seen rebuild failures and successes
  • Crafting a holistic narrative beyond product / engineering
  • Senior leadership brought on the journey and invested early
  • Slow now to be fast later

Lessons

  • Working under the radar for too long
  • Create more space and time for debating tech choices collectively
Diagram of a journey timeline highlighting stage 2 titled “Getting buy in,” with callouts listing what helped and lessons learned.

3. Kicking Off Strong

A lot riding on the start

Rebuild Goals

  • Rebuild the product
    • Domain model
    • Architecture
    • Ownership
Tree diagram showing “Rebuild Goals” leading to “Rebuild the product,” which branches into three items: Domain model, Architecture, and Ownership.

Rebuild Goals

  • Rebuild the product

    • Domain model
    • Architecture
    • Ownership
  • Rebuild the teams

    • Culture and WOW
    • Engagement
    • L&D growth opportunities
Hierarchy diagram: “Rebuild Goals” branches into two groups—“Rebuild the product” (Domain model, Architecture, Ownership) and “Rebuild the teams” (Culture and WOW, Engagement, L&D growth opportunities).
  • REBUILD THE PRODUCT
  • REBUILD THE TEAMS

US

Did you just take both pills?
Three-panel meme: Top panel shows two open hands offering different colored pills, labeled “Rebuild the product” and “Rebuild the teams.” Middle panel shows a person with the overlaid label “US.” Bottom panel shows another person speaking, captioned, “Did you just take both pills?”

Rebuild Principles

  • Like for like
    • 90/10 guideline
  • Freeze and decommission
    • Definition of done
  • Holistic and deliberate
    • High engineering, design standards and ways of working
Three small icons accompany the principles: a generic tool icon for “Like for like,” a trash can for “Freeze and decommission,” and a medal for “Holistic and deliberate.”

Choosing the first slice

  • Usage: Well used by customers
  • Condition: In the worst shape, not touched for the longest
  • Coupling: Relatively most isolated from the other units
  • Complexity: Representative of challenges but not the most complex

Each criterion is paired with a small icon: a pie-chart/medal for Usage, an ambulance for Condition, two interlocking gears for Coupling, and a cluster of clouds for Complexity.

Setting the first slice up for success

  • Goldilocks timeframe commitment
  • Start with a small, strong team
  • Set high standards to build something we’re proud of
  • Internal PR campaign
Diagram with a central circle labeled “Setting the first slice up for success,” connected to four surrounding rectangular callouts: “Goldilocks timeframe commitment,” “Start with a small strong team,” “Set high standards to build something we’re proud of,” and “Internal PR campaign.”
BY YOUR POWERS COMBINED

Captain Planet and the Planeteers (1990–1996)

Animated scene from a 1990s cartoon showing a blond character raising a clenched fist that glows with energy, set against a cloudy sky and trees.

BY YOUR POWERS COMBINED

Captain Planet and the Planeteers (1990–1996)

Animated GIF from the cartoon “Captain Planet and the Planeteers,” showing a Planeteer thrusting a ring-bearing fist forward as if summoning power; the on-screen subtitle reads “BY YOUR POWERS COMBINED.”

Setting the first slice up for success

  • Goldilocks timeframe commitment
  • Start with a small strong team
  • Set high standards to build something we’re proud of
  • Internal PR campaign
Diagram showing a central idea connected to four surrounding boxes. The center reads “Setting the first slice up for success.” The four boxes list tactics: Goldilocks timeframe commitment; Start with a small strong team; Set high standards to build something we’re proud of; Internal PR campaign.

It’s done.

The Lord of the Rings: The Return of the King (2003)
Film still from The Lord of the Rings: The Return of the King showing a close-up of a distressed character against a fiery, molten background.

Kicking off strong

  • Decision‑making framework to empower
  • Strong tradeoffs with informed risks (goldilocks timeframe)
  • Clear exec prioritisation — #1 company priority
  • Investing early in creating new domain model cross‑functionally
Diagram of a roadmap highlighting step 3, labeled “Kicking off strong,” on a curved path. Four callouts with rocket icons branch from this step, listing the practices: decision‑making framework to empower; strong tradeoffs with informed risks (goldilocks timeframe); clear executive prioritisation as the #1 company priority; investing early to create a new cross‑functional domain model. Earlier and later steps on the path are present but de‑emphasized.

Kicking off strong

  • Burnout and stress
  • Creating momentum was difficult
  • Prioritise investing in estimating at the right time
  • Complex dependency management
  • Decision making framework to empower
  • Strong tradeoffs with informed risks (goldilocks timeframe)
  • Clear exec prioritisation - #1 company priority
  • Investing early in creating new domain model cross-functionally
Diagram highlighting step 3 in a multi-step initiative: the central title is surrounded by two groups of callouts—challenges marked with a lightbulb icon and actions/strategies marked with a rocket icon.

4. Measuring success

At every step as we go through

Rebuild Goals

  • Rebuild the product
    • Domain model
    • Architecture
    • Ownership
  • Rebuild the teams
    • Culture and WOW
    • Engagement
    • L&D growth opportunities
Hierarchy diagram illustrating “Rebuild Goals” branching into “Rebuild the product” (Domain model, Architecture, Ownership) and “Rebuild the teams” (Culture and WOW, Engagement, L&D growth opportunities).

Largest Contentful Paint (LCP)

Scale: GOOD, NEEDS IMPROVEMENT, POOR

  • Threshold markers: 2.5 sec, 4.0 sec

Old Self Reflections

  • Min: 2.7 sec
  • Avg: 3.3 sec
  • Max: 4.2 sec

New Self Reflections

  • Min: 600ms
  • Avg: 1.5s
  • Max: 1.9s

Two horizontal benchmark bars compare LCP performance. Both bars are labeled with categories (GOOD, NEEDS IMPROVEMENT, POOR) separated by ticks at 2.5 sec and 4.0 sec. The first bar (Old Self Reflections) shows min/avg/max markers at 2.7s, 3.3s, and 4.2s, spanning from “Needs Improvement” into “Poor.” The second bar (New Self Reflections) shows min/avg/max at 600ms, 1.5s, and 1.9s, all within the “Good” range.

Specific Feature Performance

Performance comparison of selected features: Old Self Reflections vs New Self Reflections
  • Filtering employees: 4 seconds → 60 ms
  • Employee assignment: 8 seconds → 2 seconds
  • Async customer communications: 1 minute → 50 seconds
  • Exporting cycles: 1+ minute (async email delivery) → 8-10 seconds (sync download)

Developer Productivity

Deploy time

Horizontal bar chart comparing deploy times in minutes on a 0–50 scale:

  • Old frontend: approximately 30
  • New frontend: approximately 9–10
  • Old backend: approximately 50
  • New backend: approximately 6–7

Most of 560 UX / UI issues fixed

  • Simplified overall page structure
  • More consistency leveraging design system
  • Uplifted accessibility
  • More responsive pages which work on smaller screens
  • Streamlined workflows with fewer clicks
  • All copy and content reviewed and improved

Illustration/meme: a “distracted boyfriend” scene. The man is labeled “CUSTOMERS,” looking toward a woman labeled “NEW SELF REFLECTIONS,” while turning away from another woman labeled “OLD SELF REFLECTIONS,” conveying customer preference for the new version.

Measuring as you go

  • Everyone loves metrics
  • Like-for-like doesn’t mean zero benefits for customers

Roadmap diagram highlighting step 4 of a multi-step journey. A curved path emphasizes “Measuring as you go” with a small hexagon labeled 4. Two callout arrows with rocket icons present the key points: “Everyone loves metrics” and “Like-for-like doesn’t mean zero benefits for customers.” Earlier and later steps appear faintly in the background.

Measuring as you go

  • Hard to measure old product to compare
  • Everyone loves metrics
  • Invest more in instrumenting key tradeoffs
  • Like-for-like doesn’t mean zero benefits for customers

Diagram of a process timeline highlighting step 4, titled “Measuring as you go,” with four callouts summarizing insights and outcomes related to measurement and customer value.

  • Well done on shipping the first slice
  • Now we can relax right
  • Now we can relax right?
Four-panel reaction meme: two people in a grassy field. Top-left shows Person A listening; top-right shows Person B smiling while asking about relaxing. Bottom-left shows Person A looking uncertain; bottom-right shows Person B now serious, repeating the question with a question mark—implying that the celebration is premature and more work remains.

The Plan ™

Reaction meme image showing a person with a puzzled expression, surrounded by multiple question marks to convey confusion.

Potential roadmap items

High

Platform expectations
  • Customise notifications
  • Customise terms
  • Sandbox / masquerade ability
  • 2‑way HRIS sync
  • Roles & permissions
  • Support change in manager
Performance product expectations
  • Complete reporting — usage
  • Support more review types
    • Multi‑lingual reviews
    • Date‑based reviews
    • Goal and competency‑based reviews
    • PIP or coaching plans
Question flexibility
  • Question and template library
  • Question branching
  • Unified cycle
  • Export combined report for all units

Medium

Nominations
  • Limit P&U feedback nominations
  • Nominations exclude based on start date
  • Mass approve nominations
Flexibility to current process
  • Include hard deadlines in addition to soft
  • Edit feedback after submitting
  • Group P&U feedback by question
  • Edit self reflection after submitting
  • Anon P&U feedback
  • Remove 1–2 people from calibration
Better support collaborators and matrix orgs
  • Second manager on all manager tasks
  • Expand existing collaborator role
Data over time
  • Historic upload of perf reviews
  • Trending data over time

Affinity map of roadmap ideas organized into two priority groups (High and Medium). Items are clustered under categories such as Platform expectations, Performance product expectations, Question flexibility, Nominations, Flexibility to current process, Better support collaborators and matrix orgs, and Data over time.

Potential architecture options

Performance Cycle UI monorepo

  • Plan — Container: Next.js UI
  • Capture — Container: Next.js UI
  • Manage — Container: Next.js UI
  • View — Container: Next.js UI

APIs

  • Plan API — Container: Ruby API
  • Capture API — Container: Ruby API
  • Manage API — Container: Ruby API
  • View API — Container: Ruby API

Databases

  • Database — Container: Postgres; Store performance‑related data
  • Database — Container: Postgres; Store performance‑related data
  • Database — Container: Postgres; Store performance‑related data
  • Database — Container: Postgres; Store performance‑related data

Perform Core (software system) — collection of services

Unidirectional data flow

Notes

  • Tradeoff: good
  • Tradeoff: bad

Diagram showing four parallel pipelines for a performance cycle system. A UI monorepo contains four apps—Plan, Capture, Manage, and View—each connecting to its own Ruby API service, which in turn connects to a dedicated Postgres database. A shared Perform Core (collection of services) sits beneath the APIs. An arrow indicates unidirectional data flow. Sticky notes along the bottom list pros and cons (tradeoffs) for this option.

The Plan ™

Releases

  • Potentially EAP Insights?
  • V2 rebuild complete and full Perform v2 EAP started
  • CUSTOMER VALUE (milestones)
  • Technical rebuild complete (v1 decommissioned)

Team Review cycles (aka SR)

  • SR GA
  • MR
  • SR v2 support
  • SR and MR support
  • Top secret project 1

Team Data

  • Data Transformation and publishing
  • P&U or buffer
  • Top secret project 2

Team Post Review (aka Xena)

  • Insights
  • Peer & Upward feedback
  • Calibrations handover
  • Top secret project 3
  • Support v1 shared
  • Insights and Calibrations v2 support

Team Cont Perf (aka Athena)

  • AF
  • Calibrations
  • Top secret project 4
  • Support v1 shared
  • Support v1 solo
Roadmap diagram with five horizontal swimlanes: Releases, Team Review cycles (SR), Team Data, Team Post Review (Xena), and Team Cont Perf (Athena). Time runs left to right with blocks indicating phases and projects. Notable items: SR GA progresses to MR, with SR v2 support continuing; a later segment shows SR and MR support alongside Top secret project 1. Team Data shows Data Transformation and publishing followed by P&U or buffer and then Top secret project 2. The Post Review lane includes Insights, Peer & Upward feedback, a Calibrations handover connector into the Athena lane, and later Top secret project 3; support work spans across lanes. The Athena lane begins with AF, then Calibrations, continues to Top secret project 4, and transitions from Support v1 shared to Support v1 solo. Above the lanes are milestone callouts including Potentially EAP Insights?, V2 rebuild complete and full Perform v2 EAP started, several CUSTOMER VALUE graphics marking benefits, and a final milestone Technical rebuild complete (v1 decommissioned).
  1. Slice 1

    • Self Reflections
  2. Slice 2

    • Manager Reviews
    • Calibrations
    • Insights
  3. Slice 3

    • Peer Feedback
    • Upward Feedback

Looney Tunes

Flow diagram showing three sequential boxes labeled Slice 1, Slice 2, and Slice 3 with arrows indicating progression. Each box lists its activities: Slice 1—Self Reflections; Slice 2—Manager Reviews, Calibrations, Insights; Slice 3—Peer Feedback, Upward Feedback. To the right is a humorous illustration of Bugs Bunny counting stacks of money, attributed to Looney Tunes.

Saving Private Ryan (1998)

Film still depicting a close-up of a weary World War II soldier looking downward, wearing a battle-worn uniform with a bright sky in the background.

Finishing the rest

  • Overlay product roadmap + architecture options + team composition
  • Clarity on what different people are optimising for
  • Culture of expecting frequent change and agility
  • All teams focussed on one high priority initiative
Diagram of a process timeline highlighting the step “Finishing the rest,” with four callout notes (each accompanied by a rocket icon) summarizing success factors. Earlier stages are shown faintly: “Deciding to rebuild,” “Kicking off strong,” and “Measuring as you go.”

Finishing the rest

  • In a high change environment The Plan™ needed high adaptability
  • Pressure didn’t decrease after the first slice
  • Unprecedented laser focus - lots of saying “no”
  • Overlay product roadmap + architecture options + team composition
  • Clarity on what different people are optimising for
  • Culture of expecting frequent change and agility
  • All teams focussed on one high priority initiative
Diagram showing a process/timeline stage labeled “Finishing the rest,” with multiple arrow-shaped callouts. Left-side callouts include lightbulb icons (insights/observations). Right-side callouts include rocket icons (actions/approaches). The stage is marked as step 5 within a broader multi-step journey.

6

Rolling out

It’s not done even though it’s finished

The best-laid plans of mice and men engineering teams often go awry

- Robert Burns

Comic panel depicting a dog sitting calmly at a table while the room burns around them, with a speech bubble saying "THIS IS FINE."

Rolling out

  • Automating migration and orchestration
  • Sticking to agreed-upon principles
  • Being bold with the plan

Earlier steps

  • Deciding to rebuild
  • Kicking off strong
  • Measuring as you go
  • Finishing the rest
Diagram of a process timeline with numbered milestones, culminating in step 6 labeled “Rolling out.” Three callouts with rocket icons highlight key takeaways: automating migration and orchestration; sticking to agreed-upon principles; being bold with the plan. Earlier milestones appear faded: Deciding to rebuild; Kicking off strong; Measuring as you go; Finishing the rest.

Rolling out

  • Assume things will go wrong - even at the very end
  • Build more and more cross functional shared understanding
  • Fixing transformation bugs required re-running migrations
  • Last customer migration env vastly different from first
  • Automating migration and orchestration
  • Sticking to agreed-upon principles
  • Being bold with the plan

Concept diagram of a rollout phase in a migration roadmap, with callouts marked by lightbulb icons (lessons) and rocket icons (actions).

  • Me before the rebuild
  • Me after the rebuild
Side-by-side photos of the same person speaking at formal events. The left panel, labeled "Me before the rebuild," shows the person with darker hair, gesturing while speaking. The right panel, labeled "Me after the rebuild," shows the person with noticeably grayer hair, a more serious expression, and a similar pointing gesture behind a microphone. The images create a before/after comparison implying the rebuild was taxing.

If…

  1. Deciding to rebuild
  2. Getting buy in
  3. Kicking off strong
  4. Measuring as you go
  5. Finishing the rest
  6. Rolling out
Diagram of a process roadmap for a rebuild, shown as a single winding path with six numbered stages: Deciding to rebuild; Getting buy in; Kicking off strong; Measuring as you go; Finishing the rest; Rolling out.

If…

  • Domain model can’t take the product forward
  • Stuck with Series A decisions band-aided over
  • Teams are grinding to a halt
  • Tight coupling
  • Underperforming product in a huge potential market
  1. Deciding to rebuild
  2. Getting buy in
  3. Kicking off strong
  4. Measuring as you go
  5. Finishing the rest
  6. Rolling out
Process flow diagram showing a six-step rebuild journey along a single winding path connecting labeled boxes: 1) Deciding to rebuild, 2) Getting buy in, 3) Kicking off strong, 4) Measuring as you go, 5) Finishing the rest, 6) Rolling out.

If…

  • Domain model can’t take the product forward
  • Stuck with Series A decisions band-aided over
  • Teams are grinding to a halt
  • Tight coupling
  • Underperforming product in a huge potential market
  • You’ve tried everything else

Rebuild steps

  1. Deciding to rebuild
  2. Getting buy in
  3. Kicking off strong
  4. Measuring as you go
  5. Finishing the rest
  6. Rolling out

Process diagram showing a single continuous path with six numbered stages: 1) Deciding to rebuild, 2) Getting buy in, 3) Kicking off strong, 4) Measuring as you go, 5) Finishing the rest, 6) Rolling out.

If…

  • Domain model can’t take the product forward
  • Stuck with Series A decisions band‑aided over
  • Teams are grinding to a halt
  • Tight coupling
  • Underperforming product with a huge market
  • You’ve tried everything else
A large X mark is drawn over a bordered box containing the bullet list, indicating the items are being crossed out or rejected.

If…

  • Domain model can’t take the product forward
  • Stuck with Series A decisions band‑aided over
  • Teams are grinding to a halt
  • Tight coupling
  • Underperforming product with a huge market
  • You’ve tried everything else

Mighty Morphin Power Rangers (1993–1996)

A still image from Mighty Morphin Power Rangers shows the Green Ranger and Red Ranger sparring outdoors. A large red “X” is drawn over the bullet list on the left, signaling rejection of those points.