One does not simply rebuild a product
Framing the Stakes: From Acquisition to an Urgent Need for Change
Prakriti sets the stage by tracing Culture Amp’s Performance product from its 2019 acquisition to serving thousands of customers and massive review cycles, while eyeing a $3B market. She explains how front- and back-end monoliths, outdated domain and data models, shared platform concerns, and compounding tech debt created high coupling, low ownership, and a fearful engineering culture. These systemic issues stalled innovation and blocked adoption of shared tooling, making the status quo untenable. The segment positions the talk’s central theme: a high-risk, high-reward rebuild to unlock scale, speed, and market leadership.
Exhausting Alternatives: Diagnosing Why Incremental Fixes Weren’t Enough
Before proposing a rebuild, Prakriti details how the team classified all tech debt, pursued incremental remediation, attempted piecemeal re-architecture inside the monoliths, and ran a tiger-team extraction of a sample domain. Despite creating patterns and documentation, deep entanglement, missing platform foundations, and slow progress made these approaches insufficient to meet market timelines. She reflects on what helped—transparent tech-debt visibility and cross-functional advocacy—and what she’d change, like broaching a rebuild earlier and building trust by acting first and asking forgiveness later. This segment clarifies why a bold reset was necessary to advance the overall strategy.
Securing Buy-In: Aligning Teams, Leadership, and the Board
Prakriti walks through how she earned organization-wide support: mapping a product opportunity radar, running blue-sky architecture sessions, and converging on a plausible target architecture to validate the need for a rebuild. She bolstered credibility with an external advisor, crafted a narrative with a product VP centered on customer value, and proposed a balanced delivery plan that continued critical work while proving the new approach. Lessons include keeping stakeholders close to avoid surprises, slowing down early to go faster later, and avoiding tech-choice whiplash (e.g., pivoting from Hanami to Rails) by creating earlier, active debate forums. This builds the coalition required for a high-visibility, high-stakes rebuild.
Proving the Pattern: Designing and Shipping the First Slice
With buy-in secured, Prakriti proposes shipping one end-to-end “first slice” to validate architecture, domain model, tooling, execution capability, and schedule. She sets clear goals, rebuilds team structures and ways of working, and enforces principles: like-for-like scope (no big UX or new features), freeze the monoliths after the first slice, and invest deliberately in high engineering and design standards. The team selects “Self Reflections” as a high-impact yet isolated domain, establishes a Goldilocks timeframe, assembles an A-team, adopts internal standards, runs internal PR, and ships with low incident rates. A decision-making framework, disciplined trade-offs, strong dependency management, and an early cross-functional domain model paid off, though stress, late estimation, and immature cross-team planning underscored the need for new program frameworks.
Quantifying Progress Without New Features: Metrics That Matter
Because the rebuild was like-for-like, Prakriti defines success through engineering and customer experience metrics instead of typical product KPIs. She highlights LCP as a CX proxy, isolates heavy features for targeted performance tracking, and reports dramatic deploy-time reductions (FE to under 10 minutes, BE to under 5) alongside improved design system adoption, accessibility, and closure of hundreds of UX issues. She notes that “everyone loves metrics” for sustaining buy-in, and wishes they had instrumented key trade-offs and legacy comparisons earlier. This segment shows how disciplined measurement anchored the rebuild’s value to the broader strategy.
Scaling the Rebuild: Planning and Parallelizing the Remaining Slices
Prakriti returns to the opportunity radar and re-runs architecture optioning—this time for the remaining five slices—then freezes the monoliths and aligns four teams to go all-in. She overlays product opportunities, architecture choices, and team structures to surface trade-offs and stakeholders’ optimization criteria, building an execution plan that supports fast iteration and domain ownership. Heavy parallelization enables delivery in three chunks, aided by clear cultural expectations for change and a rare focus across all teams. Despite rising pressure and a relentless timeline, holding the line on priorities gets them to a pre-holiday finish and sets up the final rollout phase.
Executing the Rollout: Migration Realities and Hard-Won Lessons
Prakriti outlines a standard rollout plan (early access to GA by segment with buffers) and then candidly describes how “everything that could go wrong” did, from late-breaking issues to unforeseen environment drift. What worked was automating migration and orchestration, adhering to roll-forward-only principles with no back-communication to V1, and staying bold rather than timid. She advises assuming problems until the last customer is migrated, partnering early with support and account teams, and recognizing that data transformation bugs—unlike product bugs—trigger costly re-migrations. The talk closes with a successful outcome, new foundations enabling faster innovation (including AI), and pragmatic criteria for when a rebuild is the right strategic bet.
I'll try and live up to that overwhelming introduction. No pressure to be funny or entertaining. Thanks, Gretchen.
Okay, one does not simply rebuild a product. I need to start my timer, otherwise I'll just keep going. So, Emma Jackson said to me, a rebuild is never finished, only started. Thanks, friend.
Culture Amp is an employee experience platform. It does a whole bunch of things that you don't care about. I specifically work on the performance product, which helps customers to build high performing teams, eventually driving better business outcomes.
You don't care about that either. Let's get to the good stuff. So this product came into Culture Amp through an acquisition of a Series A startup in 2019. That was six years ago.
It was very small then, but it has since grown to thousands of customers. In total, these customers launch anywhere between 850 and about 1,500 performance review cycles every month, and our largest performance review cycle has about 45,000 employees participating, just to give you an idea of the scale of this product.
However, the overall global potential performance market is a whopping $3 billion, and that's where we want to go: towards a performance product that can lead that $3 billion market.
There were many compounding, layered technical obstacles in our way of getting there. The product was two monoliths, a front-end monolith and a back-end monolith. The underlying domain model, data model, back-end models, and architecture were all outdated, and customer needs had evolved a lot. There were also other products sharing the same monolithic code base. Shared concerns lived in the monoliths as well, which might sound familiar: things like notifications, tasks, authorization, platform-type concerns.
There was an enormous amount of tech debt from trying to scale and iterate the product very rapidly post-acquisition, as you have to do. And it contained many, many early-stage decisions.
Those decisions were right at the time they were made, but that time was 2019. By the time I started building this talk, they were not right anymore. There were also eight teams trying to work in these monoliths with terrible, non-existent domain separation. Too few people understood how the entire system or the code base worked. Teams would invest weeks in starting a feature and then be unable to finish it, because they would get blocked by other teams working in the same monolith, which was unanticipated. There was high coupling and low ownership. This is a real photo, by the way, that I took at an art gallery in Adelaide. Strongly recommend.
The monoliths also could not benefit from any shared tooling or infrastructure at Culture Amp. All of this led to a really fearful engineering culture, very low engagement scores, which obviously matters to us, very low pride in our work, and our teams moving very, very slowly. Basically, what I'm describing is not a high performing culture.
These challenges would have seen us grind to a halt, unable to innovate.
We needed to invest now to ensure that the product would continue to grow in scale and serve our customers for many more years. To cut a long story short, we were stuck in a hole, a hole that maybe looks familiar, hopefully to many of you.
So that's me. I'm Prakriti, Director of Engineering.
I'm here today to talk about a spicy decision and take you through our journey of getting out of this hole via a full product rebuild.
I'm gonna take you through these steps today, and then I'll pause at every step and share some reflections, which I'm hoping will make you feel a bit less alone the next time that you're in a hole going on a similar journey.
Hopefully you'll never be in that hole, but let's be real, you're all going to be in that hole. So the first step in our journey was getting to the point where we even knew that we needed to rebuild. Before any talk of rebuilding, obviously we tried everything else. There was enormous tech debt, like this vague, unnamed burden that our stakeholders didn't appreciate.
We identified and classified every last bit of tech debt, ensuring that senior leadership, exec, cross-functional product partners could all become advocates.
We began prioritizing and fixing the tech debt incrementally, as you do, but it would have taken forever. And even if we had fixed it, it wouldn't have solved our fundamental inability to iterate on the domain and data models. Next, we tried piecemeal re-architecture. So within the monoliths, we identified domains that we could re-architect while keeping the monolith around them still functional. That worked, but it was very, very slow. We needed a bolder and faster move.
In parallel to that, we tried extracting domains out of the monoliths.
Everything was deeply, deeply intertwined. We lacked quick microservice foundations and shared solutions for platform concerns like notifications, emails, authentication, authorization, account data, employee data. We set up a small dedicated Tiger team, which extracted one example domain successfully. They documented it.
They created a pattern for other teams to follow. That's also a real photo.
I took that at Uluru, and I thought it was the perfect metaphor, because they were giant Jenga bricks, very, very poor quality, and very high friction. Doing those three things helped us realize that nothing was going to get us where we needed to be in time to lead the market. We needed a big reset to achieve that.
So as I said, I'll go through the steps and I'll share a few key reflections at each step. For step one, a few things that went well for us. We had this amorphous statement of, oh no, we have too much tech debt. That turned into a clear classification, with prioritization and rough difficulty levels for all of our major tech debt items.
Then we used that to create visibility among leaders - building awareness, but also empathy and understanding. We worked cross-functionally, which meant that even if I wasn't in the room or there was no engineering representation, our peers could still advocate for tech debt.
And finally, exhausting all alternatives before proposing the rebuild, and doing it with a high degree of transparency, made our stakeholders more receptive to hearing that we had to start from scratch.
A couple of things I've learned for next time. Firstly, suggest it sooner. That's a real tricky one. If you suggest it too soon, then leaders will assume engineers always want a fresh start, always want a clean slate, want everything to be perfect. But if you leave it too long, then you're wasting valuable time on repairs that you could instead spend rebuilding.
Secondly, establishing trust and asking for forgiveness rather than permission. We did a lot of the groundwork without asking for explicit permission. Now that we knew what had to be done, we had to get buy-in from everybody else, from the teams who would do the work, from design and product peers, from directors and VPs, from exec and from the board.
That's step two. To get buy-in, first we had to show where we want this product to go. This is an opportunity radar showing everything that a rebuild could unlock for our customers. It contains opportunities that were not possible without doing a rebuild.
We started with some blue sky architecture brainstorming.
If there were no constraints, looking at the opportunity radar and knowing where we want the product to go, how would we architect it today? We went really wide in this step, and people came up with many different architecture ideas. Then we converged those into one potential architecture for our rebuilt product, aiming to validate the need rather than finalize every little detail. We only wanted to see how different an ideal architecture might look; if it looks very different, it makes our case for the rebuild stronger. So the product opportunity radar and architecture sessions helped us get buy-in from the team who would actually do this work. Everything else we tried before any talk of a rebuild, the three things I shared before, helped us get support from product, design, and senior leadership. Plus, keeping them close to all this work throughout helped build trust as well. Exec buy-in? Not that easy. To get that, first we consulted an external advisor who has successfully completed a rebuild, but more importantly, they have seen several rebuild failure modes.
Then we built a comprehensive exec presentation telling the holistic product story. This last step is probably what got us through to that final 20% of buy-in. We made it very clear that we are not going to halt all customer value for many years.
While we rebuild the first component, the remaining teams will handle the most critical work on performance. Then we would pause everybody for about a year-ish and we would go all in on the rebuild.
The broader performance offering would continue in other ways. So this balanced approach helped us to get exec and board buy-in.
A few things we did well in this step, if I may say so myself. Talking to an external advisor who has a lot of experience under their belt, brought a lot more credibility to our wild idea.
Then a product VP crafted that holistic narrative. He put our customers first in that story rather than over-indexing on engineering, even though this was an engineering initiative.
Keeping stakeholders close was great. Being deliberate about who is included in every conversation, ensuring that they're coming with us on the journey so that our final ask is not a surprise.
Instead, by the time we got to the ask, they were super invested. They kind of wanted it as well.
And finally, slowing down at this stage helped us to go much faster later. Couple of mistakes. I think slowing down also cost us. We worked away from the teams for too long, which created some uncertainty while they were still shipping the regular roadmap. And I wish we had created more active spaces for collectively debating tech choices; failing to get true buy-in on those choices early led to us pivoting away from Hanami to Rails after we had already written some code, which shook some confidence in this very huge bet. Now we had buy-in, but this was still a big, risky ask.
We proposed, "Let's ship one complete slice to customers, prove out the entire architecture, domain model, tech choices, our capability to execute, and our timeliness end to end." That means the first slice carried immense pressure and visibility.
It was a company-wide bet, so failure of the first slice would have probably killed the entire initiative permanently. And good luck to the next person who tried suggesting a rebuild after that. So these high stakes made it very critical that we get a strong start on the first slice, which brings us to step three. We began by setting goals. That's the magic word, isn't it? You have to set goals and then everyone joins you on your plan. So on the product side, we wanted to produce a domain model that we can iterate, architecture which supports future use cases, and clear ownership for teams during and after the rebuild. That sounded really easy, and I like to live dangerously. So I shared earlier some challenges that our teams were facing. We thought, fuck it, let's also rebuild the teams. We're going to have teams that move really fast, very strong engineering culture, strong ways of working, high engagement, opportunities for everyone for learning, growth and development. And we're gonna do all of this simultaneously in an impossible time frame. Absolute madness. Who came up with this plan? Someone needs to have a talk with her.
To begin, we laid down some ground rules or principles.
We called our first principle 'like for like'. No massive UX uplift allowed. No new features allowed. We will only stand up our existing product on a future-proof code base and domain model. We will use the latest tooling foundations, design system, and high engineering standards. This reduced customer risk and prevented continuously changing goal posts from dragging out the rebuild with a long tail.
Second principle was 'freeze the monoliths after the first slice'. No drift was allowed between the old and the new rebuilt product, and no communication was allowed back from the new to the old product. We wanted a quick customer migration, so no letting customers languish on the old product for too long. And decommissioning the old one is part of the definition of done. Finally, we resolved to be deliberate in every step we take, investing in setting high engineering standards, high design standards, and building high performing teams as we go. I must have been high, because I just keep using the word high all the time. Also, this plan was definitely devised by someone who was high. Don't tell anyone. So we divided the rebuild into six rough domains, and we chose one domain for the first slice called 'Self Reflections'. The name doesn't matter. We picked it for some important reasons, though. Firstly, it is well used by customers, which gives us real customer traffic, which is going to help validate our choices.
Secondly, it was in the worst shape of all the domains because it hadn't been touched in the longest. Third reason is that it was relatively isolated. So it allowed one team to focus on rebuilding it while the other teams continued the critical roadmap work that I mentioned earlier, which was essential to get buy-in before we could freeze all the monoliths.
And then finally, 'Self Reflections' had just the right amount of complexity.
So it was a great candidate to help us prove that if we can rebuild this, then next we can rebuild the remaining five domains as well.
We then took a few targeted steps to set up this first slice for success. As I said, the first slice was very critical. It had to succeed.
We created a 'Goldilocks' timeframe - so long enough that we do it right, but also tight enough that some smart trade-offs are forced. Otherwise, we'll just be gold-plating the first slice forever.
We also started with a small A-team of sorts. We took a mix of strong performers, domain experts, specialists from across the org, like design system, front-end tooling, back-end tooling, and we put them together. I feel like this GIF reveals my age. I might drop it next time. What do the young people use these days? I was gonna go with 18, but that's even older. Bad choice.
Then we were firm on adopting all internal standards and establishing informal metrics where standards didn't exist, while still maintaining speed. And finally, we ran a bit of an internal PR campaign because we had to build some excitement. This project could have felt very risky, very ambitious, very ambiguous, and we needed to, I don't know, just get people behind it, light a bit of a fire.
Most of that worked, and we managed to ship the first slice.
We didn't ship anywhere close to the initial target, but we did ship very close to the estimate, the first estimate that actually came from the team. We scaled up as we rolled out to larger and riskier customers with very few incidents.
Now, this was the most crucial step, so I'll share a few more reflections on this one on top of the goals, principles, and the team set up that all worked really well.
First, we created a clear decision-making framework.
It helped us to get clarity on what is the decision, who is the decision maker, who do they need to consult, and an engagement model for senior leadership stakeholders. This prevented bottlenecks. It prevented seeking approval for everything, and it empowered the team to move fast with bold choices.
We had firm trade-off conversations, aggressively protecting the scope, while still taking on informed risks.
That Goldilocks timeframe forced us to think really hard about what's good enough.
Strong exec buy-in led to clear prioritization, which helped us get capacity dedicated across other teams as well, because we had 18 dependencies outside our teams in other teams.
And we invested early in building a new domain model for the entire rebuilt product with cross-functional input early, including key leadership people.
The first slice was also a very tough ride, so a lot of lessons learned.
The initial target date was aggressive, and that high visibility of the project created a lot of stress for the team. Establishing momentum was hard for them. They were just thrown into the deep end, with new people, no forming time, no planning time. Refinement and estimation were prioritized too late, which further impacted their momentum and engagement.
And this was the business's biggest cross-team initiative ever at that point, so we lacked maturity in dependency planning and in managing complex, year-long, multi-team projects. The good news is that, thanks to this initiative, we actually have frameworks for all those things now.
So with this rebuild being like for like and being fairly invisible to customers, how do you measure success? That brings us to our fourth step. We couldn't measure success in the usual ways like validating hypotheses, product analytics, customer adoption, revenue. None of that was valid for us. We had to define what metrics were important and how we'd measure them to ensure ongoing buy-in.
We decided instead to go with engineering and customer experience metrics. Think back to these goals I shared earlier and the 'like for like' principle. We wanted the new experience to feel similar but better. We also wanted teams to feel pride in their work, to feel like they're high performing, they're delivering high quality products at good speed.
We used Largest Contentful Paint time as a simple proxy for the customer experience metric. There were a few heavy features that dealt with a lot of data, so for those, we measured performance separately, assuming that if those improved, we didn't need to check every feature individually. On the developer experience side, the front-end deploy time used to be 20 to 40 minutes; now it's under 10. The back-end deploy time was 40 minutes to an hour; now it's under five.
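As a concrete illustration (this is not from the talk), front-end LCP reporting might look roughly like the sketch below, assuming the open-source web-vitals package; the /metrics endpoint and the payload shape are hypothetical.

```typescript
// Minimal sketch: report Largest Contentful Paint as a customer-experience proxy.
// Assumes the open-source `web-vitals` package; the `/metrics` endpoint is hypothetical.
import { onLCP } from 'web-vitals';

onLCP((metric) => {
  // Send the LCP value with the page path so it can be tracked per feature
  // and compared against the old product.
  navigator.sendBeacon(
    '/metrics',
    JSON.stringify({ name: metric.name, value: metric.value, page: location.pathname })
  );
});
```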
We also made huge improvements to our design system adoption and accessibility scores. The old product had 560 user experience issues just sitting in the backlog. And through the rebuild, even though it's like for like, we did fix most of those. This is a very easy one. Everyone loves metrics. That's the lesson.
We showed that even though we were building like for like, we were still delivering measurable customer value and business value. Anything to make that ongoing buy-in smooth and easy.
Still learned a couple of things though. We struggled to measure some of these.
The old product was really hard to work with. That's why we're rebuilding it, obviously. But I wish that I had invested more in thinking about the metrics up front. And I wish that I had set up specific metrics to instrument the key, really difficult trade-off decisions.
It was a really hard journey with the first slice, but we got there.
We gained confidence that the project is on the right track. We had a massive retro. We paused for celebration and recognition. Not too long of a break, though. We immediately had to pivot to what's next.
One down, five to go. Project's basically done, right?
Everyone's celebrating the success. No, wrong. We had to take everything we learned from the first slice and turn that into a plan now for the remaining five slices.
Plan? What plan? I didn't have a plan.
We had done just enough big picture thinking to validate the need for the rebuild, to get buy-in, and to ship one out of six slices.
Remember when I showed you the product opportunity radar? We went back to that one. We extracted from it potential future opportunities.
It was important to never lose sight of why we are doing this rebuild.
It's very easy to get lost in technical details, but we constantly reminded ourselves of the value the rebuild needs to unlock to be considered a success. More brainstorming time.
We considered multiple potential architecture options again to deliver those opportunities.
But this time, the scope was all five remaining slices. Once again, we included key senior leadership stakeholders. Now we could freeze the monoliths.
The first slice was done, and we could put all four teams on the rebuild.
We overlaid potential team structure options on top of each architecture option. That's the crude circles in this screenshot.
Yes, I'm an engineer. I am not known for my design skills. Please don't judge me. We considered multiple stakeholders' perspectives on what we should optimize for. That helped to surface unspoken priorities and ensure that we were aligned at least 80% directionally.
Finally, we pulled it all together. I present to you the plan.
Think of it in layers. The bottom layer is the anticipated product roadmap.
The middle layer is architecture options, and the top layer is ideal team structures for fast iteration, domain ownership, and high performance. I only have two hands, but we had three layers, and we needed a plan that was going to work across all three. Let's fast forward a few months now. We ended up completing the entire rebuild, all the remaining five slices. We did it in three chunks, with heavy parallelization after finishing the first slice, because we were able to go all in with four teams. That's real footage of me during this initiative. So, some things that served us well in this stage: the overlay method of looking at all success factors simultaneously (product opportunities, architecture, team composition) was a bit unusual, but it allowed us to see trade-offs clearly and come up with the most acceptable plan.
Having clarity and shared understanding on what each leadership stakeholder is optimizing for separately helped us to find an option that worked across all attributes. We set cultural expectations of frequent change and asked teams for high agility as we go through it. And finally, having all teams focused on one initiative was a rare gift, very few distractions.
So we leveraged that gift as much as possible.
We struggled with a few things as always. Despite setting expectations, a high change environment is always hard, no matter how much you explain to people why it's necessary. Not everyone can be equally resilient. The visibility and high pressure that we expected to decrease after the first slice somehow, I don't know how it's possible, but it actually increased. And the ambitious timeline set by exec and the board required a level of laser focus that I have never seen before. We were constantly holding the line and constantly saying no, including to some very senior people.
As they say, the last 20% is the hardest. With an ambitious end-of-year target, we actually finished right before Christmas, and then we came back in the new year, in January, to roll out to all remaining customers. That brings me to my last step, rolling out. We had a plan. We were going to go from early access to general availability, rolling out by customer segment. It's pretty standard: small business first, then medium business, and finally enterprise.
A short break between each segment as a buffer in case any issues come up.
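To make that concrete (this structure is illustrative, not Culture Amp's actual schedule), a staged rollout plan like that can be captured as plain data for the migration tooling to read:

```typescript
// Illustrative rollout schedule by customer segment. The segment order comes
// from the talk; the buffer lengths and the shape of this data are assumptions.
interface RolloutWave {
  segment: 'small_business' | 'medium_business' | 'enterprise';
  bufferDaysAfter: number; // pause before the next wave to catch issues
}

const rolloutPlan: RolloutWave[] = [
  { segment: 'small_business', bufferDaysAfter: 7 },
  { segment: 'medium_business', bufferDaysAfter: 7 },
  { segment: 'enterprise', bufferDaysAfter: 0 },
];
```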
It sounds perfect. We've all done rollout plans like this, and I'm sure they've all worked perfectly all the time. Not. So many things happened. Oh my God, everything that could go wrong possibly went wrong.
Some of those things we anticipated; many we didn't.
So what did we learn? A few things that went well: Investing in automating the migration and orchestration.
Huge, huge benefits. Strongly recommend. We'll do that again.
The second one is sticking firm to our principles despite a lot of pressure.
So principles like we will only roll forward, there'll be no communication back from V2 to V1, we really had to stick to those. And finally, being bold. Things going wrong and plans needing to change was much better than if we had come up with an original timid plan, because that would have failed to even get off the ground.
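The talk doesn't show the actual tooling, but a roll-forward-only migration orchestrator might look roughly like this sketch; every name here is an assumption, not Culture Amp's real code.

```typescript
// Hypothetical sketch of a roll-forward-only migration orchestrator.
// Key ideas from the talk: V1 is read-only (no communication back from V2),
// and each step is idempotent so a wave can be re-run after a transform fix.
type AccountId = string;

interface V1Export { accountId: AccountId; payload: unknown; }
interface V2Record { accountId: AccountId; document: unknown; }

// Read-only extract from the old product; V1 is never written to.
async function extractFromV1(accountId: AccountId): Promise<V1Export> {
  return { accountId, payload: {} }; // placeholder for the real export
}

// Pure transform into the new domain model; bugs here force re-migration.
function transformToV2(exported: V1Export): V2Record {
  return { accountId: exported.accountId, document: exported.payload };
}

// Idempotent load (upsert semantics assumed), so re-running an account is safe.
async function loadIntoV2(record: V2Record): Promise<void> {
  // e.g. an upsert call into the new product's store
}

export async function migrateWave(accounts: AccountId[]): Promise<void> {
  for (const accountId of accounts) {
    const record = transformToV2(await extractFromV1(accountId));
    await loadIntoV2(record);
    // Flip routing to V2 only after the load succeeds; there is no path back.
  }
}
```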
Learned many lessons here. Assume things will go wrong until the very end, even when migrating the last account, when you think you're in the clear. No, shit's gonna go wrong.
Build a very deep shared understanding and empathy with support, account execs, and coaches early, rather than doing it at crisis time. Fixing product bugs was actually very easy and very fast, who knew? But data transformation bugs were hard. Every time we fixed one, we had to re-migrate all the accounts that had been migrated up until that point, delaying the next migration. And finally, the environment when the last customer was being migrated was quite different from the environment when the first customer was being migrated, which nobody considered.
More real footage of me going through the rollout. So similar to the Obama administration. No, that's a joke. I'm not going there.
All the gray hair was totally worth it. We did succeed.
This massive, ambitious bet eventually succeeded.
We rebuilt both the product and the teams to be high performing as planned.
We are now solving the most important customer problems and are able to do it at very high speed and quality. We're also leveraging the foundations we created to invest in innovation that previously wasn't even a part of the opportunity radar. The AI landscape has changed a lot since we conceived of this rebuild, and it's unlocked all these opportunities that we didn't even know we had at the time. So people say that rebuilds are always a bad idea.
I disagree. I say that if you have a domain model that can't take your product forward, you're stuck with Series A decisions that have been band-aided over and over again, your teams are grinding to a halt, you have tight coupling, you have an underperforming product in a huge potential market, and you have already tried everything else, then a rebuild could be your best bet. You could do it with steps that look something like this. You could incorporate the lessons that I shared and the things that we did to set it up for success.
However, if these conditions are not true for you, then I can't help you. You're on your own, and you've wasted your time.
No, I'm just kidding. Thank you.
No, if it's not true, it doesn't necessarily mean that a rebuild is not for you. But you might need to look at the steps that I shared, the things that worked well and didn't work well, and figure out how you need to tweak those for success in your specific conditions. But I mean, yes, it could also mean that a rebuild is not the right step for you. I don't know. I just work here. Thank you.
Against conventional wisdom, we're rebuilding our product from scratch with an aggressive timeline. It's 9 years old, a Series A acquisition, has ~2700 customers, the largest at 77k users. The monolith is riddled with tech debt, outdated models, tight coupling, and bandaids, and won't scale to the potential $3b market. That wasn't hard enough, so I'm also rebuilding our engineering culture to create high performing teams.
You’re not supposed to do technical rebuilds but where’s the fun in that? Come hear about why we’re doing one and what I learnt. It’s either the best or worst decision I ever made!