The first version was leak detection. The idea was sound: use operational data to identify potential water main issues before they become emergencies. The technology worked. The pitch didn't.
It turns out nobody wants to hear about infrastructure failure. There's no emotional hook. Homeowners don't wake up thinking about their water mains. They think about their kids, their pets, whether the water they're drinking is safe.
So we pivoted.
Iteration two: bacterial screening
The second version focused on bacterial contamination. We could flag potential contamination events from water quality data faster than traditional lab testing. It was technically impressive. It was also terrifying.
When you tell a homeowner "we detected potential bacterial contamination in your water supply," the response isn't gratitude. It's panic. And panic doesn't convert to engagement — it converts to anger directed at whoever delivered the bad news.
So we pivoted again.
Iteration three: lab testing
Version three tried to split the difference. Instead of alarming language about contamination, we positioned the product around proactive water quality testing. Think of it as a health checkup for your water.
The problem was speed. Lab testing takes days. By the time results came back, the urgency that drove someone to request a test had faded. Engagement rates were abysmal. People would request a test, forget about it, and never open the results.
We pivoted.
Iteration four: real-time monitoring
The fourth version was closer. Real-time water quality data, updated continuously, accessible through a simple interface. No waiting for lab results. No scary contamination language. Just data.
But data alone isn't compelling. A dashboard showing pH levels and turbidity measurements means nothing to someone who just wants to know if their water is safe to give to their dog.
One more pivot.
Iteration five: personalized risk context
The version that worked combined everything we'd learned. Real-time water quality data, personalized by ZIP code, with risk context tailored to what people actually care about: pets, infants, and personal health.
Instead of "Your water's pH is 7.2," the system says "Water quality in your area is within safe ranges for pets and infants." Instead of contamination alerts, it provides context: "Seasonal runoff in your region typically affects turbidity — here's what that means for your household."
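The translation from raw metrics to household framing can be sketched as a thin messaging layer over the existing data pipeline. Everything below — the `Reading` type, the thresholds, the `risk_message` helper, and the message templates — is a hypothetical illustration of the idea, not the actual product code.

```python
# Hypothetical sketch: turn raw water-quality readings into
# household-relevant context instead of bare numbers.
from dataclasses import dataclass


@dataclass
class Reading:
    zip_code: str
    ph: float
    turbidity_ntu: float


# Illustrative safe ranges only; a real system would use
# EPA/local regulatory guidance, not hard-coded constants.
PH_SAFE = (6.5, 8.5)
TURBIDITY_MAX_NTU = 1.0


def risk_message(reading: Reading, household: set[str]) -> str:
    """Frame a reading for what the household cares about (pets, infants)."""
    ph_ok = PH_SAFE[0] <= reading.ph <= PH_SAFE[1]
    turbidity_ok = reading.turbidity_ntu <= TURBIDITY_MAX_NTU

    audience = " and ".join(sorted(household)) or "your household"
    if ph_ok and turbidity_ok:
        return (f"Water quality in your area ({reading.zip_code}) is "
                f"within safe ranges for {audience}.")
    # Context, not alarm: point at a likely cause rather than a raw number.
    return (f"Readings in {reading.zip_code} are outside typical ranges. "
            f"Seasonal runoff often affects turbidity; here's what that "
            f"means for {audience}.")


print(risk_message(Reading("98101", 7.2, 0.4), {"pets", "infants"}))
# → Water quality in your area (98101) is within safe ranges for infants and pets.
```

The design choice this illustrates is that the same reading produces different copy depending on who is in the household — the data layer stays generic while the framing layer is personalized.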
This version processes 50,000+ calls a month. Not because the technology was different from version one. The underlying data pipeline is essentially the same. What changed was the framing, the personalization, and the emotional context.
Why most firms stop at iteration two
Here's the thing about pivoting: it's expensive. Not just in development hours — in organizational patience. Every pivot requires the client to believe that the next version will be better than the last one. Every pivot means explaining to stakeholders why the previous version didn't work.
Most consultancies can't survive this. Their engagement model is built around fixed-scope deliverables. Version one is the deliverable. If it doesn't land, they write a recommendations document and move on. The recommendations might even be good. But nobody's there to execute them.
We survived five pivots because our engagement model is built differently. We don't scope deliverables — we scope outcomes. The outcome was a product that drives customer engagement. It took five tries to get there. The client funded all five because they trusted the process and could see the learning in each iteration.
The lessons
Emotional resonance beats technical accuracy. Every version of the product was technically sound. Only the one that connected emotionally succeeded.
Slowness kills slowly. The lab testing version was accurate and trustworthy. It was also too slow for human attention spans. Real-time feedback, even if less precise, wins.
Personalization is the unlock. Generic water quality data is a commodity. Water quality data contextualized for your ZIP code, your pets, your kids — that's a product.
Iteration requires trust. Five pivots only happen when the client trusts the team. That trust is built by embedding, not by presenting. You can't earn pivot-level trust from the outside.
What I'd tell my earlier self
Start with the emotional hook. Always. The technology is table stakes. The question isn't "can we build this?" — it's "will anyone care?"
If I'd started with iteration five's framing and iteration one's technology, we might have gotten there in two pivots instead of five. But honestly, I'm not sure we would have understood why the framing mattered without living through the failures first.
Some lessons you can only learn by building the wrong thing.
PurviewX is embedded AI leadership for industries that weren't built for this era. We don't build pilots — we build systems that run. Start a conversation.