In our complicated world with all of its competing priorities and overburdened schedules, I can understand why people gravitate to cut-and-dried answers. It’s easier to have a rule you apply more or less blindly than to take the time to analyze a situation and tailor an answer to it.
At the risk of oversimplifying, I think this at least contributes to the belief in certain quarters that you absolutely have to ship something useful to customers all the time. In this worldview, a two-week cycle of work that deploys nothing to users is a failure.
The instinct here is noble. Especially before you know your idea has legs, you don’t want to over-invest in something that no one really cares about. Forcing yourself to get something in people’s hands quickly to get the market reaction can be incredibly helpful.
But this approach, applied blindly, can create problems. Sometimes your idea isn’t easy to implement, so forcing something to ship quickly means people judge the shoddy implementation rather than your core idea. And sometimes the scrimping and shrinking of scope needed to float the trial balloon gets in your way if you succeed and need to keep pulling on a particular thread.
I think we need a better rule of thumb. In particular, I’d argue what we’re really looking for is regular validation. That certainly could be — and eventually must be — validation from the market. Put differently, “shipping to customers.” It could also be something else, though: feedback on a paper prototype, an internal technical milestone, a decision on a key component. Reframing things this way strikes a better balance between the need for quality and scale and the need to build a business.
Let’s consider a couple of examples.
On the one hand, I think there are features that should fit into this “as fast as possible” mindset. If you have notional validation for an idea — a specific customer request, synthesis from talking to a user — and the idea can be done quickly with high fidelity, it shouldn’t be dragged out.
Recently I was talking to a friend who’s thinking about how to expose data from their product to customers with their own teams of data analysts. After digging in, it seemed to me that the right answer was a straightforward CSV export to start.
From a technical perspective, it should be quick to build. And it’s far better to start that simple when you’re not entirely sure how people want to slice and dice the exported data. Once people can build their own solutions and are actually using the data, it’s much easier to figure out which additional bells and whistles, if any, are worth adding.
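To give a sense of just how small that first cut can be, here’s a minimal sketch in Python. The field names and data are hypothetical stand-ins, not my friend’s actual schema:

```python
import csv
import io


def export_rows_to_csv(rows, fieldnames):
    """Serialize an iterable of dicts (e.g. a query result) to a CSV string."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue()


if __name__ == "__main__":
    # Hypothetical example rows standing in for a real database query.
    rows = [
        {"order_id": 1, "customer": "acme", "total": 42.50},
        {"order_id": 2, "customer": "globex", "total": 18.00},
    ]
    print(export_rows_to_csv(rows, ["order_id", "customer", "total"]))
```

In practice this would sit behind whatever authenticated download endpoint the product already has, but the point stands: the core of the feature really is this small.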
Other features demand a more gradual approach. But that doesn’t mean we should wait months and cross our fingers. I’d still want to have internal validation points along the way.
Imagine, for instance, you’re going to set up a caching layer for your data processing system for the first time. Before you can cache a single piece of information, there’s a bunch of background validation that needs to happen.
For example, I’d want to build a small test version of this caching layer to validate that it will actually provide a speedup, and that it will do so without being so expensive or complicated that it defeats the purpose of the project.
There’s no reason to bother setting up, say, a Redis cluster and writing a bunch of robust, well-designed caching code if your cache design isn’t fast enough or constantly runs into staleness issues.
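This kind of validation can be surprisingly cheap. As a sketch of the idea — assuming Python and a stand-in workload, nothing like a production Redis deployment — a throwaway in-memory prototype can tell you whether your access pattern benefits from caching at all before you invest in real infrastructure:

```python
import time
from functools import lru_cache


def expensive_computation(key):
    """Stand-in for the real data processing step we hope to cache."""
    time.sleep(0.05)  # simulate slow work
    return key * 2


@lru_cache(maxsize=1024)
def cached_computation(key):
    # Same work, but repeated keys are served from the in-memory cache.
    return expensive_computation(key)


def benchmark(fn, keys):
    """Time how long it takes fn to process every key."""
    start = time.perf_counter()
    for k in keys:
        fn(k)
    return time.perf_counter() - start


if __name__ == "__main__":
    # Repeated keys, mimicking a workload where caching could plausibly help.
    keys = [i % 10 for i in range(100)]
    cold = benchmark(expensive_computation, keys)
    warm = benchmark(cached_computation, keys)
    print(f"uncached: {cold:.2f}s, cached: {warm:.2f}s, speedup: {cold / warm:.1f}x")
```

If the measured speedup is marginal, or your real workload rarely repeats keys, you’ve just saved yourself the Redis cluster.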
We should celebrate, praise, and expect these kinds of validation steps that don’t necessarily deliver something to customers or end users. Even through the “customer first” lens, it’s obviously better not to ship something that doesn’t move the needle.
In the same way I’d want to bin a feature that customers didn’t love, I’d want to stop working on a feature that fails to achieve these points of internal validation. Depending on the product-business case, if something is truly impossible or considerably more difficult than expected, it may not be worth pursuing.
In both cases, we’re still getting validation at regular intervals, even if it sometimes takes a while. This is critical: it helps us understand the bounds we’re playing with, whether from a feasibility-cost perspective on the build side, or from a business-product perspective once we get to the step of shipping it and rolling it out to customers.
Of course, eventually you do really have to ship something to customers. And when you’re building software — as opposed to, say, a physical product like an airplane — I do think that should be relatively frequent. Getting something in people’s hands is a win-win. If it’s good, people get something they want and you get more business. If it’s bad, the sooner you know it’s a dud the better.
How often should that happen?
A few years ago, I had a chance to sit down and interview one of the main authors of Envoy, which initially came out of Lyft as a project to handle their load balancing problems.
If you’re unfamiliar, Envoy is what’s called an edge and service proxy. It allows software and infrastructure teams to knit together their various underlying servers and services into a single, coherent whole. Today, Envoy powers traffic routing for companies like eBay and Netflix.
For another time, I think this provides a great case study in when to build and when to buy. On the surface, it makes no sense that Lyft — a ride-share company — built a piece of core infrastructure software.
More relevant here: how do you build something this sophisticated, and this successful?
Certainly not by forcing the project to ship something to customers every week. From what the author told me, it took about six weeks to get something good enough for Lyft’s infrastructure team to rip out their existing solution and replace it with what would become Envoy.
That’s not to say you should only ship features once every six weeks. Rather, six weeks is about the longest shipping something to end users should take. If you can’t produce something useful in that time, it’s probably worth reconsidering your initial scope or the viability of the idea. If you can ship sooner than that, it’s certainly worth doing.
As with so much, finding this balance is as much an art as a science.
And I think that’s really my observation here. Both “sides” in this debate have a point. Ultimately, we get impact and build a successful business by creating a mutually beneficial trade: we build something that solves someone’s problem well enough that they’re willing to pay for it. That can only happen when we ship code and features. But not every interesting problem can be solved well enough to ship in a fixed, short window. By focusing instead on validation — be that internal or external — we get the best of both worlds: reassurance that we’re heading toward a point where we can roll something out, while still letting ideas breathe.
Enjoy this? Have an idea for something you’d like a perspective on? Drop me a line: I’d love to hear from you.