> The problem was the process itself, and along with it the blind pursuit of a goal without a deeper understanding how to tackle deeply difficult challenges.
And that is the wrong problem that too many software engineering teams solve. Let's not solve a customer problem; let's solve the problem of solving customer problems, and hopefully in a year or so we'll get to that.
Iteration speed is an issue for hardware. It becomes an issue for software teams when too much time is spent tooling, generalizing and reinventing wheels.
I point that out because this sort of example pushes the wrong buttons for many software engineers, including myself. We don't need more time spent on meta-problem solving. Much less, in fact.
> Let's not solve a customer problem, let's solve the problem of solving customer problems
That's exactly what causes too much time spent on tooling and reinventing wheels.
Not that tooling is bad, but often you spend two weeks in order to save one hour. And that is the best-case scenario; more often, the extra complexity will slow down future development rather than make it faster.
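The "two weeks to save one hour" point can be put as a trivial break-even calculation. A minimal sketch; the numbers and the `break_even_uses` helper are illustrative assumptions, not measurements:

```python
# Rough break-even check for a tooling investment: how many times must the
# tool be used before the time spent building it pays for itself?
# All figures here are illustrative, not real project data.

def break_even_uses(build_hours: float, hours_saved_per_use: float) -> float:
    """Number of uses before the tool has repaid its build time."""
    return build_hours / hours_saved_per_use

# Two weeks of build time (~80 working hours) to save one hour per use:
uses_needed = break_even_uses(build_hours=80, hours_saved_per_use=1)
print(uses_needed)  # 80.0 uses before the tool breaks even
```

And that is before accounting for the ongoing maintenance cost of the extra complexity, which pushes the break-even point even further out.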
In the article, MacCready's strategy was fast iteration rather than a focus on safety and engineering. Basically the "move fast and break things" motto.
Technical debt is not any sort of problem in and of itself.
It can be accrued naively, which is an antipattern, but when that isn’t the case technical debt is a vehicle for leverage. You get more value from less work.
With all development efforts having a finite lifetime, and many having a particularly short one, wise exercise of technical debt becomes an essential skill for both project managers and developers.
Vilifying technical debt, and even prematurely mitigating it, are just as naive as inadvertently introducing it.
> if your system is meant to endure the test of time (think Google-scale)
While some exceedingly few products have that destiny, it’s precisely the naive bias towards thinking that your project does that leads to wasteful over-engineering.
Products and projects are not the same thing. Most of the Google-scale products that you see leveraged tons of technical debt on projects along the way, some critically valuable and some certainly wasteful.
If you’re adding a feature to one of these products that are already at that scale, then your attitude toward technical debt will be different.
But effectively nobody reading this thread is doing that, and the few that are know who they are.
Most readers here are working on projects with near-term deadlines and development lifetimes measured in months or a few years. Insofar as judicious exercise of technical debt enables those projects to deliver on their requirements more quickly and at lower cost, more of it is better.
Many of us come into the industry thinking otherwise because we’re drawn to the intellectual purity of clean systems, but that purity just isn’t what actually matters most of the time. It can take a while to accept and internalize that.
Wow, that last paragraph is spot on. I found both your comments to not only be accurate and clear but also rather eloquent.
I've always felt we've misunderstood technical debt. It seems to be regarded as a problem with a lot of negativity surrounding it. Developers seem to fear being accused of introducing technical debt - like it's the worst kind of developer crime. But in reality, there is value in embracing it in early-stage projects. As thinkingkong mentions below: 'It's only debt if it sticks around long enough to need to be dealt with.'
>Nobody reading this thread is doing that, and the few that are know who they are.
That's the false dichotomy which earned me my stripes.
I used to think I was the former guy; then I realised I was the latter guy. The only thing that changed is that some time passed.
In hindsight, the lesson of my mistaken identity is that the systems I have been working on for 15 years became less, not more, maintainable as a result of tolerating technical debt.
They also became Google-scale (which is what we were hoping for, but didn't believe at first).
I am the furthest thing from a purist. I am a filthy rich pragmatist who regrets having low-quality standards.
Perhaps it's the fact that you tolerated the technical debt that got you to google scale. I've seen a few projects where developers aim for (as swatcoder so eloquently put it above) the intellectual purity of clean systems and then watch these projects fail to be delivered.
We'd all like to be a filthy rich pragmatist but I wonder just how much the 'low-quality standards' enabled you to achieve that. Seriously, I would love to be a millionaire with regrets about low quality code instead of being broke with junior developers pointing out my code smells and technical debt.
This article rings very true for me at my current company: the tech lead is always off adding more fancy new tech to the stack, and we still manage to accumulate a ton of tech debt.
Adding a load of automated tests is the first thing we need to do to start being able to iterate more quickly. (I view this as both paying down technical debt and solving the right problem.) Thankfully, that was discussed just the other day in our tech-debt meeting (while the tech lead was saying how we need to add a Neo4j database, a Lucene search index, and remote procedure calls - he had read an article saying that's cool again).
Depends on the situation. WordStar, written in assembly language, was better than any word processor written in higher-level languages for the same computers (say, a 2 MHz Z80 with 48 KiB of RAM); perhaps in part this is because Turbo Pascal didn't exist for several years, but even when it did, code written in assembly was still dramatically smaller and faster, and that mattered a lot. (Turbo Pascal was itself written in assembly language.)
I think there's often a sort of cargo cult at work here. We spend so much time trying to solve the meta-problem (how do we get better at solving any problem faster?) but very little time trying to find out what the original problem actually is. That problem falls between the chairs of responsibility.
That doesn't seem to be the point of the article. Whether through the dumbest monolithic code or the finest higher-order algebraic abstractions, the point seems to be to tighten the feedback loop with your target and reach that dynamic as soon as possible.
> MacCready, decided to get involved. He looked at the problem, how the existing solutions failed, and how people iterated their airplanes. He came to the startling realization that people were solving the wrong problem.
Being able to analyse existing solutions and previous attempts (successful or not) is a luxury that the first people to attempt a solution don't enjoy.
It's much, much easier to take an existing solution and improve it than it is to develop that solution in the first place.
The Everyday Astronaut had a great interview with Elon Musk after the Starship presentation a few days ago. They spent a bit of time talking about how much progress the team had made in very little time, and Musk's take was that it took them an awful long time to finally ask the right questions - years. Once they had formulated exactly what they were trying to do and what the actual relevant constraints were, finding the right solutions and making very rapid progress was actually not all that complicated.
He talked about the tendency really good engineers have to spend a lot of effort optimising the design of things that shouldn't be there at all.
Finally, he banged on hard about always questioning your design constraints. He pointed out that if you are given design constraints by another team, you should always question their validity, since it's very unlikely they are optimal. The converse assumption would be that they are always perfect, so looking at it that way there's no reason to assume they are right from the start.
For anyone who worries about the sunk cost fallacy, bear in mind it was only about a year ago they decided to throw away all the investment they'd put into building a composite body, including a huge main body tool.
I came here to write this same comment. I think Elon has great insight into how to move very quickly from conception to reality, pruning off undesirable paths before they consume resources.
In that one interview with Tim, he explored deep ideas about cross-team collaboration, engineering process, and project management that extend well beyond the rocket science I was expecting to hear about.
Identifying the right problem is indeed the golden nugget.
Teams chasing arbitrarily defined KPIs, with no feedback loop in place, or with feedback that arrives a year later.
Implementing features based on feedback from one customer, then finding out the customer wasn't articulating the real problem at all.
Projects eager to adopt the latest technology, using every service AWS provides, jumping onto the jargon bandwagon (e.g. blockchain) to solve non-existent problems.
> Implementing features based on feedback from one customer, then finding out the customer wasn't articulating the real problem at all.
What about, ahem, Google Self-Driving Cars :)?
Unpopular opinion around here but here goes:
I feel the dimensions on which actual people (i.e. customers) will evaluate Google's SDCs, when they become generally available, are very different from the dimensions Google is optimizing for, based on what it thinks people will need in the future once the technology is ready§.
How a person determines the utility of a product or service can be markedly different from how the same person evaluates the utility of that product or service when s/he is confronted with what all potential customers must confront before making a purchase decision: a price tag.
As subtle as the distinction between a person and a customer is, it is an important dose of reality that I think is missing from a lot of arguments that happen here on HN where the title of market leader is undisputedly ceded to Google Waymo when compared to a competitor like Tesla with far far lower-tech (but cheaper) autonomous capabilities.
§ IMHO, their current approach to SDC technology using expensive LIDARs doesn't fit the problem at hand. Use of high-fidelity tech like LIDAR also breeds another tendency: it lures engineers into thinking that it will somehow make a hard problem like the trolley problem [0] tractable.
An additional issue is that the L0-L5 designations are arbitrary constraints themselves; they are not based on problems customers are confronted with.
> Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.
This was not the only problem-framing aspect of MacCready's success: while previous attempts built complex airframes intended to have a low drag coefficient, he recognized that the issue was power dissipation - if the speed was low enough, the drag coefficient was not so critical, and things like wire bracing were acceptable. His initial insight was that "if you triple the size of a hang-glider-size plane and triple its wingspan to 90 ft. while keeping its weight the same, the power needed to fly it goes down by a factor of 3, to about 0.4 horsepower." And that, he knew, was what a trained cyclist could pump out for several minutes at a stretch.
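MacCready's factor-of-3 claim falls out of the standard minimum-power relation for steady level flight, P ~ sqrt(2·W³/(ρ·S))·C_D/C_L^1.5: tripling every linear dimension at constant weight multiplies wing area by 9, which divides the required power by sqrt(9) = 3. A minimal sketch of that scaling; the baseline weight, area, and coefficient values are illustrative assumptions, not MacCready's actual figures:

```python
import math

# Back-of-the-envelope minimum power for steady level flight, using the
# textbook relation P ~ sqrt(2 * W**3 / (rho * S)) * C_D / C_L**1.5.
# The specific numbers below are illustrative, not real aircraft data.

def min_power(weight_n: float, wing_area_m2: float,
              rho: float = 1.225, cd: float = 0.05, cl: float = 1.0) -> float:
    """Approximate minimum power (watts), ignoring many real-world details."""
    return math.sqrt(2 * weight_n**3 / (rho * wing_area_m2)) * cd / cl**1.5

w = 1000.0                                  # total weight in newtons, held constant
p_small = min_power(w, wing_area_m2=10.0)   # hang-glider-sized wing
p_big = min_power(w, wing_area_m2=90.0)     # triple the linear size: 9x the area

print(p_big / p_small)  # 0.333...: tripling the span cuts required power by 3
```

The constant-weight assumption is doing all the work here, which is why the design that follows from it - a huge, slow, wire-braced, ultralight airframe - looks nothing like its predecessors.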
I am not privy to Dr. MacCready's thoughts, but I think it is worth considering how the two inspirations differ. Anyone who has worked on developing something novel has probably realized that there are benefits in fast iteration, and I would guess that the other plane-builders realized how time-consuming their approach was, but could not see a way to achieve fast iteration, because they were locked into a particular notion of what the solution would require.
MacCready's engineering insight, on the other hand, arose from an understanding of the practical implications of aerodynamics so thorough that it was almost instinctive (the same could be said for his invention of the glider speed ring). It is not that he knew something others did not, but that he saw how that knowledge could be put to use. Also, in both cases, there is significant engineering to be done between the idea and the implementation.
This aerodynamic insight happened to open the door to fast iteration, but, perhaps more importantly, it showed what sort of airplane to build (in fact, the latter was the key to the former, as well as the key to actually achieving the goal.) It is all very well to say that we're going to solve a problem through fast iteration, but that does not tell you what to do next. On the other hand, I imagine that as soon as Dr. MacCready had his aerodynamic insight, his mind was filled with ideas about how to go about it.
I think the "Richard Feynman" algorithm is figuring out a way to get to the desired result for a specific challenge without having to pass through all the steps needed to solve the general problem.
The example of Feynman cracking safes is perfect - he just used a bunch of special tricks rather than any general method - similarly with quick numeric calculations in his head.[1]
That said, while most hard problems need to be framed in a different way, not all of them yield to the "clever detour into something simple" approach. A lot of math problems are solved by a detour through a bunch of difficult things and then a return to your original "seemingly simple" problem (see Fermat's Last Theorem, etc.).
Similarly, startups are in the business of iteratively failing and learning about their problem. By reframing the problem, he effectively "startupify'd" the problem.
Solving the wrong problem is very common in software development. That is why I always take time to think (sometimes for three days) before starting development.
>Solving the wrong problem is a very common thing in software development.
There aren't many right problems in software development to go around, while there are so many people and so much money that can and have to be put to work. As a result, we live in a kind of golden age where, instead of the subsistence and survival of a typical stressed and oppressed office drone, most people in the industry are in a position to work on wrong problems and to develop more and more new tech, and the high failure rate of projects isn't an issue. Which other engineering discipline can allow itself such a luxury?
There are two ways of doing a Ph.D. thesis: (1) Finding a problem and solving it; (2) finding a method and then finding a problem it solves. The second is much more likely to be successful.
Yes, linkbait 'you' is a perennial annoyance. At least this one has the good manners to leave a pretty nice title if we simply delete the offending pronoun.