
So the person who has an opportunity to misunderstand the specification (the developer) is given the task of writing the tests that sign off on the work, even though those tests are based on the same misunderstanding?

That's terrifying. After 15 years in the same field, I still don't have half the business knowledge of the people writing the specs I implement. I would never, ever want a nontrivial feature I wrote to reach a customer without manual testing by an expert in the area.




Where do you work that software engineers write code and then hand off their code to non-software-engineer "experts" for testing? That sounds like a really broken process.


I implement a program used by structural engineers. So basically the user is a structural engineer and I'm a software engineer. They typically find nuances of program behavior that I never thought of because I'm not an expert in structural engineering.

I think the same would be true if I made a trading platform, an x-ray machine UI or whatever. When the expert uses it to do what they are experts in, they will invariably find issues (bugs, omissions, simple improvements) that weren't obvious to begin with.

One can argue that if the spec were 100% perfect, I could always test it myself, hopefully even with automated tests - but I have never seen a spec like that. (Perhaps more importantly: if you have in-house "end-user-like" testers for expert software, it's likely more economical to let the experts test and iterate than to spend the time on more detailed upfront specs.)


For a huge chunk of the industry, the consequences of broken software are limited to annoyed users and lost revenue. In cases like that, the benefits of shipping quickly often outweigh the value of "expert testing". Further, in many cases the software engineers have as much expertise as the customers, making the handoff for testing simply a way of abdicating responsibility for quality.

For your structural engineering example, I'm not sure you couldn't benefit from continuous, automated releases. If the only risks are missed "bugs, omissions, simple improvements", you could fix those in the next release (which could be the next day). Delaying valuable features so that customers can tell you a tweak would make them even better doesn't seem like a net gain. The only reason to hold a release would be if you're catching dangerous bugs this way.

You could also build new features under a "flighting" system (pick your favorite name; there are several), where you don't expose new features to most customers until they are "baked" with your internal customers and/or customers who've opted into early features. This lets you release constantly: your customers get bug fixes quickly and features as soon as they're ready, without the complexity of maintaining separate branches and versions in parallel.
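To make that concrete, here's a rough sketch of such a gate in Python. The ring names, flight names, and User shape are all made up for illustration, not any particular feature-flag system:

    from dataclasses import dataclass

    # Hypothetical rollout rings, ordered from most to least risk-tolerant.
    RINGS = ["internal", "early_adopter", "general"]

    @dataclass
    class User:
        id: str
        ring: str  # the ring this user belongs to or opted into

    # Each in-progress feature names the widest ring allowed to see it;
    # fully released features simply have no entry here.
    FLIGHTS = {
        "new_report_engine": "early_adopter",  # baked internally, now opt-in
        "redesigned_toolbar": "internal",      # still baking
    }

    def is_enabled(feature: str, user: User) -> bool:
        """A feature is visible when the user's ring is at least as
        risk-tolerant as the widest ring the flight currently allows."""
        limit = FLIGHTS.get(feature)
        if limit is None:
            return True
        return RINGS.index(user.ring) <= RINGS.index(limit)

    # Usage: the new code ships in every release but stays dark
    # until the flight entry is widened (or deleted).
    assert is_enabled("new_report_engine", User("u1", "internal"))
    assert not is_enabled("redesigned_toolbar", User("u2", "general"))

The point is that release and exposure are decoupled: code ships dark in every release, and widening a flight entry (ideally server-side) grows the audience without shipping anything new.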


We can't ship often for the same reason all of the big and complex software packages (IDEs, spreadsheets, etc.) on your machine don't ship very often. Documentation needs to be produced, and it's specific to a version. The application needs to be a consistent whole, with UI changes, file format changes, etc. not happening too often.

I don't think it will ever be good practice for large, complex apps to change a tiny bit every day. (Facebook might be challenging my theory, but their app is relatively simple, they don't produce training docs, and most importantly they don't have to keep the latest app compatible with all Facebook data from the beginning of time - instead they keep the data on their servers and migrate it to the latest programs when necessary.)

I agree you could have more feature gating, but large backwards-compatible file formats are a complex business already, with 10 releases over a decade - I can only imagine what it would be like to support reference-rich documents across many more releases, with the additional complexity of the sender and receiver having to agree on a feature set (unless you make the feature set/flight implicit in the data - but that's a new kind of headache). We already have tons of code in new versions dealing with loading malformed data in old formats, because of bugs closed years ago! Every document we ever wrote, we must also be able to read.
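For what it's worth, one common way to make the feature set implicit in the data is to have the writer record the features a document actually uses and have readers refuse files requiring features they don't know - the same idea as PNG's critical chunks or ext4's feature flags. A rough Python sketch, with invented feature names:

    # Features this build of the reader understands ("base" plus two
    # hypothetical extensions; the names are illustrative).
    SUPPORTED = {"base", "cross_refs", "embedded_fonts"}

    def write_header(features_used: set) -> dict:
        # Record only the features this document actually uses, so an
        # older reader can still open documents that avoid newer ones.
        return {"format_version": 11, "requires": sorted(features_used)}

    def check_readable(header: dict) -> None:
        missing = set(header["requires"]) - SUPPORTED
        if missing:
            raise ValueError(
                f"document requires unsupported features: {sorted(missing)}")

    # A new writer that happened to use only old features produces a
    # document an old reader (one without "embedded_fonts") still accepts.
    check_readable(write_header({"base", "cross_refs"}))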

Lots of challenges in this area, but they are pretty fun to work with tbh.


> We can't ship often for the same reason all of the big and complex software packages (IDEs, spreadsheets, etc.) on your machine don't ship very often. Documentation needs to be produced, and it's specific to a version. The application needs to be a consistent whole, with UI changes, file format changes, etc. not happening too often.

The only reasons Excel and Visual Studio don't update constantly are 1) the update mechanism is too heavy with gigabytes packaged into an MSI, 2) it's easier to sell licenses with big updates, and 3) inertia.

With people migrating to Office 365, I wouldn't be surprised if the ship cadence of Excel (etc.) becomes more service-like, with frequent feature releases and only big redesigns or massive features shipping as "major version" releases.

(Disclosure: I work for Microsoft. These are my opinions and not based on any inside knowledge of Excel or Visual Studio dev/release processes.)

The issues around UI, documentation, etc. are solvable. You can build and release features regularly without changing the UI significantly. The UI as a whole should stay consistent, but small tweaks are fine, and big changes can be built behind a feature flag and left dormant until the next "big" release if necessary.

Documentation doesn't need to be locked to a specific version - or to the extent it does, you can automate that. From version to version, the changes that affect existing documentation are minimal, so your doc system needs to know how to render V1 and V2 and understand the delta between the two. Not free, but not overwhelming either.
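To sketch what "understands the delta" could mean (invented names, and assuming most topics change rarely between versions): store each topic once, with sparse per-version overrides, and resolve at render time:

    # One topic = a base article plus sparse per-version overrides;
    # rendering version N means "base, patched by every override <= N".
    DOCS = {
        "export-dialog": {
            "base": "Choose File > Export...",
            "overrides": {
                9: "Choose Share > Export...",  # menu moved in v9
            },
        },
    }

    def render(topic: str, version: int) -> str:
        entry = DOCS[topic]
        text = entry["base"]
        for v in sorted(entry["overrides"]):
            if v <= version:
                text = entry["overrides"][v]
        return text

    assert render("export-dialog", 8).startswith("Choose File")
    assert render("export-dialog", 9).startswith("Choose Share")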

With all that said, I understand that sometimes "major version" releases, with the associated "big bang" testing, signoff, and release, can make sense. But it's rare that continual release can't work.

> I don't think it will ever be good practice for large, complex apps to change a tiny bit every day. (Facebook might be challenging my theory, but their app is relatively simple, they don't produce training docs, and most importantly they don't have to keep the latest app compatible with all Facebook data from the beginning of time - instead they keep the data on their servers and migrate it to the latest programs when necessary.)

Chrome is getting close to this. They release major versions on something like a monthly cadence and smaller updates more often.

Obviously, the closer you get to a service model like Facebook's, the easier and more appropriate it is to ship updates very frequently. Over time, though, the number of devs working on projects like this seems to be trending distinctly upward. I wonder what percentage of software will still ship in a way that looks like "shrink-wrapped" software in ten years.

> I agree you could have more feature gating, but large backwards-compatible file formats are a complex business already, with 10 releases over a decade - I can only imagine what it would be like to support reference-rich documents across many more releases, with the additional complexity of the sender and receiver having to agree on a feature set (unless you make the feature set/flight implicit in the data - but that's a new kind of headache). We already have tons of code in new versions dealing with loading malformed data in old formats, because of bugs closed years ago! Every document we ever wrote, we must also be able to read.

I would think binary formats would be pretty stable even if you released very frequently. I don't mean that this would happen naturally; I mean it should probably be mandated. You shouldn't need to modify the binary format constantly for other work to happen, and it would be a maintenance nightmare if every little release modified the file format.

This is kind of like shipping a binary client: you wouldn't ship the client as often, specifically because of the maintenance cost (namely compatibility testing). Obviously, if you couldn't do any work without changing the file format, this would become problematic. But then I would wonder why your file format is so brittle and so tightly coupled to the rest of the app, and if I wanted to release rapidly, I'd invest in fixing that first.
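As a sketch of the discipline I mean (an illustrative format, not any real product's): put a version field up front, keep a frozen reader for every version you ever shipped, and let new releases add readers without ever editing an old one:

    import struct

    MAGIC = b"DOC1"  # made-up layout: 4-byte magic, u32 version, payload

    def read_v1(payload: bytes) -> dict:
        # v1 payload is plain UTF-8 text. Frozen forever: once v1 files
        # exist in the wild, this function never changes again.
        return {"text": payload.decode("utf-8"), "n_refs": 0}

    def read_v2(payload: bytes) -> dict:
        # v2 prepended a little-endian u32 reference count.
        (n_refs,) = struct.unpack_from("<I", payload, 0)
        return {"text": payload[4:].decode("utf-8"), "n_refs": n_refs}

    READERS = {1: read_v1, 2: read_v2}  # grows; entries are never edited

    def load(blob: bytes) -> dict:
        if blob[:4] != MAGIC:
            raise ValueError("not one of our documents")
        (version,) = struct.unpack_from("<I", blob, 4)
        if version not in READERS:
            raise ValueError(f"written by a newer release (format v{version})")
        return READERS[version](blob[8:])

    # Every document we ever wrote, we can still read:
    old = MAGIC + struct.pack("<I", 1) + "hello".encode("utf-8")
    assert load(old) == {"text": "hello", "n_refs": 0}

Frequent releases would then mostly leave the format version untouched; bumping it stays a rare, deliberate event with its own round of compatibility testing.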



