Evidence-based software engineering: book released (coding-guidelines.com)
290 points by qznc on Nov 12, 2020 | 118 comments



> "In a competitive market for development work and staff, paying people to learn from mistakes that have already been made by many others is an unaffordable luxury; an engineering approach, derived from evidence, is a lot more cost-effective than craft development."

I disagree with this characterization. Yet again, we developers are being told to just follow the program: that programming is not an artistic or craft endeavor that benefits from experience and intuition, that it is like working in a factory where coders just bang out widgets on an assembly line, and that the self-appointed thinkers will optimize the process for us.

What is at risk by not allowing developers to "learn from mistakes" is autonomy. Stripping developers of their autonomy is the primary cause of poor performance, not an inability to execute so-called "best practices".

Attempts to codify the process of software development always fail, because coding is a design process not a manufacturing one. Developers do their jobs in many different ways, many of which are equally effective. There is more than one way to skin a cat -- especially in creative work.

> "The labour of the cognitariate is the means of production of software systems"

This false assumption is at the base of the problem. The work of the compiler (or scp) is the means of software production. Coding is design. Once the design is complete, the results are compiled and copied to their target environments. In software, production is negligible, which promotes the misconception that developers are producing software. In actuality they are designing software. The difference may seem subtle, but it is crucial.


Sorry to be the killjoy here, but

This is a big mischaracterization of what "software engineering" is attempting. In this case it is an attempt to document what should be learned from experience and intuition. The vast majority of software engineering IS "factory" work, because it's the grunt work of building what should be a well-understood system with well-understood tooling. This doesn't mean that there isn't plenty of space for creative problem solving, particularly when the system is under-specified; it just means that the correct solution, most of the time, is the boring one.

Most companies don't want "software artists" any more than they want "artistic bricklayers" or "artistic aerospace engineers". What they want is predictable, maintainable, error-free software that still works when the "artist" moves on.

That doesn't mean that you're not free to upload a ton of artistic software to your github, or have opinions about how something should be designed. It just means that a professional should choose the accepted method over the fun one when given the chance. And why shouldn't they? In very few cases are "software engineers" being paid to work on their own pet projects; the end result is going to be something that the company owns and is responsible for, not the individual working for said company.


One aspect of the problem is that whenever I need to build a particular system, the parts of that future system that are well understood, with well-understood tooling, take almost no work - those either are available as part of a well-known framework or can be trivially included from a well-known library.

All the work that remains - all the work that consumes most of the time of programmers - is the work where either that part of the system is not yet well understood, or well-understood tooling is not available.

I'm not seeing the "factory" or "bricklaying" work anywhere; it has been automated or converted to reusable components or frameworks. Whenever I see people spending a lot of time on systems that could be "factory" work, it could only be "factory" work if the system requirements and details were properly understood - and these people were mostly working and re-working their understanding (and misunderstanding) of what's actually needed. There's no way around that. Once they've understood it, any "factory" work part is quick and trivial to the point of being irrelevant in metrics.

So my point is that everything we need from software engineering practices is tools and practices to manage the part that isn't "factory" work: the process of understanding the details of the systems that we haven't built yet.


There is a third class of problems in coding that, at least for me, takes most of the time - definitely more than exploratory work. That class is what I call code bureaucracy - the part where you write code that shuffles and transforms data between components, or manages components. The components themselves may be well understood and already built, but they're probably nowhere near each other and have incompatible interfaces.

Note it's not the same as "glue code". Writing glue code is a small subset of code bureaucracy, that happens only at the boundaries. True code bureaucracy begins when you discover that two components that need to communicate are on opposite sides of the system, and you need to carefully route a new path between them through half a project.
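To make the idea concrete, here's a minimal sketch of that kind of bureaucracy in Python (all names and data shapes are hypothetical, not taken from any particular system): two components that each work fine on their own, plus the reshaping code that lets one's output reach the other's input:

```python
# Component A reports metrics as a flat dict keyed by "host:metric".
def collect_metrics():
    return {"web1:cpu": 0.42, "web1:mem": 0.81, "db1:cpu": 0.1}

# Component B expects records grouped by host.
def render_report(hosts):
    return "\n".join(
        f"{host}: " + ", ".join(f"{m}={v}" for m, v in sorted(metrics.items()))
        for host, metrics in sorted(hosts.items())
    )

# The "bureaucracy": pure data reshaping so A's output fits B's input.
# Neither component is hard; the work is in routing and transforming.
def adapt(flat):
    nested = {}
    for key, value in flat.items():
        host, metric = key.split(":", 1)
        nested.setdefault(host, {})[metric] = value
    return nested

print(render_report(adapt(collect_metrics())))
```

In a real codebase the equivalent of `adapt` is rarely one tidy function; it's spread across the path between the two components, which is what makes it time-consuming.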


You are using "factory produced" code every time you use an optimizing compiler for a typical general-purpose language.

Compiler optimization phase ordering is NP-hard for every program being optimized and compiler writers cannot abstract that away to reusable components or frameworks.
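A toy illustration of the phase-ordering point (my own sketch, not drawn from the compiler literature): even with just two rewrite passes, the order you run them in changes the code you end up with, and in general no fixed order is best for every program.

```python
# Expressions are tuples: (op, left, right), or a leaf (name/number).

def add_to_mul(e):
    """Rewrite x + x into 2 * x."""
    if isinstance(e, tuple):
        op, l, r = e
        l, r = add_to_mul(l), add_to_mul(r)
        if op == "+" and l == r:
            return ("*", 2, l)
        return (op, l, r)
    return e

def mul_to_shift(e):
    """Strength-reduce 2 * x into x << 1."""
    if isinstance(e, tuple):
        op, l, r = e
        l, r = mul_to_shift(l), mul_to_shift(r)
        if op == "*" and l == 2:
            return ("<<", r, 1)
        return (op, l, r)
    return e

expr = ("+", "a", "a")

# The same two passes, in different orders, produce different code:
assert mul_to_shift(add_to_mul(expr)) == ("<<", "a", 1)  # cheap shift
assert add_to_mul(mul_to_shift(expr)) == ("*", 2, "a")   # multiply remains
```

Real compilers juggle dozens of interacting passes, which is where the NP-hardness bites.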


> Most companies don't want "software artists" anymore than they want "artistic bricklayers"

I recently had brick laid in front of my house. When the bricks arrived, they were not cut with the precision that the landscaper had expected. They were mostly rectangular, but with enough variation in size that laying them naively (assuming they are interchangeable) would result in an ugly, uneven walkway.

The landscaper worked out what order to sort + lay them in to minimize the impact of the uneven sizes. I'm very happy he did; my walkway looks stunning for it.

If you are looking to get landscaping done, live in western Massachusetts, and appreciate someone who takes pride in his work and will take the time to do things right, I have a glowing recommendation for you. (Do be prepared to chase him down about the paperwork on both ends of the transaction; the business side of the business was not his forte). My email is in my profile.

---

I am not sure if that exactly contradicts your point. You said that's not what companies want. As a human, though, I very much appreciate work well done, when the real world inevitably defies our abstractions.


Strongly disagree: software engineering is nothing like factory work. Or rather, companies who view it as such and only have "predictable, maintainable error free software" as their objective will have their lunch eaten by disruptive companies who understand that software is a creative tool to deliver value and should be treated as such.

You should read

https://iism.org/article/why-are-ceos-failing-software-engin...

> Applying replication management techniques to creative work is a systemic mistake that actually disadvantages companies that are developing software – this is because the skills, workflows and founding principles for discovering new value are almost entirely different from those needed to efficiently replicate value.

> It should be clear that not having an effective creative management system is a huge gap in the value fulfillment cycle. If this gap remains unaddressed, the market will solve the problem by investors, over time, moving their money to entities that are discovering new value at higher rates. So, if you have this kind of systemic void, it’s really in your best interest to address the gap as soon as possible, and I would encourage you to give some serious consideration to empowering an effective creative management system.


How are you going to sell unpredictable software to a customer, and if you do manage that, how are you going to keep that customer without maintainable software?


Ask Facebook? "Move fast and break things" is in a literal sense orthogonal to predictable, maintainable software.

If you are adding value and focusing on your customer's needs, they will put up with a little more unpredictability and extra maintenance.

Again, you should really read that article, it says it better than I can summarize in an HN comment.

Part of its definition of value is that you're addressing unmet need at a rate higher than you are introducing chaos.

So the levels of predictability and maintenance in your software are tradeoffs of other ways to add value (insert "always has been" meme here). They are neither necessary nor sufficient for delivering value.


Facebook changed that in 2014 because it didn't work.


They didn't say anything about unpredictable software. There are things you can do to systemize the software development process but if you go too far you reduce the ability of your programmers to find the optimal solutions. Creating and managing complex systems is an art.


You strongly disagree based on feelings and conjecture?

As engineers, we should know to only use strong words one way or another when overwhelming evidence supports our claims, which is clearly not the case here.


Sorry, where did my feelings come into play here?

I cited an article with a fairly persuasive argument around the value fulfillment cycle of creative work such as software engineering and quoted a bit from it.


> Most companies don't want "software artists" anymore than they want "artistic bricklayers" or "artistic aerospace engineers". What they want is predictable, maintainable error free software that still works when the "artist" moves on.

Well that's a bloody shame because they have to hire actual humans, and we're a smelly, imaginative lot. If they want an iron they should go to Target.


The iron is the shitty no-code, low-code solutions, and I imagine it'll follow the same path the iron has. It hasn't changed much, and it does the job, but if you want your stuff to look really good you take it to a specialist: AKA a dry cleaner, who follows on the facade a similar process that is in reality entirely different.


Exactly. I'll keep doing the creative work and let the brick laying happen on UpWork. If they confuse me for one of those, then it's their loss and I'm moving on.


There are so many falsehoods in this, I don’t know where to start.

1. Adding missing specs is less creative than designing a software system. It’s weird that adding specs to a product requirement is your example of creativity.

2. The correct solution is the boring one? What on earth are you basing this assertion on? Is your worldview so reductionist that you actually think this kind of statement contains any true or valuable information?

3. Predictable, maintainable, error-free software is more dependent on choosing/formulating the correct abstractions (which is very much a creative process) than following a checklist of “bricklaying” steps.

4. You seem to be confusing boring with correct and fun/pet with creative. Neither of these are equivalent.

It’s a shame because I assume you hold these opinions out of ignorance, which means you’ve never had the chance to build software in a creative, challenging environment where following a checklist would certainly result in failure. And I sympathize with anyone who hasn’t gotten to experience what’s truly special about this profession: the creative process.


Companies don't want employees at all.

But there is what you want, and there is what you can get.

> It just means that a professional should choose the accepted method over the fun one when given the chance.

Er, but that's rather what's being argued about, what the accepted method is. Funny that everyone making that argument tries to smuggle it in as an assumption.


> What they want is predictable, maintainable error free software that still works when the "artist" moves on.

Producing such software is the essence of the art! Managing the complexity of large software systems as they grow demands the ability to creatively invent elegant abstractions that factor that complexity into manageable chunks. Without that factoring, the complexity grows until no one understands the code -- with entirely predictable, and sadly familiar, effects on maintainability.


> software engineering IS "factory" work because its the grunt work of building what should be a well understood system

So what are you implying here? That we should go the Waterfall route and have 100% clear and finished specs? Good luck with that buddy. You're just shifting the real problem outside of your "software engineering" space.

In my 20 year career, I've never built a "well understood system".

Some things can be built 10 times faster with the right idea. Some features can be supported by a single, simple one. Some screw-ups can take forever to fix. 100% code coverage won't give you bug free software.
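The code-coverage point is easy to demonstrate with a toy example (a sketch; the function is made up for illustration): a test can execute 100% of the lines and still miss a bug.

```python
def average(xs):
    # One line of logic -- a single test gives it 100% line coverage.
    return sum(xs) / len(xs)

# This test executes every line of average(), so coverage reports 100%...
assert average([2, 4]) == 3

# ...yet the empty-list input still crashes: covered code, untested behavior.
try:
    average([])
except ZeroDivisionError:
    pass  # the bug that 100% line coverage never saw
```

Coverage measures which lines ran, not which behaviors were checked.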

Building software is all about the skills of people (developers, analysts, managers, testers, ...). The better the people, and the more power you give them, the better your project will go. If you hire bad people and have to put processes in place so they don't screw up, you're in trouble.


Your post fixates on "artistic or craft endeavour" defined as something frivolous (i.e. "fun" solutions). I disagree with the use of the word artistic (and craft), but the word only appears once. Intuitively, the point is that development requires a lot of "design" work that is creative, and logically so; in the same way medicine is.

In your bricklaying example, how does a bricklayer operate without an architect? The equivalent of laying bricks is typing characters.

A developer has to decide what to type, not just type it.


"What they want is predictable, maintainable error free software that still works when the "artist" moves on."

How about fast software?

To get fast software today, you're going to need some artists.


I think the evidence shows that "fast" software is not actually desired by the majority of companies, otherwise they would put more emphasis on using compiled languages, optimisation and reducing technical debt. Sure, everybody will take a speed improvement if it's free but most companies will go for predictable, maintainable and error free software over speed every time.


A lot of that maintainability and lack of errors will come from proper design, having a good design where the complex problem is partitioned into many really simple problems, resulting in easily testable code. And that code is easy to optimize, because it's easy to make changes when you have a decent test support.

Now this software won't be as fast as what's possible, if that is needed the optimizations will turn it ugly again in many cases, but it will be decently fast. This all starts with good design, proper intuition about which algorithms to use, and a good dose of creative problem solving.


I spun up a very simple REST API that returned an input parameter, and ran it under load, using asp.net and express js. There wasn't any architecture or design; it was one function. Node/express had 10x less throughput than the asp.net version, with 99th-percentile latency multiple orders of magnitude larger than the asp.net version.

No amount of clever design will outrun that gap.


Of course you can hide the biggest complexity behind a single function.

What I was getting at is that if I see code with a lot of duplication, often it's not only the code that's duplicated, but the runtime work as well. Then you have people using nested for loops where a dictionary lookup would do.

Things like this, in the same language and same framework, make things on average faster, and simpler.
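As a concrete sketch of the nested-loop-versus-dictionary point (Python just for illustration; the data and names are made up), the two versions below compute the same join, but one scans every record for every lookup while the other builds an index once:

```python
# Hypothetical data: join 5000 orders against 1000 users by id.
users = [{"id": i, "name": f"u{i}"} for i in range(1000)]
orders = [{"user_id": i % 1000, "total": i} for i in range(5000)]

# Quadratic: scan all users for every order -- O(len(orders) * len(users)).
def join_nested(users, orders):
    out = []
    for o in orders:
        for u in users:
            if u["id"] == o["user_id"]:
                out.append((u["name"], o["total"]))
                break
    return out

# Linear: build a dictionary once, then look up by key -- O(n + m).
def join_dict(users, orders):
    by_id = {u["id"]: u for u in users}
    return [(by_id[o["user_id"]]["name"], o["total"]) for o in orders]

# Same result, very different running time as the data grows.
assert join_nested(users, orders) == join_dict(users, orders)
```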

And if that piece of badly architected code is fragile and breaks on every change, you will stop trying to find even the low hanging fruit of optimizations that usually pay off, because every change affects multiple different files and you have no tests to see quickly if it still works.

Thinking about your reply again, it's actually the perfect example of this. Since it was a one-liner, it was so simple that it was easy to swap in a different one-liner in a different framework just to see if it was faster. If you had to spend weeks replacing frameworks, you wouldn't have been able to try that low-hanging optimization at all.


You've totally missed my point here - the fact is that no amount of low hanging fruit optimisations in the node app, or architectural improvements, or clean code will _ever_ close that 10x gap. Almost all of the decisions you make after the fact (e.g. DRY, DI, composition, and even in many cases data structures) are meaningless on both a macro and micro level compared to the very basic decisions you make - what does it run on.

[0] https://stackoverflow.com/questions/40070631/v8-javascript-p...


What matters for companies is whether they can accomplish tasks faster than they can do them today. And it's rare that the difference between executing Python or C code is the bottleneck in that process.


"To get fast software today. You're going to need some artists." I don't think that's been true for a couple of decades. Performance issue resolution is now less "The Story of Mel" and more "throw processors, memory and bandwidth at it until it has what we need." Security requirements are more likely to be your limiting factor than raw speed. There are certainly some niche areas where you're working within tight resource limitations (I work in embedded stuff), but Moore's law means we can get MS software to run on an SoC, which means we stopped needing artists a while back.


Sounds like the software systems you are talking about are so mainstream and well understood that it might be simpler to buy something off the shelf. Now that more and more pieces are available as SaaS solutions, my intuition is that organizations should only focus on those pieces that are unique to the problem they are trying to solve. Everything else should be acquired as a SaaS solution.


And yet it is exactly in the most 'boring', 'tool-constrained', 'factorized' and 'process-governed' environment of enterprise software projects that the failure rates reach spectacular heights.


The factory work is the sort of thing that has been written into frameworks these days. Learn to use the appropriate one well and it should take away much of the grunt work.

> What they want is predictable, maintainable error free software that still works when the "artist" moves on.

That is where the craft of software engineering comes into play. It's relatively easy to make code that works. It takes a lot of experience to consistently write code that is intuitive for the next developer to understand quickly. Keeping the complexity low while keeping the same functionality is an art.


> Because, the vast majority of software engineering IS "factory" work because its the grunt work of building what should be a well understood system, with well understood tooling.

I'm not sure it is, depending on which majority we're talking about.

We size by complexity because that's where the time goes: the extra testing, the rework, the long reviews, the risk.

If there are cookie cutter lessons to be learned software tends to abstract those problems away to the point they're no longer problems.


While I agree with some of your arguments, I think your main mistake is calling it "factory" work - because IMO it isn't a valid analogy (and analogies are powerful! :) ).

"Factory" work implies a far more constrained and monotone environment.

Craft work is more apt, I think.

I mean, to illustrate: Most of the work might be seen as just connecting pipes, but I'll be damned if a highly skilled and experienced plumber isn't worth their weight in gold. :)


You're right, I should have been clearer; both factory and craft imply the wrong thing IMHO.

I think a better term would have been "skilled". Like the bricklayer or plumber, you don't want "creative" so much as reproducible quality, which comes from experience. So in that regard craftsman is partially right too. Traditional factory work, OTOH, had painters, machinists, welders, etc. All of these were highly skilled labor, but their jobs, like a blacksmith making nails or a plumber brazing pipe, were frequently just the repetitive application of that skill. One might even say that those painting reproduction shops are in this camp as well, despite being "art".

So it's the same with software: yes, you have a framework, but the act of building out forms using that framework to CRUD up some data somewhere is frequently quite repetitive, despite differing colors and button placement.


It seems you have a pet peeve about pet projects. The parent comment didn't say anything about pet projects.


If you pay software engineers to code up a well understood system with well understood tooling you're doing something wrong imo. You should instead pay for a COTS solution. Software engineers solve the parts of your business problems that don't have well understood solutions yet.


Actually, the book agrees with you. There's very little evidence to support having a rigid set of rules to follow when programming.

There are a few high-level take-aways from the recurring patterns seen in the analysis; these include:

• there is little or no evidence for many existing theories of software engineering,

• most software has a relatively short lifetime, e.g., source code is deleted, packages are withdrawn or replaced, and software systems cease to be supported. A cost/benefit analysis of an investment intended to reduce future development costs needs to include the possibility that there is no future development; see fig 4.24, fig 5.7, fig 5.52, fig 5.69, fig 6.9, fig 3.31. Changes within the ecosystems in which software is built also has an impact on the viability of existing code; see fig 4.13, fig 4.22, fig 4.59, fig 4.61, fig 11.76,

• software developers should not be expected to behave according to this or that mathematical ideal. People come bundled with the collection of cognitive abilities and predilections that enabled their ancestors, nearly all of whom lived in stone-age communities, to reproduce; see chapter 2.


Nice summary. Thanks. I might even skim read the book.

Sounds like a more rigorous version of Alistair Cockburn's post mortem on the non-success of CASE tools. I forget the title, but the punch line is: methodology, processes, and tooling are moot. Talented people are somehow able to make any given system work. He uses phrasing like "people are the first order determinant for success".

The book I want to read compares the folk theories and fads of software development with other democratized fields. Like education reform. How noobs, wannabes, rejects, and grifters fight over cheddar, slak, and fame. Where any consideration of form, fit, function, and ROI is ruthlessly punished and expunged.

Said another way, the only consistent thread in my career is an ever increasing number of people, who are constitutionally incapable of ever shipping product, telling me with complete certainty how I should ship product.


This is good to know. I should jump into the book. I may have gotten the wrong impression from the summary -- the quotes that I shared. I think a scientific effort to look at the many practices that are currently in vogue is very much needed. So many are taken as gospel truths and lorded over people in the name of science -- but they really aren't supported by any rigorous science at all. If this book points that out, then I am all for it.

Especially if the book has more a system thinking approach. Many studies isolate one practice (pair programming, code reviews, etc..) and can show benefits, but they ignore the systems they function in. Apparently opposite approaches, supported by the right personalities and environments can often be equally effective. From your comment, it looks like there may be some analysis like that too.


I like reading about ESE (empirical/evidence-based software engineering) for precisely these reasons. For example, someone found that SOLID programming didn't offer much of a benefit to code readability.

This is my favorite talk on it: https://www.hillelwayne.com/talks/what-we-know-we-dont-know/


I had “lost” that excellent talk. Thank you for posting this.


I didn't get the feeling that the author disagreed that programming has a crafty nature. But since building software also has some engineering aspects, he is pointing out that you (as an organization or project leader) could wait for the graduates that you hire fresh out of uni to mature into accomplished "craftsmen", or you could just observe what has already worked for others, based on evidence, and mindlessly apply those patterns. Your mileage may vary with the latter approach, but it will likely be cheaper and will remove some uncertainty from your projections.

Speaking as someone who has observed the kind of massive technical debt you can incur from letting your programmers mature one mistake at a time and ten reinvented wheels later, I would certainly not be too quick to dismiss the proposal.

> "The labour of the cognitariate is the means of production of software systems"

I fail to understand why you would disagree here. That sentence to me means that producing software systems requires brain work. Whether thinking, designing, coding, testing, debugging, reviewing, it's all part of the "labour of the cognitariate". Compiling isn't.


You have a valid point here. Maybe I was too hasty to judge the book by its summary. From other comments, it sounds like this book takes a pretty reasonable approach.

I am frequently skeptical of efforts to show the "one true way" of programming, because it usually results in some poorly vetted process being forced on me at work. So, I was probably too quick to jump to conclusions.

It actually sounds like this book is good about showing that a lot of our current assumptions about good process are faulty.

The point I was trying to make about the "labour of the cognitariate" was that developers don't really produce software; they create the blueprint for the software (source code), and then compilers or interpreters produce the actual software. It may seem like an overly semantic point, but I think it is an important distinction to make. It changes the way you think about software development.


> What is at risk by not allowing developers to "learn from mistakes" is autonomy. Stripping developers of their autonomy is the primary cause of poor performance, not an inability to execute so-called "best practices"

I've seen a lot of the opposite. Yes, coding is a design practice, but I've had to clean up a lot of messes resulting from just plain bad design because nobody involved - generally ~25 year olds with very little experience out of school - knew that there were lessons from the past they could learn about what designs would and wouldn't work.

I agree with you that programming is an endeavor that benefits from experience, and wish that people would realize that means they can learn from the experience of others. Sure, intuition is involved too, but one common thing I've seen in shitty code I've had to salvage is that people often don't apply their intuition to "how could this code fail" or "how easy will this be to modify in the future"?

That said... taking a look at this book... I don't see much in the description or table of contents that would teach those folks whose work I'm decrying above much useful about writing good software. It has sections on reliability, project estimation, and development methodology as separate things - plus a lot of non-software-design stuff. But to me, the flow is different - estimation, reliability, and delivery will all suffer if you don't have the right fundamental design skills. You can't get much better at any of those without some deeper underlying changes.

It seems to have a lot of discussion of studies adjacent to software-related things, but I'm not sold on them saying much meaningful about software design.


"and delivery will all suffer if you don't have the right fundamental design skills"

Some people will never get them. And maybe they do not have to, if they are not involved (too much) in the complex design process, but rather in the implementation and testing of well-defined small modules.

So I believe it is very important to let people do what they are talented at.


It really depends on the kind of systems and work you're doing. In my previous office, there was an entire group (about 100-150) of programmers whose job was incredibly rote. That is, you could take a novice out of college and get them up to speed in about 1-3 months to do even their most complex work.

However, the other groups were much less factory-ish, though rarely anything truly novel. Only a small cadre of programmers were working on anything that really required novelty and creativity.

It's a spectrum and that has to be understood by all participants in the discussion. Managers want everything to be like that first group, because it's so consistent and predictable. They want to know that a project will take 1000 man-hours and be right 95% of the time. Many programmers want to see themselves part of the last group. The reality is, most of us are in the middle. There are aspects of the job that are almost mechanical, and aspects which require greater creativity or "craft". If we can clarify that for the managers, it can go a long way to ending some of the nonsense.


I recognized this 20 years ago, though my recognition was crude. What I noticed early on, in almost every place I worked, was that most of what could be called office politics for software devs/engineers boiled down to maneuvering to work on APIs, or even somewhere lower on the call stack.


Taking a chess analogy - every new chess player is taught the same basic principles - control the center, develop your pieces and king safety. Also, some endgames are theoretical. A queen and king against a king and pawn on the 7th rank will be solved the same way by everyone including Grandmasters. At GM-level there's also significant opening theory which has to be learnt.

Yet, everyone has fun playing chess and even Grandmasters continue to be mesmerized by it because of the complexity - there is always something new to learn. Also, GMs have access to the top chess engines and can be studying using the same engine but have radically different styles.

They live for the parts of their games that are novel and so do we.

If all of our development was rigidly defined, we would just automate it. We put up with the boring parts of our development because we know there'll be parts of it that we can design.

If you are a hobbyist developer or a student, by all means, explore on your own and learn from your own mistakes. But that is not efficient if you want to get to the forefront. No GM today will refuse to learn opening theory or study the endgames.

We will still create magic and have fun.


I read that sentence as "Failures should be documented. If something failed in the past and the underlying issue why it failed is still there, it's useless to attempt it".

Ex: Using language Y for X was a disaster because of Y didn't have hardware acceleration and we were unable to reach goal Z. Before attempting to use language Y in production again, make sure platform support has improved.


I wonder how many confounding factors there are though. At one point, when I had just joined a company, a colleague in the process of leaving told me very confidently "at our scale it is not possible to do X anymore". Obviously, we got X working the very next week.

What if using language Y for X was a disaster because of poor programming skills or micromanagers? In my experience it is very rarely as clear cut as "Y does not have hardware acceleration support" and there is not really enough rigour in the software engineering process to really figure out where a failure came from.


I also agree with that.

I think it's more about infusing an engineering mindset where there is an analysis of past failures.


Sometimes software development is just cranking out a widget that you need for some line-of-business app, that isn't going to need to scale, and will never see the cloud, and will just sit there used and unchanged until the tech underneath it changes. There is a lot of software that fits into this category, done by a lot of people. Not everything gets an "Artisanally crafted with <3 by $NAME" footer on it, or is solving some novel problem of mathematics or CS.

> Striping developers of their autonomy is the primary cause of poor performance, not an inability to execute so-called "best practices"

Citation needed for this one. I've worked with more than a couple of mid-level and "senior" developers who I would have sworn had never read a single article about best practices for stuff they did every day.

> In software, production is negligible. Which promotes the misconception that developers are producing software. In actuality they are designing software. The difference may seem subtle, but it is crucial.

This I agree with entirely but only for probably the top 20-25% or so of software. (Top as in most creative or most novel, not best).


Many years ago I would 110% agree that developing software was more art than science, and that the average software developer was working in a creative medium. But those days are generally over for the vast majority of software engineering needs. That was in the days when memory was measured in kB, the OS was a thin veneer over the hardware, your choice of language was mostly which dialect of machine language you worked in, and the fact that software worked at all was a miracle.

These days, with virtually unlimited memory, storage, and extremely high software reuse, the industry has effectively been bootstrapped to the point where the average software engineer is more of a tradesman than an artist - a plumber or an electrician, whose job it is to fit standard components into the space provided by the business logic. While there are elements of creativity to that job, it's no longer a truly creative medium for most jobs. This, in a broad sense, was the miracle of Java.

That's not to say that there still isn't the room for beautiful, elegant, code. But those tasks are rare, and they tend to target some low-level highly reusable code that the digital plumbers can reuse over and over without any nod to artistry.

If assembly line levels of engineering are ever achievable in software engineering, then it's most likely that software engineering can simply be mostly or entirely automated and the "engineer" disappears. But that's not where the field is yet. When that's achieved (maybe in the near future!), a user of a computer can simply ask the computer to build software to perform some function based on some conversational Q&A, and an entire, scalable software stack will appear out of the ether.

"Computer, I need to..." and software will appear that does that.


I read the entirety of a draft of this once, about a year ago, when I was bored one night. I don't want to be overly critical of someone's book, so let me just say that:

A) I don't understand why an entire elementary statistics pseudo-textbook is bolted on at the end, forming the entire back half of the text, and

B) the title interested me because it promised concrete information that would improve my own software development, and while I found many things in the first half of the book, I didn't find this.


The "statistics textbook" part is especially confusing to me -- who is the audience for this, and why is it even here?

He mentions at the beginning of the probability chapter:

"Readers are assumed to have some basic notion of the concepts associated with probabilities, and to have encountered the idea of probability in the form of likelihood of an event occurring"

This doesn't give us any insight into what background is expected here -- do I need to have taken a measure theory based probability course? Or would a calculus-based course suffice? What about a course for non-STEM majors? The word "basic" means different things to different people, and is generally an anti-pattern in mathematical writing. It's meaningless at best, and gatekeeping at worst.

Then he launches into this downright strange rabbit hole about mental models. None of this discussion in this chapter feels properly motivated, the writing is disjointed, and it reads like an unedited, non-proofread stream of consciousness. In particular, the exposition gets too distracted by its own examples.

Don't get me wrong -- many of the paragraphs in isolation are interesting, but taken together it's excruciating to read. It's an impressive project overall, and certainly ambitious, but it falls down due to the writer's lack of empathy for the reader. I suspect the situation would be better if they reduced the scope of the project and kept the writing more focused.


> I don't understand why an entire elementary statistics pseudo-textbook is bolted on at the end, forming the entire back half of the text

It's quite difficult to talk about empirical software engineering without discussing methods, after all papers like [1] were deemed necessary 20 years ago and still the occasional meta-paper is published about correct design of experiments or analyses. As someone who worked in the field it doesn't seem particularly surprising to see some treatment - there are a handful of papers in my former subfield that are oft-cited because they describe a statistical procedure/experiment design consideration, but they also bundle the explanatory stats "for free".

I would hazard a guess that the intent of these chapters is to equip the reader with enough background that they could replicate or run some of the experiments in the book, adapting its findings/experiments to their own organisations. I'd follow that with an assumption that the author felt that chapter 13 needed background, and recursed until they'd finished writing a textbook.

[1] Kitchenham et al. "Preliminary guidelines for empirical research in software engineering" 2001: http://www.ehealthinformation.ca/wp-content/uploads/2014/07/...


Can you expand a bit on point B? What sort of things did you find in this book, and why were you not able to apply them to your own practice of software engineering?


Sure. Chapter 1 is titled "Human Cognition" and reads like notes to an introductory psychology textbook. This is not a bad thing (psychology is fun to read about), but I didn't find any practical guidance about building software.

Chapter 2 is "Cognitive Capitalism." Here, I find a lot of social and economic commentary, but again, no information that will actually assist me in writing software.

The same story plays out for basically every other chapter, until we hit the material about statistics in the second half.

Another way to say it is that the level of abstraction feels a bit off, resulting in the sense that the discussion is somewhat superficial and disconnected from practice. It's like someone wrote a book on gardening that begins with a 200 page discussion of cell biology.


> concrete information that would improve my own software development

That’s what I was looking for when I clicked the link here. Do you have any recommendations for that kind of stuff, evidence-based if possible?


I would suggest the Clean Code/Coder/Architecture series of Robert Martin.


A bigger question is whether anyone would pay attention if it turned out some method offered a notable improvement.

I remember reading a bunch of code quality studies a few years back. A number of them were DoD studies from the 1980s/1990s, and the one thing that stuck out was a couple of studies that looked at style issues with Ada/C and noted that the matched brace style significantly lowered a couple of different defect types.

Yet today, particularly in the open-source world, matched brace style is almost completely non-existent, since it's a religious-war thing, and some of the early open-source projects were run by people who didn't like it.


On that question, I've quoted this here on HN before. From Donald G Reinertsen's Principles of Product Development Flow:

"I used to think that sensible and compelling new ideas would be adopted quickly. Today, I believe this view is hopelessly naive. After all, it took 40 years before we recognized the power of the Toyota Production System."

I read this book on the strength of some recommendation here on Hacker News. It is quite relevant to this general topic:

https://www.goodreads.com/book/show/6278270-the-principles-o...


Thank you for that book recommendation. Over the last year or two, I have independently started to piece together many of the things Reinertsen has collected in that book. Except, of course, Reinertsen has done a much more thorough job and the book points out some glaring holes in my understanding of the topic. I suspect this will be one I'll devour over the next few days.

Brilliant stuff. Thanks again.


I know 40 years seems long, but that's just one generation.

Looking at it from another perspective, that's incredibly quick for a useful idea to spread through the whole human race.

And some of that time is basically pre-internet, or at least widely accessible internet.


By "matched brace style" do you mean:

    while(true) 
    {
        println("hi")
    }


I love matching braces, if for no other reason than that they make traversing code easier. If you ever find yourself at the bottom of an awful function with multiple screenfuls of code, matching braces make it trivial to jump to the top of that function even when your editor has no special knowledge of the language. No LSP/etc. needed. Barebones vanilla Vi is actually a viable editor for Lisp because of this, relative to Vi with languages like Python.


This is one of the reasons that many Lisp developers like Lisp code - local structured editing is trivial because if the braces match, then most of the parser doesn't need to deal with the rest of the file.


> of an awful function with multiple screenfuls of code

If you've got that, you will have deeper and more abundant problems to solve than will be helped by easily matching up braces.


So, what about the claims in Code Complete that codebases with larger functions and methods tended to have lower defect rates? I’m not sure that the prevalent bias in favor of lots of small functions is borne out empirically.


I'm unaware of this and I don't have the book. Do you have any other references? This genuinely interests me. TIA


There is a similar concept in "A Philosophy of Software Design" by John Ousterhout. Large/deep modules with simple interfaces lead to less complexity than small modules with complex interfaces. It's a bit radical because it conflicts with the obsession we have for low level re-use and composability.


Expecting syntax to solve all your problems probably isn't very realistic. But at least as far as facilitating editor movement goes, matching braces work really well.


'All'? I never said 'all'. Why assume something I didn't say?

I have worked with codebases that have long functions, and they bury bugs, because long functions tend to mean copy/pasted code.


Your objection doesn't make much sense to me. I made it clear that obscene long functions are not something you want to have. That doesn't detract from the value of braces, since braces become an aid when remedying such a situation. They don't solve it, but they help you solve it. So your objection seems to be that they don't solve the sort of problem that you wouldn't expect syntactical forms to solve.


heh. That reminds me of JavaScript recently. We have optional chaining now, which quite a few developers absolutely love.

Diving deep into data structures is such a code smell, and yet so many people just want to spray the equivalent of Febreze over it. "Maybe with the right syntax for doing this ugly thing people won't notice how ugly this thing I'm doing actually is!"
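For anyone who hasn't used it, a minimal sketch of what optional chaining papers over (the `order` object here is invented for illustration):

```javascript
// Optional chaining (ES2020): each `?.` short-circuits to `undefined`
// instead of throwing when a link in the chain is missing.
const order = { customer: { address: null } };

const city = order?.customer?.address?.city; // undefined, no TypeError

// The pre-ES2020 equivalent, spelled out by hand:
const cityOld = order && order.customer && order.customer.address
  ? order.customer.address.city
  : undefined;

console.log(city, cityOld); // undefined undefined
```

Whether that counts as a convenience or as Febreze depends entirely on whether the deep access should have been there in the first place.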


Looking at the TOC this doesn't feel like evidence-based software engineering. This is simply because there seems to be way too much background in this book. It basically spans 4 topics:

- Cognitive (neuro)psychology

- Economics / finance

- Software engineering (chapters 4 to 7, it seems)

- Statistics

Don't get me wrong, I'll take a read, but not because I want to read about evidence-based software engineering. It's much more because it seems that you're connecting a lot of different areas together and I want to see how you do it. The second reason is because I happen to know a lot about these specific areas as well :)


Hi Derek, thank you for writing the book and releasing it. I haven't seen a similar book before for software engineering, so I'm very intrigued and intend to read it.

I feel bad for asking something from a free resource, that took a long time to make and put together, so please ignore it if it's a lot of trouble. Is it possible to publish epub or mobi formats of the book?

That would help e-readers format it better; PDFs are inflexible. If it's not easy, please ignore this request.

Thank you for writing the book.


If you’re looking for something a little more focused, I can heartily recommend Andy Oram and Greg Wilson’s “Making Software: What Really Works, and Why We Believe It”. It’s a great collection of literature reviews around lots of common software engineering practices.


Anytime I see 'evidence based', I can't help but think "my opinion for which I've cherry picked data".


Better than "my opinion for which I've not even attempted to see if the data can be contorted to support", which is still the stage we're at for a lot of our industry since it's such a new field.


My pet peeve with internet forums is when we counter n=1 studies with n=0 opinions


I'm not convinced that's right. Somebody who incorrectly believes their belief is backed by data may be less inclined to change their mind than somebody who recognizes that their belief isn't backed by data and consequently has a good chance of being wrong. In other words, the chance that your belief is correct or incorrect matters, but your inclination to reevaluate your belief is also something to consider. Cherrypicking data to support your beliefs or various forms of inadvertent p-hacking may make people feel quite certain of things that actually aren't true.


Have you checked out the book (PDF here: http://knosof.co.uk/ESEUR/ESEUR.pdf )?

There are 2000 footnotes referencing published studies.

It's not perfect, but hopefully there are enough different views all triangulating on "truth" to provide value.


Do you have any evidence for that? ;-)

The world is a complicated place and the truth is hard to find. Unintentional confirmation bias is a thing. Intentional (at some level) cherry-picking to support your preferred answer is a thing too. But that doesn't mean that nobody ever learns anything from looking at data.

If someone claims to be "evidence-based", then their intention should be given the benefit of the doubt. They may be affected by confirmation bias, but that doesn't mean their conclusions are wholly baseless. You should take a look and see how they're doing before dismissing them out of hand.


I also find it quite strange. I remember my teachers telling me that we're making a little bit of progress in software engineering research, but that comparing it to the state of research in fields like math or physics would be like comparing a fully grown man to a toddler.

In other words, we hardly know anything


We may hardly know anything, but comparing any field that involves humans to math or physics is an apples-to-oranges comparison. All it takes to do math research is time, knowledge, writing material, and, preferably, the support of an institution that allows one to do research full time. There are no pesky experiments to run, except possibly inside a computer. Physics can be the same way, but even when it involves "pesky experiments," those experiments depend only on the physical laws of the universe, and the labor of a fleet of graduate students who have great incentives to see them through. SWE research has neither of these properties.


It is a common cognitive approach to argue that things are unknowable, especially if evidence is currently pointing to a truth we wish were not so. Certainly it is very, very hard to get quality evidence in some domains, and software engineering is one of the hard ones. That does not mean it is impossible, and we should never dismiss things out of hand. We should have specific critiques for claims we think are wrong, and explain why.


I completely agree with this -- obviously we have to make decisions and build systems based on the data we observe and collect, but the idea that it's some incorruptible or unbiased exercise is one that I think is both appealing and naïve.

By assuming there's a "right" way to arrive at conclusions, people run the very dangerous risk of believing they've got the One True Answer and everyone else is Wrong.

I think reality is a lot closer to the idea that there is no objective/rational way to decide what information is relevant and what information isn't, and we have to constantly adjust that dial in everything we think about and do.

There is an article about a group of people who have greatly reduced emotional capacity (not sociopaths, something more severe), and are therefore more or less rendered incapable of making decisions. It's a fascinating read, but I can't find it in my brief searches. Maybe someone else knows what I'm talking about?

Edit: I found the relevant anecdote! From an NPR interview https://www.npr.org/templates/story/story.php?storyId=101334... about a book called "How We Decide":

"And I think one of the best examples of this comes from the work of a neurologist named Antonio Damasio, who in the early 1980s was studying patients who, because of a brain tumor, lost the ability to experience their emotions. So they didn't feel the everyday feelings of fear and pleasure. And you'd think, if you were Plato, that these people would be philosopher-kings, that they would be perfectly rational creatures, they'd make the best set of decisions possible. And instead, what you find is that they are like me in the cereal aisle, that they're pathologically indecisive, that they would spend all day trying to figure out where to eat lunch.

They'd spend five hours choosing between a blue pen or a black pen or a red pen, that all these everyday decisions we take for granted, they couldn't make. And that's because they were missing these subtle, visceral signals that were telling them to just choose the black pen or to eat the tuna fish sandwich or whatever. And then when we're cut off from these emotional signals, the most basic decisions become all but impossible."


It works both ways though.

When the data presented to us suits our current view or agenda, we somehow feel that it's valid. I remember when discussions about "open plans" or "estimates" were still a thing; without fail, "evidence based" works such as Peopleware or The Mythical Man-Month would be mentioned.

If this book turned out to agree with the view that "scrum is not really more productive", or "less meetings means more productive", or "the interview process is broken", then we'd all be really glad to refer to its data.


I don't, but maybe that's because I've mostly heard it in a medical context, to distinguish policies based on any evidence at all from policies based on gut feeling, of which there are a lot!


What's the alternative?


Stop pretending that there's an easy empirical answer to everything, and accept the nature of complex domains.


I'm having a hard time deciding if this is the work of a well-meaning but misguided obsessive, or GPT-3 generated pseudo-gibberish.

In any case, this book could use a great deal of editing. If there is wisdom in here, I don't want to dig through hundreds of pages to find it.


I’m glad I’m not the only one. I started trying to skim it and couldn’t figure out what the hell was going on. Granted, I didn’t give it a strong chance, but the title made it seem like something very different than what it is. I think.


"The craft approach has survived because building software systems has been a sellers market, customers have paid what it takes because the potential benefits have been so much greater than the costs."

Gotta disagree here. SW dev has never been a seller's market in the way that e.g. the property market or job market can be a seller's or buyer's market. There are powerful effects in the market making that a massive simplification. Firstly, platform network effects; think of the browser wars, for instance. Secondly, open source: a lot can be had for "free". Thirdly, lowest bidder: John Glenn famously commented, when asked what he thought about sitting on top of a rocket, "every component has been built by the lowest bidder". Fourthly, novelty: a significant amount of SW dev is about building things that haven't been built before, which can be more like a research project than e.g. putting up a new apartment block, or a bridge, or designing and productionising a new car.



So how do you measure performance or success of an engineering department within a bigger corp? Not to evaluate salaries or budgets, but to know, are we moving in the right direction? Are we improving or are we getting worse?

From what I see most metrics on an individual or team level can/will be gamed leading to worse outcomes.

Metrics on higher levels, like the level of the product or even the revenue of the company/product, depend on many other players outside of the control of the engineering team (product management, operations, QA, regulatory, marketing, sales, etc). This of course is due to the fact that delivering value to customers is a team effort.

But how can you know/show your department is on the right track?


Personally, having worked for a few big companies in my career, I believe it's absolutely possible for a company to be too big. It might be an unpopular opinion, but there absolutely comes a time in a company's existence when internal politics begins to undermine the mission of the entire organization.

Of course, I haven't worked everywhere and I recognize that this opinion is based purely on my own experience. But companies that generate billions upon billions in revenue seem to care less and less about functional process improvement and more about short-term profit gains. Because ultimately, the C suite is paid based on stock performance for most large companies, and that becomes the only metric that matters. In that situation, everything falls in line behind it.


In Accelerate the authors explain the construct they use in the State of DevOps reports. It consists of four metrics:

- Deployment frequency

- Lead time for changes

- Failure rate of deployments

- Time to restore service

These collectively capture some sense of "quality and quantity" of changes.

What they don't capture, obviously, are whether all changes are meaningful and valuable to the customer.

However doing well on these metrics is nearly a prerequisite for doing a good job of bringing the voice of the customer into the process, so maybe trying to capture customer value created is step 2.
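The four metrics above can be computed from nothing more than a deployment log; a rough sketch in Python (the record layout and every number here are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical deployment records:
# (commit_time, deploy_time, deploy_failed, service_restored_time_or_None)
deploys = [
    (datetime(2020, 11, 1, 9),  datetime(2020, 11, 1, 15), False, None),
    (datetime(2020, 11, 2, 10), datetime(2020, 11, 3, 11), True,
     datetime(2020, 11, 3, 12)),
    (datetime(2020, 11, 4, 8),  datetime(2020, 11, 4, 9),  False, None),
]

window = timedelta(days=7)

# Deployment frequency: deploys per day over the observation window.
frequency = len(deploys) / window.days

# Lead time for changes: mean commit-to-deploy interval.
lead_times = [deploy - commit for commit, deploy, _, _ in deploys]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Failure rate of deployments.
failures = [d for d in deploys if d[2]]
failure_rate = len(failures) / len(deploys)

# Time to restore service: mean deploy-to-restore interval for failed deploys.
restore_times = [restored - deploy for _, deploy, _, restored in failures]
mean_restore = (sum(restore_times, timedelta()) / len(restore_times)
                if restore_times else timedelta())

print(frequency, mean_lead_time, failure_rate, mean_restore)
```

The hard part in practice isn't the arithmetic, it's agreeing on what counts as a "deployment", a "failure", and "restored".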


Why do you want to measure the productivity of the engineers in isolation when their ability to deliver value depends on other people? Too often the answer to that question boils down to 'ammunition for the blame game'.

If your engineers can't deliver value because of, say, product, who cares how productive they are? Measure business outcomes - you will get what you measure. If engineering is measured by its ability to deliver business outcomes but the engineers can't deliver because of other people, the software engineers are going to let you know*, and it's an important thing to learn about and fix.

*this assumes you have competent managers and engineers who are willing to communicate with those managers, but if you don't have that, this is sort of a moot point.


What I appreciate about this is how it addresses the more managerial aspects - things I've been trying hard to wrap my head around so that I can better approach management with, say, data-based risk analysis built on existing risk-analysis algorithms, and ways to evaluate the efficacy of certain internal functions. I will very much be reading this thoroughly.

Now what I really need is a book on how to break through the ceiling and start communicating with the C's/board better (or at all, in some companies where managers and C's are black holes of information...). This is one of my largest career failures and I constantly think about it... I see failures that are managerial first but have second-tier effects of causing technical issues (technical debt being only one), yet everyone in the chain fails to know how to voice this to higher-ups, who then tend to assume it's all technical or team-based and do things like start replacing managers with PHBs.

Sometimes I wish I had gotten an MBA instead, just so I could navigate the culture and speak the language better. How can an engineer play at the C/board level if he has the desire and aptitude? Any reading recommendations would be appreciated. The older I get, the more I regret not "playing politics". Is there a "The Prince", but for SV?


> how to break through the ceiling to start communicating with the C's/board better

Anecdata and perhaps unfair but from my own observations, once you get to C-level, who is doing the communicating is much more important than how the message is conveyed.

There is definitely a 'club' mentality, usually the way you will get a voice is by building relations with C-levels that will 'adopt' you as their protégé/mentee. Once you have someone in the room who can say "we should listen to this person" then you will have a chance of actually getting info/influence upstream.

But to build those relations there is a fair degree of 'self-promoting' and being seen as the go-getter in the crowd. Silently doing your work and silently going above and beyond is usually just a recipe for disappointment. Another thing I don't often see discussed is that, as with a lot of other things, luck and timing matter a hell of a lot.


Accurate. People struggle to navigate this realm without being disingenuous, though. Stay true, but play the relationship game instead of the write-code game.


Until we start having a "lines of code" budget, nothing will be fixed.

If you want feature X, how many lines of code will it take? 2000? Oh, sorry, we are already spending 6000 lines of code on other features, so find a way to do it in 500 lines or defer the project.

Yes, just like anything, this can be gamed. But at least it would get product managers thinking about the complexity they are imposing and the effect that will have on other features.
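A toy sketch of what such a budget gate might look like (every number and name here is invented):

```python
# Toy "lines of code" budget gate; all figures are hypothetical.
BUDGET = 8000  # total LOC the team has agreed to spend on features

# LOC already spent per shipped feature (6000 total, as in the example above).
features = {"auth": 2000, "reports": 2500, "export": 1500}

def can_afford(new_feature_loc: int) -> bool:
    """True if the proposed feature fits within the remaining budget."""
    spent = sum(features.values())
    return spent + new_feature_loc <= BUDGET

print(can_afford(500))   # fits: 6500 <= 8000
print(can_afford(2500))  # doesn't fit: defer, or find a 500-line design
```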


Sounds like a way to form incredibly dense codebases.


Where’s the ‘conclusions’ section?


Is it available in a kindle format? mobi, or epub would be amazing!


Honestly, this sounds like a very interesting and necessary book.


Wow, what a wonderful pdf.


Is there a physical copy I can buy (couldn't find one) or do I have to build one from Lulu?


> I’m investigating the possibility of a printed version.

That's the 3rd sentence in the linked page.


Have you guys ever tried software engineering techniques that use no evidence? The current trend of the programming world is to worship data and science, but some of the most powerful programming techniques do not involve data or science.

A book taking another approach to programming: http://www4.di.uminho.pt/~jno/ps/pdbc.pdf

Some of you may have trouble distinguishing what is evidence based from what is not. Hopefully the following dichotomy clears up any misunderstanding: science is evidence based; math is not.

In the real world the ideal approach is to use both evidence-based and non-evidence-based techniques, but more often than not the non-evidence-based techniques are harder to understand and therefore underutilized.


Evidence-based X is the best X humans can muster... except for Love.


I find the use of evidence concerning. Software engineering best practices should be established by highly educated academics (Phds etc).


What are you trying to say? I honestly can't tell.

It sounds like you're saying that PhDs should decide what best practices are, without regard for evidence. If so, that had better be sarcasm, because it's patently absurd.

If your point was something else, could you clarify?


I assumed they were being sarcastic. Perhaps a reference to groups like the SEI (Software Engineering Institute).


That was too subtle for most people to pick up on. Congrats!



