Hacker News

An engineer can get things done quickly but that doesn't mean it's done correctly and won't break (and someone else will have to fix it).

If something doesn't work then it's not done.

To complicate things more, a feature might seem to work perfectly, until a certain point when you realize that it's all wrong at an architectural level and you have to rewrite everything.

If the requirements have changed then a new piece of work is needed. That doesn't invalidate the previous work; it just means things have changed.




There is a difference between something not working and something prone to breaking in the future. Changes are fine, but your architecture may be one that accommodates changes with ease, or one that makes any change extremely painful.


There's a difficult balance between "future-proofing" and "over-engineering"; I'm not sure which is actually worse. Given how often I've seen a client happily run a prototype of some code in a production environment for years, I tend to err on the side of thinking future-proofing is a bit of a waste of time. This is especially true if there isn't a clear roadmap stating when something is going to change, and the cost of that change isn't already committed to the project.


IME the best architectural designs are usually the result of heavy retrospective refactoring, not up-front design.

Thus future-proofing isn't just a waste of time, it's actively counterproductive. The predicted requirements are rarely correct, yet the extra code remains a liability.
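A minimal sketch of the liability the comment describes, using a hypothetical exporter (the function names and parameters are invented for illustration): the "future-proofed" version carries options nobody asked for, which still have to be read, reviewed, and maintained, while the simple version does the actual job.

```python
# Hypothetical "future-proofed" exporter: most parameters anticipate
# requirements that never arrived, and the non-CSV branches are dead,
# untested code that remains a maintenance liability.
def export_future_proofed(data, fmt="csv", delimiter=",",
                          compress=False, schema_version=1):
    if fmt != "csv":
        # Speculative branch: never exercised in practice.
        raise NotImplementedError(fmt)
    return delimiter.join(str(v) for v in data)

# The version that matches the requirement that actually exists.
def export(data):
    return ",".join(str(v) for v in data)

print(export([1, 2, 3]))  # 1,2,3
```

Both produce the same output today; the difference is purely in the code that has to be carried along.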


> IME the best architectural designs are usually the result of heavy retrospective refactoring, not up-front design.

Right. And this applies to other disciplines too. "Heavy retrospective refactoring" is exactly what a writer does each time they go back and make edits to a piece. Connections between elements such as characters, themes, writing style, and dialogue are felt out only by emitting draft after draft.

Even this comment went through many edits to get into this shape. These two sentences alone have been rewritten ten times.

In my experience there's also value in imposing a framework once one has done enough emitting of raw material. For code this might be looking for ways to extract common logic out of several similar functions. For writing this might be considering how two characters have "interacted" so far and what other details ought to fill out their relationship (without considering whether or not all those details will manifest overtly in the work.)
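The "extract common logic out of several similar functions" move can be sketched as follows (a hypothetical example; the report functions and field names are invented for illustration):

```python
# Two near-identical functions written while emitting raw material:
def sales_report(rows):
    total = sum(r["amount"] for r in rows)
    return f"sales: {len(rows)} rows, total {total}"

def refund_report(rows):
    total = sum(r["amount"] for r in rows)
    return f"refunds: {len(rows)} rows, total {total}"

# Retrospective refactor: the shared skeleton is pulled out only after
# the duplication becomes visible, not designed up front.
def report(label, rows):
    total = sum(r["amount"] for r in rows)
    return f"{label}: {len(rows)} rows, total {total}"

rows = [{"amount": 10}, {"amount": 5}]
print(report("sales", rows))  # sales: 2 rows, total 15
```

The point is the direction of travel: the abstraction is discovered in the drafts, then imposed, rather than predicted in advance.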


This is where experience comes into play though. Not every bit of software is totally unique and built on first principles. If you're working at some place and built similar software at a previous place, part of the up-front design is going to be driven by retrospection on previous projects.


But wait a minute - shouldn't most of one's future-proofing be making sure you CAN easily alter a design later if need be (including better documentation), especially in the ways you can already see change coming?


Cleaning up existing code will make it easier to alter designs later and that's something you should continually be doing as you implement features and fix bugs, but that's not pre-emptive architecture.


As I say elsewhere here, I've killed projects assuming that. Pulling out bricks from the bottom and replacing them is sometimes so hard, you really have a dead project.


Seconded. Premature design is worse than premature optimisation.

http://wiki.c2.com/?YouArentGonnaNeedIt


There may be, and often are, valid business reasons why it's more important to get X working now in a way that's prone to breaking in the future than to spend more time doing it properly. That choice is not really up to the developer; often they don't have enough information or context to make an informed decision. A developer who always takes the time to ensure that things are done right is (in most industries) doing suboptimal work by definition, since one shouldn't always do so; it's a tradeoff.


I once wasted a hell of a lot of money obeying just such an order, and not doing the extra coding I knew would prevent inadvertent huge purchase X. It turned out the boss just didn't know how to do complex logic, how to listen, or how to think ahead. The break, when it inevitably came, was catastrophic, but it had the good effect of removing said boss (I was long gone).


I agree with this sentiment of weighing short- vs long-term needs. However, if you are working in a medium to large company, not everyone is benevolently doing things just for the sake of the company. Managers often demand sacrificing software quality for their own career climbing, even if it will be a liability to the company as a whole.


Prisoner's Dilemma. A very real thing in management structures. One historical example: middle managers under pressure from the top cheated on X-ray testing of the work on a silo for a nuclear plant. The big boys didn't know. It killed the plant and, I think, the company.


That's where the "smart" comes in.

It wouldn't be smart to build something that is unmaintainable or prone to breaking in the future if maintainability was what was needed, would it?



