I think the reason there's so much technical debt is largely because the amount it would cost to actually build quality software... is too high. We could not afford it. Like, as a society. Our society is built on crappy software.

I'm not sure that I agree. If by crappy you mean "not formally proven", then sure. Or if you consider floating point crappy, then we disagree on terms.

I think our industry is in a state where 98% of the code produced is just junk: unmaintainable, barely working, no future, career-killing garbage just waiting to fail at the worst time. This is tolerated because software victories are worth (or, at least, valued at) gigantic sums of money: billions of dollars in some cases.

I'm not sure how well we can "afford" it. Do we want to go through another 2000-03 bust? How much use is it to have massive numbers of people writing low-quality code, not because they're incapable but because they're managed specifically to produce shit code quickly in order to meet capriciously changing and often nonsensical "requirements" at high speed? I think it's great for building scam businesses that demo well and then fail horribly when code-quality issues finally become macroscopic business problems and eventually lead to investors losing faith. (Oh, and those failures are all going to happen around the same time.) I'm not sure that it's good for society to produce code this way. So much of the code out there is "totaled": it would cost more to fix or maintain it than to rewrite it from scratch. You can't (or shouldn't) build anything on that.




As a complete derail:

Floating point, as IEEE standard? Beautiful. Elegant. One of my favorite technical standards. Other than the +0/-0 thing, it's perfect.
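To make the +0/-0 quirk concrete, here's a minimal C sketch of my own (not from the post), assuming IEEE 754 / C Annex F semantics: the two zeros compare equal but remain distinguishable.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double pz = +0.0, nz = -0.0;

        /* The two zeros compare equal... */
        printf("pz == nz: %d\n", pz == nz);                        /* 1 */

        /* ...but they are distinguishable: division yields opposite
           infinities (assuming IEEE 754 / Annex F), and signbit() differs. */
        printf("1/pz = %g, 1/nz = %g\n", 1.0 / pz, 1.0 / nz);      /* inf, -inf */
        printf("signbit: %d, %d\n", !!signbit(pz), !!signbit(nz)); /* 0, 1 */
        return 0;
    }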

Floating point, as implemented? Ugh. You've got processors which implement some subset of x87, MMX, SSE, SSE2, SSE4, and AVX, all of which handle floating point slightly differently. Different rounding modes, different precisions, different integer conversions. Calling conventions differ between x32 and x64. Using compiler flags alone on Linux x64, you can make 'printf("%g", 1.2);' print 0. Figuring out the intermediate precision of your computations takes a page-sized flowchart: http://randomascii.files.wordpress.com/2012/03/image6.png
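As a rough illustration of the intermediate-precision problem (my own sketch, not from the linked article): whether the "surprise" branch below fires depends on compiler, flags, and optimization level, e.g. gcc with x87 math (-m32 -mfpmath=387) versus SSE2 math (-mfpmath=sse -msse2).

    #include <stdio.h>

    /* volatile keeps the compiler from constant-folding the sums */
    volatile double a = 0.1;
    volatile double b = 0.3;

    int main(void) {
        double sum = a + b;  /* stored to memory: rounded to 64-bit double       */
        if (sum != a + b)    /* recomputed: may be held at 80-bit precision on x87 */
            printf("excess precision: stored and recomputed sums differ\n");
        else
            printf("consistent: every intermediate rounded to double\n");
        return 0;
    }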

It's a mess.


The "mess" reflects the fact that choices exist, that is, it is the result of the different goals of the producers of compilers or the processors, not of the mentioned standards. What's not standardized can vary.

Compared to the pre-IEEE 754 state of affairs, the standard was a real success.

Re the article the linked picture comes from (0): unless you're building games, and as long as you're compiling with VC, your results haven't changed for more than a decade and a half. Newer versions of the compiler took care to preserve the results, and even VC6, released in 1998, happened to pick the most reasonable intermediate-precision settings, matching the SSE2 hardware Intel introduced in 2001.

0) https://randomascii.wordpress.com/2012/03/21/intermediate-fl...


You say "So much of the code out there is "totaled": it would cost more to fix or maintain it than to rewrite it from scratch."

If that's the case, why does such code still exist? If it's still running, then in some sense someone is "maintaining" it, at least to the extent of keeping the server it resides on powered on. In other words, someone obviously finds it cheaper to keep such code running as-is than to rewrite it (or to do more ambitious maintenance on it).

Even crappy horrible buggy code can be useful (in a business sense, or a "makes its users happier than if it didn't exist" sense), as hard as it is for us as developers to admit it.


One example: I used to work for a company offering a security-related product with crippling, fundamental security problems. The flaws ranged from improper use of cryptography and failure to validate external input to missing authorization handling and even "features" fundamentally at odds with any widely accepted definition of security.

This company continues to survive, and has several large clients. But the liabilities of the current code base are massive. Worse, the clients aren't aware of the deep technical problems, nor is there any easy way for them to become aware. In a very real sense, this company is making some money in the short term (I don't believe they are profitable yet) by risking their clients' valuable data.

In general, the grandparent's concern is that there are projects out there producing some revenue but which are essentially zombies. Every incremental feature costs more and more, and there's no cost-effective way to remove the sprawling complexity. The project will die, taking significant investor money with it.


Okay, you and I agree that most of the code produced is junk (not everyone in this thread does, I think!).

I agree that the junky code is going to bite us eventually.

But what do you think it would take to change things so that most of the code produced is not junk? Would it take more programmer hours? More highly skilled programmers? Whatever it would take... would it cost more? A lot more? A lot lot more? I think it would. And if so, that has to be taken into account when talking about why most code produced is crap.

I do not think it's because most programmers just aren't trying hard enough, or don't know that it's junk. I think it's because most places paying programmers do not give them enough time to produce quality (both in terms of time spent coding and time spent developing their skills). And if, say, 98% of code produced is junk because not enough programmer time was spent on it... that's a lot of extra programmer time needed, which is a lot of expense.

The OP's utopian theory is that with the right tooling it would take no more time, or even less time, to develop quality software. I think that's a pipe dream.


>>I do not think it's because most programmers just aren't trying hard enough, or don't know that it's junk.

Actually, that's exactly the reason.

Back in 2003 I was a sophomore in college and took an intro-level CS class, taught in Java. Back then we didn't have sites like Stack Overflow, so if you ran into issues during projects you had to find someone who could tell you what you were doing wrong. Oftentimes this person was the TA or the instructor, who had limited availability in the form of office hours. So it was super easy to get demotivated and give up -- which is indeed what made a lot of wannabe programmers (including me) switch majors.

Fast-forward ten years. We now have a plethora of resources you can use to teach yourself "programming." While this is good in the sense that more people are trying to enter the profession, it's not so good because when you teach yourself something complex like programming, it is often difficult to know whether you are learning the correct habits and skills. I've been learning Rails for the past five months and I spend a lot of time obsessing over whether the code I write is high quality, but that's only because I've been an engineer for six years and am well aware of the risks of building something overly complex and unmaintainable. In contrast, most people build something, get it to work, and call it a day. They don't go the extra distance and learn best practices. As a result, the code they produce is junk.


As long as the job of a programmer is to be a business subordinate, it will not change and we'll see crappy code forever.

Mainstream business culture conceives of management as a greater-than relationship: you're a lesser being than your boss, who's a lesser being than his boss, and so on. It is also inhospitable to the sorts of people who are best at technology itself. Finally, and relatedly, it conceives of "working for" someone not as (a) working toward that person's benefit, as in a true profession, but as (b) being on call to be micromanaged. The result is that most programmers end up overmanaged, pigeonholed, disempowered, and disengaged. Shitty code results.

If you want to fix code, you have to fix the work environment for programmers. Open allocation is a big step in the right direction, and technical decisions should be made by technical people. Ultimately, we have to stop thinking of "working for" someone as subordination and, instead, as working toward that person's benefit. Otherwise, of course we're going to get shitty code as people desperately scramble (a) up the ladder, or (b) into a comfortable hiding place.


"As long as the job of a programmer is to be a business subordinate, it will not change and we'll see crappy code forever."

Well of course that's the job of the programmer. The programmer is supposed to build something that does something useful. Most of the time, the primary value of the code isn't that it's GOOD, it's that it DOES THE THING. Oh, sure, at the level of (say) the Linux kernel you can almost think of it as code for the sake of code, but you walk back up the chain and you'll find a lot of people contributing indirectly because they want to do THINGS and they find that they need a kernel for those things.

But most programmers aren't at that far a remove from doing things; they work directly for a company engaged in doing something other than selling code. Management at that company wants things done. They insist upon this at a very high level of abstraction, that of "telling you to do the thing for them." You are a leaky abstraction.


Purpose and status aren't linked.

There are programmers who, without direct day-to-day management, produce code that is valuable to the business, and programmers who receive comprehensive managerial attention and produce code that is a net cost to the business.



