Hacker News

While Dijkstra sits in his ivory tower, patiently figuring out the most elegant algorithm, the startup world, as Zuck says, is all about moving fast and breaking things in order to find out whether we even need the algorithm in the first place.

When my boss comes to me with the next story, I can't tell him that, "Hey, I think I need two weeks for this as the system is becoming overly complex. I think I need some time to refactor the system into something simpler and more reliable..."

I've seen an example of this first hand. We had an image editor that let users manipulate and rotate layers like you can in Photoshop. Instead of using straightforward affine transforms, they came up with some weird, extremely complex implementation I've yet to get my head around. It isn't even documented how it works.

Finally, when the time came to make it behave differently, I reimplemented only the new behaviour using affine transforms and simple trig, and routed the flow through either the old code or the new one using flags.
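For illustration, here's a minimal sketch (my own, not the actual editor's code) of the affine-transform approach: rotating a layer about its pivot folds translate-rotate-translate back into a single 2x3 matrix. Function names and the y-axis convention are assumptions.

```python
import math

def rotate_about(cx, cy, angle):
    # 2x3 affine matrix ((a, b, tx), (c, d, ty)) rotating points by
    # `angle` radians around the pivot (cx, cy): translate the pivot
    # to the origin, rotate, translate back -- folded into one matrix.
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    tx = cx - cos_a * cx + sin_a * cy
    ty = cy - sin_a * cx - cos_a * cy
    return ((cos_a, -sin_a, tx), (sin_a, cos_a, ty))

def apply(m, x, y):
    # apply the affine matrix to a point
    (a, b, tx), (c, d, ty) = m
    return (a * x + b * y + tx, c * x + d * y + ty)

m = rotate_about(50, 50, math.pi / 2)  # 90 degrees around pivot (50, 50)
print(apply(m, 60, 50))                # a point 10px right of the pivot, ~ (50, 60)
```

The nice property is that scale, rotation, and translation all compose by plain matrix multiplication, which is exactly what the hand-rolled implementation was presumably reinventing.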

Not the most elegant solution, but that's how it is in real-world projects. It's only the output that matters. The internal complexity is not something the customer gives a shit about. They want their feature and they want it now. And they're not even sure what they want.

So the need for rapid iteration to discover what you need to build in the first place, and the quest for the most ideal and mathematically elegant solution, will always be in conflict with each other.




Dijkstra did a lot more for moving the programming needle than Mark Zuckerberg. Sure, Mark made more money, but that's not the only yardstick by which we measure achievement (fortunately!).

Without a bunch of people doing basic research none of us would have jobs.


I have great appreciation for people like Dijkstra and Knuth and the work that they do. They write operating systems, compilers and databases.

But coming down from the ivory tower of academia, do you really need to go to the trouble of writing your custom CMS in Haskell after proving its correctness in Coq?

As a further example, the successful company KissMetrics built the first version of its product in a month and shipped using SQLite (!!) as a DB.

In their own words, they optimized 100% for shipping speed.

Software doesn't exist in a vacuum; it plugs into a business.

The amount of rigor and research that must go into a piece of software depends on the domain and business requirements. To insist that all software be written to such a high standard is unrealistic and will never happen.

I'm yet another software engineer who (most of the time) writes CRUD apps as quickly as possible so my boss pays me on time. I have no pretensions of being a 10x or being the hero who drags the world out of the dark pits of poorly specified and mathematically inelegant software with his superior intellect.


> But coming down from the ivory tower of academia, do you really need to go to the trouble of writing your custom CMS in Haskell after proving its correctness in Coq?

Dijkstra was nothing like this. In fact, he was probably one of the more practical academics (if that even makes sense to some people). He advocated top-down, logic-based methods for proving correctness that were based on incrementally transforming a high-level program specification into a final product fit for implementation. He likened it to composing music.

He also deeply studied paradigms that were firmly imperative in nature. Besides his work on semaphores for concurrency control, his deductive system of predicate transformer semantics and the Guarded Command Language built on top are some of the most comprehensive notations for imperative languages. He was also on the team that developed ALGOL 60, so it's quite likely you owe most languages you've used partially to him.
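As a concrete taste of that notation (my example, not from the thread): in the Guarded Command Language, a loop is a set of guarded alternatives, any enabled one of which may fire nondeterministically. Dijkstra's classic greatest-common-divisor program is just:

```
do  x > y -> x := x - y
 [] y > x -> y := y - x
od
```

Starting from positive x and y, the loop terminates with x = y = gcd of the initial values, and the predicate-transformer semantics let you prove that directly from the guards.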

Also, your conception of an academic, via the "Haskell CMS verified by Coq" analogy, is largely a straw man, though I can understand why you used it. HN is pretty crazy about these things, so it's easy to get caught up in the thought bubble and think it's the norm.


Shipping a product with SQLite doesn't conflict with having well-structured code.

Also, it's fine to write a prototype to get to market faster. But building further on top of that prototype is dangerous, because it will become harder and harder to make changes to the system and still have it behave correctly. And once the incorrectness of the system gets beyond what the business will accept, it's very hard not to do a full rewrite, which is often very dangerous as well.

Granted, this is still what often happens ...


> I'm yet another software engineer who (most of the time) writes CRUD apps as quickly as possible so my boss pays me on time. I have no pretensions of being a 10x or being the hero who drags the world out of the dark pits of poorly specified and mathematically inelegant software with his superior intellect.

You could stand to learn something about the Hell of fighting other people's quickly written crap just to get your job done.


It is this approach that leads to the "Full Employment Act of Security Professionals" everywhere.


Part of being a good software engineer is knowing when to do the "wrong" thing (implement something poorly to ship fast). But you must be aware of the trade-offs. For early-stage companies/products that are struggling for a market it often makes sense to hack things, but at a company that has a large and growing customer base, it rarely does.


There are also tons of examples of companies going bankrupt because they did not think there would be consequences to shipping a half-baked product. Sometimes you need to prioritize shipping and sometimes you really need good product quality; it depends on a lot of factors and the business you are in.


Dijkstra doesn't do any work, he's been dead since 2002.


Well sure, but you can find your beliefs in ivory towers too. Also, argument is central to academia or my impression of it anyway.


May you be cursed to spend the rest of your days writing PHP.


I don't think that the user you're replying to is criticising research per se, but rather the irony of an academic talking about real-world development scenarios.


He addresses this point:

But apparently, many managers create havoc by discouraging thinking and urging their subordinates to "produce" code. Later they complain that 80 percent of their labour force is tied up with "program maintenance", and blame software technology for that sorry state of affairs, instead of themselves. So much for the poor software manager. (All this is well-known, but occasionally needs to be said again.)


When it comes to translations and rotations, most of the algorithms out there use 4x4 matrices, dual quaternions, axis-angle rotations, or Euler angles. There may be some emerging work using geometric algebras.

So when you say "extremely complex" I think quaternions, because i^2 = j^2 = k^2 = ijk = -1.

I once had the opportunity to refactor a lot of disparate implementations of 3D manipulations into a single math library, and migrate all the various algorithms to quaternion-based calculations. I actually had time to examine and understand a lot of different ways of doing the same thing and validate that they actually were all doing the same thing.
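To make the "quaternion-based calculations" concrete, here's a minimal sketch (mine, not that math library) of 3D rotation built directly from the Hamilton product that the identity above defines:

```python
import math

def quat_from_axis_angle(axis, angle):
    # unit quaternion (w, x, y, z) for rotation by `angle` about unit `axis`
    ax, ay, az = axis
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), ax * s, ay * s, az * s)

def quat_mul(a, b):
    # Hamilton product, which is where i^2 = j^2 = k^2 = ijk = -1 lives
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    # rotate vector v by unit quaternion q via q * v * conj(q)
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    return quat_mul(quat_mul(q, (0.0,) + tuple(v)), conj)[1:]

q = quat_from_axis_angle((0, 0, 1), math.pi / 2)  # 90 degrees about z
print(rotate(q, (1, 0, 0)))                       # ~ (0, 1, 0)
```

The appeal for a shared library is that every axis-angle, Euler-angle, or matrix representation converts into this one form, and composition is a single multiply with no gimbal lock.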

But then we all got canned when the contract was used to wipe someone's backside. I think the former customer tossed all that custom code in the trash and decided to do everything with MATLAB. I hope that's working out well for them. And by that, I mean I hope they are put into a persistent vegetative state during a prison riot.


"And by that, I mean I hope they are put into a persistent vegetative state during a prison riot"

That's a little more invested in your work than I think is healthy.


Ha!

See? It happens everywhere.


> The internal complexity is not something the customer gives a shit about. They want their feature and they want it now.

None of this is false. At some point, though, the internal complexity may grow to the point that you can no longer deliver the features they want right now. What then?


Then the good developers responsible for most of the codebase leave for somewhere else using their experience as CV points, and the firm rotates through large numbers of inexperienced or incompetent engineers (who can't get a better job) as the team inexorably grows to several times its original size, whilst the number of new features dwindles until everybody is doing maintenance on the spaghetti ball, and existing features start breaking slowly.


Wow. I've experienced exactly this process twice already, and I've only seen the inside of about six large companies. I didn't know it was such a common way of things unfolding.


Did you also have the "ticket racket"? This is when the CTO sets up some kind of ticketing system, and uses the increasing number of breakdowns (and useless "fixes" that break again with the next build) as a KPI to justify increasing budget. "We fixed 150 tickets today! My team is working overtime on this, they are really dedicated and mission-driven, their spirit and commitment is amazing." The CEO nods, suitably impressed - he really has a crack tech organization. He's getting a great return on investment on his highest budget department!

Bonus points for inserting a thick PM layer in between to "manage" the rapid expansion of tickets.


Yup. Another possibility (if a company is fortunate enough) is that they'll be able to snag a dedicated "maintenance developer" willing to deal with the mess for 2-3x the typical salary.


I see you've also used BackupExec.


That point is dangerously close. I don't know what to tell them. Not everyone appreciates the benefits of not adding ridiculous features and non-stop changes that overcomplicate the system while not adding much value. In fact, many of the features aren't even used.

Also, when a project is outsourced, the outsourcing firm is only happy that there are lots of changes, as that means more billable hours. They're not going to push back. And neither can the developers working there. This is yet another danger of completely outsourcing your project without any technical oversight.

Hopefully, I'll soon be working on something else. ;)


> I don't know what to tell them.

You have to manage expectations. Not just do whatever it takes to get things working, until they don't work/can't be changed anymore. It all looks the same to the people who haven't taken a look at the plumbing themselves.


    > When my boss comes to me with the next story, I can't tell him
    > that, "Hey, I think I need two weeks for this as the system is
    > becoming overly complex. I think I need some time to refactor
    > the system into something simpler and more reliable..."

Sounds like you should blame Zuck, not Dijkstra.


Your example is a tad ironic. Why wasn't the original system implemented in the straightforward way? Unless there's a good reason, your example is a pretty effective demonstration of what Dijkstra is warning about.


Yes, it is. But on the other hand, the system has worked all these years; thousands of customers have used it and got their jobs done. At the time, they needed quick prototypes to settle on a good design.


What happens when the cruft becomes so overbearing that you can no longer deliver features? Not only can you no longer deliver features, but you now have to communicate that the state of the system is unworkable and you need X weeks to fix it. That's a much more uncomfortable position to be in than saying, "Hey, to enable us to keep developing this system we need to allocate some time to refactoring. If we don't, there is a danger that the system becomes too internally complex to manage and starts to damage the reliability and flexibility of our product."


99% of software won't be around that long.


Sorry, but my experience shows the opposite.

I've worked in multiple jobs where some piece of software was an incredibly complex ball of mud, where things were left without refactoring because no-one currently in the company understands how they work, and no-one can spare the time to do the refactor properly. Where changing a method or property unexpectedly breaks some other, apparently unrelated feature. Needless to say, this made adding new features a pretty risky affair.

This happens all too frequently to believe in your 99% statistic. For example, it happens with some pieces of software at my current job.


It happens all too frequently because the 1% tends to be very long-lived. If 99% of babies died in the first year of life, all of your friends would still be more than one year old.


Consider that this essay is 33 years old and remains relevant today, such that we can still read it and appreciate the issues.

I wonder if Facebook will remain relevant for that long?


Given that Facebook isn't an essay - it provides different value from an essay, under different circumstances, to different people - I'm not sure where you're going with that question.


Of course your boss and customers only care about the output. But if you need to make sure your output is correct, or want to make it easier to add new things, a good design is crucial.


Part of the domain of application programming is about "agility" because, from the viewpoint of business, if they want something changed you should be able to do it fast.

My experience is that agility is best served by building on top of a well thought-out framework. For instance, shops that use Ruby-on-Rails have answers to most of the common problems that turn up. If you're dealing with pre-framework applications in Perl, PHP, ColdFusion, Java and who knows what, the simple task of adding a field to a form involves touching code and schemas that are all over the place.



