
Prevailing winds bring relatively clean air to most any city east of the Mississippi River especially in the winter time. Without nearby mountains, temperature inversions are pretty rare.

In late summer, air quality drops quite a bit as the wind drops off.


A few years ago I had to update some FORTRAN code written in the 1980s that did bond valuations and some other financial stuff. It hadn't been touched since it was written and was now running out of space because the number of bonds keeps increasing year after year.

I'd never done anything with FORTRAN before, so I spent a couple days reading about it and looking into how it works, fearful that I'd muck everything up trying to fix it.

I open the code and find that it was the most elegantly written software you could wish for, with fantastic comments and structure. I changed two numbers and was done, buying us another 30 years or so of rock-solid service.


A lot of young people think that the people who wrote software 30 years ago were just plain stupid and all wrote crappy code. In reality there were people back then who did a good job and there were people who didn’t. They just used different tools. I can’t wait for the current systems to become 30 years old. Making changes to micro services architectures written in different languages over multiple servers will be a lot of fun.


Necessity is the mother of invention and memory/resource scarcity is the mother of quality code

A lot of junk and spaghetti code gets written (or copied/pasted together) today because there are few hardware-based constraints in much of modern app and system dev

I work with a lot of embedded engineers and have the utmost respect for those that work on safety-critical, legacy systems


I don't believe that hardware constraints have anything to do with code quality. Running your code on a microcontroller will require your algorithms to run in a smaller RAM footprint and potentially come with timing requirements, but do not dictate that you write quality code to do so. You can solve those problems with thousands of lines of uncommented assembly efficiently, but that does not mean it is quality code.

I have worked for a significant time in my career as both an embedded engineer as well as a backend engineer, and in general I find that the backend code is way easier to read, maintain, and extend. Embedded code is seldom properly tested and most of it is written in C where people can abuse a library's contracts or the preprocessor. It is not uncommon to find functions that are over a thousand lines in the embedded world. Compare this to Rails where there are a ton of standards, short and succinct functions, and good support for testing.

I guess it depends on your definition of "quality code". If you mean code that is dependable and will do one thing well on one platform for the rest of time, then embedded code could be considered high quality. I would debate that these items have more to do with the binary than the code though. If you mean code that conveys how a program works well to other programmers, is under test, and follows some standard structure, I would pick modern app development as typically being much higher quality.


I don't know where this got a downvote from because it's correct - it's quite hard to build sensible automatic integration tests for embedded code unless you have the luxury of a full system emulation.

The only thing that resource constraint forces is less code, especially less dependencies, because you run out of space.

The Toyota "unintended acceleration" court case was a flagship example of bad embedded code, that we rarely get to see.


The Toyota computer is like a super computer compared to old mainframes. IIRC the Toyota program had 10k global variables updated by spaghetti C code. So you had a lot of space, enough to shoot yourself with C code. Old mainframe code in contrast often operated on records in a batch fashion, with a pretty clear input and output.

Of course, the antipattern in enterprise code comparable to thousands of global variables in C is to have thousands of columns spread across hundreds, or even thousands, of database tables, all intertwined and used only God knows where.


AFAIR, they were also autogenerating code from a Matlab model of the engine, and that's where the 10k globals thing is from. Like yeah, there's technically C code that's doing that, but come on.


Interesting, I thought the "unintended acceleration" was actually just floor mats creeping up and holding the accelerator pedal. Gonna have to google some stuff now :-)


https://news.ycombinator.com/item?id=9643204

It's one of those Rashomon situations where I'm not sure we can ever be entirely sure but it seems to have stopped.


Thank you for linking that. The comments in it share some terrifying stories.


That was Toyota's explanation for what was happening. Turned out to be a lie or a hasty conclusion, I don't remember which.


> I don't believe that hardware constraints have anything to do with code quality.

When the punishment for writing code that doesn't work is a long wait, you learn to write small pieces and test individually.


> memory/resource scarcity is the mother of quality code

My experience with Legacy code has been the exact opposite of this. I've seen some really well written and documented legacy code but almost never in resource / performance sensitive areas.

People inevitably seem to accept trade-offs that sacrifice readability and maintainability for efficiency.


Well, I recall writing Fortran 77 for a billing system and we had nicely structured code (our team leader was a genius), but we did do some specific stuff, e.g. reading in data in chunks the size of a disk sector.


And it was perfectly normal to do stuff like this.



"resource scarcity is the mother of quality code"

I don't know, your mileage may vary with this one. I have had to make code less readable to make it more efficient more often than I've been forced to find a more elegant way because the clever hack was too slow.


I wish that were true. I've worked on a lot of old FORTRAN code bases (including a couple that started life as punch cards) and it just doesn't bear out. Remember, FORTRAN comes from an era where people were concerned about the overhead of a function call. Programmers also worried about the memory overhead of comments in their editor! Most FORTRAN is a mess of gotos, common blocks, implicit types and other relics that should be left to the past. This code doesn't age well either: optimizers don't do well with gotos, and the lack of function calls means that there are copy-and-pasted snippets of what my coworkers and I called "fassembly" that were untranslatable. One time I spent days translating one of these functions only to realize that it was an FFT; I swapped it out with one from a library that had been optimized for decades, and the code was easier to read and 100x faster. I'm not even going to discuss any code base that was unfortunate enough to need any kind of string manipulation. To cap it off, FORTRAN 77 doesn't even allow variables with names longer than six characters.

That all said, modern Fortran is an entirely pleasant language. Just stay away from the legacy.


>A lot of junk and spaghetti code gets written (or copied/pasted together) today because there are few hardware-based constraints in much of modern app and system dev

Maybe in the general case, but there are so many exceptions they're almost the rule. Apps like JIRA and Electron constantly have slowdown from bloat that pretty clearly is swamping the hardware constraints and would be avoided with better coding.

Also, I see classic games that are emulated or transformed on "fast" hardware that nevertheless have input lag. I've seen this on the Wii console for SNES games, and on an inflight emulator that ran Pac-Man. Plus, TVs that upscale make Dance Dance Revolution have too much lag to be playable.


> I can’t wait for the current systems to become 30 years old. Making changes to micro services architectures written in different languages over multiple servers will be a lot of fun.

This will be bad even if they are all written in the same language, if they were not also all written in the same time frame.

With ancient FORTRAN systems, generally you need to learn the version of FORTRAN it was written in (FORTRAN 77, Fortran 90, etc) and enough about the problem domain to understand what they were trying to do.

With the kind of systems we are making now, the hapless maintainer will not only have to learn the specific ancient language version and the problem domain, but also whatever now-forgotten framework was in fashion at the time the thing was written.


The stereotype isn't that the code was bad -- it's around the UI. (There are also stereotypes around modern UI being too minimalistic. Obviously these things are not so simple.)

There's also the idea NOT that it was bad when it was written 30 years ago, but that 30 years of patches by people who didn't write the original code mean it's no longer clean code.


Early in my career I worked on legacy i-Series systems. One of the things we were taught was to never write code without putting comments in the header. And the comments needed to specify dates, the ticket number for tracking, and the rationale behind the change. Every changed line had to be inside a block with the same ticket number, making it easier to simply Ctrl+F through the code.

Now I work on newer software and I frequently see people making changes without spelling out the rationale, dates, author, etc. And it makes it harder to track what has happened and what exactly the code does. When I tell people to comment it correctly, they often reply about how much time can be saved by writing clean code.


Nowadays we keep that information in the VCS. git blame gives you more reliable info than careful comments (assuming that the rationale is recorded in the commit message).


Integrate with a ticketing system, force commits to match a pattern, and you can ensure the rationale is tied to your ticketing system (i.e., commits/merges have to be prefaced with FOO-####)
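For example, a minimal commit-msg hook sketch in Python (the FOO-#### key format and the regex are illustrative, not any particular tracker's convention):

    #!/usr/bin/env python3
    # Illustrative commit-msg hook: reject commits whose message doesn't start
    # with a ticket reference like "FOO-1234". Git passes the path of the
    # message file as the first argument.
    import re
    import sys

    TICKET_RE = re.compile(r"^FOO-\d+")  # hypothetical ticket key pattern

    def main() -> int:
        with open(sys.argv[1], encoding="utf-8") as f:
            first_line = f.readline()
        if TICKET_RE.match(first_line):
            return 0
        sys.stderr.write("Commit message must start with a ticket id, e.g. FOO-1234: ...\n")
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Drop something like this in .git/hooks/commit-msg (or enforce the same pattern server-side) and the ticket id travels with every change.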


This is generally great (and I do it on projects I'm confident won't be moving any time in their useful life), but if you ever switch VCS/ticket systems, it gets messy quick.

Ever moved from Github Issues to Jira/Trello or vice-versa? How about Bitbucket to Github (as I did for my current company)? That's a whole lot of context potentially being lost, either in tickets or PR discussions.

How much of that lost context would have ever been used anyway is up for debate, of course. In my experience the answer tends to be "rarely, but sometimes".


This is why I discourage Free Software projects from using hosted issue trackers entirely, especially ones that are proprietary (such as GitHub). Mailing lists are the best solution for longevity and meaningful discussion. If you really need to, you can organize things in a file that is in the repository alongside the source code (Org mode is great for this), which has many significant advantages (and downsides, like not being point-click for PMs, but if your Free Software project has PMs, you have bigger problems that software cannot solve).


Often that sometimes is a life saver though...


This happens on github/gitlab too, with pretty much all open source projects. It's definitely a cause for concern eventually, because the context for the problem and the rationale for decisions is now something that you have to maintain apart from just maintaining the codebase. Right now github etc do a good job of managing that for you, but the long term handling of this context definitely has scope to improve.


Obviously this system will be obsolete in a few years too and nobody will know how to find stuff in it :-)


Well, you know, at my last job I worked on a system that was a lot of 4GL under the hood with a Java application that sat on top calling the legacy 4GL code. The 4GL code had comments in the header with dates and ticket numbers. The dates started in the late 80s and 90s. The ticket system was from 4 ticket systems ago, so that was useless. The comments had become wrong over a period of 30 years of changes. The rationale for something in 1989 made a lot less sense in 2015. The author was now the boss for the whole office and hadn't touched code in 25 years, so I couldn't go ask him about this function he wrote in 1989.

Entropy. It's not our friend. It's why we can't have nice things. Code bases suffer from bit-rot over time.


I think all of the things you mention have value, but all of that information is located in git (and in my setup, displayed inline). We keep ticketing, author, rationale, and version changes, and you can walk through the changes, with the intent tracked, by stepping back through version control.


I followed a similar protocol, but then I ran into a situation where good discipline kind of went out the window during "the great outsourcing". So when recently updated code had problems, because my info was the last actually properly recorded in the source code (even though I hadn't touched that code in years), someone might come gunning for me to fix the problems with "my" code. Not fun, especially when I might actually take a look at that code and see how badly the outsourced folks had mangled it!


Build the UI for the target user.

Minimalistic UIs are great if you're trying to make it simpler for outsiders.

If you have people who understand how to use the software, you can actually cram the UI full of everything you need in a single place.


Look, this sounds like you are talking about brutalist design for corporate software, and I can dig that.

What I can not dig is how old software prints things to screen. Touch a new dimension and let's recompile the data and re-load the list/entire screen right now! We've never done analytics or studied our user base, so the most popular functions are buried 10 levels deep in a menu that re-loads every time and loses your scroll spot! Etc.

If you have spent time in corporate software developed in the Win 95 days (cough, Oracle CRM, cough) you won't see intentionally brutalist design. You will often see MacGyver-level hacks stacked through the roof, and software that loads so slowly you can re-load Gmail 3 times before it finishes.


The first generations of browser-based UIs did refresh/reload the page a lot. But there was really nothing else they could do: AJAX wasn't a thing, and DOM manipulation wasn't possible.

Older UIs written in Visual Basic, Delphi, Powerbuilder, or even for text-mode interfaces did not do this, because the frameworks supported refreshing only the changed data.

Look at how emacs or vi works over a 1200 baud dial-up connection. Surprise, it does, because it only redraws the parts of the screen that are changed. These problems were solved in the 1970s and 1980s.
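That only-redraw-what-changed idea is still how curses-style libraries work; here's a tiny illustrative Python sketch (mine, not anything from those old systems):

    #!/usr/bin/env python3
    # Toy example of partial screen updates: only the counter cell is redrawn
    # on each tick; curses transmits just the changed region to the terminal.
    import curses
    import time

    def demo(stdscr):
        curses.curs_set(0)  # hide the cursor
        stdscr.addstr(0, 0, "Static header: drawn once, never resent")
        stdscr.refresh()
        for i in range(20):
            stdscr.addstr(2, 0, f"tick {i:3d}")  # overwrite a small region only
            stdscr.refresh()                     # flush only what changed
            time.sleep(0.1)

    if __name__ == "__main__":
        curses.wrapper(demo)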

The web browser has, for most of its existence, been a really bad way to deliver user interfaces. The zero-deploy nature of it was very powerful though, so people suffered through it. Only in the last few years have browser-based interfaces approached the abilities of native clients.


“you can re-load Gmail 3 times before it finishes.”

After the latest changes to Gmail, I have high hopes that it will soon catch up in terms of slowness :)


Most UIs these days seem written for beginners. Easy to get in but they become tedious for experienced people.


Oof. Currently developing software for doctors to use and this feels all too real. The concessions we have to make to cater to their relative inability to adapt would make me hate using this UI every day, but it's what they want.

It's not like I don't get it. I do. Just not 100% of the time.


eg Facebook or Amazon or Outlook or Photoshop. The antithesis of minimalism, features and buttons everywhere.


>Making changes to micro services architectures written in different languages over multiple servers will be a lot of fun.

Don't forget the 10,000 packages you need to find because they no longer have a feed available! They'll be rewrites.


> A lot of young people think that the people who wrote software 30 years ago were just plain stupid and all wrote crappy code.

The generation from the 70s and 80s will forever be the best technological generation the world will ever see. We had serious constraints that would need elegant solutions. I remember as a kid looking at Z80 manuals to learn how to save a few bytes so that code would run, or the 640k Microsoft limit that forced code to be very concise, with every bit thoughtfully used.

Now that the sky is the limit we have cookie cutter solutions built on other pieces that "work" which really are just rewritten code in some new language or framework. This is now the new norm and it will never change.


> We had serious constraints that would need elegant solutions.

In the era of abundant hardware resources, I like to think that this sense of craftsmanship hasn't been abandoned, but rather transmuted--transmuted into the ability to make things as readable, accessible, and maintainable as possible.

The Redis, Phantom (Thrift proxy), Kafka, and PostgreSQL codebases are regularly things I consult as role models in the design of clean, pragmatic code.

The explosion of technologies since the elegance-by-necessity-of-optimization era does not necessarily mean that the capability of engineers has declined.


As a generation yes. But, I got started in 93/94, learned C. Got into infosec and reversing and can do machine code reasonably well these days. Plenty of my peers learned assembly on TI calculators and other places. By the numbers there are probably more competent low-level systems engineers than there were in the 70s and 80s. As a ratio of programming professionals, we are dwarfed by PHP programmers, etc.


> We had serious constraints that would need elegant solutions

I don't think those are comparable to complexities in IT that exist today.


> I don't think those are comparable to complexities in IT that exist today.

I think the constraints and lack of “here you go” resources many starting programmers dealt with, even for toy applications, out of necessity in the 70s and 80s are better preparation for the attitude necessary for dealing with the complexities in IT today than the learning conditions today.

Which isn't to idealize it; it was also a hard onramp that drove off lots of people who would have done well at many real-world problems where they weren't soloing without support, and who might with experience have still developed into great solo practitioners, too. But the people I've encountered that came through that 70s/80s start tend to be, IME, on average more willing to slog through and learn hard stuff in new areas than people who came up through later, easier onramps.

Though that may also be in significant part survivorship bias, as the 70s/80s crew will usually have had to stick around the field longer, and it may be that people with that flexibility are more likely to stay in technology past the half-life of whatever was current when they got in.


In business, the goal of software is to make money. Elegant solutions do not necessarily increase the bottom line. Increasing productivity at the cost of elegance is a net win in most cases.


And yet I've seen plenty of businesses spend a huge amount of time and effort on "enterprise" abominations in an attempt to build elegant, extensible systems. I've seen at least a dozen instances of businesses re-inventing cron. I've seen businesses throw money down bottomless holes on proprietary databases. There's a well-known tech company that now owns two bloated Electron-based cross-platform text editors; neither ever had the goal of making money, at least directly.

Business is great at wasting money on software, I just wish they'd waste it in a way that benefited people.


Survivor bias. The fortran that is still running is the elegant stuff, the stuff that was written well enough that nobody has needed to touch it for decades. Just as with "classic" cars, there was plenty of junk out there at the time. What is left is the most resilient, not necessarily representative of the whole.


It is entirely possible to assume that if software was written 40 years ago and hasn't yet been rewritten, it's good enough as-is to have survived the test of time.


Or that it is so horrendous as to be untouchable.


May be horrendous, but still does the job.


> A lot of young people think that the people who wrote software 30 years ago were just plain stupid and all wrote crappy code.

Stupid, no. Crappy code, yes. Part of the reason people wrote crappy code then is because programming techniques and best practices have come a long way. The other part is most people writing code then were inexperienced coders.

Of course, plenty of crappy code (most?) is written today, mostly because of inexperienced coders. It's not because they're stupid. After all, who would make a 20 year old a general?


Thinking about it as 'crappy code' is the wrong way to think about it. It's average code. Everything is normally distributed, including the quality of code. On average code quality is average. Some of it at the top tail end of the distribution is beautiful and some of it at the bottom end is an unworkable trash fire.

That hasn't changed in the last few decades and it won't change in the next few decades either.


We've already reached the "fun" point for legacy Rails apps.


I have no issues navigating around in an old rails 1.2 app. The problem is usually the special sauce and anti-patterns that have been added to it over the years. This is a problem regardless of language or framework.


Running a Rails 1.2 app sounds... dangerous. Have you plans on upgrading it? (Probably at this point easier to start a fresh Rails 5 app, write some tests + copy/edit stuff over as needed).


Thankfully, most of those won't survive for 30 years. Anything that lasts that long without constant maintenance is probably a well-designed system that you won't mind working on.


It seems like with the modern pace of software today, things are practically guaranteed a rewrite within 5 years.


We said the same thing back then too.


A lot of developers using [insert this week's hot framework] think that the people who wrote software 30 days ago were just plain stupid and all wrote crappy code because they were using [insert the hot framework of four weeks back].


It's already "fun" trying to maintain systems that are two years old.


>I can’t wait for the current systems to become 30 years old.

Good luck with them lasting 10 years


That has been the fallacy for a long time. I remember discussions in the 90s when I asked people why we didn't store 4 digits for years, and I was told "By 2000 this will already have been replaced". I know for sure that this particular system was still used in 2010 and probably even now. Things that work keep getting used.


Yep. Software can last a surprisingly long time. I personally wrote a microservice with an intended lifetime of 6 months. 8 years on, it is still soldiering on, having survived multiple attempts to replace it, with almost zero maintenance.


At least there was some justification for space saving in the 60s, when your PDP-8 gave you 4k of core memory. Probably not unreasonable to expect the code to be obsolete by Y2K either.


In the 90s that use case was not valid anymore but the mindset still persisted.


Very true. I was probably guilty myself - everyone was expected to know how structures aligned, and how to pack them, so "being efficient" was often just thought of as part of writing good code. Hopefully there weren't so many 2-digit dates being coded in the 90s though. :)

It was probably still a valid use case for comms, if not within the application. Code was still being built on X.25 and oh-so-slow modem links.


There was a lot of 2-digit year code written in the early 1990s. I wrote some of it myself. The reason was compatibility with older code/data formats, and also, as mentioned, the notion that "this will all be gone before 2000" or that it would be updated along with everything else if it had to be.


Same as today. If anything, programming was accessible to fewer people back in the day. What looks like poor design decisions today (fixed length integers) was just a reality they had to deal with.


> and all wrote crappy code

Yet, we didn't hear about left-pad until what? 2016?


Maybe some do, but I look at the past coding era as more of a mystical age where wizards and gurus made things happen out of thin air. Seems pretty impressive to me.


They also got a lot more time to think about more focused problems because many aspects of compute complexity simply weren’t possible


> young people think that the people who wrote software 30 years ago were just plain stupid and all wrote crappy code.

It's understandable. None of us back then had anywhere near the "copying and pasting javascript from stackexchange" skills that "tech savvy" kids have now.


There's also a disposable culture around the resources involved. A mentor of mine always says "You can have it, but if you break it then you get to keep both pieces". Without value in the broken pieces, no one needs to clean up their own messes anymore, so there's no need to move carefully or fail gracefully. If you trash your dev env, or even your workstation, it's trivial to roll back or redeploy and get back to work. Management is in on it too. Rather than hire more careful devs, it's easier to just move the explosions farther away from prod, and limit the blast radius. CI/CD


A lot of that stuff isn't gonna last 30 years.


That's what we thought 30 years ago and, yet, here we are.


I question your use of the word fun


This is totally right.


Old things are typically built to last, because the cost of redoing them was prohibitive.

If it were shit craftsmanship, it wouldn't still be running.

See:

- Roman architecture

- Browning designed firearms

- Savile row tailoring

- Old ships

- anything your great-grandfather owned


This can be rephrased as "things that lasted a long time were built to last a long time", which is an entirely unsurprising rephrasing of survivor bias.


The fact of the matter is that some things have lasted a long time. Isn't it worth asking _why_ they lasted so long–and conversely, why other things do not? All the survivorship bias in the world doesn't negate the fact that some things are made to higher quality than others.


Well, the Romans had a custom of charging less rent for many-story buildings, because they were widely recognized to be susceptible to collapse. I don't think they're really high-quality architects.

I had a look at some Roman houses in Ostia Antica, and the really striking thing is how little the techniques of architecture have changed. They could do some impressive architecture from time to time, but on the whole, they built stuff about the same as your average cowboy builder would today, minus a bunch of legal requirements (fire proofing, etc).


Well, old things still around today were usually built to last which is rather natural. (There are rare exceptions, the Eiffel tower was meant to stand only 20 years)


We do have a bit of a sample bias. We only see the things which have survived. But planned obsolescence was the norm.

Technology advanced too slowly for a producer to assume they'd have a new model to sell in the near future.


Exactly, which is why we should think twice about replacing them, just for the sake of having something newer. Newer isn't always better.


Old wooden ships had a short life BTW


Yeah, "old ships" isn't as good an example as I thought after thinking about it a bit. I was only thinking of the ~100-year-old dinghy I have that seems nearly indestructible with little maintenance.


That's why only well-made, well-preserved old wooden ships have survived until now.

Crappily made ones break the moment you stop maintaining them.


You've obviously never dealt with a wooden boat, much less one immersed in salt water. The maintenance needed to keep those things going is a never-ending chore of hard work. The best-made wooden boats begin to rot the moment you stop maintaining them.


I bet a large part of that is that there is a good chance that someone writing FORTRAN in the '80s would work out their algorithms, how they would code those algorithms, and how they would structure their code on paper, and have pretty much the whole thing worked out before they ever sat down at a terminal to actually enter code.

When actually entering code, their focus can then be on the fairly straightforward translation of their notes specifically into FORTRAN, and on adding good comments for those who deal with the code later. Most of their creative energy at this stage can go into those comments.

Part of this is that it was still common in the '80s to either not have interactive access to the computer the code was for, or for such access to only be via shared terminals in a computing center away from your office. You needed to arrange things so that when you actually got to a terminal, you were efficient.

Since then, we've almost always had access to our own private computers that are powerful enough to run at least test versions of whatever we are working on, even if it is ultimately meant for some big server somewhere else. Now we can sit down and start coding while still designing the program in our heads. And so we do, even if sometimes it would probably be better to separate design and coding.

I think this also might have something to do with why BASIC was good as a teaching language, as was noted recently in some other HN discussions. BASIC, especially with some of the limits put on it to fit it in some smaller computers, was constraining and painful enough to deal with that people quickly learned that trying to do that while also trying at the same time to figure out the design and algorithms they needed was way too hard. They naturally learned to separate figuring out how to do something from coding it.


In the 1980s, for FORTRAN, coding your program on punch cards was still not that uncommon.


I wonder if they were made to do leetcode algorithms on a chalkboard in FORTRAN 77 back in the day during interviews.


Back then, you likely didn't have anything resembling a PC or a whiteboard (probably available but less common than today), so it was more likely done on a blackboard with chalk, or with pen or pencil on paper, before working on your punch cards. I started my internship at a Fortune 500 in 2001, and they had just retired their last card reader for the mainframe. They still had plenty of blank/partially used cards that made for great notecards.


I'm honestly terrified that America's nuclear missile silos are going to get "upgraded". Let them keep their 8 inch floppies and text interfaces, for the love of god. It works fine the way it is.


It will be fine they will just download the nuke app off of the app store and sign into Facebook for authentication.


I'm sure it will only need 70 or so NPM modules.


Only 70? That's a stretch. This is 2019, after all, not 2015. Those 70 dependencies are only the first-line dependencies. Their dependencies will bring in another thousand.


Hey, when you need to pad a string, what else are you supposed to do?


Hardware fails over time and gets increasingly less reliable and harder to source. These types of upgrades are mostly about maintaining existing functionality on modern hardware.


In the case of nuclear missiles, this is entirely a good thing. If both sides can just maintain Potemkin nuclear arsenals we'll achieve gradual disarmament through obsolescence.

We might hope that at some point in the early 2100s, somebody will notice that none of the nuclear missiles work anymore, and with any luck, the engineering knowledge to create new ones will have been lost.



Or, preferably, simply seen as a childish use of rockets and nuclear energy while the tech is being used productively elsewhere.


> FORTRAN

I have not thought of FORTRAN as legacy, because whenever I need to look at very old FORTRAN code I am looking at mathematical structures such as matrices and vectors. At that point I am not thinking of FORTRAN anymore but of another language (such as linear algebra) which is a few hundred years old but still so crisp in the world of numerical computing.

HOWEVER: I cannot say the same about ABAP or Java that initially began to take shape in an enterprise system 20 years back. Then I have to go through reams of code that is truly legacy: hard to navigate through years of modifications, heavily dependent on local contexts, personal preferences, etc.


I've come across code on IBM System i (AS/400) with copyrights from the late 70's. The code still works, plenty of businesses run on it, and it was written quite nicely.


The core code base dates back to the System/38, which was released in 1979. You might expect to find a whole series of older to newer dates in copyright notices for the AS/400 (which is not its correct name now, and hasn't been for ages) code base, but the copyright laws have changed a few times over recent decades and I don't know what the current requirements are.


Did you leave a comment for the guy who has to fix it in 30 years?


Hopefully there's even odds it will be a woman by then.


Perhaps we will go full circle and have predominantly female programmers by then.


A full circle puts us back where we started?


Programming used to be a "woman's job" as secretaries.


yeah, but being a programmer had a different connotation in that sense, just like how you wouldn't call a person who xeroxed a book an "author".


No, a lot of those women legitimately did what we'd call programming.


It was soft-ware after all.

Downvoters don't seem to understand this point, that was the idea in the old days. That hardware was for men.


In the 1840s when programmable computing devices were first manufactured, 100% of programmers were women. She died about 10 years later.


I think one Mr Charles Babbage might disagree with you on that. Also I wouldn't say that any were actually manufactured. The first fully working model of the difference engine was created in the 1990s and as far as I know there never was a working analytical engine.


Looks like some folks missed my point, others got it 100%. I am in favor of more diversity in software builders, I practice what I preach


[flagged]


Bit of a strong reaction to a comment that began with the words "Hopefully"


Love this story. How much did you charge for changing those two numbers?


Hopefully at least a week for all the research.


I'm salaried, but it sure bought me a lot of goodwill.


God bless that coder, who probably received nothing compared to the aggravation they saved.


I hope to god you oversold yourself


The longer I work in this industry, the more I am coming to believe that waterfall is nothing more than a strawman set up to make development process X look like the Holy Grail.

Has anyone seen it in real life - in the pure form?

I once worked at a place that "did waterfall", but a diagram of the process would have shown arrows in all directions (would we call these things salmon runs or something?) and if you needed to go backwards only that specific piece did so while everything else continued as normal. Unit testing, integration testing, system testing, etc was all present on day 1.


> a strawman

"Figure 7. Step 3: Attempt to do the job twice - the first result provides an early simulation of the final product."

1970 "Managing the Development of Large Software Systems"

"Figure 3 portrays the iterative relationship between successive development phases for this scheme. The ordering of steps is based on the following concept: that as each step progresses and the design is further detailed, there is an iteration with the preceding and succeeding steps but rarely with the more remote steps in the sequence. The virtue of all of this is that as the design proceeds the change process is scoped down to manageable limits. At any point in the design process after the requirements analysis is completed there exists a firm and closeup moving baseline to which to return in the event of unforeseen design difficulties. What we have is an effective fallback position that tends to maximize the extent of early work that is salvageable."

http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970...


Back in the 90s and early 2000s every company I had any experience with used some waterfall-like development model. The usual case had a 6-9 month release cycle, up-front requirements and some level of technical specification of the required work. No one did automated testing (testing was QA's job) and releases were painful. I'm guessing you just didn't start in the industry until after this practice was already dead.


Companies and employees need to be able to sell the idea of a project/product/service to each other. The waterfall method serves as a bridge that connects the dots, allowing both sides to understand how they intend to reach the end goal.

The same way you haven't seen waterfall in its pure form, you will likely never see Agile, Design Thinking, etc. in their pure forms. That is because we are looking at environments that are prone to change and controlled by humans with varying degrees of knowledge.

What you have seen with the diagram that shows arrows in all directions is simple: it went against your perceived view of the standard, and that put you off. Or the person just didn't know what they were doing (the more likely scenario).


> 2x4s spaced every 16 inches is plenty robust

The current recommendation is 2x6 exterior framing with 24 inch spacing. This results in a durable and energy efficient home.

https://www.apawood.org/data/sites/1/documents/technicalrese...


> It's truly a shame that so much money is poured into such utterly wasteful possessions

When the yachts are idle, which is the majority of the time, they are typically plugged into shore power and are thus just consuming electricity from the grid. The electrical systems of these things are typically far more efficient than your home because they must generate their own electricity when at sea.

As for the wasting of money, you'll note that most of the cost of building a yacht is labor because they are mostly one-off builds and can't take advantage of automation. The article mentions that they cost 10% of the purchase price per month in upkeep. The vast majority of those costs are labor.

These things are among the most efficient devices for transferring wealth from the rich to the middle class. I wouldn't discourage their use at all.


This all is true.

I also wonder how much of the yacht business is really about money laundering or tax-evasion. Like the art collecting world. If your company buys the boat in one country, reflags it to Panama, sells it in Monaco the next tax year... I don't know any details but I'm sure there's substantial room here to massage what numbers you present to various tax collectors.


Given how short the ownership tends to last (~9 months according to an anecdote in this thread), I suspect moving or protecting money is probably part of it somewhere. I also imagine a lot of it really is just wealthy people spending for fun rather than profit.


Heh, my anecdote is famous already :)

Indeed, it's pretty hard to tell. Both seem entirely plausible and I just have no way to guess how such buyers think.


Staff on these boats work on them year round, whether they are being used or not.


This was a nice little review, so I went to my local library website to reserve the book and found that they have two copies of it, published in 1967 by Harvard University Press. The review's affiliate link to Amazon shows that this particular book is new and perhaps significant:

> “The Greek text of Characters is rather messy, with lots of sentences in dispute (or simply unintelligible) due to copyists’ errors in the transmission process. Only a few years ago, a new edition of the Greek text by James Diggle sorted out many of these problems. This new English version by Pamela Mensch takes advantage of that cleaned-up Greek text.”


> The crazy thing about this article is that it presents Chicago's solution (make the combined sewer bigger)

Except that Chicago's publicly stated position is that the Deep Tunnel system is not the solution. This is not a new thing, either. It's been that way for more than 2 decades.

For more than 10 years, they've had a stormwater management building ordinance and (with few exceptions) any project (new construction or renovation) that changes the amount of stormwater exiting a piece of property is subject to it. It limits the flow rate of stormwater leaving, the volume of stormwater leaving, and the amount of sediment that can leave a piece of property. It is entirely math-based, with published formulas and coefficients and makes determining compliance straightforward, objective and fast.

They've received international recognition for their incremental approach to stormwater management, which recognizes that the vast majority of property in the city is private property built long ago. As for public property, whenever the city rebuilds an alleyway, it is done with permeable pavement and is disconnected from the sewer. Whenever the city rebuilds a road, it is disconnected from the sewer to the extent that surrounding private property allows. Last year an entire block of a road near me was rebuilt and the side that was adjacent to a railroad embankment was built with swales every 50 feet or so to manage stormwater. It was actually visually attractive as well.


The S&P 500 publishes changes to the index in advance of the changes taking place, so all of the affected stocks have prices that reflect the change at the time of the change.

Most tracking indexes, especially the ones Vanguard uses, do not do that and thus do not allow the markets to front-run them. If you adopt the strategy of "do what Vanguard does" and you do it immediately after Vanguard says they did something, you are already too late to get the prices that Vanguard got and can kiss at least .05% goodbye just based on that. I would expect to under-perform by at least .25%, if not more.
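A rough back-of-the-envelope of what that kind of drag costs over time (the 7% annual market return and 30-year horizon are my own illustrative assumptions):

    # Compounding cost of a small annual copy-trading drag versus the fund itself.
    market_return = 0.07                # assumed annual return, illustrative only
    years = 30
    for drag in (0.0005, 0.0025):       # 0.05% and 0.25% annual underperformance
        ratio = ((1 + market_return - drag) / (1 + market_return)) ** years
        print(f"{drag:.2%} drag -> end up with {ratio:.1%} of the fund's result")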


Not being familiar with the term XNU, I did a little clicking around and hopefully I understand a bit better:

This file is evidence that Apple has been running iOS/MacOS on the Raspberry Pi 3?


It's evidence they have been running the macOS / iOS kernel, yes.

Whether they had the full stack running—and how much of the stack is necessary for something to be deemed "macOS"—is up for debate.


Or maybe it was an intern project.


Heh, this is quite believable too. Anyone found their LinkedIn profile?


The next thing I want Apple to do: if an app requests location services, it must:

1. Have a specific publicly available URL that contains all "location data" terms, conditions, and privacy information.

2. Monitor that URL and reset the permission dialog if the URL ever changes.

3. Immediately disable location services for that app if the URL disappears.

