
It's a trap.

The problem with those old codebases that governments, hospitals, and big businesses are struggling with is not really the language; it's the engineering practices of that time under the constraints of the old technology. The language is not the problem - lack of comments, bad variable naming, bad structure (little or no procedures or readability), and just sheer volume of it, is.

It would be very interesting to see the old systems rewritten in a modern language, with modern engineering practices, but keeping the old UI and UX (which often is incredibly ergonomic) - so as to limit scope and not mess it all up by trying to introduce mouse navigation and windowing mess.




Not to mention GOTO. If you're one of the people who hyperventilates when you see a goto because you learned that it was considered harmful in programmer school, then COBOL might not be for you. ;)

You might be surprised about the comments though - depending on the age of the codebase. Mainframes were rented back in the day, you paid by resources consumed, terminal time was precious, and mainframes were often turned off outside business hours.

Because of this a lot of the development actually happened between terminal sessions in flowcharts, pseudo code, documentation, and peer review before the programs were ever modified and run.

If you ever run across really old comp-sci books you’ll typically see them divided into three sections - the first section was usually a guide to the author’s terminology and symbology, the second part was usually a guide to flow charting and documentation (IBM had standardized forms for developers to use), and the remainder of the book was the content, with lots of explanations of how to work with datasets hugely larger than the memory available to you.

But as time passed and computer time became cheaper many of those formal development practices started to get lax.


> Mainframes were rented back in the day, you paid by resources consumed

To expand on that, IBM used to rent mainframes based on a 40-hour week. The computer had a usage meter (like a car odometer) that would keep track of how much time the computer was running. If the meter ran over, you would be billed for the excess charge.

The computer actually had two usage meters, with a key to select between them. When an IBM service engineer maintained the system, they used their key to switch from the customer meter to the maintenance meter. Thus, customers weren't charged for the computer time during maintenance.

One interesting thing I noticed about the IBM 1401 is that it's built from small circuit boards (SMS cards) that are easily pulled out of the backplane for replacement. Except the cards driving the usage meter. Those cards are riveted into place so they can't be removed. Apparently some customers discovered that they could save money by pulling out the right cards and disabling the meter.


I found a picture of the meters for those that are curious - http://static.righto.com/images/ibm-360/epo-30.jpg


The terminals were turned off. The mainframe kept running. In the computer room, the second shift operators ran batch jobs, printed reports, and did backups.

Didn't matter if the terminal was turned off or not either. The UI was burned into the phosphors.


>The UI was burned into the phosphors

Oh wow, so true - that brought back many memories. It's also one of those overlooked aspects when you change a system: the previously burned-in fields would, with the right lighting, create a whole avenue of data input errors, and the real fix was simply giving the user a new monitor - which can be fun if you're chasing it as a software issue in some new rollout. Yeah, that can be a fun one, and sometimes you can't beat a site visit, as the local environment will never be replicated in any user transition training setup, however well it is done.


I used to have an IBM flowcharting template I inherited. I mostly used it to draw shapes and didn't appreciate what it meant to have a standard you could make tools like that for.

https://americanhistory.si.edu/collections/search/object/nma...


Awesome! I used to have one of those templates, unfortunately it got lost in a move. Surprisingly you can buy similar templates on Amazon but I’ve not seen one used in decades.

https://www.amazon.com/Rapidesign-Computer-Flowchart-Templat...


The New York State Civil Service exams for IT positions (meaning anything involving software development as well as other stuff) still have a flow chart section.

I actually quit the interview process with an insurance company a year or two ago after they wanted me to take a test involving reading flow charts, but now I'm in the position of having to pass something similar if I want to get promoted.

However, I don't think people use flow charts on the job anymore, even in state government.


I've found that flowcharts are enormously helpful for software design and communicating the design decided upon. Maybe in your role you don't have a need to communicate how software should be written?


I like to make hierarchical text outlines.

I feel like once you need a general graph sort of structure, you're too far down the road to spaghetti and/or excessive detail.


I make flowcharts all the time. I find them particularly useful for showing stakeholders how different pieces of an integration project work together. Usually the moving parts are pretty coarsely grained, like an ETL job or something that writes out a file and something else that picks it up. IMHO, your 70s-era flow chart diagram is pretty good at that kind of stuff.


>However, I don't think people use flow charts on the job anymore

Only for our service desk. Much easier to read than walls of text.


>If you're one of the people who hyperventilates when you see a goto because you learned that it was considered harmful in programmer school, then COBOL might not be for you. ;)

I don't know, at this point seeing a GOTO in my « native language » (the C language) is so incredibly weird that my first thought is « this guy is trying to do something weird and interesting ». It just wouldn't cross my mind somebody would be using a GOTO as a result of ignorance or laziness.


Really? Not trying to be derisive, just surprised. One of the common paradigms in C is GOTO for cleanup and error handling. I know it can be avoided if you really try; however, when used correctly, it can greatly improve the readability of code.
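
For anyone who hasn't seen it, here is a minimal sketch of that goto-for-cleanup idiom (the function and file names are made up for illustration, not taken from any particular codebase):

    #include <stdio.h>
    #include <stdlib.h>

    /* Copy src to dst; every failure jumps to one shared cleanup path. */
    int copy_file(const char *src, const char *dst)
    {
        int rc = -1;
        FILE *in = NULL, *out = NULL;
        char *buf = NULL;
        size_t n;

        in = fopen(src, "rb");
        if (!in)
            goto cleanup;

        out = fopen(dst, "wb");
        if (!out)
            goto cleanup;

        buf = malloc(4096);
        if (!buf)
            goto cleanup;

        while ((n = fread(buf, 1, 4096, in)) > 0) {
            if (fwrite(buf, 1, n, out) != n)
                goto cleanup;
        }

        rc = 0;                 /* success: fall through to the same cleanup */

    cleanup:
        free(buf);              /* free(NULL) is a no-op */
        if (out) fclose(out);
        if (in) fclose(in);
        return rc;
    }

    int main(void)
    {
        return copy_file("in.dat", "out.dat") == 0 ? 0 : 1;
    }

The point is that each early exit flows through the same fclose/free block instead of duplicating the cleanup at every error check.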


C gotos are scoped to the function, making them fairly reasonable to use. COBOL and FORTRAN have global gotos.


Real programmers use INTERCAL's COME FROM


Oh lord, it's been a while since the last time I've heard someone mention INTERCAL.


That language with no pronounceable acronym remains an important teaching tool for programming language learning and design.


Yes, indeed. I started as a bank programmer, working on an IBM mainframe.

At night, nearly the whole machine was consumed by batch processing. If there was some problem that required a late-night fix, it was worked out first on green-bar print of the program, using hex to determine what was in the registers. Then the code could be submitted via ICCF, but the wait for a compile could take literally hours. If you mis-typed something (say a forgotten period), the compile would fail and you would have to resubmit the job. Waiting hours again!


> But as time passed and computer time became cheaper many of those formal development practices started to get lax.

I agree.

~15-20 years ago we used to have some teams which were specialized in creating flowcharts for anything that had to be implemented.

Nowadays I do a bit of everything (project mgmt, development, support, analysis, etc...) and in my area I'm the only one drawing logical overviews (very primitive stuff - I use "MS Visio" and I like it a lot) whenever we have to implement something that has the potential to become a bit challenging. So far I've always been very happy to have done that: all conflicts/complications/flaws/etc... of the proposed logic are caught already at that stage, so there are no problems later during the core dev phase and we have fewer problems with the resulting implementations as well.


Yip, I trained in the early 80's in JSP (Jackson Structured Programming), and didn't even learn GOTO was a verb, as you could code around using it with JSP.

Then you hit reality and all the baggage legacy code has, as well as standards; JSP did not gain traction well, and when it did - well, maintenance of code... lots of legacy spaghetti out there.

Why is COBOL still in use? It's a robust data processing language, and that is the bulk of things - batches of data that need processing, mailing lists for the post, bills.

COBOL does handle data well if you want to know when and how it rounds and truncates, and for formatting output. This was at a time when no other language could fit the job, and it all runs on robust hardware designed not to fail, unlike the consumer offerings that were still a glint in many eyes.

So the legacy grew, bloated. I've worked on a fair few migration projects for a software house, and migrating your large blob of legacy code on legacy hardware to run on something modern is not a quick process and not cheap; the planning, due diligence, data integrity work and testing involved are alone a huge cost.

So you end up with legacy code hanging in there, as no management team can justify a 6-10 year budget when the mindset works on a 5 year plan and budget.

Which ends up with many systems being literally too big to fail and too costly to ever be fully migrated, as the risk and costs just grow. Having the guile and drive to push against the status quo in management is often a path to career suicide, so they carry on with the herd mentality that prevails in management. Those that do stick their neck out are of two types: those who care and know what's needed, and those who just want to be seen to be doing something big - they rush into it, hasty decisions unfold, and before you know it they have already flown off to another company saying how they initiated a project, one that dies a horrible death not long after they leave, once people realise what a mess it is and what the true costs are.

Hence the many reasons why COBOL is still around today: it just works, and in some ways you can't knock legacy. Nokia phones run for days, just work, and do the job of being a phone. For that task they do the job much better than anything modern. Modern Android phones and iPhones do much more, bells and whistles of all flavours, and yet if you just want, or need, to make a call, overall they are not as robust as an old Nokia that just works and works well for the task at hand.

This, and the mentality of "if it works, don't change it", does have merit and is something you learn over time.

But there is always hope: bits can be pulled away from the legacy, and if it's planned and managed right by people who know the business needs and requirements and are mindful of minimising risk and interruption, there will always be a way.

Though I've seen many a project with the best will in the world be doomed from the start, because the bites of the cake made it an all-or-nothing approach, and nobody wants to wait or budget/plan for something that takes longer than 5 years in the business software world. There are always exceptions, but many are planned for 5 years when it's known they will take longer, on the basis that 3 years in you push out a new 5 year plan, bolt on a few trinket features, and use that to hide the fact that it was never going to go smoothly and land on target.

Best approach is bit by bit: batch processing and the like can be more easily migrated, though the data, as always, and interacting with it, will be as big a part of any migration as the code.

But yeah, GOTO: when you're working on code that needs performance and the platform is more costly to upgrade than most, you will see much use of GOTO in the code. Then you get wonderful things like variable-length records that many won't even know about; many are unaware that in COBOL you can define, say, a top-level record as 80 characters and then redefine it with a PIC X OCCURS DEPENDING ON VARIABLENAME. Write that field and you get variable-length records stored, saving data storage and other expensive resources that we take for granted today. So yes, many gotchas and much creativity to eke out performance and reduce storage costs.
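
For readers who have never seen variable-length records, here is a rough C analogue of the idea (just a sketch of writing only the bytes actually used; the record layout and names are invented for illustration and are not COBOL or any real mainframe format):

    #include <stdio.h>
    #include <string.h>

    #define MAX_REC 80           /* the "top level" fixed record size */

    /* Write a one-byte length followed by only the used portion of the
       data, instead of a fixed 80-byte record. */
    static int write_record(FILE *f, const char *text)
    {
        size_t len = strlen(text);
        unsigned char n;

        if (len > MAX_REC)
            len = MAX_REC;       /* truncate to the fixed maximum */
        n = (unsigned char)len;

        if (fwrite(&n, 1, 1, f) != 1)
            return -1;
        return fwrite(text, 1, len, f) == len ? 0 : -1;
    }

    int main(void)
    {
        FILE *f = fopen("recs.dat", "wb");
        if (!f)
            return 1;
        write_record(f, "SHORT");
        write_record(f, "A SOMEWHAT LONGER RECORD");
        fclose(f);
        return 0;
    }

Short records take only the space they need, which is the storage saving the OCCURS DEPENDING ON definition buys you on the mainframe.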

With that, GOTO is not your biggest problem with legacy code of the COBOL flavour - never mind linked-in machine code specially crafted to sort the data because it was faster, where now nobody knows what that blob actually does or how to change it. So yeah, lots of traps in legacy code of any flavour.


> Mainframes were rented back in the day, you paid by resources consumed, terminal time was precious, and mainframes were often turned off outside business hours.

Sounds very cloud


Cloud is a return of the "computer centre" model of old, similarly billed by usage and with various levels of vendor services provided.


Which is why everyone should be laughing at IBM Cloud right now for not succeeding that well at IBM's original business model. ;)


Azure Stack Hub and AWS Outposts are both fairly mainframe-like: you rent racks for your physical premises that are essentially opaque to you, managed by the cloud provider, and billed according to usage.


I doubt that mainframes were turned on and off every day; even later, with super minis, we left them powered up.


Every mainframe I worked on until the mid-80s was turned on by the first person to arrive in the morning and turned off by the last to leave in the evening.


Many sites had operations plans that involved weekly or biweekly "IPLs" done on weekends.


Doesn't mean the power was off


> keeping the old UI and UX (which often is incredibly ergonomic)

I have the feeling that with the success of the iPhone many people forgot that a thing like a UI can have a target audience as well. If you make a tool that is being used twice a week for a minute at a time, it has to look fundamentally different from a tool that is used 50 times every day.

With the former, being intuitive is more valuable; with the latter, reducing friction is more valuable. This is a choice which has to be made – and sadly I often don't see it being made. People just make a UI that is akin to the ones Google or Apple make and call it a day.


>People just make a UI that is akin to the ones Google or Apple make and call it a day.

It's worse than that. Lots of people involved in the creation of software don't just follow the trends, they have internalized the idea that a UI exposing any complexity is inherently bad - that if something can't be easily expressed in the interaction language currently fashionable on mobile, then it must be a misfeature.

A distant but perhaps illustratively analogous example can be seen in non-nerdy teens and young adults. Take one that does class writing assignments in a google doc on their phone (they're not hard to find, you can even find some that try to do CAD on mobile devices). Try suggesting that if they learned to properly touch type on a real keyboard they'd find the whole process easier and faster. Then tell them apple's bluetooth keyboard can pair to iPhones. Compare the reactions.

tl;dr: In the TV show Metalocalypse the characters derisively called acoustic guitars "grandpa's guitars." That's the UX world in a nutshell.


Just finished a contract with a hospital that built a lot of stuff in-house in the '80s based on C (still pre-C89 in many places) and Delphi/Pascal. The problem is indeed volume (over 10 MLoC), two or three wizards supporting it all but firmly coding like it's '83, no training of newcomers whatsoever, and thus really no way to contribute. If you manage to get some support, it will be once a year and in the form of code they wrote for you, without much feedback possible.

Management prefers not to think about long term, because management obviously does not think long term.


Is it possibly not a management problem at the core? I mean, I know exactly what you mean about suits liking to ignore the future, but I've worked with a few 'wizards' who, once they've got the job under their belt, use it to keep other people out.

They'll not comment, help, document or whatever, and once they're doing that, they are uncontrollable. They can't be sacked because they're the only ones keeping the system running, and they won't help others train up. That seems a very difficult situation for the suits to deal with, even if they want to.


I worked with a guy like this. He bragged he was the only one that knew the entire legacy codebase and he only shared parts with colleagues on his pretty sizable team to ensure he had a job and fat salary for life.

Then the multinational decided on a change in direction and fired the entire office, relocating the work for regional diversification (this was not offshoring; the work was spread out across multiple teams). This was pretty niche stuff, and he became unemployable until moving across the country.

And I know of a company that something similar was outsourced to. Fully outsourced, with seemingly no internal expertise, bleeding their client with increasing support prices year by year, imagining they could do this forever. I was on the team at that client that spent two years re-implementing the functionality from scratch - no cheap proposition, but within 3 years of in-sourcing the savings were already there.

Not all wizards know magic.


That smells familiar. I worked with one of those wizards and he designed everything to keep himself in the job. That was until he rode a motorbike into the side of a car at 100mph and someone else (me) inherited it. I had to start again because it was that impenetrable. As it turned out, it didn't do a lot, and I rewrote the bulk of it in ASP/SQL Server at the time in a couple of weeks, reducing the cost of the entire platform to an hour or two a week rather than an entire "tier B" salary (whatever that was, but I was told it was a lot, and I was tier F myself). When I quit it took a few hours to hand over to the next guy.


Indeed, the word wizard was perhaps ill-chosen. These guys know this particular codebase, in all its hairy glory, but not much else.


I don't think ill-chosen. In their eyes, they are wizards.


> Is it possibly not a management problem at the core? I mean, I know exactly what you mean about suits liking to ignore the future, but I've worked with a few 'wizards' who, once they've got the job under their belt, use it to keep other people out.

> They'll not comment, help, document or whatever, and once they're doing that, they are uncontrollable. They can't be sacked because they're the only ones keeping the system running, and they won't help others train up. That seems a very difficult situation for the suits to deal with, even if they want to.

This sounds _exactly_ like a failing of management to me. If developers are expected or allowed to "just code" without documenting anything or training anyone, management is absolutely to blame for allowing it to go on.


well if you have 3 guys that are the wizards that can't be sacked, hire 3 contractors for a year long project to document things sufficiently that these guys become more sackable. Contractors of course have to be highly experienced as well, and well-remunerated, so maybe management doesn't want to go down that expensive avenue, which means it's a management problem.

Not to mention the years where they did not comment, help, document or whatever, and became uncontrollable sounds like a management problem over those years.


Your first suggestion might work, but having been there I know obnoxious programmers can be very obstructive, and put up some major barriers to newbies. That said, I think yours is a good solution if it can be afforded.

Per your 2nd point about long-term management problems, no doubt of it at all, but sometimes the new (and sometimes good) management simply inherits what previous mismanagement left behind.

Also perhaps you underestimate the power that programmers in well-bedded-in positions have. They can outright ignore management orders - experience speaking.

But a good post nonetheless, thanks.


The other side of the coin is that new hires can be very obstructive. 80% of new hires here are relentless politicians who snitch to management that they are "underappreciated" while making one idiotic suggestion after the next to show their relevance.

In reality they cannot do anything, so they scheme to get rid of people who can. During the battle no useful work is being done.


not that that doesn't happen, but in the context of the thread here, I imagine it wouldn't happen (or be more rare). You're needing to hire senior COBOL experts, and the 'senior' part is where the sort of behavior you're describing doesn't happen as much. Usually it's a mixture of confidence in your own ability, a dislike of politics, and an assurance that you can get work someplace else if/when you decide to leave.

The people that 'snitch' and feel 'under appreciated', and so on... my experience is they have trouble keeping work, are afraid of being found out, and do what they can to get rid of others who can recognize their lack of skills. I just don't think you'd find as many of those getting hired in the context of the needs of this thread.


I'd like to back up what you say because I wasn't clear enough in my first post - I've known a few people like that, a very few. Most are good and do their best. Despite the impression, idiots as I described are very much a minority.


If you're hiring people like this, I don't think old code bases are your biggest problem.


If you hire people like this you need to involve HR in better selecting who the org hires.


> if you have 3 guys that are the wizards that can't be sacked

Don't forget to also sack the management that looked the other way for years while this situation got where it is.

The zero asshole rule is non-negotiable.


Maybe if job security were a real thing, those guys wouldn't create obstructions to guarantee it. If employees don't get loyalty from management, why should the street go one way?

The problem is that management doesn't care about the human cost of their decisions and it causes technical problems.


In the real world this approach will most likely lead to getting 6 indestructible wizards instead of 3. Magical staff that needs to be supported by wizards should be shipped to Hogwarts and replaced.


Management that allows one or a few persons to become that indispensable is a management problem.


I propose that most of the code written in the old days, in COBOL in particular, was not written by computer scientists. There were many lanes for learning COBOL, and they included community colleges and industry training courses. The amount of "computer science / software engineering" concepts that were widely known (or even possible) in those days was very limited. 'Re-engineering' the code is a valid operation, but the amount of code is extreme, and without a deep testing regime it is a dangerous journey. So we keep on keeping on, the same way.


I believe COBOL was invented prior to the first computer science department in the US, never mind CS degrees becoming a common thing.


This is true. Back in the day, COBOL was marketed to business people as an easy-to-learn way to just get stuff done. Serious programming was done with FORTRAN.


Actually, FORTRAN was marketed to scientists, not for "serious programming" which was still done in assembler.


Which is why, even today, someone's Jupyter notebook of data-science Python is sometimes calling into FORTRAN libraries under the hood.


still true today. (not really, but there is a lot of f77 out there...)


I don't know anyone who uses f77 anymore, but f90 and up is still popular in scientific research.


Now imagine 30 years from now when you are going to have to track down the documentation for version X of the web framework for version Y of some language and version Z of frontend framework.


> Now imagine 30 years from

For the javascript ecosystem this is already true for projects that are > 2 years old.

Not kidding, try to build a 2 year old React web application, you'll see what I mean.


TBF, part of the problem is that dependency locking only caught on recently. (Yarn came out in 2016.) That makes it more possible.


So... don't do that.

There are plenty of options that have a reasonable probability of being stable.

(Also, consider committing the docs right in to the repo. In the 1970s such an idea would have been absurd. Today, a lot of my projects technically already have this, thanks to vendoring and docs embedded into the programs themselves.)


The best documentation is getting the business logic mapped out, plus the code itself, along with the data mapped out - what does what to it, when and how. Any code documentation will be out of date in one way or another; even at the best sites it will be a case of getting that documentation and then mapping a few decades' worth of change management, bug tracking and other avenues that modified that code.

I will say though, on every migration project I worked on, the documentation was carefully worded in the contract as being the customer's liability; with that, code gets migrated logic for logic, bug for bug, with testing so anal that it will show that what goes in comes out the same in the migrated code. The business documentation will still be good, even much of the code documentation if it's high-level enough, but the code itself will be the best documentation.

Until we get a standard in which the documentation produces the code, and all changes are made to the documentation rather than by quickly hacking the code, the documentation and the code will always be adrift from each other.

So you see many bespoke solutions that go through the code and produce documentation from it, with varying levels of success; what works for one site's quirks in code may not work as well for another's.

Hence even the best documentation is wisely treated with a pinch of salt; in many instances it's like comparing a book to the movie it spawned - some are close to the original, many not even close. That is documentation and code in a nutshell.

Always best to map the data first - that can be done more easily and with more automation, especially with databases, by generating a schema - and then map what code talks to what and gradually get to see what is happening.


> It would be very interesting to see the old systems rewritten in a modern language, with modern engineering practices, but keeping the old UI and UX (which often is incredibly ergonomic) - so as to limit scope and not mess it all up by trying to introduce mouse navigation and windowing mess.

I was the tech lead on one of these projects. Personally I was sad and frustrated we had to keep the old UX/UI. I would much rather have made something more ergonomic for our users. Alas, retraining would have been too expensive, even though the result would probably have been more intuitive.

I do agree with you that there is some benefit to being able to do everything on a keyboard without having to deal with the baggage of what we consider modern.


At some point on the web, "ergonomic" went from meaning something to meaning all form and minimum viable function. Efficiency just isn't a real word in the UX/UI vocabulary when it comes to complex entry.

30 years ago, those of us on green screens at Big Org knew more shortcuts than any emacs user. Now whenever I have to use a “modern” CRM it’s the most anti-productive aspect of my job.


Web UI is optimized for a user who can close the page at any time if he doesn't like something. You have 100 competitors and the user can flee to any one of them.

Enterprise UI is optimized for the speed of trained personnel. They can't close the page if they don't like it; they are paid to do the work.

These are completely different situations, and when someone mixes them up, it leads to bad UI.


I think it would make a lot more sense to pay COBOL developers at the same rate as other software developers. The era where we had analysts writing pseudo-code so clear that it could simply be "coded" into COBOL is over, if it ever really existed in the first place. In addition, paying COBOL developers more is so obviously less expensive than trying to completely re-write all of that legacy code.

It is likely true that many people would not be excited to learn COBOL. That said, I do think there would be a good amount of developers, perhaps new to the field, that would be willing to write the code in return for a fair wage and the work experience. But this attitude that COBOL code is somehow worth less than JavaScript code needs to be worked out of the system. It simply is not true and it is clearly doing harm.


Reverse engineers can manage to stare at assembly and figure things out. Every programmer can learn the skills of staring at really fucked up representations of logic and eventually figure things out. It's definitely slower of course.

Now is definitely not the time for a rewrite. As much of a trap this seems to be, it's actually the best move given the circumstances.

I may actually take IBM up on this since my father was a COBOL programmer, but I wouldn't plan to make a career of it.


Other than constraints, most of the issues you raise are just as likely to happen in modern languages as well. That comes down to how the team was managed. If anything, I have found a lot of older code over-documented. My difficulty when I had to assist in migrating some old COBOL from CICS on a zSeries was understanding their file structure techniques. However, that was easily remedied by understanding the task at hand so that the data was better represented on the new system.

Is it a trap? Well if you want a secure position in managing a code base and maybe eventually working with others to move it to a new platform I do not see how. I have been around enough new languages to know we are always going to run into code bases we just want nothing to do with but here we are.

The problem to me is you may land in a development shop that is not well maintained. The code has worked for so long that management outside the department just assumed that everyone knew everything.


I completely agree. I work with a large APL codebase and the main problem is not the language but the culture. Overly long expressions, one-letter variable names, gotos etc. make the code obfuscated.


I was the team lead to rewrite a large APL codebase with a very small team as part of a much larger group that refused to change and I'll agree with this. Once we started documenting and building tests for the code that replaced APL it became clear that the complicated bits were easy to replace and the hard part was just man hours converting all the conditional logic.


And APL requires supreme discipline to prevent that from happening.


> one-letter variable names

Just like in math!


Except in math, the notation is surrounded by natural-language prose which carries the brunt of the semantic load.


It’s not a trap, just IBM PR.

The legacy stuff is usually fine, it’s the layers of middleware scaffolds around the mainframe.

Mainframe jobs are 90% batch, so even under stress, it can handle it. Your circa-2002 scaffolds are the problem.


But who would rewrite it for the low salaries? Most people I know are not interested in low-level correctness at all; the open source C++ projects on GitHub with a high churn rate always have tons of corner-case bugs.

If you don't get the right people, the rewrite would be an overengineered object oriented nightmare with an endless stream of bugs.


> modern engineering practices

Heck, I’d be willing to settle for seeing modern software languages used with modern engineering practices.


Absolutely (and I say that as an erstwhile mainframe COBOL programmer). The language itself is dead easy.


The language isn't dead (and doesn't have to be). If anything, it must be taught as part of a programming language design course. In a way COBOL is similar to SQL which is far from dying. (IMHO it would have been useful if they had merged into one language; the available embedding of SQL into COBOL was kinda clunky.)


The person you're responding to didn't claim that COBOL is dead. He/she wrote that COBOL is "dead easy", meaning "very easy".


> The language is not the problem - lack of comments, bad variable naming, bad structure (little or no procedures or readability), and just sheer volume of it, is.

I wonder what basis, evidence or data you might be using to make this assertion. You are assuming quite a bit there as well as generalizing all problems across all affected systems to have your list of issues as the root cause.

Could it be that nobody bothered to maintain and modernize these systems because spending more money on software that "works" isn't going to earn anyone in government points? Government and politics have metrics and fitness functions that do not align very well with the real world (anything outside of government or large stagnant companies).

And yet, at the same time, have you looked at open source libraries lately? The phrase that comes to mind is: rotten smelly stinking mess.

I just had to deal with one of those a few weeks ago. No comments, horrible code structure, massive class hierarchies, just awful stuff. The complexity and thickness of the interface they created was astounding. We re-wrote the entire thing in about thirty lines of code. Yeah. A massive multi-source-file library got boiled down to just a handful of clean code, no classes, just clean, simple and easy to understand code with comments anyone could understand.

I know it might be difficult for modern programmers to understand the kinds of constraints software developers had to work with in the '80s or before. A simple example of this would be single character variable names. When you only have a few thousand bytes of memory and you are working with an interpreted language, variable names consume memory you desperately need, not to mention CPU cycles. So, yes, people resorted to using single character names to conserve memory and improve execution time. Context is important.

I took one semester of COBOL back in the dark ages, FORTRAN also. Thankfully I never had to use them professionally. I started professional life using APL, C and FORTH. I realized, years later, how lucky I was to have been shoved into that path by a physics professor who insisted I veer away from COBOL/FORTRAN and take his APL class.


lack of comments, bad variable naming, bad structure (little or no procedures or readability), and just sheer volume of it

This has nothing to do with technology and everything to do with people. For all the progress we've made with technology in the past 50 years, we've made little or no progress with any of the items in this list.

The second biggest problem building software has always been programmers too small for the task at hand.

The biggest problem has been managers who are even smaller.

So don't blame COBOL or any other technology. Fix the people and you can build anything excellent from almost any tech.


> This has nothing to do with technology

Oh, it has everything to do with technology. Wouldn't want to waste that perfectly serviceable punch-card by punching a * in column 7.


What always messes it up is not introducing the mouse or any other technical thing - it's your keyword, "introduce" ...

What do you think it takes to get the proposal off the ground? "Just reimplement granddad's system" is not it, I am afraid.

Saw that so many times ...


Anyone familiar with current node/npm/front-end web dev would have very little trouble dismantling the false notion of implicit chronological progress underlying this argument.



