
If an old timer who used to be good with C wanted to use C again, would they have to learn a whole bunch of weird new stuff or could they pretty much use it like they did back in the stone age (i.e., the 20th century)?

Back in the '80s and '90s I was pretty good at C. I don't think there was anything about the language or the compilers that I did not understand. I used C to write real-time multitasking kernels for embedded systems, and device drivers and kernel extensions for Unix, Windows, Mac, NetWare, and OS/2. I did a Unix port from swapping hardware to paging hardware, rewriting the process and memory subsystems. I tricked a friend into writing a C compiler. I could hold my own with the language lawyers on comp.lang.c.

Somewhere in there I started using C++, but only as a C with more flexible strings, constructors, destructors, and "for (int i = ...)", and later added STL containers to that.

Sometime in the 2000s, I ended up spending more and more time on smaller programs that were mostly processing text, and Perl became my main tool. I also ended up spending a lot of time helping out less experienced people at work who were doing things in PHP, JavaScript, or Java. My C and C++ trickled to nothing.

I've occasionally looked at modern C++, but it is so different from what I was doing back in the '90s or even early '00s that I sometimes have to double-check that I'm actually looking at C++ code.

Is modern C like that, or is it still at its core the same language I used to know well?




I'd put it this way -- as someone who writes both C and C++ and has for a long while, I find that the difference between "best practice" C89 and C17 code is not as wide as the difference between "best practice" C++98 and C++17 code. However, this is subjective and may be specific to what kinds of projects I work on, so YMMV.


C17 doesn't look much different than C89. If you are used to K&R C there may be some adjustment but I would expect it to be manageable.

What might perhaps be more challenging is adjusting to the changes in compilers. They tend to optimize code more aggressively, so writing code that closely follows the rules of the language (rather than making assumptions about the underlying hardware, even valid ones) is more important today than it was back in the '80s.
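
A sketch of the kind of thing I mean (my example, not a universal rule): a signed-overflow check that "worked" on two's-complement hardware in the old days can be silently deleted by a modern optimizer, because signed overflow is undefined behavior per the language rules:

    #include <limits.h>

    /* Looks like a sane wraparound check on two's-complement
       hardware, but signed overflow is undefined behavior, so a
       modern compiler at -O2 may assume x + 1 > x always holds
       and reduce this to 'return 0;'. */
    int will_overflow(int x) {
      return x + 1 < x;
    }

    /* The portable version follows the rules of the language
       instead of assuming hardware behavior: */
    int will_overflow_portable(int x) {
      return x == INT_MAX;
    }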


Given the above, it is worth pointing out that compilers are also much, much better at verification and useful warnings/errors. Back in the (very old) days, there was motivation to cut checking out of PCC (the Portable C Compiler) and give birth to lint as a separate application, because cutting compilation time was the greater priority. The current trend is the complete opposite: compilers ship increasingly powerful built-in static analyzers and sanitizers by default.

I think the lack of powerful tools in the 1990s-2000s contributed to the perception that C is 'difficult' in terms of safety. However, things have moved on.
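
For a concrete illustration (my sketch): a memory bug that an old toolchain would accept silently, and the modern invocations that catch it:

    /* use_after_free.c -- compiles without complaint, but modern
       sanitizers catch the bug at runtime. */
    #include <stdlib.h>

    int main(void) {
      int *p = malloc(sizeof *p);
      *p = 42;
      free(p);
      return *p;   /* use-after-free */
    }

    /* Typical modern invocations:
         cc -Wall -Wextra -fsanitize=address,undefined use_after_free.c
       AddressSanitizer reports a heap-use-after-free with a full
       stack trace; clang also ships a static analyzer, e.g.
         clang --analyze use_after_free.c */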


As additional info,

> Although the first edition of K&R described most of the rules that brought C's type structure to its present form, many programs written in the older, more relaxed style persisted, and so did compilers that tolerated it. To encourage people to pay more attention to the official language rules, to detect legal but suspicious constructions, and to help find interface mismatches undetectable with simple mechanisms for separate compilation, Steve Johnson adapted his pcc compiler to produce lint [Johnson 79b], which scanned a set of files and remarked on dubious constructions.

-- https://www.bell-labs.com/usr/dmr/www/chist.html



The main editing needed to bring "old C" source code up to snuff using a "modern C" compiler is to make sure that the standard header-defined types are used. No more assuming that a lot of things are, by default, int type. A second, related editing pass is to make sure all functions are declared as prototypes, no longer K&R style; K&R style is slated to be deprecated by the next version of the Standard. (There are some rare uses for non-prototyped functions, but evidently the committee thinks there is more benefit in forcing prototypes.)
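
A sketch of both editing passes on a toy function (the function and its types are illustrative, not from any real codebase):

    /* Old style: implicit int return, K&R parameter list,
       widths assumed from the platform. */
    sum(buf, len)
      short *buf;
    {
      int i, s = 0;
      for (i = 0; i < len; i++)
        s += buf[i];
      return s;
    }

    /* Modernized: a real prototype, explicit return type, and
       fixed-width / size types from the standard headers. */
    #include <stdint.h>
    #include <stddef.h>

    int32_t sum(const int16_t *buf, size_t len) {
      int32_t s = 0;
      for (size_t i = 0; i < len; i++)
        s += buf[i];
      return s;
    }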


So the ISO committee breaks the backward compatibility of C on behalf of modernity... but hey, there's always C++, guys!

A little more effort and you could make C itself deprecated. ;-)

This makes me think there are as many C++ gurus as Go(ogle) gurus who want to kill C, each hoping their language becomes the new Java, the kind that brings you a bad coffee from a dirty kitchen.


> but evidently the committee thinks there is more benefit in forcing prototypes

Why?

Consider the following code:

    LetsReconsiderPriorities(n, A)
      int n, A[n];              /* K&R-style definition, VLA parameter */
    {
      return A[n + 1];          /* out of bounds: valid indices are 0..n-1 */
    }

    main() {
      static int A[1];
      return LetsReconsiderPriorities(1, A);   /* overflow: reads A[2] */
    }
Can anyone guess what clang/gcc complain about? They complain about the K&R syntax, yet say nothing about the buffer overflow. Thanks to modern arrays, the overflow clearly contradicts the stated intentions of the program's author. So why aren't compiler authors focusing on that, rather than showing warnings that I'd say rightfully belong in lint? Note: the same is true with -Wall, [static n], and even [static 1] (see the sketch below): the compiler complains about language style and ignores the real bug.
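
For readers who haven't seen the [static n] syntax mentioned above, here is the prototyped variant of the same function (my transcription); per the note above, the overflow still goes undiagnosed:

    /* C99 'static' in an array parameter promises the caller
       supplies at least n valid elements: */
    int LetsReconsiderPriorities(int n, int A[static n]) {
      return A[n + 1];   /* still reads past the end */
    }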

I would estimate that roughly 17% of the issues / pull requests that get filed against C language projects are due to these linter errors that have accumulated in compilers over the years, based on a quick glance at STB: https://github.com/nothings/stb/issues?q=warning gives (29 + 132) / (156 + 794). It's a big obstacle to sharing C code with others. It'd be great if the C Language Committee could ask compiler authors to remove all these distracting warnings like "unused parameter" now that we have amazing tools like runtime sanitizers that deliver real results.

Also, have we considered addressing the prototype problem with the freedom to choose an ILP64 data model instead? How much do prototypes honestly matter in that case? DSO ABI compatibility might be an issue for Linux distros, but it doesn't concern all of us. I'm also not terribly concerned about 64-bit type promotion, since 16-bit is usually what fast DSP code wants, and the language today doesn't make that easy.
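
To make the ILP64 point concrete (my example): the classic hazard of calling a function without a prototype is a type-width mismatch under the default argument promotions, and ILP64 would mostly dissolve it:

    long f();          /* non-prototype declaration: args get default promotions */

    long g(void) {
      return f(1);     /* 1 is passed as int; f may expect long */
    }

    /* Under LP64 (int 32-bit, long 64-bit) this is a genuine width
       mismatch and formally undefined. Under ILP64 (int == long ==
       64 bits) the promoted int and the expected long are the same
       width, so the hazard largely disappears. */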

Lastly, consider that C was designed at a research laboratory. If there's one thing researchers love to do, it's what I like to call "yolo coding", which is a perfectly valid use case: getting experimental / prototype code written without caring too much about language formalities and best practices. It'd be great if the standards committee acknowledged that as a legitimate use case (similar to how "high level assembler" is explicitly acknowledged), because future revisions of the language should ideally maintain as much of the original intent as possible. See also: https://www.lysator.liu.se/c/dmr-on-noalias.html (Note: I think dmr goes too far here, but it's an interesting bit of history to think about, now that everything that isn't char is implicitly noalias!)

In other words, let us choose. Please don't force us.


I'm sort of in the same boat, although I didn't do as much C. (And my interest in getting back into it is more hypothetical.)

Aside from understanding how the language itself has changed, maybe something else to put on the list is how to apply more modern programming practices in C.

In the 90s, I don't think I ever saw C code with unit tests. Any kind of automated testing was pretty rare. I've become convinced that testing in some form is a good thing. If I were going back to C, I'd want to understand the best way to go about that.
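
If I were doing that today, a minimal, framework-free sketch like this would probably be my starting point (the clamp function is just an illustrative stand-in; there are also dedicated frameworks like Check and Unity):

    /* test_clamp.c -- assert-based unit tests; the binary exits
       nonzero on failure, so any build/CI script can run it. */
    #include <assert.h>
    #include <stdio.h>

    static int clamp(int x, int lo, int hi) {
      return x < lo ? lo : x > hi ? hi : x;
    }

    int main(void) {
      assert(clamp(5, 0, 10) == 5);
      assert(clamp(-3, 0, 10) == 0);
      assert(clamp(99, 0, 10) == 10);
      puts("all tests passed");
      return 0;
    }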

People also didn't care (or know) much about security back then. C has some obvious pitfalls (buffer overflows, etc.), and it is pretty important to know good ways to minimize risk. I'd want to understand best practices and techniques for this.
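
For instance, one habit I'd want to nail down (my sketch; 'greet' is a made-up example function): preferring bounded writes over the classic unbounded string functions:

    #include <stdio.h>

    void greet(const char *name) {
      char buf[32];

      /* Risky: strcpy/strcat/sprintf write with no bounds check,
         so a long 'name' overflows buf. */

      /* Safer: snprintf never writes more than sizeof buf bytes
         and NUL-terminates (when the size is nonzero); a too-long
         name gets truncated instead of smashing the stack. */
      snprintf(buf, sizeof buf, "Hello, %s!", name);
      puts(buf);
    }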

Also, back then build tools were very simple, and some of them were not my favorite things to use (Imake, I'm looking at you). Build tools have advanced a lot since then. Features like reliable, deterministic incremental builds exist now. Some things could be less tedious to configure and maintain. There are probably best practices and preferred choices in build tools, but what exactly they are is another thing I'd want to know.

These are probably not questions that necessarily need an answer from people whose expertise is the language itself, though, so I guess this is a tangent.


People did know about security back then; it was one of the driving design factors of the Burroughs B5000, created in 1961, whose line is still sold by Unisys as ClearPath MCP for highly secured deployment environments.

And there were plenty of security-related papers and OSes from other companies like IBM, Xerox, and DEC.



