Learn C The Hard Way (learncodethehardway.org)
220 points by llambda on Jan 15, 2012 | 62 comments



I recently started working through Learn C the Hard Way, and after doing a few chapters I wrote down what I liked about it in a notebook. Digging it up, here it is:

"Why I like Learn C the Hard Way:

- Opinionated. I think opinionated textbooks are great because they limit their scope and focus on something. Rather than being an authoritative reference (who uses text references anymore?), it's a framework for learning.

- Emphasizes reading and editing which contributes to overall understanding

- It's lean. Goes with opinionated, but it's nice that it doesn't repeat what's been done, but sends you there directly, i.e. don't waste time writing about "strcmp", send me to "man 3 strcmp" since I need to get familiar with it anyway."

Disclaimer: I'm a bit new to low-level coding, so feel free to point out why these reasons for liking the text might be naive.


I too just started reading this one.

For me, my first foray into 'low-level' programming was during my freshman year in a CS101 course that used C++. That first course on its own wasn't too bad, but the subsequent 'introductory' courses for more C++ and Data Structures + Algorithms were just terrible. I couldn't absorb anything useful from them because I was just overwhelmed with the constant bombardment of Segmentation Faults everywhere. Poor teachers + high learning overhead as a result of using a low-level language just made me want to stray as far away as possible from ever using a 'C-language' again. If it wasn't for me discovering Python around the same time, I probably would've just given up and pursued music or audio engineering instead.

Fast forward to last year, having now learned a ton more languages, I decided to go back and start reading K&R. I didn't manage to get through a whole lot, but from what I read, I liked it. The problem was that I didn't really feel motivated to keep plowing through it. Looking through the snippets of code in it, more and more I began wondering if some things were just outdated practices, or if they were in fact crazy idioms that I would have to get used to when dealing with C. That doubt (as silly as it might sound) did put me off a bit. The explanations in the text were top notch, but again, the code just didn't compel me to type it in like I knew I should.

Now I started reading LCTHW just a few days ago, and I have to say, it is quite awesome. It's a lot more minimal and sparse than I'm used to for a book, but I have been seriously surprised at how effective that method is. At first I was a bit annoyed that a lot of parts were basically saying 'just google it', cause I look to books to present me with a more coherent explanation of things than I can find scattered around the web or in arcane documentation. But as I went on with it, I noticed that the things he makes you look up on your own are generally specific enough that it makes the task of filtering out all the nonsense much easier than if you were researching it on your own. I quickly got into the groove of the book, because it just builds your confidence that you can be pretty self-sufficient in a new environment, even at the early stages of your learning. Combine that with his 'code first, explain later' format, and that book is surprisingly motivating. In just 2 days, I went through the first 17 chapters (currently stuck on the extra credit for the Database exercise), and I just want to keep on going. I think this is the first time I've ever experienced this level of enjoyment and drive from a programming tutorial. I have no idea why this is, but it's damn impressive. I can't wait to get to the K&R portion at the end.

Thanks Zed!


> I noticed that the things he makes you look up on your own are generally specific enough that it makes the task of filtering out all the nonsense much easier than if you were researching it on your own.

That's my trick. I actually go googling and make sure that it's something you can find easily with a little nudge. Part of the goal of my books is to teach basic research skills so you can survive on your own. Glad you got that.

And, you're welcome. I'm still working on it, but feel free to fill out comments with problems you hit.


Hmm, I've always viewed K&R as kind of a paragon of C perfection -- I first learned C from K&R when I was 16 or so and have used it as a trusty companion to systems C programming all the way through grad school. Not only do I view it as a good book to learn C, I also view it as one of the best computer science textbooks ever written, partially because it manages to encompass so much of the C and UNIX style along with the programming language itself.

So I guess my question is: why not K&R? Learn Python and Learn Ruby always made sense to me because there's a serious lack of definitive texts on learning those languages, especially if you've never programmed before. However, in my opinion C is not the best first language and there already exists a fairly definitive text on it. So I would love it if someone could let me know what I'm missing.


Your question is addressed by the author at the end of the book:

http://c.learncodethehardway.org/book/learn-c-the-hard-waych...


He brings no pedagogical issues to bear; it's simply a facile critique of "style". I don't think that answers the question of why not K&R at all.

Some may consider the points well taken; not surprisingly, K&R had the foresight to respond in kind two decades earlier:

Our aim is to show the essential elements of the language in real programs, but without getting bogged down in details, rules, and exceptions.


Wrong, I make a clear example of the copy() function being broken, give a demonstration of fuzzing it to break it, and show how to do it yourself. And, if you think the copy() function is valid, then you also think the strcpy() function is valid, and therefore you don't know what you're talking about. Everyone who is aware of secure C coding knows strcpy() is buggy and the cause of most buffer overflows.


Dude, stop saying it's broken.

For higher level languages where we have intelligent string objects, yeah, bounds checking is assumed--but this is something originating in assembly-level stuff. If you call it with broken memory, of course it won't work correctly.

You're doing a good job spreading knowledge--don't spread misinformation.

> Everyone who is aware of secure C coding knows strcpy() is buggy and the cause of most buffer overflows.

If strcpy() was truly buggy and unpredictable in its implementation, it wouldn't be nearly so useful as an attack vector. Be accurate--strcpy() is unsafe, not buggy. Sheesh.


Zed's Twitter killfile must be a mile long if this got me into it: https://gist.github.com/9efed904a1ea0902c7c7/213de692d29fdde...

He's right though. This really is a silly argument. Your life will almost certainly be better if you forget strcpy exists.

Really, C strings are broken in general. Computing their length is O(n) when it could be O(1), and a missing null terminator results in undefined behavior (a crash if you're lucky). Worse, copying an oversized string into an undersized buffer results in a buffer overflow, which gets you a nice mention here: http://www.kb.cert.org/vuls.
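A minimal sketch of both failure modes (the buffer names and sizes here are invented for illustration):

    #include <string.h>

    int main(void) {
        char dst[8];                       /* room for 7 characters plus '\0' */
        const char *src = "definitely longer than eight bytes";
        strcpy(dst, src);                  /* writes past the end of dst: buffer overflow, undefined behavior */

        char junk[3] = {'a', 'b', 'c'};    /* no '\0' terminator anywhere */
        size_t n = strlen(junk);           /* scans past the array looking for '\0': undefined behavior */
        (void)n;
        return 0;
    }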

strcpy is about as safe as a hand grenade: you'll be fine if you know what you're doing, but God help you if you don't!

Realistically, if you're reading "Learn C The Hard Way," you don't know what you're doing yet. Don't use strcpy.

For that matter, you're human. You screw up sometimes. We all do. Don't use strcpy without a damn good reason.

Is there ever a legitimate reason to use strcpy over strncpy?


No, it's defective and buggy. You can't prove logically that the while loop inside will end given any random input. An implementation with a length will end given any input. That is a bug. If you wrote code like that then I'd flag it as a bug, so how is it that strcpy is somehow reclassified as "unsafe" but yeah totally not a bug?

It's so poorly designed that it should be considered wrong, buggy, defective, unsafe, and removed. To go around saying "it's totally alright" when even Microsoft, bringer of more virii than a whore house, has deprecated it:

http://msdn.microsoft.com/en-us/library/kk6xf663%28v=vs.80%2...

Is idiotic thinking. Go ahead and come up with all the distinctive classifications you want, it's a defect to use strcpy because it's buggy, and copy() in that code is the same.


> You can't prove logically that the while loop inside will end given any random input.

C is all about layers, and at the bottom you just assume that "all input given to this function is safe". If you're passing random input (without any error checking) to pretty much any of the built-in functions, You're Doing Something Wrong.

Whether you call strcpy/copy buggy or unsafe doesn't really matter. The implementation on all platforms pretty much follows the standard, with its well-known issues. Sometimes it's the right tool for the job; sometimes not.

It's also important to remember that strcpy_s doesn't just magically solve all your problems. Someone might think that this code is safe:

    strcpy_s(dst, 10, src);
But if dst is actually shorter than 10 bytes, you'll still have a problem.

Going from ASCIIZ to length-encoded strings isn't something you can just do in the middle of a function; it requires changes all over the place. K&R was written with ASCIIZ and low-level programming in mind. There's nothing inherently wrong about this; it has both its advantages and disadvantages. Your book is written with length-encoded strings in mind (which I think is the best way to teach today).
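For illustration, one common shape for a length-encoded string in C (a generic sketch, not the representation any particular book or library uses):

    #include <string.h>

    /* The length travels with the data, so there is no scanning for '\0'
       and no way to "forget" the terminator. */
    struct lstring {
        size_t len;    /* number of bytes in data */
        char  *data;   /* need not be '\0'-terminated */
    };

    /* Copy at most dst_cap bytes; the caller always knows the destination capacity. */
    size_t lstring_copy(struct lstring *dst, size_t dst_cap, const struct lstring *src)
    {
        size_t n = src->len < dst_cap ? src->len : dst_cap;
        memcpy(dst->data, src->data, n);
        dst->len = n;
        return n;
    }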

I love the concept of this chapter, and I pretty much agree with your criticisms of strcpy/copy, but suddenly you go from "this code has issues" to "this code is bad, bad, bad; never use this code; only silly people would write this code" and so on (at least that's how I feel when I'm reading it). I think you should place more emphasis on "this code was written to only work with certain input; look how badly it performs under fuzz-testing; see how easy we can fix it!".


I'm going to have to side with Zed here. If I'm teaching an introductory class on Chemistry, you'd better believe that when I reach the section on cyanide I'm going to tell students: "bad, bad, bad; never use this chemical"! If those introductory students were to take a more advanced class, then I would probably tell them: "well, ok, cyanide isn't going to kill you instantly and is actually really useful for a wide range of applications".

Part of being a good teacher is recognizing that there are limits to how much you can expect a student to learn at a given level, and then making sure their knowledge is as "complete" as possible within those limits.


> If I'm teaching an introductory class on Chemistry, you'd better believe that when I reach the section on cyanide I'm going to tell students: "bad, bad, bad; never use this chemical"!

I agree. So would I. However, what Zed is doing in the last chapter is showing code written by other people. If you taught your students about an experiment done by other (widely regarded) researchers, would you say "bad, bad, bad; they should never have used these chemicals"?

I would say: "See, they used it here, but only because they were very, very, very careful. Let's explore different ways this breaks down. … As you can see, I'd recommend you to not do what these people did."


Gah, that's horrible. :(

Teaching people (especially in natural sciences!) to immediately disregard something, just to be safe, is not conducive to learning. Sure, at later levels, you can teach them the truth of things, but what if they don't get there?

Then you have a bunch of people freaking out over incomplete information that they learned in school, and influencing dumb policies (e.g., hooplah about nuclear plants).


So, how does one of the "safe" string-copying functions work safely given "any random input"?


By requiring that the length of the string be given as one of the arguments, just like strlcpy does.
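Roughly what that looks like in use -- strlcpy is a BSD extension rather than ISO C, so this assumes your platform or project provides it, and src is whatever C string you happen to be copying:

    char dst[64];
    size_t needed = strlcpy(dst, src, sizeof(dst));  /* writes at most 64 bytes, always '\0'-terminates */
    if (needed >= sizeof(dst)) {
        /* src didn't fit; dst holds a truncated copy -- decide whether that's acceptable */
    }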


And how are you going to make sure that the length specified is correct?


This is a borderline straw-man argument. Size (length) is one of the few things that you must be very aware of and control when programming C. You cannot "malloc" without knowing the length to malloc. You cannot create an array on the stack without knowing its size. In each case, you may not use the full memory allocated, but at least you can set an upper bound on the memory that you own. Thus, however you created or ingested the string to copy (and the memory to copy it to), you will have an upper bound on how much memory it is safe to copy.


This is about maintaining invariants, and I interpret the "any random input" from the parent question as "input that breaks the preconditions the function expects of its inputs."

With str* functions the assumptions are that the string is null-terminated and stored within a memory block of large enough size.

When providing string+length, "any random input" means that, e.g., the length may be arbitrary; maybe it became garbage as a consequence of a series of unfortunate integer overflows (when did you last check for those when checking that len1+len2 < buffer_size?).
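A sketch of the wraparound being described here (the names are invented; assume len2 arrives from untrusted input):

    /* Looks safe, but if len1 + len2 overflows size_t it wraps to a small
       number, the check passes, and memcpy writes far past the buffer. */
    if (len1 + len2 < buffer_size) {
        memcpy(buf + len1, b, len2);
    }

    /* Rearranged so no addition can wrap: */
    if (len1 < buffer_size && len2 < buffer_size - len1) {
        memcpy(buf + len1, b, len2);
    }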

So, how DO you write a string-handling function that can safely handle ANY random input?


THIS!

It would've been nice if C came with real string support, but it didn't. Instead, we're stuck mucking around with character arrays. All the built-in functions expect null-terminated strings, but many functions don't guarantee they'll generate these strings in all cases.

Look at strncpy. If the string you're copying fills the destination buffer completely, the function won't write the null terminator; the resulting string will blow up several C standard library functions.
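A sketch of that edge case and the usual workaround:

    char dst[8];
    strncpy(dst, "exactly8", sizeof(dst));  /* source fills the buffer exactly: no '\0' is written */
    dst[sizeof(dst) - 1] = '\0';            /* so terminate by hand, truncating if necessary */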

If you're using null-terminated strings, it is your job to make damn sure those strings are always null-terminated.

C is an unsafe language. Get used to it.


> It's so poorly designed that it should be considered wrong, buggy, defective, unsafe, and removed.

Look, think about the context here. Consider that the language is glorified assembly against a (somewhat) defined virtual machine.

The functional description of the routine is "Hey, so, look at this byte in memory and copy it to this other byte, leave if the byte copied was zero, otherwise increment both addresses and do it again."

There is nothing there that need be "proven"--it's all right there! If you have a sequence of bytes with a null, it terminates; if you don't, it doesn't. Boom. Done.
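The whole routine really is just that description. A typical textbook rendering (a sketch, not any particular libc's source):

    char *my_strcpy(char *dst, const char *src)
    {
        char *ret = dst;
        while ((*dst++ = *src++) != '\0')
            ;   /* copy until the terminator -- if there is no terminator, it just keeps going */
        return ret;
    }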

> You can't prove logically that the while loop inside will end given any random input.

It's not intended to be used with any random input. Zed, programs are not written to turn arbitrary data into what you want--that entire notion is absurd. You have to make assumptions somewhere. strcpy() et al. are written with the knowledge that those assumptions are being made elsewhere in the system. With that being the case, the routine can be written as short as is needed to perform its task.

> If you wrote code like that then I'd flag it as a bug, so how is it that strcpy is somehow reclassified as "unsafe" but yeah totally not a bug?

Perhaps because your bug detector is broken, and everyone else knows that tools can be dangerous and useful?

It is not a bug because it will do exactly what it claims to do, and nothing different--it is deterministic. It will copy bytes from one place to another until it hits a null (or segfaults). This is not a bug--the domain of behavior is well known. The implementation is correct and the algorithm is simple.

It is unsafe because if you are incorrect in fulfilling the preconditions for using it (i.e., your source buffer isn't null terminated and your destination buffer is too small) it may have undesirable behavior.

It is unsafe because in the real world you get strings from other libraries whose authors may or may not have done a good job of terminating them in all annoying use cases.

It is unsafe because in the real world you can have another part of the program screw up and corrupt the string pointer it is being given, landing it in the middle of the heap far away from a saving terminator.

~

There are plenty of very good reasons not to use strcpy() when you don't control the entire flow into and out of it. You don't have to complain about it being buggy (which it isn't, at least in the distro of libc I'm using) or defective (which it isn't, as it does exactly what the spec says it should do).

You could just point out these issues and then show how to wrap them. The use of strncpy() would be an acceptable substitute here, as another user has pointed out, as it addresses the safety concern.

~

Stop encouraging new programmers to be afraid (blindly and without reason!) of their tools, and stop setting a bad example for being pigheaded and imprecise in the language of software development.


I think he meant the use of strcpy() is buggy.


First, there's no need to be aggressive.

I don't really get what you're trying to say. I would understand if you said that strncat is broken (since it can output non \0 terminated strings, unlike the non-standard strlcat, breaking the principle of least astonishment) but I fail to see the problem here.

It's just garbage-in/garbage-out: strcpy expects a C string, and the char array you give it is not a C string, so it fails by triggering undefined behaviour. The function has no way to detect this problem, so it's not even laziness.

Pascal strings embed the count within the string, but then you could forge a bogus Pascal string with an incorrect size and trigger the same kind of problem.


> First, there's no need to be aggressive.

He isn't aggressive, he simply is the inimitable Zed Shaw :)

> The function has no way to detect this problem, so it's not even laziness.

His point: therefore you should not use this function, but strlcpy instead. Of course, K&R has the excellent excuse of predating just about every secure alternative to strcpy.

> Pascal strings embed the count within the string, but then you could forge a bogus pascal string with an incorrect size and trigger the same kind of problem.

I don't think that you could easily forge such a string using Pascal standard functions, but last time I wrote Pascal there was neither syntax highlighting in the editor nor hard drive in my PC.

Anyway, strlcpy should be immune, because it will truncate whatever string of bytes to the required length, see:

http://www.manpagez.com/man/3/strlcpy/

It doesn't preclude an implementation error (properly a bug), of course.


There's a difference between 'difficult to use correctly' and 'broken'. It would be a good idea for you to learn it.


The charge of being excessively clever is, I think, spot on, and exactly points to pedagogical issues. I think a lot of the examples in K&R are more amusing to those glancing through it who already know C, than to the novice trying to learn it.


Of course the completed portion only includes two examples and they are both stupid. ;)

    while (...)
        if (...) {
            ...
        }
    if (...)
        ...;
"A quick glance makes it seem like this while-loop will loop both if-statements, but it doesn't."

How could you possibly conclude that, considering the indentation?


I've always been OK with no braces for one-line statements. But they have to actually fit on one line and not be another nested block. I recently found a style guide for JavaScript that recommended only leaving braces out for statements that could fit on the same line, e.g.:

    if (something) doSomething();
    for (...) repeatSomething();
What I will disagree with is that braces are completely free. I believe that code which is vertically succinct is a good thing. It allows you to see more context at once, which is particularly useful with well designed, well structured short methods and classes. I've always been opposed to excess white space, like the random blank lines within methods that programmers seem to put in arbitrarily with no rhyme or reason, other than their personal aesthetics of course; the same goes for spaces around arguments and for putting the opening brace on its own line (which I find terribly redundant, since anyone can see when a block has started).

I've always liked the K&R style aesthetically, but again I'll agree that Zed has found a decent example of where not using braces just because you can is taken too far.


I don't think it's intuitively clear to a non-programmer and maybe not even to a non-C-like-language programmer why that while loop encompasses the first if and not the second.


Great, thanks!

I'll admit that I don't think I've ever used K&R C code verbatim[1] so I never saw how outdated all the stylistic code was. I would never recommend this primitive style in modern projects. Thanks for your insight Zed!

[1] Really, if you start dealing with the complexities of C at all K&R will no longer do it. First you'll need to read the ANSI C specification to understand what's supposed to be going on, then you'll need to read your compiler's documentation to figure out how they have actually done what the specification says they should do.


Caveat: only scanned the structure and read the final chapter.

Regarding the final chapter: K&R, like most programming books (especially those introducing a new language), shows pedagogical code, not code meant for production use. I highly doubt it was ever intended as the equivalent of the modern-day "Code Complete"!

The value of LCTHW, apart from the intro and availability, is its unusual dissecting/analytical approach, which I welcome and am grateful for, and for this reason I will read it in its entirety.


Wasn't this same book on the front page of HN just a few weeks ago?

It just seems so random to see it appear again today. Was there a special release announcement that I'm not seeing?


The page that was on the front page a few weeks ago was just a response to K&R's C book from the author himself.


This should be titled: Learn C The Hard Way

It's for the C version, not the top level domain.

edit: title was fixed. Thanks!


OK. NOT TROLLING, but can I ask why anyone would want to learn C, other than to develop device drivers, kernel modules or other arcane software that is yet to be replaced by C++?

I asked this question of a younger programmer the other day, because it seemed to me, that to HIM, learning C was a rite of passage, and that he was less of a man for not knowing it.

I find macho philosophies in software development both amusing and counter-productive. (If you really want to be macho, become a Lisp hacker.) My amusement may be personal, but the counter-productivity of using C, when better abstractions (i.e. programming languages) already exist, is real. It creates fiefdoms and priesthoods that are counter-evolutionary and hard to maintain, and leads to the death of much software.

Personally, I would rather std::string be pored over by many eyeballs and evolved than change strcpy to _strcpy or strlcpy. Unless I am working for NASA on an embedded device for a satellite, I would rather Moore's law or SMP or DSMP give me the speed I need, than give up productivity to squeeze every last CPU cycle. Developer time is a lot more expensive than hardware (except on satellites and space stations).

Apologies if it is your ambition to work for NASA on embedded devices in space stations, I just think you may as well learn C++. You get most of C, plus some really useful and productive abstractions as well.

If you want speed, learn inter-process communication and the principles of symmetric multi processing. With C++, you also get to abstract away the problems of strcpy and strlen, replace Byzantine function references with class methods, and 35 parameter functions with polymorphism. Best of all, most of the programming world will still think you are manly if you know C++, so you get that too.


Not all of us see C++ as an improvement over C.

Sure, C++ has great advantages, but it also has great disadvantages.

I use C for many things not because of "macho", but because C is good enough. I can work around the disadvantages using standard techniques. The result is code that is easily FFI'able from any language, quicker to compile, has a smaller footprint, sane error messages, easily debuggable in gdb, etc.

Also, there are some advantages in the abstractions that C encourages you to use. For example, lack of templates pushed C programmers to discover intrusive data structures, and those are better in many ways than templated containers ala STL.

Another example: "virtual" methods may fail to override a base class method if you make a typo - resulting in potentially cryptic bugs. Using a simple macro in C:

  #define MAKE_VTABLE(prefix)  { prefix##foo, prefix##bar }
I can make vtables that are safer than C++, and require no more boilerplate than C++ code. The compiler will guarantee that I implement all of the required vtable methods.
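A rough sketch of how that pattern fits together (the shape/circle names here are invented for illustration):

    struct shape_vtable {
        double (*area)(const void *self);
        void   (*draw)(const void *self);
    };

    #define MAKE_VTABLE(prefix) { prefix##area, prefix##draw }

    static double circle_area(const void *self) { (void)self; return 0.0; }
    static void   circle_draw(const void *self) { (void)self; }

    /* If circle_draw were missing or misspelled, this line fails to compile --
       unlike a typo'd "virtual" override in C++, which silently adds a new method. */
    static const struct shape_vtable circle_vtable = MAKE_VTABLE(circle_);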

If I want to get speed via multi-core rather than pure uniprocessor speed, why use C++? I can use Haskell and get much easier and safer parallelism with many other advantages.


Thanks for your reply.

> Not all of us see C++ as an improvement over C.

I know, but you are the first I've encountered to quantify it.

> The result is code that is easily FFI'able from any language

I agree that this is a big advantage for interoperability. While it's possible to build an interface in C++ with extern "C", it means maintaining two call APIs rather than one. Still, I see this as a current limitation of C++ rather than an advantage of C.

> Using a simple macro in C:

Is generally a pain as the macro is expanded at compile time and causes difficulties debugging. Your point about VTABLEs is well taken however and I have encountered subtle bugs with virtual member functions in C++.

> If I want to get speed via multi-core rather than pure uniprocessor speed, why use C++? I can use Haskell and get much easier and safer parallelism with many other advantages.

You would need to enumerate those advantages for me to answer. Why then wouldn't you use Haskell instead of C?


For the same reasons he listed above, e.g. you want to use an FFI. This is not a temporary issue with C++ as you imply; there are really hard problems with the C++ programming model, e.g. exceptions and classes being really hard to interface with other languages that work differently.

The question is not why Haskell and not C; sure, if you can use Haskell, why not, but for most use cases you cannot. But most languages can be used instead of C++, Haskell or Java or whatever, with many advantages.


>> OK. NOT TROLLING, but can I ask why anyone would want to learn C, other than to develop device drivers, kernel modules or other arcane software that is yet to be replaced by C++?

Why does a medical doctor have to learn a little bit of Latin? After all, you should be able to find all the books and material you need in your native language or at least English.

C is to computing what Latin is to medicine. There's just so much history and important code written in C, that it's an important skill to master. Or at least know a little about.

These days you can spend your entire career in the comfort of a high-level language working on a virtual machine and make a fortune. However, if you don't know C (and the fundamentals of how computers work) you'll be standing on a foundation that you cannot understand, like living in a cargo cult.

If you're passionate about computing and software engineering, you'll have a natural interest in how things work under the hood. Learning C is almost mandatory if you want to see how deep the rabbit hole is. Knowing it will certainly help when exploring the intricacies of processors and operating systems.

Learning C++, on the other hand, I think is optional. I'm pretty seasoned with C++ but these days I prefer plain old C for low-level tasks. In theory, it may be possible to learn C++ without knowing C, but in practice it's not. If you try to write anything with C++, it will be inevitable that you'll have to interface with C code. std::string and std::map are very nice but there's a ton of tasks that require using a C API (maybe through a wrapper layer).

Embedded and other low-level tasks are often written in C because in order to run C++, you need runtime support for exceptions, static & global constructors, etc. Porting the C++ standard library is even more painful. C++ without exceptions has very little advantage over plain C.


> If you're passionate about computing and software engineering, you'll have a natural interest in how things work under the hood.

When I was 12, I learned 6502 assembly language. The computer I had did not have a C compiler, only BASIC or assembler (you could also enter hex into memory locations, which was hardcore). Learning assembly is actually a LOT easier than people make out. Assembly is basically mnemonics for the underlying machine code, plus named locations. Assembly has everything a language needs for Turing completeness. Assign, Add, Subtract, Compare, Branch. And THAT is how the computer works 'under the hood'. C is one level of abstraction above that, and while it's true that a LOT of software has been historically written in C, before C came along, most software was written in assembler or BASIC (which is older than C, for the historians).

> Learning C is almost mandatory if you want to see how deep the rabbit hole is.

Not really. I know what is going on 'down there', regardless of which language was used to compile the machine code. What is REALLY useful is to have really succinct, powerful, high-level abstractions to build software quickly and with as little code as possible.

> Knowing it will certainly help when exploring the intricacies of processors and operating systems.

If that's what you want to do, then fine. I'm too much of a utilitarian for that kind of exploration. I want to build stuff.

I think if you want to understand what's going on 'down there' in the 'rabbit hole', learn assembly language. There are only seven to ten basic instructions and you can learn them in a couple of days. You can build anything you want in assembly language, if you have an eternity to do it in.


> Assembly has everything a language needs for Turing completeness. Assign, Add, Subtract, Compare, Branch. And THAT is how the computer works 'under the hood'.

While what you say about Assembly is true, there's more to computer internals than doing arithmetic and control flow in the CPU. Virtual memory, DMA and IO are equally important, and the code that deals with that stuff is usually C code.

> > Learning C is almost mandatory if you want to see how deep the rabbit hole is.

> Not really. I know what is going on 'down there', regardless of which language was used to compile the machine code. What is REALLY useful is to have really succinct, powerful, high-level abstractions to build software quickly and with as little code as possible.

You can book-learn what's going on under the hood but that's no replacement for getting your hands dirty. If you want to actually write code that uses memory and pointers (e.g. memory mapped files), runs in kernel mode or twiddles with page tables and virtual memory, C is the best language you can do it with.

> I think if you want to understand what's going on 'down there' in the 'rabbit hole', learn assembly language.

Learning Assembly language(s) is a very useful skill for everyone. But doing anything practical with Assembly is a futile effort; it's best to stick to C in low-level stuff and resort to Assembly only when you absolutely have to. For example, when you have to change between processor modes or write an interrupt handler routine, there must be some (inline) Assembly involved. But if you want to get stuff done and work with the interesting stuff, going all-assembly is not worth the effort.

I wrote a tiny multitasking operating system in Assembly. While it worked very well initially, as soon as the complexity went above one screenful of assembly, it started becoming very unwieldy. I moved on to C and got more stuff done. I could focus on the interesting stuff like scheduling algorithms and virtual memory when I didn't have the mental overhead of having to make register allocations manually or whipping up my own control structures.


Objective-C is a strict superset of C, so learning C helps a lot for any iOS or Mac programming.

C++ is also heavily based on C, so it helps to learn C++.

I agree C is hardly used anymore, and for good reason, but it's still interesting to learn.


> I agree C is hardly used anymore

Any time you want to provide libraries, you'll likely use C: all languages have a C FFI, and it's not possible to have a C++ FFI. So you'd have to rely on extern "C", and then you have to build a bunch of stuff over your OO code so it can be used procedurally.

Often not worth it, C is the lowest common denominator of languages, if you want to be accessible to all languages... you'll probably use C.
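In practice that usually means exposing a flat, opaque-handle API in a plain C header; a generic sketch (the widget names are made up):

    /* widget.h -- the only interface other languages ever see */
    typedef struct widget widget;          /* opaque: the layout stays private to the library */

    widget *widget_new(const char *name);
    int     widget_frob(widget *w, int amount);
    void    widget_free(widget *w);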


> Often not worth it, C is the lowest common denominator of languages, if you want to be accessible to all languages... you'll probably use C.

Personally I see that not so much as a feature of C, but as a limitation of other languages. Interoperability is a Hard Problem(tm) which deserves more thought and effort than it currently receives. It's easy to say 'If you want interoperability, use C' because that's the current state of affairs. It would be better if interop was a solved problem.


You need a common language to interoperate. Currently, C is that common language: everybody understands C.

Some languages have built specific language interop (e.g. Erlang with Java), it's usually broken and often not even as good as C.

> It would be better if interop was a solved problem.

It is: go through C.


The Linux kernel is in C. Large parts of the Gnome project are in C.


Also the GNU utilities, Apache and nginx, and the odd interpreter (Perl, Python, Ruby) or runtime system (Haskell), OpenSSL, qmail, Emacs' and vim's core, etc.

But yeah, apart from that stuff, C is hardly used.


> I agree C is hardly used anymore

Ehh what? From the top of my head I can think of these things written in C:

linux bsd windows nt kernel python php perl sqlite gcc glibc gtk+ apache nginx gimp blender mplayer wine x264 ffmpeg libjpeg curl/libcurl zlib rsync

And there's obviously tons more I could think of if I spent some more time; then we have basically the entire embedded industry.


If I wanted to do iOS/Cocoa programming, I would go straight to Objective-C, bypassing C without stopping, but that's me.

I learned C++ without ever learning C. Although, one could make the argument that in learning C++, I learned about 80% of C anyway. I can debug and fix C code, but I don't enjoy it, and avoid it if possible.

C is interesting, in the same way that Cuneiform is interesting. Personally I just find the Roman alphabet a lot more productive.


[dead]


Nothing is more enjoyable than watching idiots like you imagine that the world is a fully formed perfect thing that can't possibly be a work in-progress. I watch you guys never put anything out there for fear that someone will think it's just not quite perfect enough. Meanwhile, I crap things into my toilet better than anything you'll make mostly because I put stuff up while I'm working on it so I get immediate feedback. To me, the new artist is about showing the process, not just the final work.


Zed, I admire you immensely. Please don't feed the trolls.


Agreed. I really love your work, Zed. Let haters hate and keep doing what you do best.


The rise of the internet fanboi has brought a large number of young and/or impressionable people who confuse hubris and verbal violence with competence and depth. After years of "ignoring the trolls" I now prefer to thrash them to warn the good people away from their stupidity.

Also, it's a pretty fun way to practice my creative writing. :-)


That's pretty much his M.O.


K&R isn't available as an e-book, it is expensive, and as someone with no knowledge of how C has developed and advanced, the age of the book makes me doubt whether it is suitable for learning how people develop C programs nowadays. I don't want to finish it and find out that a third of what I learned is outdated.


I have some pretty serious issues with the content in LCTHW, and I would say that reading this book is absolutely not going to give you a deep understanding of how modern C programming is done.

K&R is an excellent reference, and LCTHW covers some new topics like valgrind, which is great.

However, it fails to point out that some of the exercises are just that; exercises. For example, creating a custom type system in C. Never do this.

Some other failings include: little attention to testing, no mention of cmake (used in major projects like OpenCV and Clang) or lint(!), and criticism of K&R syntax (arguably validly so, but industry standard is industry standard).

Look, there's a lot of good in Learn C the Hard Way. I completely recommend it as a way to learn some C... but don't make the mistake of assuming a new book is better than an old book just because it's new.

If you're programming C, read K&R. If you're doing it professionally, you probably also want to read Expert C Programming: Deep C Secrets by Peter van der Linden.


If you're serious about programming in C you'll read multiple books anyway. LCTHW seems like a good starting point to me. It's probably not for everyone; no book I've ever read about anything is. But it gets you from passive reading to actively playing around fast. And it doesn't just throw code in front of you. It also explains the most basic functions of some of the non-OS-specific tools you will actually use when you'll be doing serious stuff much later. You can build on that pretty well, because you now know more about what you don't know, which gives you a tremendous advantage in terms of further research.


I've not read Zed's online book. I guess he encourages the use of -Wall and -Wextra when compiling on gcc? That's always smart. Also, cppcheck is a nice tool. If I were going to mention valgrind, I'd mention that too.


Excellent critique, but you show your hand here:

> K&R is an excellent reference, and LCTHW covers some new topics like valgrind, which is great.

Wrong, I already demonstrate that they have an example that is basically strcpy() and I show why it's broken. By the time I'm done with the book I'll have taken apart the entire book and shown that it's the source of most security holes through poor examples.

> However, it fails to point out that some of the exercises are just that; exercises. For example, creating a custom type system in C. Never do this.

Totally wrong. You basically just said that all of gobjects (the foundation of Gnome) should not exist, yet it's a successful object oriented abstraction. You may think that's true, but then you'd be in the vast minority. Without a demonstrated reason why one should never do this, your statement is total bullshit.

> Some other failings include: little attention to testing, no mention of cmake (used in major projects like OpenCV and Clang) or lint(!), and criticism of K&R syntax (arguably validly so, but industry standard is industry standard).

There's extensive attention to testing, with nearly every exercise having use of valgrind and the inclusion of defensive programming macros not found in most other books PLUS a constant demand to break the code they've just written. In the second half of the book I'll be including the testing suite I use in my own projects:

https://github.com/zedshaw/mongrel2/blob/master/tests/minuni...
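(For anyone who hasn't seen the minunit style: the core of such a suite is just a couple of macros along these lines -- a generic sketch, not the file linked above.)

    /* Each test returns NULL on success or a message describing the failure. */
    #define mu_assert(message, test) do { if (!(test)) return message; } while (0)
    #define mu_run_test(test) do { char *message = test(); tests_run++; \
                                   if (message) return message; } while (0)

    extern int tests_run;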

This claim is borderline slanderous it's so wrong.

Edit: And cmake is a piece of crap that isn't better than modern make.

> Look there's a lot of good in Learn C the Hard Way. I completely recommend it as a way to learn some C... but don't make the mistake of assuming a new book is better than an old book just because it's new.

Meanwhile, you make the mistake of not questioning the code in K&R when a cursory read finds numerous flaws and buffer overflows. If you feel that people should evaluate all things objectively (I agree!) then do it yourself and actually read K&R again.

And by the way, they updated it several times to remove defects since you probably read it. My copy was only a few years old and I had to update it because they fixed bugs in that copy() function I tear apart (but not enough).

> If you're programming C, read K&R. If you're doing it professionally, you probably also want to read Expert C Programming: Deep C Secrets by Peter van der Linden.

I agree, solid recommendations, but I'm going to advocate people read K&R for historical reasons and to practice code review, which is what I'm doing in my book.


> Wrong, I already demonstrate that they have an example that is basically strcpy() and I show why it's broken.

strcpy() is not broken. You cannot expect a function to work if you violate its preconditions. Exercise for you: enumerate strcpy's preconditions. And once you know them, you design your program in accord with them.

> There's extensive attention to testing

Testing can only prove the presence of bugs, not their absence. Heavy emphasis on testing leads to coding by trial-and-error, which, I argue, is the prime source of bugs (security and others).

"Hey, I tested it, it works!" What's missing in that statement is that it works for usual test cases. Software breaks on unusual/abnormal inputs. If, e.g., strcpy causes buffer overflow in your program, it is YOUR code that is crap because it didn't ensure proper preconditions. Don't project your incompetence/stupidity on strcpy.

THIS is what programmers should learn.

Your quote from K&R "critique": "And, what's the point of a book full of code you can't actually use in your own programs?" This line of thinking is despicable. A book is supposed to teach people ideas and concepts, not give them ready-made code recipes that they can copy-paste in their own programs.


Hey, sorry if you took it the wrong way. I'm just pointing out some flaws I see in the text, and I fully acknowledge it's a work in progress and these may be solved over time.

However, I will repeat.

K&R is an excellent text. If you disagree, I dispute your credibility.

I also stand by my comment. Smart people have created strong type systems like gobjects and ctypes in C.

As a programmer, you should unequivocally _never_ do this yourself. C is about reuse, and not reinventing the wheel. Sure, if you want to use a strong type system instead of C++, do so. Do not write one yourself.


> As a programmer, you should unequivocally _never_ do this yourself.

Bullshit. You guys going around throwing your platitudes, shoulds, and nevers at people rarely have any evidence supporting your claims, and usually it just hides a lack of real knowledge.



