
I'm pretty ignorant about this stuff, so please don't think I'm trolling.

I'm confused when you speak of a virtual machine with regard to C... can you explain what you mean by this?

I had to wikipedia the Burroughs machine. I guess the big deal is that it's a stack machine? It looks very interesting and I plan to read more about it. But I guess I don't understand why that is a hindrance to C.

The JVM is a stack machine, isn't it?

btw, I haven't read the article yet. It's my habit to check comments first to see if the article was interesting, and seeing your comment made me want to reply for clarification.




The Burroughs was a stack machine, but that's only the beginning. Look at how it handled addressing hunks of memory. Bounds-checked memory block references were a hardware type, and they were the only way to get a reference to a block of memory. So basically, null pointers didn't exist at the hardware level, nor did out-of-bounds writes to arrays or strings. Similarly, code and data were distinguished in memory (by high-order bits), so you couldn't execute data. It simply wasn't recognized as code by the processor. Also interesting, the Burroughs machines were built to use a variant of ALGOL 60 as both their systems programming language (as in, there wasn't an assembly language beneath it) and as their command language. The whole architecture was designed to run high-level procedural languages.

C defines a virtual machine consisting of a single, contiguous block of memory with consecutive addresses, and a single core processor that reads and executes a single instruction at a time. This is not true of today's processors, thanks to multicores, multiple pipelines, and layers of cache.


No, C does not define a single contiguous block of memory with consecutive addresses. It _does_ specify that pointers are scalar types, but that does not imply contiguity or consecutive addresses (with the exception of arrays).

There is no requirement in C that you be able to execute data.

You certainly could have a C implementation that bounds-checks strings and arrays. (See e.g. http://www.doc.ic.ac.uk/~phjk/BoundsChecking.html for a very old paper on how you might do that)

The "abstract machine" of C explicitly does _not_ make reference to the memory layout. (cf 5.1.2.3 of the spec)

It also makes no reference to the number of cores, and order of execution is not one at a time, but limited by sequence points.
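
A minimal sketch of what "limited by sequence points" means in practice (names and values are mine):

  #include <stdio.h>

  int main(void) {
      int i = 0;
      /* Undefined behavior: i would be modified twice with no intervening
         sequence point, so the standard imposes no ordering at all. */
      /* i = i++; */

      /* Well defined: each full expression ends at a sequence point,
         so the second statement sees the completed side effect. */
      i++;
      i = i + 1;
      printf("%d\n", i);  /* prints 2 */
      return 0;
  }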

That's the whole point of C - it is very loosely tied to actual hardware, and can accommodate a wide range of it, while still staying very close to actual realities.

Edit: Please don't take this as a "rah, rah, C is great" comment. I'm well aware of its shortcomings. I've spent the last 20+ years with it :)


I would argue that C's problem is not that it's too strictly defined, but that it's too poorly defined. An in-depth look into all the cases of undefined behavior in C will show what I mean.
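
One classic case, as a minimal sketch (the function name is mine, and the exact output depends on compiler and optimization level):

  #include <limits.h>
  #include <stdio.h>

  /* Signed integer overflow is undefined, so an optimizer is allowed to
     assume it never happens and fold this whole function to "return 1". */
  static int increment_stays_bigger(int x) {
      return x + 1 > x;
  }

  int main(void) {
      /* Often prints 1 under optimization, even though INT_MAX + 1
         would wrap to a negative value on two's-complement hardware. */
      printf("%d\n", increment_stays_bigger(INT_MAX));
      return 0;
  }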

You want to really understand C? Read this[0]. John really understands C.

[0] http://blog.regehr.org/


Can't upvote this enough. Well, except that I'd replace "poorly" with "vaguely". "Implementation-defined behavior" is there for very good reasons in every single case where it appears.

Sidenote: With John's name, I'd be tempted on a daily basis to change the last two letters of my name to a single 'x' ;)


You mean like this?

  $ whoami | sed 's/hr/xx/'


No, he meant:

    $ whoami | sed 's/hr$/x/'


Minor nit: She meant :)


I think you mean:

  $ echo "$PARENT" | sed 's_r/x_r$/_'
P.S. It's a bit hard to believe I misread that... thanks.


Random tip: in Bash (at least), you can execute the previous command with some changes using ^..^..^, e.g.

  $ echo john regehr | sed s/hr/xx/
  john regexx
  $ ^r/x^r$/^
  echo john regehr | sed s/hr$/x/
  john regex
(The second-to-last line is just bash printing the new command.)


My sed knowledge isn't very advanced. What is your invocation supposed to do?


You can use any separator you want in an s/../../ expression, not just /; in this case the separator is _ (this lets you use / in the pattern without creating a "picket fence": s/r\/x/r$\//).

So the regex just means replace "r/x" with "r$/".


I'd argue that there's a big distinction between C as described in the standard and C as actually used in real-world code, and the latter has much stricter semantics and is harder to work with. A C compiler that follows the standard but not the implied conventions won't do well.

For example, take NULL. Even on a machine with no concept of NULL, you could easily emulate it by allocating a small block of memory and designating the pointer to that block as "NULL". This would be perfectly compliant with the standard, but it would break all of the code out there that assumes NULL is all zero bits (e.g. that calloc or memset(0) produce memory whose pointer fields are NULL). Which is a lot of code. I'm sure that many other examples can be found.
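
A minimal sketch of the kind of code that would break (the struct is hypothetical):

  #include <stdlib.h>

  struct node {
      struct node *next;
      int value;
  };

  int main(void) {
      /* Common idiom: zero the bytes, then treat pointer fields as NULL. */
      struct node *n = calloc(1, sizeof *n);
      if (n == NULL)
          return 1;
      /* calloc guarantees all-zero bytes, not null pointers. On an
         implementation where NULL is not all-zero bits, this test could
         fail -- yet an enormous amount of real code assumes it passes. */
      if (n->next == NULL) {
          /* ... */
      }
      free(n);
      return 0;
  }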


"C defines a virtual machine consisting of a single, contiguous block of memory with consecutive addresses"

This is 100% false. The C standard makes no mention whatsoever of memory. I don't know much about the Burroughs machine, but it sounds like it would map very well to the C virtual machine:

C permits an implementation to provide a reversible mapping from pointers to "sufficiently large integers" but does not require it.

A pointer to an object is only valid in C (i.e. only has defined behavior) if it is never accessed outside the bounds of the object it points to.

Converting between data pointers and function pointers is not required to work in the C standard either.

C does require that you have a NULL pointer that has undefined behavior if you dereference it, but this could be trivially done by the runtime by allocating a single unit of memory for it.
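
To make the bounds rule concrete, a minimal sketch (array and names are mine):

  int main(void) {
      int a[4] = {1, 2, 3, 4};
      int *end = a + 4;   /* one past the end: legal to form and compare */
      for (int *p = a; p != end; p++)
          *p += 1;        /* in bounds: well defined */
      /* *end = 0;        dereferencing one past the end: undefined */
      /* int *q = a + 5;  merely forming this pointer: undefined */
      return 0;
  }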


>C defines a virtual machine consisting of a single, contiguous block of memory with consecutive addresses, and a single core processor that reads and executes a single instruction at a time. This is not true of today's processors, thanks to multicores, multiple pipelines, and layers of cache.

Which is true, for a rather stretched definition of "virtual machine" (which falls apart at the kernel level, because it's pretty hard to work with a machine's abstraction when you're working directly on the hardware).

The problem with the virtual machine comparison is that C doesn't mask ABI access in any meaningful way. It doesn't need to, since it's directly accessing the ABI and OS. So the argument that C isn't multithreaded is rather shortsighted, because C doesn't need that functionality in the language. It's provided by the OS.
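
For instance, threads arrive via an OS-provided POSIX library rather than the (pre-C11) language itself -- a minimal sketch:

  #include <pthread.h>
  #include <stdio.h>

  /* The language knows nothing about threads here; pthreads is a
     library the OS provides and C merely links against (-lpthread). */
  static void *worker(void *arg) {
      (void)arg;
      printf("hello from a thread\n");
      return NULL;
  }

  int main(void) {
      pthread_t t;
      pthread_create(&t, NULL, worker, NULL);
      pthread_join(t, NULL);
      return 0;
  }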


FYI when discussing the ISO C standard the term "virtual machine" is well understood to be the abstracted view of the hardware presented to user code. Things well defined in it are portable, things implementation defined are non-portable, and things undefined should be avoided at all costs.


As a C programmer this is like watching virgins attempt to have sex. Normal people just write some code which does some sh*t and that's OK. We don't need to deeply reflect on whether it's cache optimal, because that will change next week. Just good clean C. When did that become a strange thing to do?


"good clean C"

Is there such a thing? It seems like every C program, even ones that are praised as being excellently written, is a mess of pointers, memory management, conditional statements that check for errors, special return value codes, and so forth.

To put it another way, look at the difference between an implementation of a Red/Black Tree in C and one written in Lisp or Haskell. Not only are basic things overly difficult to get right in C, but C does not become any easier as problem sizes scale up; it lacks expressiveness and forces programmers to deal with low-level details regardless of how high-level their program is.


Um. Read Bentley. Get back to me. Yesterday's old shit is last week's high level. Turns out clear thought in any language is the main thing.


"Turns out clear thought in any language is the main thing."

No, the ability to express your thought clearly is the main thing -- and that is why languages matter. If your code is cluttered with pointer-juggling, error-checking conditional statements, and the other hoops C forces you to jump through, then your code is not clear.

Try expressing your code clearly in BF, then get back to me about this "languages don't matter as long as you have clear thought" philosophy.


Can both of you guys just get back to me later? Kinda busy now.


Sure. RBTree in C is that ugly. Take your time.


If you can't see the flaws in C you're probably writing poorly optimized, security-atrocious C.


I'm a professional pentester and I have been a C programmer for well over 5 years, but I acknowledge that my C is probably still pretty bad :) how about you? :)

P.S.: now I have figured you out (on a very basic level of course) and I have a lot of respect, but nonetheless, let's play :)


I've been writing kernel code in C for about 8 years, including a hardware virtualization subsystem for use on HPCs. I used to teach Network Security and Penetration, but I lost interest in security and moved on to programming language development.

My code, in any language, is full of bugs. The difference is that in C my bugs turn into major security vulnerabilities. C is also a terrible language in that you never write C -- you write a specific compiler's implementation of C. If a strict optimizing compiler were to get a hold of any C I've ever written, I'm sure it would emit garbage. All the other languages I write code in? Not so much.
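
Two small examples of the "specific compiler's C" problem, as a sketch (results vary by compiler and ABI):

  #include <stdio.h>

  int main(void) {
      /* Implementation-defined: plain char may be signed or unsigned. */
      char c = (char)0xFF;
      printf("%d\n", (int)c);   /* -1 on most x86 ABIs, 255 on many ARM ABIs */

      /* Implementation-defined: right shift of a negative value. */
      int x = -8;
      printf("%d\n", x >> 1);   /* usually -4 (arithmetic shift), not guaranteed */
      return 0;
  }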

That said, is C useful? Hell yes.


Based on that I will buy you your beverage of choice at any conference you choose :)

P.S.: I've probably written commercial stuff you work with, and also I don't give a shit if you give a shit, if you see where I am coming from. I have a pretty good idea of what the compiler will do and I will be pissed off if it doesn't do that. It normally does.


Thanks. I hope you didn't take my first comment as an insult.

What I meant by that is C is not just something you sit down with after skimming the specification and "bang out." There are years of community "inherited knowledge" on how to write C so it doesn't result in disaster. The very need for these practices exemplifies the flaws in C as a language -- by the very nature of working around these vulnerabilities, you acknowledge that they are vulnerabilities. Thus, if one doesn't see C's issues then one is doomed to C's mistakes (this sentence is very strange when read out loud).


I think that your situation is pretty different from most programming projects in that you are way closer to the machine than most people need to be. Also, you are working on an OS, which is particularly sensitive to compiler nuances. I would have a hard time imagining different compilers spitting out garbage with the standard "hello world". Now the almost mandatory caveat: I know that C has its flaws, but not all programming projects are the same. Projects which are not like yours will have the "you write a specific compiler's implementation of C" problem in way smaller doses than you do (possibly to the point of not having it at all, like hello world).


I'll have to read more about the memory references to get a feel for that.

However it speaks of a compiler for ALGOL... it was compiled down to machine instructions. Assembly is just a representation of machine instructions, so I don't see how it can be said to not have an assembly language.

Maybe nobody ever bothered to write an assembler, but that doesn't mean that it somehow directly executes ALGOL.

Thanks for your replies, you have given me some food for thought.

[1] http://en.wikipedia.org/wiki/Burroughs_large_systems_instruc...


> However it speaks of a compiler for ALGOL... it was compiled down to machine instructions. Assembly is just a representation of machine instructions, so I don't see how it can be said to not have an assembly language.

In this sense, you're completely right. But I think that people who grok the system mean something a bit different when they say it doesn't have an assembly language. (Disclaimer: I have no firsthand experience with Burroughs mainframes.)

The Burroughs system didn't execute Algol directly, true. But the machine representation that you compiled down to was essentially a high-level proto-Algol. It wasn't a distinct "first-class citizen". It was, if you like, Algol "virtual machine bytecode" for a virtual machine that wasn't virtual.

If you're writing in C, or some other higher-level programming languages, there are times when you want more fine-grained control over the hardware than the more plush languages provide. That's the time to drop down to assembly code, to talk to the computer "in its own language".

The Burroughs mainframes had nothing analogous to that. The system was designed to map as directly to Algol as they could manage. Its machine language wasn't distinct from the higher-level language that you were supposed to use. To talk to a Burroughs system "in its own language" would be to write a rather more verbose expression of the Algol code you'd have had to write anyway, but not particularly different in principle.

So, I guess the answer to whether or not the Burroughs systems did or did not have an assembly language is a philosophical one. :P


C doesn't care for fancy terms like VM, multicore, threads, ... But you can always write a library and implement what you need. This approach has advantages; for example, you can share memory pages between processes, because that kind of stuff is part of the hardware/OS, not the C language. It would be stupid to implement it directly in the C language. You will now say that this is the reason C is bad; I say it is the reason C has been so popular all these years.


> C defines a virtual machine consisting of a single, contiguous block of memory with consecutive addresses, and a single core processor that reads and executes a single instruction at a time. This is not true of today's processors, thanks to multicores, multiple pipelines, and layers of cache.

This type of machine has become so ubiquitous that people have begun to forget that once upon a time, other types of machines also existed. (Think of the LISP Machine)


It may be that the parent comment is referring to the runtime library when using the term "virtual machine".


No, I am referring to the virtual machine defined by the C language.


I'd say "abstract virtual machine." You are just confusing people. "Virtual machine" most commonly refers to a discrete program that presents a defined computational interface that everyone calls the virtual machine. This VM program must be run independently of the code you wrote.

For C there is no such virtual machine process. The "virtual machine" for C is abstract and defined implicitly.


If you're going to be pedantic, use the right terms. The C language defines an 'abstract machine', not a 'virtual machine'.


Second this; in all my years (granted, not a lot, but enough), this is the first time I've heard anyone claim that C has a virtual machine. You can hem and haw and stretch the definition all you want, but when it compiles to assembler, I think that most reasonable people would no longer claim that's a "virtual" machine.

Edit: if you want to argue that C enforces a certain view of programming, a "paradigm" if you will (snark), then say that. Don't say "virtual machine", where most people will go "what? when did C start running on JVM/.NET/etc?".


It may be less confusing to say "C's underlying model of computation".


Given the way that LLVM has come onto the scene, I'm not sure I'd agree. C defines assumptions in the programming environment and does not guarantee that it at all resembles the underlying hardware. You are never coding to the hardware (unless you are doing heinous magic), you're coding to C. That's a "virtual machine" to me.

The concept of C as a virtual machine isn't new (I first heard it around 2006 or so? I don't think it was new then) and it's much more descriptive than referring to its "model of computation".


It's more descriptive, but somewhat incorrect.

The common definition of a process virtual machine is that it's an interpreter that programs can be written against, one that essentially emulates an OS environment, giving abstracted OS concepts and functionality. This aids with portability. Another concept of virtual machines in general is, for lack of a better term, sandboxing: you're limited to only the functionality that the VM provides.

C goes halfway with that. You generally don't need to care about most OS operations if you're using the standard library (which abstracts most OS calls), but you definitely do need to care about the underlying OS and architecture if you're doing much more than that. Also, the C definition alone doesn't allow for threads or IPC, both of which are provided by the POSIX libraries. You're also allowed to directly access the ABI and underlying OS calls through C.

The best example of C not really having a VM is endianness. If C had a "true" virtual machine, the programmer really shouldn't need to be aware of this. But everyone that's written network code on x86 platforms in C knows that you need to keep it in mind. Network byte order is big endian, but x86 is little endian, so you need to translate everything before it hits the network.
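
A minimal sketch of that translation, using the POSIX htonl conversion:

  #include <arpa/inet.h>   /* htonl (POSIX) */
  #include <inttypes.h>
  #include <stdio.h>

  int main(void) {
      uint32_t host = 0x12345678;
      /* On little-endian x86 the bytes sit in memory as 78 56 34 12;
         the wire wants big endian, so the programmer converts by hand --
         the language does nothing to hide this. */
      uint32_t wire = htonl(host);
      printf("host: 0x%08" PRIx32 "  as sent: 0x%08" PRIx32 "\n",
             host, wire);  /* on x86: host: 0x12345678  as sent: 0x78563412 */
      return 0;
  }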

EDIT: I think LLVM is somewhat of a red herring in this context. Realistically, unless you're writing straight assembly, there's nothing stopping anyone from writing a VM-based implementation for any language. The problem with C and the other mid to low level languages is that if you're writing the VM, you need to provide a machine that not only works with the underlying computational model, but also provide abstractions for all the additional OS-level functionality that people use.

So C could definitely become a VM-based language, especially if the intermediate form is flexible enough.


You mean like the TenDRA compiler?

http://en.wikipedia.org/wiki/TenDRA_Compiler

Or the C compiler for i5/OS (ILE bytecodes) with an OS-wide JIT?

http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/topi...

People keep on mixing languages with their implementations.

It should be compulsory to have compiler design classes in any informatics course.


"The common definition of a process virtual machine is that it's an interpreter that can be written to that essentially emulates an OS environment, giving abstracted OS concepts and functionality."

Is it? I have seen "virtual machine" used to describe the process abstraction and to describe the IR in compilers (hence "Java Virtual Machine"), and to describe the Forth environment (similar to compilers).


Using the term "virtual machine" to refer to the "virtual PDP-11" that C exposes to programs is possibly older than the internet.


What's so confusing about the term virtual machine? It's an abstraction of the underlying machine.


For the same reason that if you put a hunk of chocolate in the oven and called it "hot chocolate," people would be confused that it's not a warm beverage of chocolate and milk.

That is, the phrase "virtual machine" is usually assumed to be the name for a piece of software that pretends to be some particular hardware. It is less commonly used to mean a "virtual machine", that is, not a noun unto itself, but the adjective virtual followed by the noun machine.


The term "virtual machine" is already pretty overloaded. This isn't referring to virtualized hardware in the VMWare sense or a language/platform virtual machine in the JVM sense. Rather, it's talking about how C's abstraction of the hardware has the Von Neumann bottleneck baked into it, so it clashes with fundamentally different architectures like the Burroughs 5000's.


No need to downvote, guys.

The C language specification [0] defines an abstract machine and defines C semantics in terms of this machine. 5.1.2.3 §1:

> The semantic descriptions in this International Standard describe the behavior of an abstract machine in which issues of optimization are irrelevant.

[0] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf


If C makes use of a virtual machine, then why do we have to recompile it for every new machine/platform?


"Virtual machine" in this context refers to the computation model of the language. In C, that model is essentially a single-CPU machine with a big memory space that can be accessed with pointers (including a possibly disjoint memory space for program code).

Other models are possible; for example, lambda calculus and combinator logic are based on a model where computation is performed by reducing expressions, without any big memory space and without pointers. Prolog is based on a model where computation is performed by answering a query based on a database of rules. These are all "virtual machines" -- the realization of these computation models is based on compiling a program for a specific machine. It is not different with C; C just happens to use a model that is very similar to the real machine that a program executes on (but it is not necessarily identical; e.g. you probably do not have so much RAM that any 64-bit pointer would correspond to a real address on your machine).


Because C compilers emit native code. It may help to think of C as the virtual machine's language (and the C standard as the specification of that virtual machine). This concept has been extended by things like Clang, which transforms C to a somewhat more generic underlying representation (LLVM bitcode) before compiling to native code.

You can ahead-of-time compile Mono code to ARM; that doesn't mean it's not defining a virtual execution environment.


See silentbicycle's sibling comment.


I'm also a novice on low-level stuff, but if I had to guess...

I'd guess that the virtual machine of C pertains to the addressing and the presentation of memory as a "giant array of bytes". Stack addresses start high and "grow down", heap addresses start low. These addresses need not exist on the machine. For example, two running C processes can have 0x3a4b7e pointing to different places in machine memory (which prevents them from clobbering each other).

Please, someone with more knowledge than me, fill me in on where I'm right and wrong.


C does not require the presentation of memory as a "giant array of bytes". Certainly when you have a pointer, it points to an array of bytes (or rather, a contiguous array of items of the pointed-to type), but that's about it. The stack does not have to start high and grow down (in fact, the Linux man page for clone() states that the stack on the HP PA architecture grows upward) and the heap doesn't necessarily start low and grow up (mmap() for instance).

You are also confusing multitasking/multiprocessing with C. C has no such concept (well, C11 might, I haven't looked fully into it) and on some systems (like plenty of systems in the 80s) only one program runs at a time. The fact that two "programs" can both have a pointer to 0x3A4B7E that references two physically different locations in memory is an operating system abstraction, not a C abstraction.


C pointer aliasing defeats certain compiler optimizations that can be made in other languages, and is frequently brought up in C vs FORTRAN comparisons. I think that's probably what the GP had in mind.


C99 includes restricted pointers, but support is a bit spotty. Microsoft's compiler (which is of course really just a C++ compiler) includes it as a nonstandard keyword, too.

http://en.wikipedia.org/wiki/Restrict
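
A minimal sketch of the difference (function names are mine):

  /* Without restrict, the compiler must assume out and in may alias,
     so it has to reload in[i] after every store to out[i]. */
  void scale(float *out, const float *in, float k, int n) {
      for (int i = 0; i < n; i++)
          out[i] = in[i] * k;
  }

  /* C99 restrict promises the arrays don't overlap, so the loop can
     be vectorized and the loads hoisted far more aggressively. */
  void scale_r(float * restrict out, const float * restrict in,
               float k, int n) {
      for (int i = 0; i < n; i++)
          out[i] = in[i] * k;
  }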


That makes a lot of sense. Thanks!


Well, that is how memory, the hardware we have, and all normal operating systems work, but if you want to discuss other stuff we can do that too :) Try serious debugging and you will find that all your preconceptions are confirmed, yet it's still hard to know WTF is going on.


The "memory is just a giant array of bytes" abstraction hasn't been true ever since DRAM has existed (because DRAM is divided into pages), ever since caches were introduced, and certainly isn't true now that even commodity mulch-processors are NUMA with message-passing between memory nodes.


Look, if we want to be super anal about shit, all memory is slowly discharging capacitors with basically random access times based on how busy the bus circuitry is with our shit this week. It turns out that memory is really complicated stuff if you look at it deeply, but the magic of modern computer architecture is that you get to (hopefully) keep your shelf model for as long as you can. If you were to try to model actual memory latency, here's a shortcut: you can't. That's why everyone bullshits it.


Fair point. At this point, it's a very leaky abstraction because not all levels of "random access" (e.g. L1 cache vs. main memory) are created equal.


True, and this is my biggest problem with writing optimized code in C -- it takes a lot of guessing and inspecting the generated assembler and understanding your particular platform to make sure you're ACTUALLY utilizing registers and cache like you intend.

If there were some way of expressing this intent through the language and have the compiler enforce it, that'd be fantastic :)

That said, there's really not a better solution to the problem than C, just pointing out that even C is often far less than ideal in this arena.


I'm actually spending much of my time these days having a go at writing HPC codes in Haskell.

So far, the generated assembly looks pretty darn awesome, and the performance pretty competitive with the alternatives I have access to :)



