You might have noticed that this language definition is short on specifics. I haven't said how big the number can be (for example, whether it can exceed a 32-bit integer) or even whether it can be negative.
The grammar itself (which, in this case, is expressed in EBNF notation) says nothing about details such as the size of numbers. Grammars only describe what can be said, not what that means.
The C grammar, for example, has many different number types. But it's not the grammar which determines if an unsigned long is 32 or 64-bit. That's determined by parts of the language specification outside of the grammar.
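To make that concrete, here's a throwaway C program (my own illustration, not from any spec) that asks the compiler what it actually chose. On a typical 64-bit Linux box unsigned long comes back as 8 bytes; on 64-bit Windows compilers it's usually 4. Same grammar, different answers:

    #include <stdio.h>

    int main(void)
    {
        /* The grammar only says "unsigned long" is a valid type;
           the implementation decides how wide it is. The casts keep
           the format string friendly to pre-C99 compilers. */
        printf("unsigned long: %u bytes\n", (unsigned)sizeof(unsigned long));
        printf("long long:     %u bytes\n", (unsigned)sizeof(long long));
        return 0;
    }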
Yes. That said, it was probably a mistake for C not to define standard fixed-width types such as int8, int16, int32, int64, uint8, uint16, uint32, and uint64. This limitation is worked around via the <stdint.h> header; however, that header isn't present in Visual Studio, so anyone who wants those fixed-width types has to download a <stdint.h> implementation and add it to their project ( http://en.wikipedia.org/wiki/Stdint.h ).
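For what it's worth, once <stdint.h> is available (it's standard as of C99), using it looks like this. Just a sketch of ordinary usage; <inttypes.h> supplies the matching printf macros:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t port   = 8080;   /* exactly 16 bits, unsigned */
        int64_t  offset = -42;    /* exactly 64 bits, signed   */

        /* PRIu16 / PRId64 expand to the correct format specifiers. */
        printf("port = %" PRIu16 ", offset = %" PRId64 "\n", port, offset);
        return 0;
    }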
Certainly. But what "uint16" would actually mean is not in the grammar. There's nothing to prevent me from defining "uint16" to be a double-precision floating-point number. That would be stupid and unintuitive, but still possible.
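To show just how possible: this compiles without complaint, because as far as the grammar is concerned "uint16" is just another identifier, and the typedef alone supplies its meaning:

    /* Legal, clean-compiling, and utterly misleading: the name
       carries no meaning at the grammar level. */
    typedef double uint16;

    int main(void)
    {
        uint16 x = 3.14159;   /* a "uint16" happily holding a fraction */
        (void)x;              /* silence unused-variable warnings */
        return 0;
    }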
If you look at the specification for any language, the grammar is the smallest part. The C++ grammar takes up less than a dozen pages, but the entire standard is over 800.
In that case, thank you. I've done a lot of systems programming with C and C++.
Understanding exactly what is defined by a language's grammar, though, originally comes from my undergrad courses in programming languages and compilers.
Cool. Would you say formal education has made you a significantly better programmer? (The definition of 'significantly' in this context is up to you to decide.)
I'm completely self-taught, but I've wondered if I missed out on something valuable by not going to college.
Long answer: it's hard to know for certain. I chose Computer Science as my major having done very little programming. I didn't do any serious programming until my intro to programming course my first semester. I did some programming in a high school course, but really, my start as a programmer coincided with the start of my formal education.
But during my education, I was exposed to things I doubt I would have encountered on my own, particularly in my programming languages, compilers, and operating systems courses. In fact, if it had not been for my programming languages course, I don't know if I would have realized how much I like thinking about how to express concepts in code. If it weren't for my graduate studies, I doubt I would have become so comfortable with systems programming and performance in general.
I'm also a biased person to ask, because I'm close to getting my Ph.D. I have a Bachelor's and a Master's. So I've done a decent amount of schoolin'.
The problem with being self-taught is sometimes you just don't know what you don't know. I know it sounds boring, predictable and constraining to learn a menu of concepts. But the benefit is that lots of people who came before you decided "these things are important to know to be in our field."
(Hope you see this reply, I didn't realize you replied until just now.)
Visual Studio has __int8, __int16, __int32, and __int64. Any of these can be prefixed with "unsigned" to get their unsigned equivalents. They're compatible with their stdint equivalents.
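So if you're stuck without the header, a small shim along these lines covers the exact-width types. A sketch only: the file name is hypothetical and the version check is my own choice (1600 is VS2010, the first version that ships <stdint.h>):

    /* my_stdint.h -- hypothetical stand-in for <stdint.h> on old MSVC. */
    #if defined(_MSC_VER) && _MSC_VER < 1600   /* older than VS2010 */
    typedef __int8           int8_t;
    typedef __int16          int16_t;
    typedef __int32          int32_t;
    typedef __int64          int64_t;
    typedef unsigned __int8  uint8_t;
    typedef unsigned __int16 uint16_t;
    typedef unsigned __int32 uint32_t;
    typedef unsigned __int64 uint64_t;
    #else
    #include <stdint.h>
    #endif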
It's not exactly a replacement for the Dragon Book, but it certainly takes some of the mysticism out of compiler hacking.
If you have never actually made a language from the ground up, then please do so! It will teach you plenty, and it will also help you write better code, because you'll better understand what a compiler does internally with the code you write. Another great source of insight is the "produce assembly output" switch on your favourite compiler. Have a look at what the compiler does when it parses your code and turns it into assembly, ready to become machine code.
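For gcc and clang that switch is -S. Feed it something trivial and read what comes out. The assembly below is roughly what an x86-64 gcc emits at -O2; your compiler, target, and flags will vary:

    /* square.c */
    int square(int x)
    {
        return x * x;
    }

    $ gcc -S -O2 square.c   # writes square.s alongside the source
    $ cat square.s          # roughly, on x86-64:
    square:
            imull   %edi, %edi    # x * x in a single instruction
            movl    %edi, %eax    # return value goes in eax
            ret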