I didn't check the details, but I would love to see ten attempts every year trying to solve the same problem.
D and Go are not trying to provide us with a better C, because they don't share its core concept: a language that translates straightforwardly to assembler, but is still designed so that you can build good abstractions with it.
This is what you need for kernel programming, embedded systems, and a good deal of system programming as well.
At the same time, C is one of the most-used languages in the world, with a vast code base, and yet almost all the attempts in the language area are about much higher-level or esoteric stuff. Not that I don't want that kind of research, but it's bizarre that no one is focusing where a lot of the meat is.
I bet this better C will arise within a few years at most, and it is not going to come from academia. The times seem ripe; there is even a C conference this year ;)
> I bet this better C will arise in a few years at max, and it is not going to come from academia.
I take this statement to mean: Academia is failing us by not providing us with the next generation C. If that was the intent, I think it's not a fair jab.
Academia IS actively investigating low-level languages: ATS, Cyclone, and typed assembly are all examples. But, it's not academia's job to create widely-used languages and tools. Sometimes they do create something that achieves wide use, but that's mostly a side benefit. Their job is to generate and evaluate useful "idea nuggets" that can inform future system builders and to train students to do novel systematic research.
There seems to be a common meme that academic CS is "out of touch" with what real developers need. I think that's mostly unfair: while the world always needs a slightly better web framework, compiler, language, etc., that's not the goal of academic CS; it aims to fertilize the ground with ideas that let further ideas, big and small, grow.
> I take this statement to mean: Academia is failing us by not providing us with the next generation C. If that was the intent, I think it's not a fair jab.
I don't mean that academia is failing, or that it should be responsible for giving us this "better C"; just that academia is not where I'd guess the "new C" will come from. However, I think academia may have a role in this matter, namely studying the interaction between the programmer and the programming language in getting things done. This is probably being done (though I can't recall famous recent studies about it) and could provide useful hints about how we can incrementally build or modify C into a more reliable and less bug-prone language.
However, ultimately the "new C" will be designed by one or two people at most, as has always happened with this kind of practical programming language.
I mostly agree, but I would make a small correction: that's not what academic CS is paid for. Jonathan Shapiro didn't stop work on BitC because he ran out of enthusiasm but because he ran out of funding. Habit will only remain a live project while its PIs can keep the ~~grant money~~ spice flowing.
Native Oberon, Spin, Singularity, Home, and Inferno are all operating systems written in GC-enabled systems programming languages.
D and Go are just following that thread.
Eventually all mainstream OSes will use that kind of language, and C will be legacy.
Microsoft is already slowly doing that by dropping support for C standards newer than C90 and adding COM-based APIs as the default Windows API. For now mainly for Metro, perhaps for the complete OS in later versions.
Apple as well, by adding reference counting and GC to its systems language, Objective-C.
C only spread thanks to UNIX, and it brought upon us the wrath of buffer-overrun and dangling-pointer security exploits.
We need better languages for doing systems programming.
Note that Apple is abandoning garbage collection for Objective-C.
Additionally, GC is not strictly necessary to solve buffer overruns and dangling pointers. Region-based memory management is an alternative for many use cases.
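To make the region idea concrete, here is a minimal sketch in C, assuming a single fixed-size arena (all names are hypothetical, and a real region system would handle growth and per-type alignment):

```c
#include <stddef.h>
#include <stdint.h>

/* Bump-pointer region ("arena"): objects are never freed individually;
   the whole region is reset at once, which rules out dangling pointers
   into it as long as the region outlives its users. A sketch only:
   no growth, alignment fixed to max_align_t. */
typedef struct {
    uint8_t *buf;
    size_t   cap;
    size_t   used;
} region;

static void *region_alloc(region *r, size_t n)
{
    size_t a   = _Alignof(max_align_t);
    size_t off = (r->used + a - 1) & ~(a - 1);  /* align the bump pointer */
    if (off > r->cap || n > r->cap - off)
        return NULL;                             /* region exhausted */
    r->used = off + n;
    return r->buf + off;
}

static void region_reset(region *r) { r->used = 0; }
```

Freeing everything at once also makes deallocation O(1), which is part of why regions show up in kernels and compilers.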
Reference counting is the low end of automatic memory management and garbage collection.
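For illustration, the core of that "low end" is tiny in C; a hedged sketch with hypothetical names (not thread-safe; a real implementation, like Objective-C's, would use atomic counters):

```c
#include <stdlib.h>

/* Intrusive reference counting: each object carries its own count.
   obj_release frees the object when the last reference goes away. */
typedef struct obj {
    int refs;
    int payload;            /* stand-in for real data */
} obj;

static obj *obj_new(int payload)
{
    obj *o = malloc(sizeof *o);
    if (o) { o->refs = 1; o->payload = payload; }
    return o;
}

static obj *obj_retain(obj *o) { o->refs++; return o; }

static void obj_release(obj *o)
{
    if (--o->refs == 0)
        free(o);            /* last reference dropped */
}
```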
And as for GC in the kernel: of course you can. The issue is that the most efficient GCs need giant locking, and giant locking in the kernel is something you don't want.
In fact, you can live without GC entirely. The burden of integrating it into kernel space outweighs the benefit of using it.
You "imagine" incorrectly. A functioning Obj-C requirement requires a runtime. Currently that runtime (libobjc) is written in C, and it requires a C library to function. Yes, you can rewrite those portions to use only functions available in kernel-land, but it is by no means "easy" as you suggest.
This just goes to show you don't understand compilers.
No, admittedly, I don't, at any more than a basic level. But you're still wrong.
Try compiling, linking, and running an ObjC program without the ObjC runtime. See how that fails. See how long it takes you to write a minimal runtime that can run your program in user-space, and then kernel-space. It won't be quick.
Try compiling, linking, and running a plain-vanilla C program without the "C runtime" (I guess you mean a combination of libc, libgcc, and crt.o, basically what you get when you pass -nostdlib -nostartfiles to gcc). Yep, that'll fail too. Then get it to compile and run anyway. Not that hard.
That's* the difference I'm talking about.
At the very least, kernels like Linux and Mach/Darwin already have kernel-friendly replacements for the libc functionality they commonly use. Doing something similar for ObjC would be time consuming. Certainly it isn't impossible, but note that I never said that: I merely pointed out it wasn't easy.
C hardly requires its runtime. Objective-C without the runtime is just C; if you write the runtime, GC, and other low-level system components in the C subset of Objective-C, you are just writing the kernel in C.
This is in the original context of how all systems will be written in "GC-enabled systems programming languages". If the language is not GC-enabled because it's missing a runtime or whatever, then the technicality isn't relevant.
The problem is that, at the end of the day, some code somewhere is going to have to deal with resource allocation. With all the other fluff aside, an operating system fundamentally manages and multiplexes resources. It's naive to think that resource management would best be done in a language with automatic GC. Somebody, somewhere, has to do that management by hand.
I don't doubt that C is not the systems programming language of the future. But its replacement is not going to be a language based around automatic GC either.
You use the word proven as if it means something. The work you list is no more proven in terms of building production systems than any other research work.
Additionally, I find it interesting to note that if you were in fact very familiar with all the work you mention, you would have noticed that many of these systems go through significant effort to sidestep the GC.
It is proven, because groups of people went through the effort of implementing those systems and then used them for daily work as well.
It's not just some guys showing off papers at OS geek conferences.
Native Oberon, for example, was for a long time the main operating system at the operating systems research department at ETHZ. Most researchers used it as their daily OS for all the tasks you can think of.
> Interesting that none of those systems are in active usage...
That's a ridiculous dismissal to make; we live in a world where hand-written assembly is considered a badge of honor, where people prefer to patch up a 40-year-old dinosaur rather than make something new, where people go to ridiculous lengths to avoid using a mouse simply because Unix didn't have mouse support in 1973. Is it any wonder that we're not using anything better?
> we live in a world where hand-written assembly is considered a badge of honor
Even in video games, where this held true for longer, it is generally no longer the case.
> where people prefer to patch up a 40-year-old dinosaur rather than make something new
Also not true. Generally, engineers prefer to make new things. The problem is that creating new products from scratch to replace old ones almost always ends up being more effort than expected. Failure, rather than engineering desires, is what makes patching the dinosaur the better approach; Netscape is a classic example.
> where people to go ridiculous lengths to avoid using a mouse simply because Unix didn't have mouse support in 1973
I don't know what world you live in, because in the world I live in, computers with mice are much more popular than the alternative.
The fact that none of these systems are in use is true. You can make excuses for them, but the fact of the matter is that there are a lot of smart people trying, with very little success. There is little evidence to support that this is a superior approach.
A programming language is under the copyright of its creator, who can decide to put it in the public domain, standardize it, give it an open source license, whatever.
While NeXT did not exactly create Objective-C, it acquired a license to create its own implementation, which became the official Objective-C compiler.
Plus, let's see what the outcome of the Oracle vs. Google trial is regarding copyright.
What requirements would you have for a better C? The only major requirement that I can think of is that it's compiled directly to assembly without requiring an intermediate runtime or VM. This helps avoid the fundamental "chicken and egg" problem that a lot of higher level languages have. This means that GC needs to be an optional component. It does not specifically exclude it, however.
hi antirez, do you have a top contender from the current crop, or a suspicion of what C Next would be like? I would guess that you would also eliminate languages like ATS, Felix, Rust, and Vala.
I think that languages that are just a thin shell on top of C (as opposed to languages that merely target C, e.g. Chicken Scheme) might have some potential. I'll call this coffeescriptification--still eagerly awaiting the first transpiled language that does nothing but remove C's braces and semicolons. If someone doesn't do it by next April Fools, I certainly will (I already have the name in mind: Glee (the italics are part of the name, and mandatory)).
Sounds like what you're looking for is the original Bourne shell source code, circa 1977, for example [1]:
LOCAL STRING copyto(endch)
    REG CHAR endch;
{
    REG CHAR c;

    WHILE (c=getch(endch))!=endch ANDF c
    DO pushstak(c|quote) OD
    zerostak();
    IF c!=endch THEN error(badsub) FI
}
Applicable macros, if you want to give your code that classic ALGOL 68 smell, are in [2].
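For flavor, here are a few of those macros reconstructed from memory (check [2] for the real header; this is a sketch, not the complete set), plus a small function written in the same style:

```c
/* A handful of the ALGOL-68-flavoured macros from the V7 shell's mac.h,
   reconstructed from memory -- see the actual source for the full list. */
#define IF      if(
#define THEN    ){
#define ELSE    } else {
#define FI      ;}
#define WHILE   while(
#define DO      ){
#define OD      ;}
#define ANDF    &&
#define ORF     ||

/* Sum 1..n, written in the same style; expands to ordinary C. */
static int sum_to(int n)
{
    int total = 0, i = 1;
    WHILE i <= n
    DO total += i; i++ OD
    IF n < 0
    THEN total = 0
    FI
    return total;
}
```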
I remember going through that code, and wanting to simultaneously reach for a spraycan of holy water, and claw my eyes out.
Similar efforts (the linguist at DRI who did a #define of totally new control structures . . . in Russian, though he wasn't Russian, it was just a lark) are why we can't have nice things.
I too am looking for this in Java, especially if the transpiler were easily extensible to add your own language features. The key would be to make sure it transpiles both ways, so that it could be adopted in a corporate, Java-only environment.
One thing often overlooked in language design is the filename. It's 2012; why are languages still stuck with IBM-era filename extensions when they could use Unicode?! For instance:
Rust: filename.ℛ
Java: filename.☕
Glee: filename.☺
Isn't that just so much clearer and nicer-looking than ".rs" or ".java" or ".g"? It's really the fine aesthetic points like this that can make or break a language.
Brilliant. I'll be sure to mention your <s>name</s> memory address in the foreword of the obligatory O'Reilly book (taking suggestions for the cover animal; I'm partial to the thylacine, myself).
One suggested improvement: Rust's extension should preferably be .® to jive with the language's official logo.[1] And with the advent of emoji, we can give Perl a file extension of .🐪 (Unicode Character 'DROMEDARY CAMEL' (U+1F42A)). Perl 6 can get the bactrian camel instead.
It's not clearer at all to me, especially when some of these characters won't render reliably on several platforms (for instance, the Java char isn't showing in this browser right now).
Extensions and filenames could be made less central, which I'd consider a more reasonable development than superficial silly hacks like that. This is IDE territory, definitely in 2012 and for years to come.
If you want to render your extensions as funny happy Unicode symbols in your special-purpose file manager, fine by me. Actually storing filenames like that would just make you a number of new enemies.
Indeed; however, this first blog post doesn't tell us much about the language itself. The couple of examples seem a bit... anecdotal.
I'd also like to point out that setting individual bits in an integer is not usually atomic. So the "syntactic sugar" here could be pretty error-prone (and I can't say it was very high on my list of "features I wish C had").
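To make the concern concrete: in C11 terms, a plain `flags |= 1 << n` is a non-atomic read-modify-write, while `atomic_fetch_or` makes the whole update indivisible. A sketch with hypothetical names:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Two threads doing `flags |= 1u << bit` can each read the old value and
   lose the other's update; atomic_fetch_or performs the read-modify-write
   as one indivisible operation. */
static _Atomic uint32_t flags = 0;

static void set_flag(unsigned bit)
{
    atomic_fetch_or(&flags, UINT32_C(1) << bit);
}

static int flag_is_set(unsigned bit)
{
    return (atomic_load(&flags) >> bit) & 1u;
}
```

So sugar like `x[3] = 1` would quietly compile to the non-atomic form unless the language says otherwise.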
But they've identified this as being for systems programming, where you don't have any guarantee of stdint.h holding true (POSIX compliance).
Their implementation is much more flexible, albeit slightly reinventing the wheel.
Indeed, Linux uses u32 and friends, for instance. It's really not much of a problem in practice (kernels are by definition very hardware-dependent anyway...)
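For readers unfamiliar with the convention, those aliases look roughly like this; note the real definitions live in per-architecture kernel headers (e.g. asm/types.h), not stdint.h, and the sizes below assume a typical ILP32/LP64 target:

```c
/* Kernel-style fixed-width integer aliases. Real kernels select the
   underlying types per architecture; this sketch assumes a common
   ILP32/LP64 platform where these widths hold. */
typedef unsigned char      u8;
typedef unsigned short     u16;
typedef unsigned int       u32;
typedef unsigned long long u64;
```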
If you're doing systems-level programming, you shouldn't be counting on anything being portable anyway. Reimplementing it is worthless for projects that live on a single OS. If you're implementing the OS, then these decisions need to be made anyway, independent of the language.
You should be coding to (or with) the OS you're working on. If you're doing multi-platform stuff, then #defines with a naming scheme work, and are already standard practice.
How about a few more default data structures, like resizable arrays or hashes? I find the biggest bump in my productivity in most other languages is that those two things are already there.
The problem is dynamic memory allocation. It's actually one of the gripes I have with the Rust language [1]: as a C coder, I find it hard to take seriously a language that doesn't let me customize memory allocation easily.
It's not impossible to do right, though; C++ has "allocator" template parameters for exactly that (there's a default implementation that you can "override" with a custom class).
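In plain C, the closest analogue is passing the allocation hook explicitly. A minimal resizable-array sketch with a pluggable allocator (all names hypothetical):

```c
#include <stdlib.h>

/* Growable int array. realloc_fn plays the role of the C++ allocator
   parameter: callers can substitute a pool- or arena-backed function
   with the same signature as realloc. */
typedef struct {
    void *(*realloc_fn)(void *, size_t);
    int   *data;
    size_t len, cap;
} ivec;

static int ivec_push(ivec *v, int x)
{
    if (v->len == v->cap) {
        size_t ncap = v->cap ? v->cap * 2 : 8;   /* geometric growth */
        int *nd = v->realloc_fn(v->data, ncap * sizeof *nd);
        if (!nd)
            return -1;                            /* vector left unchanged */
        v->data = nd;
        v->cap  = ncap;
    }
    v->data[v->len++] = x;
    return 0;
}
```

With the standard allocator you'd initialize it as `ivec v = { realloc, NULL, 0, 0 };`; a kernel could pass its own hook instead.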
Hey, thanks for pointing that out (and thanks to kibwen for the link).
I've been trying to use (or rather, bend) Rust for kernel development lately, and memory allocation was my biggest limitation so far (including stack allocation). Definitely looking forward to future improvements in the language; it looks very promising.
At least one of the developers wants to make Rust's runtime optional, to allow the runtime to itself be written in Rust. That would probably also serve your goals as well, so perhaps you'd be interested in helping out? :)
Quote:
"I'd like to see a 'runtime-less Rust' myself, because it'd be great if we could implement the Rust runtime in Rust. This might be useful for other things too, such as drivers or libraries to be embedded into other software. (The latter is obviously of interest to us at Mozilla.) Rust programs compiled in this mode would disable the task system, would be vulnerable to stack overflow (although we might be able to mitigate that with guard pages), and would require extra work to avoid leaks, but would be able to run without a runtime.
If anyone is interested in this project, I'd be happy to talk more about it -- we have a ton of stuff on our plate at the moment, so we aren't working on it right now, but I'd be thrilled if anyone was interested and could help."
Allocators were well intentioned, but writing them is not a walk in the park, and you cannot pass, say, a “std::vector<T, my_allocator<T>>” to a function expecting “const std::vector<T>&”.
Regarding the use of integers as bit arrays, it would be nice if the syntax supported referring to a subset of the bits, similarly to bitfields in structs.
Example based on the syntax from the article:
    x: int<+32>;
    x[0:6] = 42; // Set six bits starting with lsb 0 to 42
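For comparison, today's C spells that slice assignment out with masks; a sketch of the equivalent helper (the name and interface are hypothetical):

```c
#include <stdint.h>

/* Emulates the proposed x[lo : lo+width] = value: replace `width` bits
   of *x starting at bit `lo` with `value`, leaving other bits alone.
   A sketch: assumes lo < 32 and lo + width <= 32. */
static void set_bits(uint32_t *x, unsigned lo, unsigned width, uint32_t value)
{
    uint32_t field = (width >= 32) ? UINT32_MAX
                                   : ((UINT32_C(1) << width) - 1);
    uint32_t mask  = field << lo;
    *x = (*x & ~mask) | ((value << lo) & mask);
}
```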
Actually, it hasn't reached top priority, since we can do without it for now (we have bitfields). The issue with slicing is more a question of design: does the slice need to be a constant expression, or should we allow more expressive power (and then how could we implement that, and how could we check it...)?
Anyway, I'm looking for a clever way to define values accessed in non-uniform ways.
Integers as bit arrays were the simplest idea to test (and it seems useful to me). Now the compiler has almost everything I need to implement that kind of syntactic sugar easily.
It would work fine. It had bit-level operators, easy assembler integration, and simple macros, with decent looping constructs after the first version. Everything was an expression, which makes lots of things easier, and it had a good optimizing compiler (remember Wulf?). It did have a major quirk: you had to be explicit about addresses vs. values. To get a value you put a '.' in front of an address. It used to drive some people crazy, but I never had much of a problem with it.
seriously? i personally think this is an enormously exciting space to watch people explore, and i'm glad that the field seems to be enjoying a renaissance of sorts at the moment. there are a lot of powerful features that modern languages have developed, and if some of them can be brought to the C world without sacrificing the low-level features that systems programmers need, it will benefit everyone.