
I'm really surprised by the "hate" for C that is appearing in these comments. Whatever happened to actually enjoying the danger of getting low level? Is assembly also useless because it isn't readable?

There is a lot of great code written in C, and a lot of crappy code written in C. Because C doesn't protect you from yourself, it exacerbates any design flaws your code may have, and makes logical errors ever more insidious. So in this sense, the quality of the C you write is really a reflection of you as a C programmer, not the shortcomings of the language. Maybe you've been badly burned by C in the past, but keep an open mind and understand that C can be beautiful.


Hear, hear!

Unfortunately, C does get a lot of hate on HN. I suspect it has to do with this site's demographics. Many (not all) of the HN clan seem to be oriented towards / mostly familiar with web based technologies. I suspect that for many who have tried, going from a web dev environment to a C oriented dev environment feels like a robust shock to the system.

I'd also be willing to bet that there's an age bias at play here; C has been around, like, forever. It is certainly not the new hotness. Most (not all) people that I know who enjoy it and are proficient at it, are 40 or older. Much of the web based dev crowd that hang around HN seem to be in their 20s, and as it is a time honored tradition to poo-poo the ideas / methods / tech of the older generation(s), it's not surprising that C doesn't get a lot of love.

Yes, I realize I'm painting with broad strokes here. It'd be interesting to see a survey or three that correlates age ranges and tech used on a day-to-day basis to see if these assumptions are legit. (Anyone got any survey data up their sleeve they'd be willing to share?)

Me personally - I love it all. C, C++, Java, Python, Javascript, Rust, Haskell, Scheme, etc. Making computers do things for you, and for other people, by writing detailed instructions is quite possibly one of the funnest things in the world. Double bonus for getting paid to do it!


It's not just that HN does a lot of webdev. It's that even in its element as a "systems language" it's virtually impossible to write 100% safe C/C++ code and guarantee that it will remain safe into the future, even for experts who are making every effort to do it right. There are just too many gotchas with "undefined behavior" and too many clever compilers out there waiting for you to make a mistake.

One only needs to look at something like the OpenSSL library to see the problem. You really need to hammer the hell out of C code with something like AFL to get at a reasonable majority of bugs - and you could hammer out every last bug one day and then the next day a compiler starts optimizing away your safety checks. This isn't a theoretical problem, this actually happens. Code rot is a very real problem in C++, to a far more massive extent than any other language.

http://blog.llvm.org/2011/05/what-every-c-programmer-should-...

http://www.kb.cert.org/vuls/id/162289
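To make that concrete, here's a minimal sketch (my own, not from the linked articles) of the same class of wraparound check the CERT note describes. Signed overflow is undefined, so the optimizer is entitled to assume the test can never be true and delete it:

    #include <stdio.h>

    /* Sketch: a wraparound check the optimizer may legally remove.
       Signed overflow is undefined in C, so a compiler may assume
       len + 100 never wraps and fold the test to false. */
    static int add_header(int len) {
        if (len + 100 < len)    /* intended overflow guard */
            return -1;          /* may be silently dropped at -O2 */
        return len + 100;
    }

    int main(void) {
        printf("%d\n", add_header(2147483647));  /* would overflow: UB */
        return 0;
    }

The check works today; a smarter compiler removes it tomorrow, with no source change at all.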

Personal opinion here, but with few exceptions C/C++ are inappropriate languages for starting new development at this point. I realize the tooling is not there yet but I would rather see something like Rust used in almost all performance-sensitive applications where C/C++ are currently used. Unless you can guarantee that you are operating in a trusted environment and will only ever operate on trusted data, C/C++ is just not the right language for the job.

Yes, it's fast, but at what cost? I would gladly give up a massive fraction of my performance for better security and portability - and that's why I program Java. Not that Java is perfect either, but at least I can be certain that the sands aren't shifting out underneath my programs.

I would actually say that porting the Linux kernel to Rust would be very high on my wish-list at this point. I am well aware of just how enormous that task would be and I might as well wish for a pony too, but it gives me heartburn to think of just how much C code is sitting there operating in the most untrusted of environments on the most untrusted of data. I have every faith in the kernel guys to do it right, but the reality is there is a lot of attack surface there and it's really easy to make a mistake in C/C++. It may not even be a mistake today, only when the compiler gets a little more clever.


> Yes, it's fast, but at what cost? I would gladly give up a massive fraction of my performance for better security and portability - and that's why I program Java.

While I agree with the sentiment, a problem with Java is that you're dependent on a runtime environment with a fairly consistent history of vulnerabilities, right? [0][1]

> Personal opinion here, but with few exceptions C/C++ are inappropriate languages for starting new development at this point.

Maybe, but now there's SaferCPlusPlus [2]. At least it may be a practical option for improving memory safety in many existing code bases.

[0] http://www.cvedetails.com/product/19117/Oracle-JRE.html?vend...

[1] http://www.cvedetails.com/product/1526/SUN-JRE.html?vendor_i...

[2] shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus


I think the bottom line is that it simply takes too long to actually become fluent in 'C'. This makes it a horror for open source, where you have to draw on volunteers.

You simply can't just write 'C' without making sure all the details that are necessary to run safely are in scope at all times.

While I agree - the OpenSSL cases certainly show the weakness of the language, there's just no way I'm gonna hang all that on 'C'. Writing protocols and protocol drivers is a fairly tedious sort of skill to attain. We inevitably descend into a counterfactual ... "fantasy" ( sorry; don't mean anything insulting by that - besides I do it too - it is just the nature of counterfactuals ) in which 'C' ends up the villain, when it was a much richer set of failures in play.


I don't think anyone can demonstrate that it is virtually impossible to write 100% safe C code. Sure, you can always find people who don't know how to write a proper safety check. That doesn't mean nobody knows. You can always find people who ignore or don't know about best practices, but that doesn't mean everyone's like them. And you can find people who write goto fail; and ignore the warnings about unreachable code posted by any half-decent compiler or static analyzer, yet there are people who will pay attention to that kind of stuff. People scream UB, UB, C is evil because of UB, but goto fail is essentially a logic bug, something you could have implemented in any language. It doesn't need UB to happen.
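For context, the "goto fail" bug had roughly this shape (a simplified sketch, not the actual SecureTransport source): the duplicated goto is unconditional, so the final check becomes dead code, which is exactly what an unreachable-code warning would have flagged.

    #include <stdio.h>

    /* Simplified sketch of the "goto fail" shape: the second goto always
       executes, so the signature check below it is dead code. */
    static int verify(int update_ok, int signature_ok) {
        int err = 0;
        if ((err = !update_ok) != 0)
            goto fail;
            goto fail;                  /* duplicated line: always taken */
        if ((err = !signature_ok) != 0) /* never reached */
            goto fail;
    fail:
        return err;
    }

    int main(void) {
        /* A bad signature still "verifies": both calls print 0. */
        printf("%d %d\n", verify(1, 1), verify(1, 0));
        return 0;
    }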


> That doesn't mean nobody knows.

Yep. Have a look at the code coming from the OpenBSD crowd. Those folks really know how to wield C. It involves, first and foremost, writing readable and straightforward code, in an attempt to make any bugs obvious. The OpenBSD folks also insist on code review, which also helps.

And wrt tooling: C has some of the best tooling around of any language. GCC, Clang, and Visual C++ can all do some pretty decent static analysis, and then there are tools like lint and Frama-C, and tools like valgrind. Coverity also offers free static analysis for open-source projects. Make use of all the tools available to you. Testing is also important. Shoot for 100% code coverage (see SQLite3, for example, which has a massive test suite).
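As a small illustration of what that tooling buys you (a toy example of mine, assuming GCC or Clang with sanitizers available): the out-of-bounds write below appears to run "fine" in a plain build, but is reported immediately under AddressSanitizer.

    /* build: cc -g -fsanitize=address,undefined demo.c */
    #include <stdlib.h>

    int main(void) {
        int *a = (int *)malloc(4 * sizeof *a);
        a[4] = 1;    /* heap-buffer-overflow: one past the end; ASan flags it */
        free(a);
        return 0;
    }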

As you say, one of the requirements is to pay attention to warnings and fix them. In compiler parlance, "error" means "I can't compile this code" while "warning" means "I can compile it, but it's going to misbehave at runtime".

And here's something about undefined behavior: it's possible to know which behavior is undefined and to avoid it! Not every C program is riddled with undefined behavior.


I think that you have the formulation backwards. You claim that people can just write better, and should attain perfection.

> I don't think anyone can demonstrate that it is virtually impossible to write 100% safe C code.

I think most people come at it the other way. Most people are aware that they are fallible and want tools to help with that. Most people strive for perfection and none will ever actually attain it.

> I don't think anyone can demonstrate that it is virtually impossible to discover errors safely in C code.

There is a huge difference simply moving from C to C++ with exceptions. The type system in C++ can detect several classes of errors at compile time and prevent them from going into the results.

Then, for runtime problems, if an underlying function throws, it cannot simply be ignored. Any programmer can miss a single statement, or worse, refactor a function with a void return into one that returns an error code (which then results in every caller ignoring the return value). However, it takes a special kind of malice to carelessly use something like catch(...) in C++ to disregard exceptions and swallow runtime errors. C++ with exceptions has saner defaults because it fails fast, and the failing itself doesn't need tests until it starts doing something meaningful.
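A minimal sketch of that contrast (the save_config functions are hypothetical, not any real API): the error-code version compiles and silently loses the failure, while the throwing version fails fast unless someone deliberately catches it.

    #include <stdexcept>
    #include <string>

    // Hypothetical API for illustration only.
    int save_config(const std::string&) { return -1; }   // error code: ignorable

    void save_config_or_throw(const std::string& path) {
        if (save_config(path) != 0)
            throw std::runtime_error("save failed: " + path);  // can't be dropped silently
    }

    int main() {
        save_config("app.cfg");           // compiles fine; the failure is lost
        save_config_or_throw("app.cfg");  // propagates: the program fails fast
    }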

Now imagine the advances in error detection moving to languages that catch additional classes of errors.


> which then results in every caller ignoring the return value

And a whole load of compiler warnings. Worse yet, people who ignore return values will probably ignore the warnings too.

> Now imagine the advances in error detection moving to languages that catch additional classes of errors.

Languages don't catch errors, tools do. The C tooling has been and still is constantly improving.


Lint was created for C in 1979, as the language authors saw how easy it was to make errors; static analysis is still largely ignored by the majority of C developers nowadays.

https://www.bell-labs.com/usr/dmr/www/chist.html

I have yet to see it being used in enterprise C code.


In projects with centralized build scripts, like most projects, hopefully they have -Werror or its equivalent on by default. I was speaking about the case where a group has systematically ignored warnings and they are already beyond fixing. This is a depressingly common state for many shops. The best fix I have seen is to enable as many warnings as possible and treat them as errors as early in the project lifecycle as possible. For whatever reason, C++ shops are much more likely to do this than C shops in my experience.

If the compiler isn't the "language" enough for you, then please explain how to write a buffer overflow in Javascript?


So I see this argument as "should the tools catch these things?". I suppose that would make some people feel better. But the fact is, when you're in the seat, it's up to you to make sure you Do No Harm.

But please be aware - generalizing all failures and integrating them into the tool suite is a pretty daunting task. Perhaps the economics of it make sense. But if you're stuck writing 'C', especially on legacy code bases with legacy tools, you're stuck, and there's only the one thing to do...


That did sum up my argument well, minus the extreme you are taking it to.

You don't need the compiler or exceptions to cover all your errors. If you know something would be too costly to integrate into these mechanisms, then you are free to disregard it. I have written throwaway code that did gross things with pointers, memory, and system-specific resources. But if I want code to last and be maintainable, I do my best to get the compiler to watch my back.

This also works well when interfacing with legacy C. If the new code can be written in composable and unit-testable classes, then you can prove (only to the extent of the quality of your automated tests) whether problems are in your code or in the legacy code as they arise. Then, when you find problems in legacy code, try to break a piece out and replace it with another class, even a big ugly one, just so you can get some unit tests in there. Then you can break the big ugly class into smaller, cleaner, composable, and well-tested units.


This again. :) I think there's an angle your side of the discussion is missing on this. You might, with enough experience or team talent, be able to consistently write good code in C without defects. You might be able to do that up to millions of lines of code if your project becomes a Linux. However, the vast majority of projects will involve something along these lines:

1. The team is average or below since they're affordable or the work kind of sucks. This often happens in practice even with smart coders because the deadlines force them to move too fast with too little QA. Product might still have high impact, though, esp if it's widely-used product or service. The language itself preventing common problems is helpful here.

2. It's a FOSS project made by people that want to get stuff done without learning tons of rules for working around C's issues or stopping every common operation to prevent language itself from killing their project. I'd say vast majority of projects don't need whatever absolute advantages like max performance that C has over safer languages. Again, the language could be helpful.

3. Either of the above given the effects of time where new contributions come in that work against a codebase that fewer and fewer people understand due to organic growth. The language itself can be helpful with a combo of type-safety, programming in the large support, modules, etc. Better support for safer modifications of stuff you barely understand. Rarely a problem for Ada and Eiffel people if the team was even half-competent because the compiler simply demands it.

There's embedded people that can do one-off or steady projects however they like with enough time and tooling to get it right. ArkyBeagle appears to be in a category like that if my broken memory isn't fooling me. Then, there's vast majority of programmers either in the corporate crunch, scratching an itch barely caring, or fixing something they barely understand. Human nature will push defects in from all these directions. The tooling, if designed with human nature in mind, can prevent a lot of them automatically and aid efforts to catch the rest.

Hence my opposing the C language in favor of safer-by-default system languages, especially those that avoid the tedium of constantly watching out for the dangers of the most common operations. Gotta work with human nature rather than against it. A hard lesson I learned after years of failed evangelism of high-assurance INFOSEC. Now, I exclusively look for ways to embed it seamlessly into stuff with the other benefits listed. Much better responses on that. :)


Well, I guess you missed the Linux Security Summit:

http://arstechnica.com/security/2016/09/linux-kernel-securit...


> I suspect that for many who have tried, going from a web dev environment to a C oriented dev environment feels like a robust shock to the system.

> I'd also be willing to bet that there's an age bias at play here; C has been around, like, forever. It is certainly not the new hotness. Most (not all) people that I know who enjoy it and are proficient at it, are 40 or older.

As someone who went the "other direction" (Java -> Ruby -> Javascript) I can say that a lot of it has to do with the accessibility of the ecosystem rather than the language itself. This could absolutely just be my filter bubble, but I've noticed that the communities surrounding Ruby, Python, and Javascript seem to go above and beyond the call of duty when it comes to making libraries easy to use, documenting those libraries, building and refining the tools, and so on.

I know there are good tools out there for C development. I know there are good learning materials. I know there are communities out there dedicated to writing good C code (Shout-out to /r/c_programming on Reddit. Love those folks.) But I can't sort out the signal from the noise, because there isn't a lot of discussion about C programming happening in the online spaces I'm familiar with. As a counterexample, there was a _fantastic_ article on here the other day about "writing your own syscall" in Linux. Yes, it contains a lot of hand-holding and overexplanation, but that's useful for me because I haven't built up the mental model to parse a more terse explanation.

In fact, I think this is how having "the new hotness" change every couple years has been helpful _in some respects_- there's an incentive for lots of people to write blog posts, tutorials, and articles about how to properly use the latest and greatest tech, there's active development going on as people forward-port functionality (and therefore plenty of opportunity for devs to make meaningful contributions and have meaningful discussion about "how to write code using this language/library/framework"). For a short period, both the "old hands" and the newbies are in the same boat, and this is unbelievably useful for training up the next generation of developers.

> Me personally - I love it all. C, C++, Java, Python, Javascript, Rust, Haskell, Scheme, etc. Making computers do things for you, and for other people, by writing detailed instructions is quite possibly one of the funnest things in the world. Double bonus for getting paid to do it!

Same here, friend. :) For what it's worth, I wish there were more of this attitude floating around the Internet.


It gets a lot of hate because the majority of developers are not embedded developers, kernel developers, or doing anything involving hardware. The other reason, IMO, is that to do anything that's actually kinda cool or fun in C you have to get pretty adept, so it's probably just written off as an old, boring language.

Personally I'm in my mid-20s and quite enjoy working in C. And for things like bit manipulation it's much easier than in higher level languages. I suspect at some point even the smallest MCUs will be able to run Rust or Go, but until that happens there is still a place for C/C++. Haters can hate but that won't change the fact that C is still the most widely supported language for embedded platforms (and Linux, the other elephant in the room).


People have strong feelings about C because C is far from being perfect by modern standards and yet it continues to be the single most important programming language of our time. There is nothing wrong or surprising with people being frustrated about this fact. I only wish that there was less irrational hate on this forum in this regard.


Well, the haters could always reimplement the whole infrastructure in their language of choice, wouldn't they?

It's been done at least once before for ideological reasons (and in C, nonetheless) by the FSF. It should be even easier to give it a go in modern languages. I bet you can even get funding if you can write a compelling case that the wheel is actually broken!!!


There is also the issue of undefined and implementation defined behavior.

When developing on one platform for an extended period of time, it is human nature to forget which features are implementation defined as you use them day after day and then have unexpected errors/flaws when porting.
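The classic small example of this trap (a sketch; platform conventions vary): the signedness of plain char is implementation-defined, commonly signed on x86 ABIs and unsigned on ARM ABIs.

    #include <stdio.h>

    int main(void) {
        char c = (char)0xFF;
        if (c < 0)
            printf("char is signed here\n");    /* typical on x86 */
        else
            printf("char is unsigned here\n");  /* typical on ARM */
        return 0;
    }

Code that quietly assumes one answer can work for years and then break on the first port.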


Undefined behaviour actually isn't the monster that most C language lawyers want you to believe it is. With tools like valgrind, address sanitizers and modern debugging toolchains, most of these issues can be caught. Compilers are also mature enough to issue warnings about the use of uninitialized variables, missing return statements or mismatched printf specifiers. Heck, Clang maybe has more than 250 -W options.
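For instance, a mismatched printf specifier like this one draws a -Wformat warning from both GCC and Clang out of the box:

    #include <stdio.h>

    int main(void) {
        long n = 42;
        printf("%d\n", n);   /* -Wformat: '%d' expects int, argument is long */
        return 0;
    }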


In theory, a good fraction of these can be caught. In practice, these issues keep coming up in production again and again and again.


Knowing your tools and compiler switches is key. The reward is that the final production code can be very lean and performant, without any runtime penalties to provide safety.

Most people who complain about the dangers of C probably have used it in an unprofessional setting without any additional tooling. It's a bit like saying that all RWD cars are dangerous just because you've once driven a '92 BMW, disregarding any technological advancements since.


> Most people who complain about the dangers of C probably have used it in an unprofessional setting without any additional tooling. It's a bit like saying that all RWD cars are dangerous just because you've once driven a '92 BMW, disregarding any technological advancements since.

Actually, most of the people I know who think C is a problem that needs fixing are longtime professional compiler developers and people who work on security critical codebases. In fact, I don't know any compiler engineers who don't have serious reservations about C and C++. Those people know more about tooling and instrumentation than virtually anybody. It's precisely that knowledge that leads one rapidly to the conclusion that there are serious flaws in C for secure software that can't just be papered over with tooling.

It's usually C++ enthusiasts who are the ones trying (unsuccessfully, IMO) to argue that undefined behavior isn't a problem in practice.


Are you sure you aren't mixing causes and consequences? I'd say it is actually because undefined behaviour is hard (I didn't say impossible) to get right at the human level that tools were, and still are, being developed.


UB and IB ( implementation-defined ) have not been a problem for me for a couple decades now. No advocacy here - I started using C because it was about all there was - but it's just a learning curve.

There was no direct cost to me because I was getting paid to learn this stuff on the job.


> With tools like valgrind, address sanitizers and modern debugging toolchains, most of these issues can be caught.

Most of these tools require support from an operating system. This is not the case when you do kernel programming. For some reason even existing tools are not popular among kernel programmers [0].

IMO, there are bugs that can be caught at compile time without any effort on my part, so why should I waste time catching them at runtime?

I would rather make love to the compiler than have sex with the debugger.

[0] http://lwn.net/2000/0914/a/lt-debugger.php3


>Compilers are also mature enough to issue warnings about the use of uninitialized variables

That depends on the compiler. It's not true with GCC: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=18501


Any user action that you didn't have a test case for can provoke catastrophic UB that valgrind never saw, so catching most issues has almost no value. This is why every OS and every app I have ever used were always unreliable shit.


Undefined/implementation-defined behavior are necessary if you want optimal performance (unless the compiler requires formal proofs of correctness).
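The textbook example of that trade-off: because signed overflow is undefined, the compiler may assume the counter below never wraps, which lets it widen the induction variable and vectorize freely (a sketch of the usual case, not a claim about any particular compiler's output).

    #include <stdio.h>

    /* Signed 'i' cannot legally overflow, so the optimizer may promote it
       to a wider induction variable and vectorize without wrap checks. */
    static long sum(const int *a, int n) {
        long s = 0;
        for (int i = 0; i < n; ++i)
            s += a[i];
        return s;
    }

    int main(void) {
        int a[] = {1, 2, 3, 4};
        printf("%ld\n", sum(a, 4));   /* prints 10 */
        return 0;
    }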

>it is human nature to forget

High-performance programming is a job the average human does not do. A professional programmer should use spaced-repetition technology to rise above human nature, and use tools like valgrind for extra safety.


Porting is its own thing - you must separate implementation-dependent things and ... "business rules" fairly strictly if you are to keep it portable. I'd also strongly suggest a really comprehensive test suite that beats the snot out of the implementation-dependent portions.

Somebody mentioned "scripting languages" - use the ability of scripting languages to construct combinators to write your tests. They might even emit 'C' code.


>enjoying the danger

This is a hilariously bad attitude for any software that other people will use. When software crashes, people lose work and time. When software has vulnerabilities, bad guys take advantage of them and build stronger botnets. "The danger" isn't like wiping out when you're pulling a stunt; "the danger" is wasting the good guys' time and empowering bad guys.

>There is a lot of great code written in C, and a lot of crappy code written in C

This is true of any mainstream language, so it's completely uninformative and pre-emptively shuts down the possibility of any meaningful language criticism.


> This is a hilariously bad attitude for any software that other people will use. When software crashes, people lose work and time. When software has vulnerabilities, bad guys take advantage of them and build stronger botnets. "The danger" isn't like wiping out when you're pulling a stunt; "the danger" is wasting the good guys' time and empowering bad guys.

Computers aren't just tools, they're also toys. People use computers for entertainment in varied and sundry ways. What is so wrong with somebody wanting to enjoy hacking around in the low level guts of a system? As long as no lives or livelihoods are at stake, what's the problem?


What's amusing to me is the amount of terribly unsafe code (that isn't C) that powers rockets, moon landers, and a variety of other safety-critical systems and yet isn't the subject of such persistent and severe criticisms. There's a reason C and C++ are targets. My (obviously controversial) opinion is it has at least as much to do with ego as a desire for safety.


As far as I know most space software these days, and embedded in general, is in (at least a sub/superset of) C.


Yes, but it wasn't always, and still isn't always.

Although your point is very good in that it weakens (further) the "safety is everything" argument. In my opinion. There is so much mission critical software today that is written in C and C++. That's one reason why "safety, safety, safety!" just isn't as persuasive to me as it perhaps is to others.


We have a winner. Kill your ego. It's the only way.


On the other hand a lot of the criticism of C and C++ is structured to exaggerate their deficiencies and minimize the proposed alternatives' by couching the comparison in contexts which favor the latter over the former by dint of language design. I'm not convinced that is a path to open and honest discussion, either.


> So in this sense, the quality of the C you write is really a reflection of you as a C programmer, not the shortcomings of the language.

Can't you substitute "C" with just about anything in this sentence?

It's all well and good to talk about how "beautiful" a language is, but when people are literally endangered because of totally preventable security vulnerabilities that don't happen in programs written in other languages, it's hard to sway me as to how important this so-called "beauty" is.


But don't the security vulnerabilities come from poorly implemented code? These vulnerabilities are not inherent to C.


In what commonly used language other than C and C derivatives do you regularly see use-after-free leading to remote code execution?
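For anyone unfamiliar with the term, the bug itself is this small (a toy sketch; turning it into code execution relies on the allocator reusing the freed block for attacker-controlled data):

    #include <stdlib.h>

    int main(void) {
        int *p = (int *)malloc(sizeof *p);
        free(p);
        *p = 42;   /* use-after-free: undefined behavior, a classic exploit primitive */
        return 0;
    }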


C makes it trivial to implement poorly, though.

(Note: I'm playing devil's advocate here to some extent. My view is that safety is important, but lack of provable safety is not some terrible Demogorgon that we should hide in fear from. I think a lot of the concern over safety is valid, but in some contexts it's just overhyped.)


My view is that lack of provable safety should be resolved by defensive code (runtime checks). And then, you are safe (if safety is important in your code, which it probably should be by default in a professional setting).


I agree, it is solvable by defensive code. The vast majority of the time that code is perfectly sufficient. The hundreds of thousands of embedded C programs that run every day without crashing or blowing apart from memory safety bugs - and the people who don't die as a result - demonstrate this. I don't think people understand just how much of our world is run, quite literally, by "not provably safe" code. It's not just C and C++, either.

Which is one reason why I don't buy the "memory safety" argument as a very strong one for adopting Rust. There are other much better reasons to do so for a certain class of programming, in my opinion.


Vulnerabilities like buffer overflow do not happen in languages with a string type. Humans are responsible if something bad happens, but without a safety net, the outcome is worse.


C has a perfectly useable (null-terminated) string type, and there is no good reason to ever have a buffer overrun in C.

I understand that this is... obscure for some reason and I'm not saying it never happens, but let's be realistic....
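In that spirit, here is the kind of discipline being claimed, as a minimal sketch: sized writes via snprintf never overrun the buffer and always NUL-terminate, trading overflow for explicit truncation.

    #include <stdio.h>

    int main(void) {
        char buf[8];
        /* snprintf writes at most sizeof buf bytes, including the NUL */
        snprintf(buf, sizeof buf, "%s", "a long input string");
        printf("[%s]\n", buf);   /* prints "[a long ]": truncated, not overrun */
        return 0;
    }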


C has a char* type, which we call a string, but it is also the type of a pointer to a single char, which is not a string at all, and also something perfectly usable. "Ends with nul" is barely a part of C, it's more like a programmer's agreement. The language doesn't enforce it, require it, or check it. All it does is insert nul characters in literals, which is hardly enough to make a string type.

Thus if you have a to_upper(char*) function, you don't know what it takes or does without looking it up. Does it uppercase a single character or a whole string? How do you even tell what you were passed without potentially reading past the end of a buffer?

If I happen to have a pointer-to-char and pass it to a to_upper function that operates on strings, it will just write on invalid memory, because C can't distinguish between the two.


From the signature, I would say it expects a NUL-terminated sequence of characters (a C string) and that it would modify it in place to upper-case each character. C already has a standard function:

    extern int toupper(int);
(via #include <ctype.h>) that will upper case a single character. If, on the other hand, I saw:

    extern char *to_upper(const char *);
I would expect that to_upper() returns a new string (freeable via a call to free()) that is the upper case version of the given string.

> If I happen to have a pointer-to-char and pass it to a to_upper function that operates on strings, it will just write on invalid memory, because C can't distinguish between the two.

Um ... how do you "happen" to have a pointer-to-char? And unknowingly call to_upper()? I'm lost as to how this can happen ...


The signature doesn't tell you that. If my API said

    int frobnicate(char*)
and you make that kind of assumption, then your code may or may not work, depending on what the function does internally. You simply do not know whether I am operating on null-terminated char sequences or a single char.

>Um ... how do you "happen" to have a pointer-to-char?

    char* text = "some text";
    char* c = text[2]
There you go.

>And unknowingly call to_upper()?

Who said anything about unknowingly calling a function? It's "toupper", not "string_to_upper" or "char_to_upper". The function signature simply doesn't tell you what the function requires of its input.

PS: char* is also a pointer-to-byte in C.


Your response to me shows you don't program in C all that much. I ran your code example through a C compiler and got:

    a.c:2: warning: initialization makes pointer from integer without a cast
    a.c:2: error: initializer element is not constant
What you really want is:

    char * text = "some text";
    char * c    = &text[2];
which still doesn't prove your point because c is still pointing to a NUL-terminated string.

If frobnicate() really takes a single character, I might ask why the function requires a pointer to char for a single character instead of:

    int frobnicate(char);
but if you are going to really argue that point, so be it. It disregards the fact that in idiomatic C, a char * is generally considered a NUL-terminated string (and here I'm talking ANSI C and not pre-ANSI C, where char * was used where void * is used today).

You are also shifting the argument, because in your original comment I replied to, the function you gave was to_upper(). toupper() is an existing C function.

P.S. char * is a pointer-to-character, not a "pointer-to-byte", pedantically speaking. All the C standard says is that a 'char' is, at minimum, 8 bits in size. It can be larger. Yes, there are current systems where this is true.


A single typo doesn't tell you anything about my programming habits.

>which still doesn't prove your point because c is still pointing to a NUL-terminated string.

No, it's pointing at a char that happens to be part of a nul-terminated string. The semantic intent of that distinction is entirely lost because C fails to make a distinction. I could easily overwrite that nul, and it would no longer be the case. Then it's suddenly an array of chars, and everything pointing at it is now a new type of thing.

    char* s = (char*) rand();

This also will point at a 'nul terminated string' with very high probability. Doesn't mean it is safe to call string functions on it...

>I might ask why the function requires a pointer to char for a single character instead of int frobnicate(char)

You could say the same about any pointer argument. Obviously pointers are useful for a reason. If frobnicate returned a char, I would just end up dereferencing a pointer to stick it back in the string it came from. Whether that is frobnicate's job or its caller's job is a matter of API design, and should not be determined by C, especially when it makes no preference for any other kind of pointer.

>You are also shifting the argument, because in your original comment I replied to, the function you gave was to_upper

My arbitrary example function name doesn't matter one iota. Get over it, and stop being needlessly dense.


This is all true.

So don't do that.


Don't worry about me, I never make any mistakes. I'm a true C programmer: I believe that "implement a good string type" is an unsolved problem and that the last 50 years never happened.


Your first statement is pretty false, even in Rust (for example). Unless you mean something else by "buffer overflow" than I'm accustomed to.


You are right, "do not happen" sounds too much like "will never happen". See also Wikipedia's entry about that example[0]. My point is that if the programmer can't prove accesses are always within appropriate bounds, there should be a runtime check. That is simple. This is not "slow" (and even in the case where you need it fast and are OK with random crashes, avoiding checks should be explicit). And some languages do it by default and make it really hard to mess with memory.

[0] https://en.wikipedia.org/wiki/Buffer_overflow#Choice_of_prog...


Well, yes, I agree in general bounds should be checked at runtime when it isn't possible to statically verify access at compile time.

I'm not sure how default access in C or C++ isn't explicitly avoiding checks. By definition "a[b]" is an unchecked dereference. It doesn't get more explicit than "by definition." Of course if by "explicit" you mean "syntax exists that demarcates unchecked access" then C and C++ will never satisfy. I'd argue that's a contrived and artificially narrow use of "explicit" meant, er, explicitly to exclude C and C++ from being acceptable by definition and therefore not terribly fair.



Yes (Rust's "unsafe" blocks serve the same purpose), and my point is you're narrowing the definition of "explicit" to exclude C or C++ by definition. And that isn't exactly fair, in my view.


There is no doubt that C, by definition, opts out from performing bound checkings. But if bounds were always checked by default (implicitly), then you would have to opt-out explicitly, which is a safer approach, because all else being equal, in case of a programming mistake, the code ends up not being vulnerable to that specific kind of attack.


Yes, I totally agree with this, and I think it's funny when people go off on the danger of C, given that I learned it along with many other kids between the ages of 8 and 12. At Oglethorpe University they ran a coding camp for children like me interested in learning C, QBasic, and ASM. At the behest of a fan letter I wrote to LucasArts, as soon as I saw a C instructional class for younger people I had access to, I signed right up. Being in Florida, that was the closest my family and I could find within our budget. I remember there was one kid in the ASM class who made a TSR for another, more obnoxious student that re-wrote his hard drive until it physically broke. It was quite the statement, but some of us apparently didn't consider these design features dangerous; rather, we considered them powerful. As far as writing good C code is concerned, if a bunch of pre-teens could do it, then I'm sure it's possible for anyone given enough practice.


C was not the only way of doing it.

Many of us were enjoying the danger of getting low level with Think/Quick/Turbo Pascal and Modula-2.


Turbo Pascal even allowed you to mix in assembler right in your source code. That was super awesome at the time. No need to write separate assembly code, no need to link!

(not to say that Turbo Pascal was the only one to do that, just fond memories...)


I once wrote a mouse driver for MS-DOS like that.


Interesting. Though I used Turbo Pascal quite a lot earlier, I either don't remember that feature (being able to mix in assembly) or may have known of it then but forgotten it later. Getting a vague recollection - was it something like a function call of sorts (syntax-wise)? Start with a $something, then open parens, then some assembly statements, then close parens?

If that was the way it was done in TP, the BBC micro (which was mentioned quite a bit in the recent HN thread about BASICs on personal computers of earlier years), also had a similar feature. I did use that one a bit. You had to open a square bracket in the middle of your BASIC program (though probably not in the middle of a BASIC statement), write your assembly code (6502 instruction set), and then close the square bracket, IIRC.

D (language) these days also has the ability to mix in assembly, though I haven't tried it yet.

Edited for typos and wording.


In TP you could use inline Assembly in several ways.

It could be just a block, or a complete procedure/function with or without a prolog.

Also it was quite comfortable to write, just as a plain macro Assembler with Intel syntax, not those asm functions with strange syntax used in gcc/clang.


C also allows for inline assembly.


Modern Delphi carries on that tradition.


I don't recall specifically about MSC but Turbo C had the same....

We considered having all the assembler in separate files better practice...


Even in embedded examples, I see a lot of inline ASM in C functions. So, what does the separate-files approach look like? Do you just compile them separately, link them in as libraries, wrap them as a C function, and then call it? And what was the argument for this over just putting the assembly inside functions of C source where necessary?


We'd generally wrap them as C functions. We'd frequently use the compiler to generate the assembly; have a prototype and all that.

Which is better depends.


I think it's not being taught very much or used professionally as much as it used to be. So people who do have exposure feel the frustration of beginners, and never reach the point where they are productive with it and start to appreciate its strengths.


As for me, I like C because I consider it a "high level assembler", as a backend for modern programming languages like Nim which profit from the C compiler's strong code optimizations. If there is any new hardware platform, there usually is a C compiler, too. This makes porting source code really easy.


" So in this sense, the quality of the C you write is really a reflection of you as a C programmer, not the shortcomings of the language. "

That's not true. The BCPL language was specifically designed to get something to compile on a machine with no resources to run safety checks, programming-in-the-large features, anything. C tweaked it a bit to run on a PDP-7 and then a PDP-11. The problems that are in C are there specifically due to the challenges of non-language experts getting a relic of a language to run on machines with no resources.

Later on, Wirth designed Modula-2, which was safe by default where possible (e.g. overflow checks), low-level, easy to read, faster to compile, allowed integrated assembler, and so on, and it also compiled on a PDP-11. They did whole OS's in languages like that. There were numerous languages like that with similar performance to C but way safer and easier to extend. Then there are languages like SPARK 2014 that let you write code that is automatically verified free of common errors. As in, those errors can't happen under any circumstances in such apps, rather than just in whatever you thought of during testing.

Having seen those and knowing C's history (i.e. the justifications), a number of us know its problems aren't necessary, are easy to avoid with better design, and you still get most of its benefits. The worst-case scenario is wanting that but also wanting the benefits of C compilers that received a ton of optimization work over the decades, or of C's libraries. In that case, the better language can generate C code as a side effect and/or use an FFI with interface checks added. Still safer by default than programming C by hand.

Heck, there are even typed assembly languages these days you can prove stuff about, plus work on verification of LLVM's intermediate code. So, even for low-level performance, C still isn't the lowest or safest level you can get. It's just the effect of inertia from years of doing stuff with it, a side effect of UNIX's popularity and the tons of code that would have to be ported. Again, you can use that without even coding in C at all past some wrappers. So, people liking the safety and simplicity of Modula-2, Component Pascal, etc. prefer to avoid it, since we know the problems aren't necessary at all. Some want the extra benefits of stuff like SPARK or Rust, too.


It's because C is terrible when it's not strictly necessary, and it's not strictly necessary for the vast majority of things people work on here.


That would be perfect if you left off the "here". I worked with C compilers and libraries without ever writing a line of C outside my code generator. A lot of companies and people do that. Those not needing to integrate with C code outside of APIs just need an FFI, with a number of systems languages available.

Truth be told: system programmers either never need C or need it so few times it's almost totally unnecessary. Others don't need it at all. So, it's "not necessary for the vast majority of things application or system developers work on." ;)


I don't hate C. I'd rather program in C than C++. There's a "uniformity" and simplicity about C that makes it beautiful. You've got structs, functions, and pointers...that's it.

I remember reading some of ID's engine code and admiring how well I could follow it and know what's going on. With C++ and other OO languages, it's much harder.

Don't get me wrong, I'm not going to write my next web app in C, and there's some obvious benefits to the features that C++ offers, but C++ ain't beautiful.


Comments like this perplex me. I am a full-time C++ dev and I could just rewrite your comment, moving the "++" to the other "C".

I really like that I can create a class that stores all the knowledge of one concept internally; if I wrote it correctly, I never need to look inside it again. Even better, if I document the contracts of using a class, I can carefully optimize it and have broad performance effects with small code changes.

Things like std::string are just so much easier to work with than their C counterpart, and things like std::filesystem::path (or the Boost one if you don't have C++17 yet) simplify so many things and don't even have a C counterpart. I point these out as simple examples, but in all the code bases I have worked on there are similar ones; in most games, for instance, a class to represent 2d and 3d points, which are used to define AxisAlignedBoundingBoxes, which are needed for collision detection algorithms, which themselves need several classes to describe.

Then I can build systems of a size and complexity I literally could not comprehend without those abstractions. And then the compiler enforces them for me so other people can use them safely as well. Why is it so much harder to do this in C if C is so much "simpler"?


C++ has the issue that there are a lot of ways to hide magic, and a lot of hidden magic can blow up in unexpected ways if it interacts badly with other parts of the language that you do not understand.

The result is that you really need to stick to a subset of the language that has been chosen to work well together. Safely adding to that chosen subset is challenging. And it just takes one developer to create a major headache.

See, for example, https://google.github.io/styleguide/cppguide.html#Exceptions documenting why Google is not willing to allow even something as basic as exceptions to be used in C++ code.


Every language can hide stuff. Classes, templates and operator overloads are just pretty wrappers for functions with better designed conventions for the common cases. Anyone can write "hidden magic" in any language with anything like these abstractions.

It is particularly easy to make this hidden magic in C. For example, there is no way to express who has ownership of a pointer, or whether a function expects a pointer or an array. I have seen plenty of C libraries that document things like this, but for each one that does there are 3 that don't. And each one that does document things like that does it differently, with different conventions but for the same reasons, so I cannot even rely on common idioms to be safe; I must understand every part of each library I call. It is much clearer what is going on in C++ when a function accepts or returns a std::unique_ptr, and I cannot screw it up without trying hard.
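A small sketch of that contrast (the consume functions are hypothetical): the C++ signature encodes the ownership transfer, and misuse fails to compile instead of leaking or double-freeing.

    #include <memory>
    #include <utility>

    // Hypothetical sinks, for illustration.
    void consume(std::unique_ptr<int> p) {}   // takes ownership; frees on return
    void consume_c(int *p) {}                 // says nothing about who frees p

    int main() {
        auto p = std::make_unique<int>(42);
        consume(std::move(p));   // the transfer is explicit at the call site
        // consume(p);           // would not compile: unique_ptr is move-only
        return 0;
    }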

What seems more important to me is exposing the relevant parts of the software when needed. If I care about the business logic (even if it's not a business app... HP and Mana can be considered the business logic of a game), it can be hard to tease that out when lots of "hidden magic" is shoved in my face. But when I need to handle a new file format that the business logic requires indirectly, I don't want to mess with the business logic. Having more tools to cleanly express this helps. So having a type that handles this IO while some other type handles business logic is indeed changing magic into "hidden magic", but it is also enforcing separation of responsibilities - something much harder to do when your only real tool of abstraction is functions.

The only people I have met in real life who stick to anything like the "hidden magic" argument are the same people who advocate for single large functions. These people like their functions on the order of hundreds or thousands of lines so they can "see everything". You aren't doing that, are you?


The problem is not so much hiding stuff. It is being surprised by the interactions between features. The "can't get there from here" moments, like trying to printf to an iostream. If you're not careful, simply scanning through a large file is several times slower than in other languages. Template magic makes seemingly reasonable type names blow up into insanely hard ones to figure out. There are non-obvious patterns that you have to know, such as RAII.

And good luck if you want to make things portable! I remember at Google being asked how I checked in unit tests that broke integration tests. Turns out that GCC and Clang disagreed on whether a friend of a subclass can see protected elements in the parent class. The local language lawyer decided that gcc was right, sent off a bug report to Clang, and I had to make my little unit test class a friend of the parent class as well. Maybe I was unlucky, but why does this sort of thing not happen to me in other languages?

In other languages I am consistently able to read up on the language syntax and features, implement things within my knowledge, catch my bugs with unit tests, and have a basically working system. I've had this experience with C, Lua, Perl, Ruby, Python, PL/SQL, Java, JavaScript, etc.

But C++ finds ways to astound and surprise me. Perhaps if I was a full-time C++ developer I'd learn all of the corners and would simply be productive. But the last time that I wrote a non-trivial C++ program, there wound up being multiple things which worked right in unit tests but not the full program until I ran Valgrind and followed suggestions that I would have had no way of figuring out on my own.

Yes, I'm aware of how easy it is for third party libraries to be bad. From dangling pointers in C to monkey patching in Ruby there is a lot of crazy stuff that can be screwed up by third party developers. But C++ is the only language where I had trouble not screwing myself up.


Phrased this way your concerns seem much more valid.

As for C++ file IO, I think it sucks too that something as idiomatic as iostream iterators is pretty much garbage. I hope they are removed in STL2 and they keep the good parts of iostreams and ditch the bad. With tellg and seekg you get the same or better performance as the C library functions, unless you care about Visual Studio performance... but if you are using that compiler you never actually cared enough about performance to benchmark.

I feel I must point out that C features shouldn't be expected to interact with C++ features. printf does what it does; it was not designed to live with objects and types. It is a holdover from the C days. It is totally unaware of non-trivial types and does really gross things when you give it the wrong type. Using some other function to write to something that can be streamed, or writing an operator<< overload for the thing you actually wanted to output, seem like the simplest approaches.

As for finding implementation-specific bugs, you claim not to have found them in other languages, then include Javascript in a list of things you have used. Javascript is the poster child for implementation-specific problems, to the point where there are several sites that put real effort into describing differences between implementations. This whole list seems odd. These languages either have one implementation (Lua), so cannot have differences, or are so under-specified that of course the implementations have huge differences (Ruby, Python, SQL). Clearly I have had more problems with all of these than you; I find all languages terrible at this point, I just find a few to be less terrible.

I think I may know why you are shooting yourself with C++, as you put it. If you find RAII non-obvious after you have worked with it then there is definitely a problem with how you are approaching something about C++.

RAII, or deterministic destruction in general, is probably the single strongest thing the C++ language brings to the table. With RAII you can implement your own memory safety model via any kind of shared pointer you can dream up. With RAII you can prevent race conditions by creating exception-safe mutex guards. Write your own transaction handler by putting the rollback code in the destructor. With RAII you can clean up any kind of resource in a deterministic and safe way that few other languages offer.
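The mutex-guard case in miniature (a sketch using the standard std::lock_guard): the lock is released in the guard's destructor on every path out of the function, including the throwing one.

    #include <mutex>
    #include <stdexcept>

    std::mutex m;
    int shared_count = 0;

    void update() {
        std::lock_guard<std::mutex> guard(m);      // lock acquired here
        if (++shared_count > 100)
            throw std::runtime_error("too many");  // guard still unlocks m
    }   // destructor releases the lock on normal return as well

    int main() {
        update();
        return 0;
    }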

I had to do some work with automated UI testing recently (a complex application and framework with C++, Java, Ruby, and Python). My application leaked resources in a very gross way because we had to pass handles to our resources back out to users writing scripts that could manipulate them. The leaky resource was whole web browsers.

For a combination of technical and business reasons, the only suitable tool for creating browser instances was a Java one. If I could have relied on Java's finalizers being called, I could easily have closed them there. We found several situations where they clearly failed, and much documentation about the reasons they fail (and apparently how the JVM could be fixed if the standard authors were so compelled). After a couple of weeks of research, and failing to explain to the various segments of the Java community why the "using" keyword was inadequate for this usage pattern, the smallest hack we could come up with was a silly watchdog timer that checked the processes on the machine and knew when the web browser manipulation API was used. This was almost 1,000 lines of code to get right, buying nothing but resource safety in the face of exceptions. It would have been 4 lines of C++, and two of those would have been curly brackets.

Of course I am biased; I picked a story from my experience to suit my argument, as you have done with yours. I am still not sure how one hurts oneself more with C++ than with C, particularly if you have access to C++11, C++14, or C++17; it seems a fair bit safer than Java or Ruby because of the precise guarantees and strong tools for safety the language lacked before. Still can't keep up with Rust or Haskell in the safety department, though.


It is not that RAII is non-obvious after you've worked with it. It is that you can read through how the language works somewhere like http://www.cplusplus.com/doc/tutorial/, start producing software, and not realize that you have to do RAII. You can even, as Google did, have experienced and competent people write a large and well-structured program and then only belatedly realize that you can't use certain features because they didn't structure the program right.

There is a lot of that in C++. If you get everything right, then wonderful. If you don't, then that is a problem.

On implementation bugs, I have found implementation bugs in lots of languages. But not generally as things that I stumble over while proceeding with what seems to me like it should be the obvious thing to try.

With C++ it isn't like that. I gave you an example where there is a disagreement between compilers. But, for example, what happens if I supply an ordering method that isn't consistent? In other languages I get stuff sorted slightly inconsistently. In C++ I get a core dump. Good luck figuring out why.

On the complaint that you have about Java, that falls in the category of things that I expect to have to deal with. Part of my mental model for a language has to be the memory management rules. C++ lets you set up whatever you want. Perl does deterministic destructors but therefore can't collect circular data structures without help. Java has true garbage collection so it collects circular data structures, but it can't guarantee that they are collected in a timely way. JavaScript does true garbage collection now, but back in the IE 4.0/5.0 days they separately collected garbage for native objects and JavaScript objects with the result that garbage never got collected if there was a cycle between native and JavaScript objects.

This is one of the basic facts that I know that I have to understand about any language I work with. It is like pass by value vs pass by reference, or the scoping rules. I immediately look for it, understand it, and then am not surprised at the consequences. I see other people use the language for a while without trying to understand that. I understand their pain. But I'm not caught by surprise.

However, C++ keeps finding new ways to surprise me. In the past I reserved it for cases where I needed to implement an algorithm and squeeze out an order of magnitude or two of performance, with precise memory layout, beyond what is available in scripting languages. I've resolved that the next time I need that, I'll try a new language. My past experiences with C++ haven't been worth it.


I think this says more about John Carmack's divine software engineering talent than it does about C++. I've seen some slick-looking C++, and I've seen some flat-out atrocious C (and vice versa). Even Carmack decided to switch to C++ for id Tech 5.


"I remember reading some of ID's engine code and admiring how well I could follow it and know what's going on."

Note, though, that id hasn't really created an influential game in 15 years, and arguably the game of the century (Minecraft) was programmed, badly from what I hear, in Java. People often say that "you can write good C code" without considering what you're giving up in terms of architecture and creativity.


Writing good C code also means knowing when to use another language on top. Scripting languages such as Lua are commonly used in the games industry.


C is utterly predictable. Maybe you're the exploding chainsaw in this metaphor.


How detailed is your knowledge of undefined behavior?


We know when undefined behavior will occur (the specs are written very clearly), but not what will happen when it occurs. Our job as competent C programmers is to avoid undefined behavior. C isn't hard (perhaps tedious to do correctly) - it does exactly what you tell it to.


It does, but it's unnecessarily difficult to keep track of all the places undefined behaviour might occur and make sure you don't step into any of them under any conditions ever.

We shouldn't have to work like this anymore. C's been an amazing language, but it's getting time to gently, respectfully, move on. There's active and interesting development in alternatives which attempt to retain C's primary advantages while also allowing the compiler to keep you out of trouble as much as possible.

We have hugely powerful computers available to us as developers. We can contemplate designing and compiling languages with a complexity that would have completely defeated the systems available when C was developed. Why shouldn't we use some of that power to make our lives easier?


You'd be hard pressed to find a non-trivial C program that doesn't contain any undefined behavior.


Depends on what you mean by "non-trivial," I should think.


> it does exactly what you tell it to.

This is a tautology.

What do you expect to happen on signed overflow?
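
(A concrete sketch of why that question has no good answer: signed overflow is undefined in both C and C++, so an optimizer may assume it never happens and delete the very check meant to detect it. gcc and clang will typically fold the test below to false at -O2; building with -fsanitize=undefined instead turns the overflow into a loud runtime error.)

    // An overflow check the optimizer may legally remove.
    #include <climits>
    #include <cstdio>

    int will_overflow(int x) {
        return x + 1 < x;  // UB when x == INT_MAX; may be folded to 0
    }

    int main() {
        // May print 1 at -O0 and 0 at -O2 -- both are "correct".
        std::printf("%d\n", will_overflow(INT_MAX));
        // A well-defined check would be: x == INT_MAX.
    }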


If you've ever listened to Rush Limbaugh with your conservative father, you can't go more than 15 minutes without hearing an ad for LifeLock. Their audience seems to be paranoid older people who don't understand technology and are looking for peace of mind.

Side note: Here's another kind of ridiculous service that I hear advertised on conservative talk radio: https://www.reagan.com/


> Unlike some of the largest email service providers like Google, Yahoo, AOL, and Hotmail, @Reagan.com will not copy, scan, or sell a single word of your email content. Your "Private" email will stay "Private"!

Is this true? Is it possible, with current SMTP requirements? I'll give them the benefit of the doubt that they're not actively copying it for the purpose of keeping records they can give to someone else, or scanning it to insert ads.

But to act as a mail server, in my understanding of the state of the art, they need to take incoming mail in a format they can decrypt - functionally equivalent to 'scanning' it. They need to store it on their servers - 'copy' it - so you can download it over POP3 (or IMAP, in which case it remains on the server).

I don't think it's possible to act as a no-knowledge mail server given current mail requirements without using PGP. And to do a bit of demographic analysis, the group of people who "Feel pride of owning an email address with Ronald Reagan's name" have a narrow intersection with the group of people familiar with PGP.


Of course an email provider can't be zero-knowledge. They're just a warrant, NSL, or social engineering away from disclosing the full contents of any (or all) persons' email. Just like any other email service. And the metadata from any person using their service will be swept up by any security service listening in between, without a warrant.

So the whole "private" thing is just baseless marketing.


$33 a year for an email service that doesn't do spam filtering?


The service does spam filtering; that's different from scanning the contents of your email to put up display ads in your mail client (e.g. Gmail, Yahoo) or selling your behavioral information and email address to third parties (other "free" mail services).


Spam filtering used to be a client feature. I'm sure it could work that way again.


I don't disagree completely. A certain subset of gamers (the hardware enthusiasts, pros) will NEVER use something like this.

But certainly over wired LAN it does offer near native performance for most games, past the threshold where many gamers would even be aware they were playing remotely. The real questions are: how good will the internet get, and how quickly will it get there?


I've played CS:GO from NY to Virginia with Parsec (to an AWS machine) and topped the scoreboard consistently (in pugs).

Also did a 3 hour raid night on the new Legion content with it on WoW, lot of dynamic camera movement there. Cleared Nythendra on Heroic as the 4th DPS.

May not be to everyone's liking, but worked for me.


Hmmm. I have 10Gbps fiber in my office that is 3ms away from us-west-2...


Hey guys, I'm the author of the post. Let me know if you have any questions!


I'm very interested in streaming 3D content creation UIs.

A major issue with virtual filmmaking is the datasets are huge, which makes it very, very difficult to work with a remote team, because you can't effectively get the dataset to people's machines even just to work on.

What I'd like to do is host our dataset AND the 3D content creation apps on our own hardware at Switch (in Las Vegas), and then have our employees and contractors throughout California access 3D content creation apps remotely, via something like your streaming approach.

Thoughts?


I just tested this service with a full-blown 3D development environment like Unity3d on an AWS instance, and it works amazingly well and is very responsive. Through Amazon's gigabit connection everything installs in seconds as well. I had a fairly low resolution (800p) set, but I can totally see this being viable for CAD/3D development, etc.


Thanks for the feedback!


We've heard of something like this before, and while we're very much focusing on gaming right now, our goal is to make Parsec general purpose enough to fit into a lot of different use-cases. I see no reason why it couldn't be used for what you're describing.


Obviously, remote desktop "works" (as in: functions) today, we just want something with better performance, specifically, lower latency. :)


You wrote about H.264 compression, but not about how you transport the compressed stream over the network. Can you elaborate a little bit on that?

E.g. do you multiplex into an MPEG transport stream? RTP? UDP?


Good job! I see that your first picture is from Rocket League; I usually play Rocket League using the Steam Controller :) It would be really nice if the Steam Controller could work with it.


Yeah, I hear you, we're getting there. Our next major feature release will be Raspberry Pi support with controller support.


Does that also imply general Linux support, or would you only have a closed-source ARM binary?


Luckily the video processing libraries provided by Intel/NVIDIA/AMD are cross-platform (Win32/Linux). While the ARM-specific video processing code for the RPi is a different can of worms, the rest of the client (window creation, polling for input, etc.) should be able to support Linux and X11 generally.


Cool. How 'gaming' specific is the platform itself? It seems very similar to an optimized version of PCoIP.


Actually not gaming specific at all. You can think of it like a high performance remote desktop app that happens to work really well for games.


Hey, nice post! Quite a detailed write-up and an impressively optimized video pipeline. I'm curious how much end-to-end latency you experience on average?


On newer hardware (GeForce GTX 900+ series on the server side, recent Intel HD Graphics on the client) our system only adds 8-10 ms. On older rigs it might be somewhere around 20. Of course the network latency will go on top of that.


Really cool stuff. I have just read the article that inspired you to do this today, because I wanted to actually test it.

If I understand correctly you also have a streaming server to install on the local network, so it's basically doing what Steam In-Home Streaming does? How does it compare to that in terms of performance?


What would it take to port this tech to Linux, on both sides (server/client)?


Client side, we've already done it (we have a Raspberry Pi client that actually works well, but it still needs a bit of polish).

Server side I'm not sure. I know the Windows API for grabbing frames is really good, and I assume there's a similar API for Linux. Shouldn't be too bad. But you will likely see a macOS server first...


On the technology - can this be used to stream the games from the beefy machine I have in the basement?

On the business side, how do you plan to make moniez?


Yeah, it can be used for in-home streaming or over the WAN.

Making money, we plan to wrap up the cloud experience for you nicely and take a margin on it. But for personal use it will remain free.


Given a nice price-point (~ the cost of Netflix, which I believe varies from region to region), I'll be a very happy customer.

I have a beefy home PC that is showing its age, and I'd prefer to have a SaaS solution that I can access everywhere rather than roll my own!

So if you guys do release in the next 4 months or so...consider me a customer.


Steam in home streaming works very well for local streaming. You can add non-steam games to the library.


I wonder how well it would work with this setup if you were connected to it over a VPN. Presumably (despite the latency) it would consider it the same network.


Would you not gain something by using Metal instead of OpenGL on Macs?


We actually tinkered a bunch with Metal, it will likely one day be the default, but we went with the broader support of OpenGL for now.


The main benefits are that it 1) allows you to price compare and 2) checks compatibility so you don't make mistakes. Hound-O-Matic is completely automatic and requires just a budget / vague filters.


As a guy who is learning the piano but doesn't know how to read sheet music very well, I'm really excited about this as a learning tool. Would it be possible to show the letter notes next to the notes on the sheet music? Also, how easy is it to incorporate new pieces... could I potentially submit sheet music that I have purchased and you could convert it for me for a fee?


Yes, I am exploring adding more functionality to it, and one thing that I'd like myself (and I think you would too) is piano tutoring.

For example, it could play a certain section once, then repeat it, but without sound, while the student plays, and then continue in a loop like that until the student feels comfortable with that particular section.

I'm sure there are many other things that could be done in that realm too, so feel free to tweet or email me anything that comes to mind (@notezillaio / notezilla.io@gmail.com).

As for incorporating new pieces, it's definitely technically doable, and it seems that that's something people are interested in. There might be certain copyright issues that have to be ironed out first though... ;)


If you're wanting to get better at reading sheet music, you really should not have the letter names shown. It will be harder to read, but that just means you need to start simpler and build up over time. If you get used to reading the letters, they will always be a crutch you have to work with, and instead of sight-reading the notes themselves you'll be more accustomed to sight-reading the letters.


You might be right about that. Still, I find that the most daunting part of learning something new is grinding through writing the letter names in on the sheet music, then "sounding it out" while going back and forth from my computer/iPod/iPad to get the timing right. This tool seems to wrap it all up into one neat learning process; perhaps one day, after using it enough with "notes on", I could uncheck that box and go pure sight-reading.

