Hacker News | tuveson's comments

I think the real problem on Twitter is the massive amount of spam that now exists. It's much worse than it was a couple of years ago. If all it consists of is bots and hucksters paying to appear at the top of viral posts, it will cease to be a useful "pulse". That being said, this might just be the end of there being a single, central Twitter-like platform. There are already right-wing clones like Gab and Truth Social; it could be that more and more niche platforms pop up.


90% of the suggested posts are clickbaity threads people post for engagement. It's become very hard to navigate to real content.


I think it just comes in waves. Years ago, I'd see links to Twitter with replies full of crypto scammers. It has been a while since I've seen that problem to that degree, though there was a smaller wave of them a few months ago.

Meanwhile, my YouTube feed regularly gets attacked with AI generated crypto ads using the Tesla or SpaceX brands.


For n<=8, GCC will also just inline it for the naive version in C: https://godbolt.org/z/TvdcTvEKx


I disagree. The conventions for declaring arrays, pointers, and function pointers are all idiosyncratic. In C, the type is always to the left of the variable being declared. Except for arrays, which have part of the declaration to the right. And except for pointers, which need to be affixed to every item if there are multiple declarations. And except for function pointers where you need to wrap the variable name like (*name). Individually I can wrap my head around these exceptions, but putting all of them together, it's just hard to read.


Fast to compile, fast to run, simple cross-compilation, a big standard library, good tooling…

As ugly and ad-hoc as the language feels, it’s hard to deny that what a lot of people want is just good built-in tooling.

I was going to say that maybe the initial lack of generics helped keep compile times low for Go, but OCaml manages to have good compile times and generics, so maybe it depends on the implementation of generics (would love to hear from someone with a better understanding of this).


There are a million little decisions that affect compile time. A big factor here is inlining. When you inline functions, you may improve the generated code or you may make it worse. It’s hard to predict the result because the improvements may come about because of various other code transformation passes which you perform after inlining. After inlining, the compiler detects that certain code paths are impossible, certain calls can be devirtualized, etc., and this can enable more inlining.

Rust is designed with the philosophy of zero-cost abstractions. (I don’t like the name, because the cost is never zero, but it is what it is.) The abstractions usually involve a lot of function calls and you need a compiler with aggressive inlining in order to get reasonable performance out of Rust. Usage of generics still results in the same non-virtual calls which can be inlined. But the compiler then has to do a lot of work to evaluate inlining for every instantiation of every generic.

Go is designed with the philosophy of simple abstractions, which may come with a cost. Generics are implemented in a way that means you are still doing a lot of dynamic dispatch. If you need speed in Go, you should be writing the monomorphic code yourself. Generics don’t get instantiated for every single type you use them with. They only get instantiated for every “shape” of type.


> Rust is designed with the philosophy of zero-cost abstractions. (I don’t like the name, because the cost is never zero, but it is what it is.)

So when the generated asm is the same between the abstraction and the non-abstraction version, where's the cost?


The point is that different people have different understandings of "cost." You're correct that that kind of cost is what "zero cost abstractions" means, but there are other costs, like "how long did it take to write the code," that people think about.


Cognitive cost is the most important cost to minimize.

A Rust project's cognitive cost budget comes out of what's left over after the language is done spending. This is true of any language, but most language designers do not discount cognitive costs to zero, which is exactly what Rust's "zero cost abstraction" slogan does.


> So when the generated asm is the same between the abstraction and the non-abstraction version, where's the cost?

The generated asm isn’t the same.

There’s also a presupposition here that you know what the non-abstracted version would look like. If you don’t know what the non-abstracted version looks like, you can’t do a comparison.


> I was going to say that maybe the initial lack of generics helped keep compile times low for go, but OCaml manages to have good compile times and generics, so maybe that depends on the implementation of generics (would love to hear from someone with a better understanding of this).

OCaml's type system is expressive enough that monomorphizing everything, the way Rust and C++ do, isn't generally possible, so polymorphic code uses a uniform (boxed) representation instead.


> a big standard library

Not really, no. It doesn't even contain wrappers for the most common system calls.


If you heavily rely on the Python standard library, then you’re using a lot of Python code that doesn’t call out to C extensions. Peruse the standard library code, if you want to get a sense of it: https://github.com/python/cpython/tree/main/Lib

So you can expect any code that heavily relies on the standard library to be slower than the Rust equivalent.

A purely interpreted language implementation (not JIT’d) like CPython is almost always going to have a 10x-100x slowdown compared to equivalent code in C/C++/Rust/Go or most other compiled/JIT’d languages. So unless your program spends the vast majority of time in C extensions, it will be much slower.


> Hampering Google’s AI tools risks holding back American innovation at a critical moment

I would pay money to never see google search AI suggestions. If that’s what they consider innovation, then we definitely need more competing search engines.


They completely broke verbatim search too; it's been that way for a few months. Sheer incompetence.

But they at least have a "web" option, which just shows websites.


I think C is a simple well-designed systems language. It has some warts, but many of the things people complain about are matters of preference – or due to a lack of understanding of the problems that C is good at solving.

The only major challengers to C in the last 50 years are C++ and Rust. I think that’s a testament to the quality of the language.


What do you think would make a better target? C maps pretty closely to assembly, so it seems like it would be the simplest. Maybe Pascal or BASIC, but most people these days don’t have experience with Pascal, and BASIC would probably be too simple for a full-length book.

For writing an interpreter or transpiler, there are probably better options, but for a true compiler I can’t think of a better choice than C (or at least a subset of C).


How would hygienic macros fix this? It seems like the macros were working fine, but the amount of generated code increased compile time significantly. Wouldn’t a more sophisticated macro system still generate a bunch of extra code and result in slow compile times? It even seems like the solution was to fall back to “dumb” macros when feasible for compile-time performance reasons.


> Wouldn’t a more sophisticated macro system still generate a bunch of extra code and result in slow compile times?

Not necessarily.

The preprocessor picks up the original source and blows up the initial code (roughly "min3(long_a, long_b, long_c)") to 47 MB of code. No fancy stuff, just 47 MB of C code on disk. That's a lot of code the compiler then has to parse and handle.

If hygienic macros are a first-class citizen in the compiler, the compiler parses the original macro code once and then just modifies it in-memory. There is no reason to write 47MB of code somewhere and read it back, this would just happen as an AST modification in memory.

But that is the smaller benefit. First-class macros also allow the compiler to reason about the types and structure of the macro inputs. You don't have to guess whether something is constant, bounded, or unbounded; strong types can enforce this safely, and macros and optimization passes can rely on those guarantees. Sufficiently strong type information can open doors for far more powerful optimizations overall.

Just for the record - I'm fully aware why the kernel is where it is, and why it will stay there, but compiler and language theory has moved far beyond that.


Nacho fries are pretty good though…

