It does not follow from people making more searches that they are having more successful searches. If Google found the exact thing you were looking for and put it top centre in the results, would the number of human searchers stay the same while the number of human searches dropped?
Again, then why are people using Google more than ever?
I don't really see how "dead internet theory" explains that. If it were as bad as you claim, surely usage would be plummeting? But it's just the opposite.
Dead internet theory means real users are declining while bot users are skyrocketing.
For example, Google search is such a terrible experience these days that I’ll often ask an LLM instead.
That LLM may do multiple Google and other searches on my behalf, then combine, collate and present me with just the information I am looking for, bypassing the search experience entirely.
This is a fundamentally different use case from human traffic.
> Are you sure it’s _people_ driving this increase?
Most likely, yes. If Google had been dead for years, people wouldn't be pouring hundreds of billions of dollars into ads there. Search revenue keeps increasing, even since ChatGPT showed up. It might stagnate soon or even decrease a bit, but "death"? The numbers don't back this up. One blogger saying he's stopped paying for Google ads conflicts with the reality of around $200 billion in yearly revenue from Search.
Exactly this. Businesses decide whether to pay for ads based on click-through rates and conversions. Bots don't click through. They don't convert. If these rates fall, advertisers will lower their max bids proportionally, and Search ads revenue will fall substantially.
That hasn't happened. Google continues to grow with real users.
The policy described in my link is literally about making each user search more to get the results they want, in order to drive more ad revenue. That would create more searches and a worse user experience.
Much better to make the second slightly longer than 2 current seconds, and move to a dozenal system throughout. One hour is (1000)_12 novoseconds. A semi-day is (10000)_12.
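A quick check of that arithmetic, assuming the new unit (call it a novosecond) is defined so that exactly (1000)_12 of them make one current hour:

```latex
% Sanity check: (1000)_12 novoseconds per hour, (10000)_12 per semi-day.
\[
(1000)_{12} = 12^3 = 1728
\quad\Rightarrow\quad
1\ \text{novosecond} = \frac{3600\ \text{s}}{1728} \approx 2.08\ \text{s}
\]
\[
(10000)_{12} = 12^4 = 20736\ \text{novoseconds}
= 20736 \times \frac{3600\ \text{s}}{1728} = 43200\ \text{s} = 12\ \text{h}
\]
```

So the unit is indeed just over two current seconds, and (10000)_12 of them come out to exactly half a day.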
Oh, we should switch our standard counting system to dozenal as well.
If you know something everyone else doesn't, it would be great to see your paper describing how you do that and demonstrating efficacy. So far, the evidence seems to suggest it's not sufficient: https://www.sciencedirect.com/science/article/pii/S0022202X2...
Nonsense. If we view proving as providing evidence for, then absolutely we can prove a negative. We have our priors, we accumulate evidence, we generate a posterior. At some point we are sufficiently convinced. Don't get hung up on the narrow mathematical definition of prove (cf. the exception[al case] that proves [tests] the rule), and we're just dandy.
I like to think that the “can’t prove a negative” phrase originated from someone grasping at the difference between Pi_1 and Sigma_1 statements. A single counterexample refutes a Pi_1 statement, but verifying it by considering individual cases means considering all of them and showing that they all work (if there are infinitely many, handling them all individually is impossible, and if there are merely very many, it may still be infeasible). Conversely, a single example suffices to verify a Sigma_1 statement, but refuting it by checking individual cases would require checking every case.
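For concreteness, the stock example is Goldbach's conjecture (my choice of illustration, not the commenter's), which has exactly the Pi_1 shape described above, with its negation being the matching Sigma_1 statement:

```latex
% Goldbach's conjecture as a Pi_1 statement: the inner search for p, q
% is bounded by n, so each individual case is a finite check.
\[
\Pi_1:\quad \forall n\,\bigl(\,n \text{ even},\ n \ge 4 \;\rightarrow\;
\exists\, p, q \le n \text{ prime with } p + q = n\,\bigr)
\]
% Its negation is the matching Sigma_1 statement: one even number that
% is not a sum of two primes would verify it (and refute the Pi_1 form).
\[
\Sigma_1:\quad \exists n\,\bigl(\,n \text{ even},\ n \ge 4 \;\wedge\;
\neg\exists\, p, q \le n \text{ prime with } p + q = n\,\bigr)
\]
```

Checking any single even number settles only that one case for the Pi_1 form, while a single bad even number would settle the Sigma_1 form outright.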
From the looks of it, Rust is usable on a tiny embedded system, but it is not "great". I think that out of the recent, trendy languages, Zig is the best suited for this task, but in practice C is still king.
The big thing is memory allocation: sometimes, on tiny systems, you can't malloc() at all, and you also have to be careful about your stack, which is often no more than a few kB. Rust, like modern C++, tends to abstract these things away, which is perfectly fine on Linux and a good thing when you have a lot of dynamic structures, but on a tiny system you usually want full control. Rust can do that, I think; like C++, it is just not what it does best. C works well because it does nothing unless you explicitly ask for it, and Zig took that philosophy and ran with it, making memory allocation even more explicit.
Rust has no malloc in the language whatsoever. In embedded, you don't even include the libraries for dynamic allocation in the first place, unless you want to. And it's very normal not to.
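For illustration, a minimal sketch of what that looks like in practice (the type, sizes, and names here are invented, not from any particular crate or project): in a `#![no_std]` crate like this there is simply no allocator to call, and every byte of storage is a plain array the linker can account for.

```rust
#![no_std]

use core::panic::PanicInfo;

/// Fixed-capacity sample store: a plain array plus a length, living
/// wherever the caller puts it (static RAM or the stack). No allocator
/// is involved because none exists in this crate.
pub struct SampleBuf {
    data: [u16; 64],
    len: usize,
}

impl SampleBuf {
    pub const fn new() -> Self {
        SampleBuf { data: [0; 64], len: 0 }
    }

    /// Push a reading, returning false once the buffer is full.
    pub fn push(&mut self, value: u16) -> bool {
        if self.len < self.data.len() {
            self.data[self.len] = value;
            self.len += 1;
            true
        } else {
            false
        }
    }
}

// With no_std there is no runtime-supplied panic machinery; the final
// firmware binary must provide its own handler (here: spin forever).
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```

When the buffer is full, `push` just reports failure; there is no hidden reallocation to reason about, which is the property being pointed at here.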
It probably depends how tiny you mean. If the reason you can't allocate memory is that the only 1024 bytes of static RAM are all stack, then, yeah, Rust won't be very comfortable on that hardware. On the other hand, C isn't exactly a barrel of laughs there either. In my mind, if I can sensibly chart what each byte of RAM is used for on a whiteboard, then we should write machine code by hand and skip "high level" languages entirely.
I don't know where I've supposedly lied about this. Unless you take this implementation exception (which is based on target, not std vs no_std, by the way) as making the statement "there's no UB in safe Rust" a lie.
I would still stand by that statement generally. Implementation issues on specific platforms are generally not what's being discussed when talking about things like this. It's similar to how cve-rs doesn't make this a lie; a bug isn't in scope for what we're talking about 99% of the time.
In context, I'd have no reason to deny that this is something you'd want to watch out for.
> For an example, if a function in no_std overflows, it can result in undefined behavior, no unsafe required. And stack overflows are easy in Rust, like they are easy in most other systems languages.
This is true: no_std has no Rust runtime, so it doesn't provide stack protection. I am aware of efforts to address this for embedded, but they're not available at the moment.
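To make the stack point concrete, a hypothetical sketch (the 256-byte scratch buffer and the depth are invented for illustration): nothing in safe Rust stops this from walking off the end of a few-kB stack, and on a bare-metal target without an MMU or stack probes the frames can silently run into other RAM.

```rust
#![no_std]

/// Each call pins 256 bytes of scratch space in its stack frame. With a
/// 4 kB stack, a depth in the low tens already overruns it: a hosted
/// target hits a guard page and aborts, but a small MCU may just keep
/// writing into whatever memory sits below the stack.
pub fn measure(depth: u32) -> u32 {
    if depth == 0 {
        return 0;
    }
    let scratch = [0u8; 256];
    // black_box keeps the compiler from optimizing the frame away
    // or collapsing the recursion.
    core::hint::black_box(&scratch);
    measure(depth - 1).wrapping_add(scratch[0] as u32)
}
```

The usual mitigations sit outside the language: static stack-depth analysis, or arranging the memory map or MPU so that an overflow faults instead of corrupting data.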
> Steve Klabnik has lied about that in the past, as he is wont to do.
1) I don't know what Steve has to do with anything I asked, so it is bizarre to bring up, and 2) I find this to be a ridiculous statement.
I learnt the most from bad teachers#, but only when motivated. I was forced to go away and really understand things rather than get a sufficient understanding from the teacher. I had to put in much more effort. Teachers don't replace effort, and I see no reason LLMs will change that. What they do, though, is reduce the time it takes to find the relevant content, but I expect at some poorly defined cost.
# The truly good teachers were primarily motivation agents, providing enough content, but doing so in a way that meant I fully engaged.
I think a citation would be needed on this. Obviously any artist producing fully original music or art doesn't.
And many content creators might benefit from an expanded public domain, or they might not... There are already tons of creators, and they seem to be getting by? Well, actually, some are getting by, and most are probably hobbyists or underwater, much like in most arts. I'm not sure expanded quantities of available characters would necessarily change much.
> Obviously any artist producing fully original music or art doesn't.
I would suggest that artists who say they're producing fully original works are just poorly educated in art history. Making something that has no prior influences would be extraordinary in the modern world.
Also, the entities most capable of exploiting long copyright terms are corporations. Individuals simply don't have the resources to keep something relevant decade after decade save for a very small handful of exceptions like J.R.R. Tolkien.
I'm not even really advocating for or against the copyright position.
I also think you're missing my point a bit. Studying lots of works and creating an original piece that borrows influences isn't the same thing as requiring the use of a copyrighted work.
It's pretty silly to suggest I was implying artists have no influences just because I classified works without any copyrighted material as original.
My point was more... just because a bunch of copyrighted work becomes available does not necessarily imply creators' and artists' lives will be substantially different or better off.
Oh really? You don't think all the creators who do things like make video essays on 20-year-old movies would benefit from not getting the rug pulled out from under them? You don't think they would prefer being legally in the right when making money from analysis of media from a generation ago?
You don't think the Techmoans and Technology Connections of the world would prefer having better demonstration material than whatever recordings from 1912 exist, so that they could actually show you what they are trying to demonstrate without having their livelihood threatened by a capricious and byzantine system hell-bent on pleasing a few megacorps?
You don't think the creatives who made "The Katering Show", for example, would prefer that more people watch their artistic output than have it locked up by some business that leaves it languishing in a random digital storefront rather than letting more people buy it, because they just cannot be assed? Oh, you don't actually have to guess, because they uploaded a YouTube video in which they encourage people to pirate their work so they can see it.
Creatives and artists tend to enjoy their work being consumed and riffed on (not plagiarized), and well-adjusted artists recognize that there's "nothing new under the sun" and that remixing and riffing are essential parts of the creative and artistic process.
Hell, the music industry even understands this, which is why letting songs get licensed out for remixes and future use is common.
Isn't the question whether it's reasonable for people to be rentiers? Clearly lots of the population are, but wouldn't it be better if they carried on creating rather than sitting back and doing nothing for the remainder of their time on earth?
[1] https://www.wheresyoured.at/the-men-who-killed-google/