Hacker News | scratcheee's comments

I assume they’re referring to ag-gag laws; https://en.wikipedia.org/wiki/Ag-gag gives a reasonable background, by the looks of it.


And yet their statement makes perfect sense to me.

Caching and lower-level calls are generic solutions that work everywhere, but they’re also generally the last and worst way to optimise (which is why they need such careful analysis, since they so often have the opposite effect).

Better is to optimise the algorithms, where actual profiling is a lesser factor. Not a zero factor, of course: as a rule of thumb it’s still wise to test your improvements, but if you manage to delete an n^2 loop then you really don’t need a profiler to tell you that you’ve made things better.
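For a concrete (hypothetical) illustration of that kind of win, here is the classic shape of it in Rust: replacing a quadratic duplicate scan with a single hash-set pass. Function names are invented for the example.

```rust
use std::collections::HashSet;

// O(n^2): compare every pair of elements.
fn has_duplicate_quadratic(xs: &[u32]) -> bool {
    for i in 0..xs.len() {
        for j in (i + 1)..xs.len() {
            if xs[i] == xs[j] {
                return true;
            }
        }
    }
    false
}

// O(n): one pass; `insert` returns false when the value was already present.
fn has_duplicate_linear(xs: &[u32]) -> bool {
    let mut seen = HashSet::new();
    xs.iter().any(|x| !seen.insert(x))
}

fn main() {
    // Same answers, very different asymptotics.
    assert_eq!(has_duplicate_quadratic(&[3, 1, 3]), has_duplicate_linear(&[3, 1, 3]));
    assert_eq!(has_duplicate_quadratic(&[1, 2, 3]), has_duplicate_linear(&[1, 2, 3]));
}
```

You don’t need a profiler to know the second version wins on large inputs, though measuring is still wise for small n, where the hash overhead can dominate.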


Which is exactly what I said: I am only arguing against their use of "intuitively".


Given the web’s much wider remit than PDF, it has support for accessibility tools and much better non-visual handling, so the comparison isn’t entirely fair, I think. If a website doesn’t handle lynx well, there’s a good chance it doesn’t handle accessibility well either.


“Newfound”? People have been fighting this debate for decades. I remember having this exact debate at school 20 years ago, except the price arguments back then were bullshit, but people didn’t care enough about the environment so there wasn’t any pressure to change.

Nowadays the price arguments are… complex. But for the first time people actually care enough about the environment that nuclear is no longer competing poorly with coal (except in Germany).

The exact maths on comparing pricing is complicated, given that energy storage costs vary so much depending on the inputs (try looking up storage costs for a 100% solar/wind grid during a once in a decade lull, it’d make nuclear look great, but obviously for a slightly mixed grid and more typical conditions, storage might be reasonably priced vs nuclear).

Anyway, I’m mildly disinterested in nuclear now that it’s only a side show to renewables, but I think it’s far from being a slam dunk either way. If some country or politician is more interested in nuclear, fair play to them, I say go for it. We’re not in a comfortable position right now, so any movement away from fossils is a win regardless of where we end up (within reason, of course).

No doubt the debate will only be resolved once and for all when fusion turns up and actually makes fission genuinely irrelevant (even then, fission might be cheaper for quite a while).


Or just accept fossil fuels, biofuels, hydrogen or hydrogen derivatives for the “once in a decade lull”.

We need to solve climate change, and that means maximizing our impact at each step along the way.


Only everyone who goes bankrupt, and their creditors.

Rather the opposite for their customers.

I’m sure there’s a few cases where this would excessively hurt a company’s prospects (perhaps where a company has an innovative software product with a thin hardware component), but in the majority of cases losing access to the software of a hardware product as an asset is a relatively small loss, and the benefit to consumers is pretty huge. I’d support the trade off.


I’ve met humans who do that too


That argument works pretty well for any bad api design (assuming sufficient documentation somewhere).

Yet I assume we can agree that regardless of how you can work around bad apis, good api design that prevents misuse is always better.

The safety measures have to stop somewhere of course (short of an api that is a single function which does exactly the thing you want without inputs or outputs, which seems unlikely), but extending type safety to interfaces does not seem like a step too far.
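As a sketch of what “design that prevents misuse” can look like in practice, here’s a hypothetical Rust newtype example (all names invented for illustration) where the compiler, rather than documentation, rejects swapped arguments:

```rust
// Two IDs that are both "just a u64" at runtime, but distinct types
// to the compiler, so they can't be passed in the wrong slot.
#[derive(Clone, Copy, PartialEq, Debug)]
struct UserId(u64);

#[derive(Clone, Copy, PartialEq, Debug)]
struct OrderId(u64);

// With raw u64 parameters, lookup(order, user) would compile silently;
// with newtypes the argument order is enforced by the type system.
fn lookup(user: UserId, order: OrderId) -> String {
    format!("user {} / order {}", user.0, order.0)
}

fn main() {
    let u = UserId(1);
    let o = OrderId(99);
    println!("{}", lookup(u, o));
    // lookup(o, u); // compile error: mismatched types
}
```

The newtypes cost nothing at runtime; the safety is purely in the interface.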


Another aspect of the issue is to consider the asymmetric nature of knowledge of the API author and the API consumer.

When one authors a type in a language like C#, they must predict how that type will be used and which interfaces the author promises the type satisfies.

The API consumer might be unfamiliar with the types in an API at first, but ultimately they will know more about how they want to use the type and what additional contracts they think the type fulfills.

As it is, this knowledge is only useful when inheriting a type; there is currently no facility to "vouch" for a type in C#. In Go, structural typing fills this role.
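For illustration, Rust traits allow roughly this kind of consumer-side vouching: a local trait can be implemented for a foreign type, asserting a contract the original author never declared. (A Rust sketch, since C# lacks the facility; the trait name here is made up.)

```rust
use std::net::Ipv4Addr;

// A contract defined by the API *consumer*, not the type's author.
trait Loopbackish {
    fn is_local(&self) -> bool;
}

// The consumer "vouches" that the foreign type Ipv4Addr satisfies it.
impl Loopbackish for Ipv4Addr {
    fn is_local(&self) -> bool {
        self.is_loopback()
    }
}

fn main() {
    assert!(Ipv4Addr::new(127, 0, 0, 1).is_local());
    assert!(!Ipv4Addr::new(8, 8, 8, 8).is_local());
}
```

Rust’s orphan rule still requires the trait itself to be local, so this sits between Go’s fully structural approach and C#’s author-only interfaces.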

I think Extension Interfaces is the best of both worlds.


I agree with most of this, but I’m not sure about tracking the size metadata becoming a required task for the caller.

The cost of storing the size of every allocation is relatively high, at least some of the time, where the size isn’t implied by the usage. Meanwhile, the allocator’s caching system can store it very efficiently: a 4KB block of 8-byte allocations contains over 500 allocations that can all share their metadata. Once they’re handed out by the allocator their shared origin is obscured, so they’d need individual tracking.

I do acknowledge that when size is inherent to the context (new or allocating for a specific struct) then maybe an allocator that doesn’t track size could allow for some clever optimisations, though I’m doubtful it could overcome the loss of shared metadata, which is so much more efficient.
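A toy model of that shared-metadata idea, purely illustrative (real size-class allocators are far more involved): one size value stored per 4KB page covers every allocation carved out of it, so nothing is stored per allocation.

```rust
const PAGE: usize = 4096;

// One metadata record per page, shared by every allocation in it.
struct Page {
    class_size: usize,
}

struct Slab {
    pages: Vec<Page>,
}

impl Slab {
    // The size of any allocation is recovered from the page it lives in,
    // not from a per-allocation header.
    fn size_of(&self, page_idx: usize) -> usize {
        self.pages[page_idx].class_size
    }

    fn allocs_per_page(&self, page_idx: usize) -> usize {
        PAGE / self.pages[page_idx].class_size
    }
}

fn main() {
    let slab = Slab { pages: vec![Page { class_size: 8 }] };
    // 512 eight-byte allocations share a single stored size value.
    assert_eq!(slab.allocs_per_page(0), 512);
    assert_eq!(slab.size_of(0), 8);
}
```

In a real allocator the page index would be derived from the pointer’s address bits, which is why the lookup stays cheap even after the pointer has been handed out.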


The Rust allocator APIs require layout information on dealloc[0]. In the majority of cases it's a non-issue. Take Vec (like std::vector) for instance, it has pointer+length+capacity. It needs the capacity value so it knows when to realloc, so it can equivalently use it to dealloc with size information as well.
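That contract can be sketched with a counting wrapper around the system allocator; the point is that dealloc receives the full Layout from the caller, so the allocator never has to record sizes itself. (Illustrative only.)

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

static LIVE_BYTES: AtomicUsize = AtomicUsize::new(0);

// A wrapper that tracks live bytes without storing any per-allocation
// metadata: the Layout arrives with every call.
struct Counting;

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        LIVE_BYTES.fetch_add(layout.size(), Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // The caller supplies the size; no header lookup needed.
        LIVE_BYTES.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static ALLOC: Counting = Counting;

fn main() {
    let before = LIVE_BYTES.load(Ordering::Relaxed);
    let v: Vec<u64> = Vec::with_capacity(100); // allocates 800 bytes
    assert!(LIVE_BYTES.load(Ordering::Relaxed) >= before + 800);
    drop(v); // Vec hands its capacity back as the Layout size
    assert_eq!(LIVE_BYTES.load(Ordering::Relaxed), before);
}
```

Vec can do this precisely because it already stores capacity for realloc decisions, which is the point made above.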

The only case I know of where this is an issue is when downsizing from Vec<T> to Box<[T]> (size optimisation for read-only arrays). Box<[T]> only stores the ptr+length, so the first step is calling shrink in order to make the capacity and length equal.

When it comes to type-erasure, it happens to work just fine. A type-erased heap pointer like Box<dyn Any> will have the size+align info stored in the static vtable. Yes, it’s some extra space, but only in static read-only data, not as part of any allocations.

On this topic, I’ve linked a short post on allocator ideas[1] by a Rust std-lib maintainer, which lists some of the other things that might be added to Rust’s upcoming (non-global) Allocator trait.

[0] https://doc.rust-lang.org/std/alloc/trait.GlobalAlloc.html#t...

[1] https://shift.click/blog/allocator-trait-talk/


> The cost of storing the size of every allocation is relatively high

Thus it would be great if we didn’t push the burden onto the allocator, which would also need to store the size somewhere, adding to the cost of every allocation. Pay for only what you use.

Fortunately C++14 (sized operator delete) and C23 (free_sized) have already fixed this.


One potential value of CC today is as a final and most flexible solution. Even with the limited and experimental technology of today, which is obviously economically insane, we have a way (with enough money) to make anything carbon neutral. The value is not that we actually do that (humanity cannot afford it, in general), but instead to give us the number to beat, the correct carbon tax, the absolute cost of failing to decarbonise.

It’s easier to justify improved efficiency when the cost of emitting carbon is a big number of dollars rather than a tiny sliver of a global problem.

That said, I do agree investment should be limited, it’s obviously only going to be the “best” solution in the rare circumstance that there is no good solution.


I actually disagree, if it can be shown to be safer overall then I don’t think it should be required to be a strict superset of human abilities.

AIs will always behave differently from human cognition: we should expect them to have different illusions and different failure modes. That doesn’t mean worse; if they perform better than a legal human driver on average, then I don’t think there’s a justification to exclude them.

That said, they will probably have to be _significantly_ better than human drivers in order to survive the media and public perception, so this might be irrelevant in the end.


To play devil's advocate. Say that on average it was 10x or 100x safer than human drivers, but for whatever reason, there was a 1/1,000,000 chance that the self driving car would plow into traffic at a red light because some black-box instruction told it it was the thing to do, very likely injuring or killing you. But overall it was better in all other cases. Would you take that risk?
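For what it’s worth, the expected-value arithmetic with those (entirely made-up) numbers can be run directly; assuming a 1-in-10,000 baseline per-trip incident rate, the hypothetical system still comes out roughly 50x safer overall, though the kind of risk changes, which is the crux of the question:

```rust
fn main() {
    let human = 1.0_f64 / 10_000.0;          // assumed baseline per-trip incident rate
    let av_ordinary = human / 100.0;         // "100x safer" on ordinary failures
    let av_catastrophic = 1.0 / 1_000_000.0; // the new black-box failure mode
    let av_total = av_ordinary + av_catastrophic;

    // Total risk is still far lower, but it now includes a failure mode
    // no human driver has, which is what makes the trade-off feel different.
    assert!(av_total < human);
    println!("risk ratio: {:.1}x", human / av_total);
}
```

The numbers are illustrative only; the interesting part is that aggregate safety and the distribution of failure types are separate questions.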

