Hacker News | mqus's comments

Yeah, everyone in the EU is just working on this one law case. The guy next to me just cooked the meals for the guy that made the paper the case was filed on and now has to take an extended break. /s

People can and will do many things at once, like actually pursuing monopoly issues AND trying to improve the situation for everyone else. It's almost like there is only a limited amount of one thing: space on page 1 of media outlets.


Great, now I can compare it with other Z-somethings. But as the post shows, that's just one branch of the many naming schemes HP employs. Also: this already is a workstation, how could it _not_ be "ultra"? Why the doubling? Or does "workstation" just mean "something I can work with", including office stuff? In that case, I am very interested in what other letters they use and what they're for.


For the same reason there is the Z2, Z4, Z8. There are several tiers of workstations, the Zbook Ultra is the best PC laptop you can get in a non-boat-anchor format, bar none.


TLS is not just for encryption, but also for integrity: the content you are seeing is exactly what the owner of the domain or web service intended (for whatever that is worth). There is no easy way to MITM or inject content along the way.


> which is done for IPv4-NAT, and for IPv6 firewalls

Are internet routers that do IPv4 NAT usually also doing IPv6 firewalling (meaning they only let incoming connections in if they are explicitly allowed by some configuration)? Maybe that's where the perceived insecurity comes from: a home NAT cannot work any other way (it fails "safely"), whereas an absent firewall usually means everything just gets through.


All the ones I've had have had a firewall by default for IPv4 and IPv6, yes. If ISPs are shipping stuff without a firewall by default I'd consider that incompetence given people don't understand this stuff and shitty IoT devices exist.

I do wonder how real the problem is, though. How are people going to discover a random IPv6 device on the internet? Even if you knew some /64 is residential, it's still impractical to scan and find anything there (18 quintillion possible addresses). At one address per millisecond it would take roughly 6×10^8 years, or about 1/8 the age of the Earth, to scan a /64.

Are we just not able to think in such big numbers?
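For concreteness, here is the arithmetic behind that scan-time estimate, assuming a (generous) probe rate of one address per millisecond:

```python
# Brute-force scan time for a single IPv6 /64 subnet.
addresses = 2**64                       # host addresses in a /64
probes_per_second = 1000                # one probe per millisecond
seconds = addresses / probes_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")             # on the order of 10^8 years

# For comparison: one /64 holds as many addresses as
# 2^32 (~4.3 billion) entire IPv4 address spaces.
print(2**64 // 2**32)                   # 4294967296
```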


> Are internet routers that do ipv4 NAT usually also doing an IPv6 firewall (meaning they only let incoming connections in if they are explicitly allowed by some configuration)?

Consider the counter-factual: can you list any home routers/CPEs that do not do SPI, regardless of protocol? If someone found such a thing, IMHO there would be a CVE issued quite quickly for it.

And not just residential stuff: $WORK upgraded firewalls earlier in 2025, and in the rules table of the device(s) there is an entry at the bottom that says "Implicit deny all" (for all protocols).

So my question to NAT/IPv6 Truthers is: what are the devices that allow IPv6 connections without SPI?

And even if such a thing exists, a single IPv6 /64 subnet is as large as four billion (2^32) entire IPv4 Internets of 2^32 addresses each: good luck trying to find a host to hit in that space (RFC 7721).


Could you share your numbers as well? According to [1], the UK currently needs about 300 TWh per year. Let's say we go entirely solar + wind + battery (whatever that means) and assume the battery has to bridge a gap of at most 7 days (meaning no wind and no solar at all during this time, which in practice lasts at most a few days at a time). This adds up to 300/365*7 ≈ 5.75 TWh of capacity. To be safe, round up and say we need 10 TWh (which is already not "tens of TWh", but "ten"). [2] says that grid-scale batteries come in at around $350 per kWh right now. kWh -> TWh is a factor of one billion (10^9), meaning if we want to build 10 TWh of storage, it will cost $3.5 trillion. Impressive number indeed. But there are multiple asterisks here.

1. This calculation assumes no exchange with mainland Europe and no gas power plants or other sources of power (e.g. hydro or pumped hydro storage); in reality those sharply reduce the need for batteries. 2. Battery costs will keep falling over the next decades, whereas nuclear costs will take a long time (if ever) to fall.

[1] https://www.statista.com/statistics/322874/electricity-consu... [2] https://docs.nrel.gov/docs/fy25osti/93281.pdf
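The arithmetic above, spelled out using the comment's own assumptions (300 TWh/year demand, a 7-day gap, $350/kWh grid-scale storage):

```python
annual_demand_twh = 300            # UK electricity demand per year [1]
gap_days = 7                       # worst-case wind+solar drought
capacity_twh = annual_demand_twh / 365 * gap_days
print(f"{capacity_twh:.2f} TWh")   # ~5.75 TWh needed to bridge the gap

rounded_twh = 10                   # rounded up for a safety margin
cost_per_kwh = 350                 # grid-scale battery cost in USD [2]
total_cost = rounded_twh * 1e9 * cost_per_kwh   # 1 TWh = 10^9 kWh
print(f"${total_cost:.1e}")        # $3.5e12, i.e. 3.5 trillion dollars
```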


This just does not work, and it has been tested in practice. I can't link studies right now, but as a simple example: how many of these horrible things have been said by publicly known people (e.g. politicians, celebrities, ...) with little to no actual consequences?


> For more comparison, France produces about 2kg of radioactive waste per year,

... per capita. Sure, every other waste stream is bigger than that, but it is still a whole lot, and usually the power companies do not have to pay for it; the country does. I wonder why.


Recycled down to ten grams per person per year.


  > power companies do not have to pay for it, the country does.
In the sense that you're using this, doesn't this apply to every power company?

Honestly, I'll pay a higher premium to get a power source with lower amounts of waste, even if it costs more to store that waste. The scale of the waste from other sources is just so massive: the environmental damage, the leaking into water supplies. All the same problems exist with nuclear fuel as with any other fuel; the difference is that nuclear concentrates the damage in a dramatically smaller volume.

To determine the cheapest option here, you have to assign a damage cost per volume and then compare the volumes. How much more dangerous do you think nuclear is? 100x? 100,000x? How much do you think any given section of the environment is worth? The CO2? The animals and other life impacted? The health costs of people living nearby?

All these things are part of the equation for every single power source out there.

  > per capita
Did you continue reading and see that only 200 mg of that is long-lived waste? France has 66.7 million people, so the long-lived waste comes to about 13 tonnes total per year. Or go back to the full figure: power reactors produce only 60% of that 2 kg, i.e. 1.2 kg per capita, which works out to about 80,000 tonnes of waste, total, per year.

Seriously, do you understand the scale we're talking about here? There's more literal mass in a 1 MW solar power plant; you get years' worth of all of France's nuclear waste for the weight of a single 1 MW solar farm. France's nuclear fleet generates 63 GW, which is 63,000 times the output! Nuclear isn't 10,000x as expensive; it's not even 10x. So I'm not exaggerating when I ask whether you think it's 1,000x more dangerous or 1,000x more costly to the environment. Even that is still a conservative estimate.
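Running the per-capita figures quoted above (population 66.7 million) as a quick sanity check:

```python
population = 66.7e6

# Long-lived waste: 200 mg per person per year
long_lived_tonnes = 0.0002 * population / 1000   # kg -> tonnes
print(f"{long_lived_tonnes:.1f} t/year")          # ~13.3 tonnes

# Power-reactor share: 60% of 2 kg = 1.2 kg per person per year
reactor_tonnes = 1.2 * population / 1000
print(f"{reactor_tonnes:.0f} t/year")             # ~80,000 tonnes

# 63 GW of nuclear output vs. a 1 MW solar farm
print(63e9 / 1e6)                                 # 63000.0
```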


I think one of the main problems is the dataset it is trained on: written text. How many answers with confident statements are there in a given text, compared to "I don't know"s? I think the "I don't know"s are far less represented. Now go anywhere on the internet where someone asks a question (the typical kind of content LLMs are trained on) and the problem is even bigger: you either get no textual answer, or someone gives some answer (that might even be false). You never get an answer like "I don't know", especially for questions that are shouted into the void (as opposed to asked of a specific person). And it makes sense: I wouldn't start answering every Stack Overflow question with "I don't know" tomorrow, it would just be spam.

For me, as a layman (with no experience at all about how this actually works), this seems to be the cause. Can we work around this? Maybe.


Well, no. The article pretty much says that any arbitrary statement can be mapped to {true, false, I don't know}. This is still not 100% accurate, but it at least seems reachable. The model should just be able to tell unknowns, not verify every single fact.


Determining a statement's truth (or if it's outside the system's knowledge) is an old problem in machine intelligence, with whole subfields like knowledge graphs and such, and it's NOT a problem LLMs were originally meant to address at all.

LLMs are text generators that are very good at writing a book report based on a prompt and the patterns learned from the training corpus, but it's an entirely separate problem to go through that book report statement by statement and determine if each one is true/false/unknown. And that problem is one that the AI field has already spent 60 years on, so there's a lot of hubris in assuming you can just solve that and bolt it onto the side of GPT-5 by next quarter.


> And that problem is one that the AI field has already spent 60 years on

I hope you don't think that the solutions will be a closed-form expression. The solution should involve exploration and learning. The things that LLMs are instrumental in, you know.


Not the same person, but I think the "structure" of what the ML model is learning can have a substantial impact, especially if it then builds on that to produce further output.

Learning to guess the next token is very different from learning to map text to a hypervector representing a graph of concepts. This can be witnessed in image classification tasks involving overlapping objects where the output must describe their relative positioning. Vector-symbolic models perform substantially better than more "brute-force" neural nets of equivalent size.

But this is still different from hardcoding a knowledge graph or using closed-form expressions.

Human intelligence relies on very similar neural structures to those we use for movement. Reference frames are both how we navigate the world and also how we think. There's no reason to limit ourselves to next token prediction. It works great because it's easy to set up with the training data we have, but it's otherwise a very "dumb" way to go about it.


I mostly agree. But, next token prediction is a pretraining phase of an LLM, not all there is to LLMs.


Of course not, expert systems were abandoned decades ago for good reason. But LLMs are only one kind of ANN. Unfortunately, when all you have is a hammer...


Aren't they at least useful for ruling out anomalies there, like the die temp being constantly at 110°C? IMHO the die temperature is very important here, even if not interesting.

