
I stopped using Apple stuff in any capacity because of their regioning of accounts a long time ago, so it being unique to them wouldn't surprise me.

I found Andor incredibly boring. The characters were unlikeable. Dialog was bland. I have nothing against a slow burn, but that show didn't even light a fire in the first place.

I worked at LANL until very recently, so yes, I was associated with the DOE.

I actually agree with your point that "the ability to determine novelty ... is a crapshoot". My point was that the AI system should at least try to provide some sense of how novel the content is (and what parts are more novel than others, etc.). This is important for other review processes like patent examination and is certainly very important for journal editors to determine whether a manuscript is "worthy" of publication. For these reasons, I personally have a low bar as to what qualifies as "novel" in my own reviews.

Most of my advisors in graduate school were also journal editors, and they instilled in me a focus on novelty during peer reviews because that is what they cared about most when making a decision about a manuscript. Editors focus on novelty because journal space is a scarce resource. You see the same issue in the news in general [1]. This is one of the reasons I have a low bar when evaluating novelty: a study can be well done and cover new ground without having an unambiguous conclusion or "story being told" (which is something editors might want).

I originally discussed this briefly in my post but edited it out immediately after posting. I'll post it again but add more detail. I think that a lot of peer review as practiced today is theater. It doesn't really serve any purpose other than providing some semblance of oversight and review. I agree with your point about the journal/conference being the wrong place to do peer review. It is too late to change things by then. The right time is "in the lab", as you say.

I wholeheartedly agree that reproduction/replication is the standard that we should seek to achieve but rarely ever do. Perhaps the only "original" ideas that I have had in my career came from trying to replicate what other people did and finding out something during that process.

[1] https://en.wikipedia.org/wiki/News_values


Bots are bots.

OK, fine, when I'm introducing a new cell phone model, I'll do it Jobs's way. But that's not optimal for an in-depth technical presentation with actual content behind it.

The Berkeley Public Library has a tool lending branch: https://www.berkeleypubliclibrary.org/locations/tool-lending...

Macs and MacBooks were not wildly successful back then. iPods were, though.

In a world flooded with AI content, you use ChatGPT to reply to HN comments. Very cute.

Down with this sort of thing!

I can't wait for the "rewritten in Rust" era to end!

On the right (i.e. a mirror of the STE 15-pin ports) would mean that they could keep the electrical layout, i.e. connected to the keyboard, and hence a separate keyboard unit.

I agree it would be difficult to design a correct socket, but from interviews it was always the plan to have a blitter, and a socket as standard would have helped adoption.

The main thing is that the T212 is a great coprocessor, faster than the 68881 FPU and with a 2K cache. Introducing the transputer as a coprocessor would potentially have changed the computing landscape.


That's not at all how it works.

A trade takes place when both buyer and seller feel they gain something from the transaction. In general, neither side captures all the surplus value; if one side did, the trade would not happen.


When you have an HTTP/2 connection already open, a 'round-trip' is not really a gigantic concern performance-wise. And it gives the client application complete control over what nested parts it wants to get and in what order. Remember that the article said it's up to the server what order to stream the parts in? That might not necessarily be a good idea from the client's point of view. It would probably be better for the client to decide what it wants and when. E.g., it can request the header and footer, then swap in a skeleton facade in the main content area, then load the body and swap it in when it arrives.
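A rough sketch of that client-driven ordering in TypeScript (the fragment endpoints are invented for illustration); over an already-open HTTP/2 connection each extra request is cheap, and the client decides the order:

  // Hypothetical fragment endpoints; the point is the *client* picks the order:
  // page shell first, a skeleton in the middle, the heavy body last.
  async function loadPage(): Promise<void> {
    const [header, footer] = await Promise.all([
      fetch("/fragments/header").then(r => r.text()),
      fetch("/fragments/footer").then(r => r.text()),
    ]);
    document.querySelector("#header")!.innerHTML = header;
    document.querySelector("#footer")!.innerHTML = footer;

    // Skeleton facade while the main content loads.
    document.querySelector("#main")!.innerHTML = '<div class="skeleton"></div>';

    const body = await fetch("/fragments/body").then(r => r.text());
    document.querySelector("#main")!.innerHTML = body;
  }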


Some say that the Spanish Civil War was the rehearsal for WWII. No doubt, the war in Ukraine is just such a situation.

I think RSC itself is pretty solid by this point but frameworks around it (primarily Next.js) are still somewhat rough (and were much rougher during the initial App Router release). Hard to say more without knowing what kind of issues you were hitting.

I'm posting because I find the technology interesting. I don't have a goal of convincing you to use or adopt it. I sometimes write about specific aspects that I find appealing but YMMV.


Sundiver is most prominent in my recollection.

It's not really a "single tax" then, is it?

I agree but some things like pollution are hard to quantify.


> But with plans to bring on more than 300 fellows this year, that approach quickly became unsustainable. At the same time, the rise of ChatGPT was diluting the value of written application materials. “They were all the same,” said Cheralyn Chok, Propel’s co-founder and executive director. “Same syntax, same patterns.”

Welcome to the new cat and mouse game. Soon applicants will be beating these AI job interviews using their own AI-based virtual interview candidates delivering idealized automated answers. The candidate might not even be present but their AI video avatar will be providing perfect answers.

This article claims that processing interviews for 300 fellowship internships became “unsustainable,” but then I have to ask: is that organization just understaffed and allocating resources poorly?

Either you are understaffed or you’re being far too selective. They are interns: throw all the applications in a lottery and pick 300 of them, plus a few on a waitlist to account for hires who either ghost or turn out to be unqualified.


Nice, it also helps me as a Rust developer with no C++ experience.

To be fair, everything about PCs back then sucked.

DOS was crap when you had GEM and Amiga OS.

Windows 1 and 2 were beyond terrible.

They were shit for games.

They were bulky.

They were slow.

They crashed all the time.

They were ugly.

They were noisy.

They were hard to manage (autoexec.bat, no OS in ROM, stupidly tricky partitioning tools, incompatible drivers for the same hardware but in different applications, etc)

But IBM lost control of the hardware market so they became cheap, ubiquitous crap.

And that’s literally the only reason we got stuck with PCs.


The Atari’s still look like they’re a great first computer for kids.

The positioning of the cursor keys on the Atari STs is interesting [0]. It arguably makes sense for the cursor block to be located more in the middle rather than at the bottom edge of the keyboard.

[0] https://upload.wikimedia.org/wikipedia/commons/5/54/Atari_10...


Totally fair to call that out, and I get why it might come across that way. But nope, not AI-generated. Just me trying to be clearer and a bit more structured lately. Sorry if it came off that way.

A local hub to go get tools is the only way this works, in my opinion. Your current offering is obviously compelling from the renter’s perspective. I am renovating a cottage and would love to go pick up a chainsaw, brush cutter, etc for half the price of Home Depot (they have everything and great service).

But I just don’t see it from the tool owner’s perspective. My suburban aunt has two chainsaws sitting in the garage that she doesn’t use anymore. An extra $150 a month isn’t enough to deal with the hassle of coordinating meetings, dealing with damage, etc. And she definitely wouldn’t be giving a free tank of gas, PPE, etc like Home Depot does. She would gladly drop it off at a local spot, make passive income, maybe go grab it herself once a year when she needs it.

PS: great website design. Looks beautiful on mobile and works really well. What are you using on the frontend?


I've always used SO as a source of code snippets, like when I forget how to check the length of an array in JavaScript (I change languages too often to keep it straight). So SO is still useful to me. I suppose I could use an LLM for that, but it's such a quick search, it doesn't seem worth changing my workflow.

I did encounter a math problem once that I asked on MathOverflow. No one answered it. But then a couple years later I faced the same math problem, so I spent a day solving it and posted the answer. A couple years later, I ran into the question again, and MathOverflow helpfully still had my answer.


The lifetime of nuclear power is a standard talking point that sounds good if you don't understand economics but doesn't make a significant difference. It's the latest attempt to avoid having to acknowledge the completely bizarre costs of new-build nuclear power through bad math.

CSIRO included it in this year's GenCost report.

Because capital loses so much value over 80 years (60 years of operation plus construction time), the only people who refer to the potential lifespan are people who don't understand economics. And this of course ignores that the average nuclear power plant was in operation for 26 years before it closed.

Table 2.1:

https://www.csiro.au/-/media/Energy/GenCost/GenCost2024-25Co...

The difference a completely absurd lifespan makes is a 10% cost reduction. When each plant requires tens of billions in subsidies, a 10% cost reduction is still... tens of billions in subsidies.
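A back-of-the-envelope sketch of why (my own numbers, assuming a 7% discount rate, not figures from GenCost): output beyond year 40 is worth almost nothing in present-value terms, so doubling the lifespan barely moves the levelized cost.

  // Present value of one unit of output per year for `years` years,
  // discounted at `rate` (standard annuity factor).
  const pv = (rate: number, years: number) =>
    (1 - Math.pow(1 + rate, -years)) / rate;

  const r = 0.07;
  const pv40 = pv(r, 40); // ~13.3 discounted years of output
  const pv80 = pv(r, 80); // ~14.2 -- doubling the lifetime adds only ~7%

  // ~6-7% lower cost per discounted unit of energy, in the ballpark of the 10% above.
  console.log(`cost reduction: ~${((1 - pv40 / pv80) * 100).toFixed(1)}%`);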

We can make it even clearer: not having to spend the O&M costs of operating a nuclear plant for ~20 years, and instead saving that money, is enough to rebuild a renewable plant with output in TWh equivalent to the nuclear plant's.


> People need to live where jobs are! That's the main determinant of where you live!

Retired people need to live where shopping, health resources, and their offspring (if any) are.


It works for a lot of model families but not all. You need a high enough degree of sharing of model weights between different queries for that to make sense (memory access being the usual bottleneck nowadays, though smaller models see something similar with matmul batch efficiencies for CPU-related reasons).

Fully connected transformers trivially work (every weight is used for every query). MoE works beyond a certain size or with certain types of mixing (still using every weight, or using a high enough fraction that there's some sharing with batches of 20+ queries). As you push further in that direction though (lots of techniques, but the key point being accessing less of the model at once and bypassing some of it for each query), you need larger and larger batches for those efficiency gains to materialize. At some point it becomes untenable because of the latency of waiting for batches to fill, and past that it becomes untenable because of the volume of query data.
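A toy model of the memory-bandwidth point for the dense case (hardware and model numbers entirely made up): the weights are read from memory once per forward pass and serve the whole batch, so the memory cost per query falls roughly as 1/B until something else becomes the bottleneck.

  // Hypothetical dense model and accelerator; assume weight reads dominate.
  const weightBytes = 70e9 * 2;   // 70B params at fp16 (made-up model size)
  const memBandwidth = 2e12;      // 2 TB/s of memory bandwidth (made up)
  const weightReadMs = (weightBytes / memBandwidth) * 1000; // ~70 ms per pass

  for (const batch of [1, 8, 64]) {
    // One pass over the weights serves the whole batch of queries.
    console.log(`batch ${batch}: ~${(weightReadMs / batch).toFixed(1)} ms/query`);
  }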


There are at least two other alternatives I'd reach for before this.

Probably the simplest one is to refactor the JSON to not be one large object. A lot of "one large objects" have the form {"something": "some small data", "something_else": "some other small data", "results": [vast quantities of identically-structured objects]}. In this case you can refactor this to use JSON lines. You send the "small data" header bits as a single object. Ideally this incorporates a count of how many other objects are coming, if you can know that. Then you send each of the vast quantity of identically-structured objects as one line each. Each of them may have to be parsed in one shot, but many times each individual one is below the size of a single packet, at which point streamed parsing is of dubious helpfulness anyhow.

This can also be applied recursively if the objects are then themselves large, though that starts to break the simplicity of the scheme down.
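A minimal sketch of that shape in TypeScript (the field names are invented for illustration): a small header object up front, then one result per line, so the client can JSON.parse each line as it arrives.

  // Server side: stream a small header, then one result object per line.
  async function* toJsonLines(
    results: AsyncIterable<object>,
    count: number,
  ): AsyncGenerator<string> {
    yield JSON.stringify({ kind: "header", count }) + "\n";
    for await (const result of results) {
      yield JSON.stringify(result) + "\n";
    }
  }

  // Client side: split the incoming stream on newlines and JSON.parse each
  // line on its own; no streaming JSON parser needed for the big array.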

The other thing you can consider is guaranteeing order of attributes going out. JSON attributes are unordered, and it's important to understand that when no guarantees are made you don't have them, but nothing stops you from specifying an API in which you, the server, guarantee that the keys will be in some order useful for progressive parsing. (I would always shy away from specifying incoming parameter order from clients, though.) In the case of the above, you can guarantee that the big array of results comes at the end, so a progressive parser can be used and you will guarantee that all the "header"-type values come out before the "body".

Of course, in the case of a truly large pile of structured data, this won't work. I'm not pitching this as The Solution To All Problems. It's just a couple of tools you can use to solve what is probably the most common case of very large JSON documents. And both of these are a lot simpler than any promise-based approach.

