Hacker News | Too's comments

How can one know that's not coming from the pre-training data? The paper is trying to evaluate whether the LLM has general problem-solving ability.


Is it normal to have human operators pointing the camera around like that? It almost looks like they expected something to happen.


It's not uncommon, people like planes.


Yeah. It doesn’t take much imagination to figure out that a Coke Zero looks identical to regular Coca Cola, yet they have vastly different calorie content, to put it mildly.

Sauces loaded with butter, sugar or other goodies will of course be the same story.


An index of files stored in git, pointing to remote storage. That sounds exactly like git LFS. Is there any significant difference, in particular in terms of backups?


Definitely similar.

Git LFS is 50k loc, this is 891 loc. There are other differences, but that is the main one.
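For context, the pointer-file idea both tools share fits in a few lines. A minimal sketch (helper names are mine; the three-line pointer layout shown is the actual git LFS pointer format):

```python
import hashlib

def make_pointer(data: bytes) -> str:
    """Build a git-LFS-style pointer file: the small text blob committed
    to git in place of the large file's actual contents."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

def parse_pointer(text: str) -> dict:
    """Recover the oid and size needed to fetch the blob from remote storage."""
    fields = dict(line.split(" ", 1) for line in text.splitlines())
    return {"oid": fields["oid"], "size": int(fields["size"])}

ptr = make_pointer(b"imagine a 2GB file here")
info = parse_pointer(ptr)
```

The rest of either tool is essentially plumbing around uploading and downloading blobs keyed by that oid.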

I don't want a sophisticated backup system. I want one so simple that it disappears into the background.

I want to never fear data loss, or fear for my ability to restore with broken tools and a new computer while floating on a raft down a river during a thunderstorm. This is what we train for.


Care to explain more?


> combination of the manufacturer, make and serial number.

> patent that involves defining that as a unique identifier for aircraft.

Now I'm mighty curious what makes this novel enough to merit a patent.


Go work at a big company. The patent lawyers come around and ask what you've been working on, and a month or two later, your name's on 10 patents, none of which make any sense whatsoever. If you're very lucky you might get a dollar bill for each.


For a while at Google you would get $5k per patent submission and $10k for each approved(?) one. Given how easy it was, I could have matched my annual salary. It's depressing how easy it is to get an (unimplemented) system architecture patented at a bigco.


When I was at Microsoft, years ago, it took more effort to avoid having my name end up on a patent than I'd have had to exert if I'd actually wanted one.


You bury this simple idea in pages and pages of obfuscated tedium, and that's good enough that everyone is happy. The patent office gets its fee, the lawyers get paid, and the company can say it has a supercharged patented innovation.


It's a unique identifier but now "not on a computer".


I was wondering the same thing. I've had to derive unique identifiers from hundreds of different data sets over the years. What makes it special when it's a plane?
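For what it's worth, the claim as quoted boils down to something like the following (field names and normalization are my guess from the quoted claim text, not from the patent itself):

```python
def aircraft_id(manufacturer: str, make: str, serial: str) -> str:
    """Derive a 'unique identifier' by joining the three fields the
    claim quotes, normalized so that casing and stray whitespace
    don't produce distinct IDs for the same aircraft."""
    parts = (manufacturer, make, serial)
    return "-".join(p.strip().upper() for p in parts)

# Two differently formatted records for the same airframe collapse
# to one identifier.
ident = aircraft_id("Boeing", "737-800", " 30876 ")
```

This is the same composite-key derivation anyone doing data integration has written a hundred times; the only novelty on offer is the domain.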


>> it’s on a plane.

Exactly this. The domain space and a couple of good lawyers make it patentable today.


Maybe it's a right-click level of patent.


The million-dollar question is how far one can get on this trick. Maybe this is exactly how our own brains operate? If not, what fundamental building blocks are missing to get there?


> If not, what fundamental building blocks are missing to get there

If I were to guess, the missing building block is the ability to abstract, that is, the ability to create a symbol to represent something. A concrete example of abstraction is seen in the axioms of the lambda calculus: 1) the ability to posit a variable, 2) the ability to define a function using said variable, and 3) the ability to apply functions to things. Abstraction arises from a process in the brain which we have not yet understood, and which could lie outside of computation as we know it, per [1].

[1] https://www.amazon.com/Emperors-New-Mind-Concerning-Computer...
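The three lambda-calculus operations listed above map directly onto any language with first-class functions; a quick sketch in Python:

```python
# 1) Posit a variable: x is just a bound name inside a lambda.
# 2) Abstraction: define a function in terms of that variable.
square = lambda x: x * x

# 3) Application: apply the function to a thing.
result = square(7)  # 49

# Abstraction also lets us name a *pattern* itself and reuse it:
# 'twice' abstracts over which function gets applied.
twice = lambda f: lambda x: f(f(x))
fourth_power = twice(square)
```

Whether LLMs possess this kind of symbol-forming ability, or merely imitate its products, is exactly the open question in the parent comment.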


No. It's not microtubules. Enough with the g-darn microtubules already. https://www.biorxiv.org/content/10.1101/712794v1

"We used an antimicrotubular agent (parbendazole) and disrupted microtubular dynamics in paramecium to see if microtubules are an integral part of information storage and processing in paramecium’s learning process. We observed that a partial allosteric modulator of GABA (midazolam) could disrupt the learning process in paramecium, but the antimicrotubular agent could not. Therefore, our results suggest that microtubules are probably not vital for the learning behavior in P. caudatum. Consequently, our results call for a further revisitation of the microtubular information processing hypothesis."


What are the most useful features?


What’s even crazier is that nobody learned this lesson and new protocols are created with the same systematic vulnerabilities.

Talking about MCP agents if that’s not obvious.


What does zero-overhead mean here?


Raw protocol, really. No marshaling, no conversions, none of the overhead from type management you get with modern Python, none of the turtles-all-the-way-down dependencies of NodeJS equivalents. I like it, although I would probably port it back to “lightweight” Python in about half the size :)
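A rough illustration of what skipping marshaling buys you, assuming "raw protocol" means fixed-layout binary frames rather than JSON encoding (this framing is my assumption for illustration, not a description of the project):

```python
import json
import struct

# Marshaled path: build an intermediate dict, encode to text, parse back.
wire_json = json.dumps({"id": 42, "value": 3.14, "count": 7}).encode()

# Raw path: pack the fields straight into a fixed binary layout
# (little-endian int32, float64, uint32 = 16 bytes, no intermediate objects).
wire_raw = struct.pack("<idI", 42, 3.14, 7)

decoded = struct.unpack("<idI", wire_raw)
```

The raw frame is both smaller and skips the allocate-encode-parse round trip, which is the overhead being avoided here.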


Interesting to see people caring about marshalling overhead when working with LLMs.


Some of us still prize compute efficiency, especially those who have been using Python for a long time and are contemplating the new kinds of code patterns that have emerged from data science...

