Tinygrad will be the next Linux and LLVM (twitter.com/realgeorgehotz)
29 points by alvivar 54 days ago | 37 comments



Well, neither Linux nor LLVM loudly proclaimed that they would be the next Internet or GUI. So I am inclined to believe that this will not be the case, and the person making the proclamation might be a little full of himself.


Interesting contrast to how Linux itself was first introduced:

"just a hobby, won't be big and professional like gnu"


TinyGrad is GeoHot's system/compiler for mapping neural networks onto hardware. He keeps coming back to one point: because the exact number of cycles is known in advance, the whole computation can be scheduled up front; there's no need for branch prediction or that sort of thing in a CPU.

Essentially, he wants to be able to express programs, and even an operating system, as a directed acyclic graph of logical binary operations, so that you can have consistent and deterministic runtime behavior.
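A rough sketch of that idea in plain Python (hypothetical names, not tinygrad's API): a program as a fixed, topologically ordered DAG of ops whose total cycle count is known before anything runs.

    # Hypothetical sketch: a program as a static DAG of ops with known per-op cost.
    # Because the graph is fixed and acyclic, total "cycles" are known up front.
    from dataclasses import dataclass, field

    @dataclass
    class Op:
        name: str
        cost: int                      # cycles this op takes, known ahead of time
        inputs: list = field(default_factory=list)

    def schedule(outputs):
        """Topologically order the DAG; no branches, so the order is fixed."""
        order, seen = [], set()
        def visit(op):
            if id(op) in seen:
                return
            seen.add(id(op))
            for i in op.inputs:
                visit(i)
            order.append(op)
        for o in outputs:
            visit(o)
        return order

    a = Op("load_a", 4)
    b = Op("load_b", 4)
    mul = Op("mul", 2, [a, b])
    out = Op("store", 4, [mul])

    plan = schedule([out])
    print([op.name for op in plan])                      # fixed execution order
    print("total cycles:", sum(op.cost for op in plan))  # known before running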

The bit about LLMs is a distraction, in my opinion.


> he wants to be able to express programs, and even an operating system, as a directed acyclic graph of logical binary operations, so that you can have consistent and deterministic runtime behavior.

So how is this different from digital logic synthesis for CPLD/FPGA or chip design we have been doing over the last decades?


FPGAs are (prematurely) optimized for the wrong things, latency and utilization. The hardware is heterogeneous, and there isn't one standard chip. Plus they tend to be expensive.

The idea is to be able to compile/run like you can now with your von Neumann machine.

FPGA compile runs can sometimes take days! And of course, chips take months and quite a bit of money for each try through the loop.


With FPGAs I can sample a hundred high-precision ADCs in parallel and feed them through DSP, process 10Gb Ethernet at line rate, etc., with deterministic outcomes (necessary given safety and regulatory considerations). They integrate well with CPUs and other coprocessors - heterogeneity isn't wrong. Plus, training an NN model also takes days! To be fair, not always, but for the above applications my build time was hours to many hours anyway.

I grant the hardware is absurdly expensive at the high end, but I really don't think the comparison is apples to apples application-wise.

Hotz saying literally everything with an IO pin or actuator will be driven solely by NNs (via tinygrad) seems to me maybe 1/3 self-promotion, 1/3 mania, and some much smaller part incisive, at best.


> While there may be a legacy Linux running in a VM to manage all your cloud phoning spyware, the core functionality of the lifelike device is boot to neural network.

No, I do not think future devices will be "boot to neural network." Traditional algorithms still have a place. Your robot vacuum cleaner (his example) may still use A* to plan routes, and Quicksort to rank your cleaning runs by energy usage.
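For illustration, a toy A* route planner on a grid - the kind of classical algorithm meant here; the grid and code are made up, not from any vacuum firmware:

    # Toy A* on a 2D grid: the kind of classical algorithm that still makes sense
    # to run as ordinary code rather than a neural network.
    import heapq

    def astar(grid, start, goal):
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
        frontier = [(h(start), 0, start, [start])]
        best = {start: 0}
        while frontier:
            _, cost, pos, path = heapq.heappop(frontier)
            if pos == goal:
                return path
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (pos[0] + dx, pos[1] + dy)
                if 0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0]) and not grid[nxt[0]][nxt[1]]:
                    if cost + 1 < best.get(nxt, float("inf")):
                        best[nxt] = cost + 1
                        heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
        return None

    grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]  # 1 = obstacle
    print(astar(grid, (0, 0), (2, 0)))        # route around the wall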

> Without CPUs, we can be freed from the tyranny of the halting problem.

Not sure what this means but I think it still makes sense to have a CPU directing things as in current architectures. You don't just have your neural engine, you also have your GPU, Audio system, input devices, etc. and those need a controller. Something needs to coordinate.


> Without CPUs, we can be freed from the tyranny of the halting problem.

Can someone please explain to me what this even means in this context?

Serious question.


He also claims that the cardinality of the reals is the same as the integers.

https://news.ycombinator.com/item?id=36074287

You could say he has a history of using big words to talk shit.


A neural network is perfectly deterministic; the runtime is predictable before you run it. Which I don't think is going to be true much longer.

https://news.ycombinator.com/item?id=41623474


It's gibberish. For one thing... https://arxiv.org/abs/1901.03429


Think of it as unwinding a program all the way until it's just a list of instructions. You can know exactly how long that program will take, and it will always take that same time.
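A rough Python illustration of the unrolling idea (hypothetical functions, just to show the contrast):

    # The same computation written as a loop with data-dependent work vs. fully
    # unrolled for a fixed input shape, where the operation count never changes.

    def with_loop(xs):
        total = 0
        for x in xs:          # runtime depends on len(xs) and on branching
            if x > 0:
                total += x
        return total

    # "Unrolled" for exactly 4 elements: always the same number of operations,
    # so the runtime is known before execution and is always the same.
    def unrolled(x0, x1, x2, x3):
        t0 = max(x0, 0)       # branchless select instead of an if
        t1 = max(x1, 0)
        t2 = max(x2, 0)
        t3 = max(x3, 0)
        return t0 + t1 + t2 + t3

    print(with_loop([3, -1, 2, 5]), unrolled(3, -1, 2, 5))  # 10 10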


But will it always solve the task? Because without that, it is trivially easy to "solve" the halting problem by just declaring that the Turing machine halts after X steps.


Wouldn’t this also imply a lack of Turing completeness, and thus not be good for general purpose computing?


He's got the kernel of a good idea. Deterministic data flows are a good thing. We keep almost getting there, with things like dataflow architectures, FPGAs, etc. But there's always a premature optimization for the silicon, instead of the whole system. This leads to failure, over and over.

He's wrong about using an LLM for general-purpose compute. Using math instead of logic isn't a good thing for many use cases. You don't want a database, or an FFT in a radar system, to hallucinate, for example.

My personal focus is on homogeneous, clocked, bit-level systolic arrays[2] (rough sketch after the links below). I'm starting to get the feeling the idea is really close to being a born secret[1] though, as it might enable anyone to make genuinely high-performance chips on any fab node.

[1] https://en.wikipedia.org/wiki/Born_secret

[2] https://github.com/mikewarot/Bitgrid
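If it helps, a very rough Python sketch of the clocked bit-level grid idea - every cell a tiny lookup table, all updated in lockstep - illustrative only, not the Bitgrid project's actual code:

    # Each cell holds a small truth table; here only 2 neighbour inputs are used
    # for brevity. One tick = one synchronous clock edge across the whole grid,
    # so latency is simply (ticks * clock period), fixed by the layout.

    class Cell:
        def __init__(self, lut):
            self.lut = lut            # truth table over the 2-bit input pattern
            self.out = 0

    def tick(grid, inputs):
        """One clock tick: every cell reads its neighbours' previous outputs."""
        prev = [[c.out for c in row] for row in grid]
        for y, row in enumerate(grid):
            for x, cell in enumerate(row):
                n = prev[y - 1][x] if y > 0 else inputs.get((x, "n"), 0)
                w = prev[y][x - 1] if x > 0 else inputs.get((y, "w"), 0)
                addr = (n << 1) | w
                cell.out = (cell.lut >> addr) & 1

    # 2x2 grid of XOR cells (truth table 0b0110 over two inputs)
    grid = [[Cell(0b0110) for _ in range(2)] for _ in range(2)]
    tick(grid, {(0, "n"): 1})
    print([[c.out for c in row] for row in grid])   # [[1, 0], [0, 0]]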


You could still build an FFT in tinygrad and it would be as deterministic as its matmuls (so not bitwise deterministic, due to the non-associativity of floating-point math and the fact that GPUs don't guarantee execution order, but we are okay with that). The matmuls in the NNs don't hallucinate.
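A quick illustration of the floating-point point, in plain Python (nothing tinygrad-specific):

    # Floating-point addition is not associative, so the same reduction summed in
    # a different order (as GPUs are free to do) can give a different bit pattern,
    # even though each fixed order is itself deterministic.
    a, b, c = 1.0, 1e100, -1e100
    print((a + b) + c)   # 0.0  (the 1.0 is lost when added to 1e100 first)
    print(a + (b + c))   # 1.0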


I don't know why I should switch from PyTorch to Tinygrad as a researcher and practitioner. For kernel fusion, there is torch.compile. Not to mention there is a large ecosystem behind PyTorch, and almost every paper today is published with a PyTorch implementation. Probably where Tinygrad shines is bare-metal platforms?
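For reference, the torch.compile path mentioned above looks roughly like this (standard PyTorch 2.x usage; the toy model is made up):

    # torch.compile traces and fuses the model's graph, covering much of the
    # kernel-fusion ground without leaving the PyTorch ecosystem.
    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(128, 256),
        torch.nn.ReLU(),
        torch.nn.Linear(256, 10),
    )
    compiled = torch.compile(model)          # requires PyTorch 2.x
    out = compiled(torch.randn(32, 128))
    print(out.shape)                         # torch.Size([32, 10])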


I don’t understand the LLVM comparison. Is it somehow a compiler backend for conventional programming languages? Can you run C or Rust code?


Me neither; it's like saying AI dependency is the next freedom.


Makes me wonder if he knows what LLVM does.

If I understand him correctly: if everything becomes a neural network, then he expects most neural networks to use Tinygrad.


Same.


> tinygrad has a hardware abstraction layer, a scheduler, and memory management. It's an operating system

Doesn't every ML framework have that?


Nah, not like he's talking about - TF and PT definitely punt all that down to TensorRT or HIP or whatever. Doesn't mean there's anything novel here - just that TF and PyTorch don't do it.


I generally don't read anything by gh, but I think he is cryptically just referring to something like XLA, whereby your NN architecture gets compiled straight to hardware, say to a custom ASIC, an FPGA bitstream, etc.
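Roughly the flow being described, sketched with JAX/XLA (the tiny_net function is made up for illustration):

    # Trace a network to an intermediate representation (a jaxpr here), which
    # XLA then lowers to device code for whatever backend is available.
    import jax, jax.numpy as jnp

    def tiny_net(w, x):
        return jnp.tanh(x @ w)

    w = jnp.ones((4, 2))
    x = jnp.ones((3, 4))
    print(jax.make_jaxpr(tiny_net)(w, x))   # the traced graph XLA compiles
    fast = jax.jit(tiny_net)                 # compiled for the current backend
    print(fast(w, x).shape)                  # (3, 2)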

It's definitely going to happen, but I don't think it will replace CPUs, much like human brains can't quite replace CPUs and what they are optimised for.

Trying to make out that TinyGrad is leading the charge on this is quite self-indulgent.


>Without CPUs, we can be freed from the tyranny of the halting problem.

In the same way that we can be freed of the tyranny of being able to write a for loop.


The only reason neural networks don't have control flow is that they are not very good. They are incredibly inefficient, and the only way to properly solve that is to introduce control flow, for example: https://arxiv.org/abs/2311.10770
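A toy illustration of control flow inside a forward pass - a gate picks one expert, so only a fraction of the weights run per input (just the flavour of conditional execution, not the linked paper's actual method):

    # Data-dependent branching in a network: an argmax gate selects one of four
    # expert matrices, so 3/4 of the parameters are skipped for this input.
    import numpy as np

    rng = np.random.default_rng(0)
    experts = [rng.normal(size=(8, 8)) for _ in range(4)]
    gate = rng.normal(size=(8, 4))

    def forward(x):
        chosen = int(np.argmax(x @ gate))        # control flow: pick one expert
        return np.maximum(x @ experts[chosen], 0.0)

    x = rng.normal(size=(8,))
    print(forward(x).shape)                      # (8,) - only 1 of 4 experts ran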


Great... Does this mean my PC will hallucinate kernel panics when it doesn't even have a kernel?


No it won't, because while hitting ioctls in Python is cute

https://github.com/tinygrad/tinygrad/blob/master/extra/hip_g...

it is definitely not shippable



Because it's slow, duh.


This sounds like prejudice. Have you benchmarked it?


Yes, I literally duplicated your approach for my driver stack last week and, surprise surprise, the FFI overhead into libc is too high.


FFI? This isn't how GPUs work...they are MMIO (mostly)

Those drivers are faster than anything else when used to run fixed command queues (what neural network runs are)


I can't say anything about the performance, but inline assembly in Python is crazy.


It's not inline assembly, it's just ioctl through ctypes via libc.
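For the curious, the pattern being discussed is roughly this (Linux-only illustrative sketch; the device path and request code below are made up, not from tinygrad's driver):

    # Calling ioctl on a device fd through ctypes and libc. /dev/null stands in
    # for a real GPU device node, and the request code is invented, so this
    # particular call just fails cleanly with ENOTTY.
    import ctypes, os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    fd = os.open("/dev/null", os.O_RDWR)
    FAKE_REQUEST = 0x4601                    # hypothetical ioctl request code
    arg = ctypes.c_uint64(0)

    ret = libc.ioctl(fd, FAKE_REQUEST, ctypes.byref(arg))
    if ret != 0:
        print("ioctl failed:", os.strerror(ctypes.get_errno()))
    os.close(fd)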


Isn’t this the guy who joined Twitter as an intern to “fix” search?


Yeah, good luck with that, lol




