
Or just wait for the NVIDIA Digits PC later this year, which will cost roughly the same amount and can fit on your desk

That one can handle up to 200B parameters according to NVIDIA.

That's a shame. I suppose you'll need 4 of them with RDMA to run a 671B model, but somehow that seems better to me than trying to run it on DDR4 RAM like the OP is saying. I have a system with 230G of DDR4 RAM, and running even small models on it is atrociously slow.

That's Zig for you. A "modern" systems programming language with no borrow checker or even RAII.

Those statements are mostly true and also worth talking about, but they're not pertinent to that error (remotely provided JS not behaving correctly), or the eventual crash (which you'd cause exactly the same way for the same reason in Rust with a .unwrap() call).

Not exactly the same. `.unwrap()` will never lead to UB, but a failed `.?` can in Zig in release mode.

Also, `unwrap()`s are a lot more obvious than a bare `.?`. Dangerous operations should require more ceremony than safe ones. Surprising to see Zig make such a mistake.
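
For what it's worth, here's a minimal Rust sketch of the distinction being drawn here (the `threads` message is made up for illustration): `unwrap()` on a `None` panics with a deterministic message rather than invoking UB, regardless of build profile, and `expect()` is the more ceremonious form that attaches context:

```rust
fn main() {
    let present: Option<u32> = Some(42);
    let missing: Option<u32> = None;

    // Fine: the value exists, so unwrap returns it.
    println!("{}", present.unwrap());

    // This panics with a clear message and a backtrace -- a defined,
    // "safe" failure, never UB, even in a release build.
    // `expect` is `unwrap` plus context for whoever sees the error.
    let value = missing.expect("config value `threads` was not set");
    println!("{}", value); // never reached
}
```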


> UB vs "safe" panic

Yes, it's not exactly the same if you compile in ReleaseFast instead of ReleaseSafe. Both are bad though, and I'd tend to blame the coding pattern for the observed behavior rather than quibble about which unacceptable scenario is worse.

I see people adopting forced null unwrapping for dumb reasons all the time. For the remaining reasons, do you have a sense of what the language deficiencies are which make that feature helpful? I see it for somewhat sane reasons when crossing language boundaries. Does anything else stand out?

> ceremony

Yes. Thankfully ".?" is greppable, but I wouldn't mind more ceremony there, and less for `try` coding patterns.


You shouldn't be unwrapping; error cases should be properly handled. Users shouldn't see null dereference errors without any context, even in CLI tools...

That too, as a general coding pattern. I was commenting on the criticism of Zig as a sub-par systems language though, contrasting with a language most people with that opinion seem to like.

You could build the same thing in Rust and have the same exact issue.

If that kind of stuff were always preferable, then nobody would use C over C++, yet to this day many projects still do. Borrow checking isn’t free. It’s a trade-off.

I mean, you could say Rust isn’t a modern language because it doesn’t use garbage collection. But it’s a nonsensical statement. Different languages serve different purposes.

Besides, Zig is focusing a lot more on heavily integrating testing, debug modes, fuzzing, etc. in the compiler itself, which when put together will catch almost all of the bugs a borrow checker catches, but also a whole ton of other classes of bugs that Rust doesn’t have compile time checks for.

I would probably still pick Rust in cases where it’s absolutely critical to avoid bugs that compromise security.

But this project isn’t that kind of project. I’d imagine that the super fast compile times and rapid iteration that Zig provides are much more useful here.


That has absolutely nothing to do with RAII or safety…

Why didn't you just fork Chromium and strip out the renderer? This is guaranteed to bitrot when the web standards change unless you keep up with it forever and have perpetual funding. Yes, modifying Chromium is hard, but this seems harder.

It was my first idea. Forking Chromium has obvious advantages (compatibility), but it's not architected for that. The renderer is everywhere. I'm not saying it's impossible, just that it looked more difficult to me than starting over.

And starting from scratch has other benefits. We own the codebase, so it's easier for us to add new features like LLM integrations. Plus reducing binary size and startup time, which is mandatory for embedding it (as a WASM module or as a C lib).


The Chromium/WebKit renderer used to have multiple rendering backends. You might use an existing one or add a no-op backend.

> modifying Chromium is hard, but this seems harder

Prove it.


Why do anything: because it shows what's possible, and makes the next effort that much easier.

I call this process of frontier effort and discovery: "science"


Redoing what others have already done is not what I think of when I hear "frontier effort"

Corps don't want to have to release the source code for their internal forks. They could also potentially be sued over everything they link against it, because the linked binaries could be "derivative works" according to a judge who doesn't know anything.

They don't have to release source for internal forks.

They do if they're AGPL licensed and the internal fork is used to provide a user-facing service.

But then it isn’t “internal”…

It’s too hard to determine what pieces of your stack interact with public-facing services, particularly in a monorepo with thousands of developers. The effort involved and the legal risk if you get it wrong makes it an easy nope. Just ban AGPL.

The effort involved and the legal risk are exactly the same as for any copyleft license. If you don't know what your stack is doing, that is the problem -- not the license.

I think you should get new lawyers if this is their understanding of how software licenses work.

See for example https://opensource.google/documentation/reference/using/agpl...

> Code licensed under the GNU Affero General Public License (AGPL) MUST NOT be used at Google.


It’s their loss

Is it? Because open source tools re-licensing themselves to be more permissive would seem to indicate whose loss it really is.

This might instead indicate that they believe they won't lose anything by the transition, and that users might ultimately benefit

Embrace, extend, extinguish. It could take about a century, but every software company (hardware maybe next century) is in the process of being swallowed by free software. That’s not to say people can’t carve out a niche and have balling corporate retreats for a while... until the sleeping giant wakes up and rolls over you.

Free software basically only exists because it’s subsidized by nonfree software. It also has no original ideas. Every piece of good free software is just a copy of something proprietary or some internal tool.

You've just made a pretty outrageous claim without evidence that would require a lot of effort on my part to refute, so I'll just go with: if you say so.

I'm wondering if you've ever actually asked a real corporate lawyer for an opinion on anything relating to GPL licenses. The results are pretty consistent. I've made the trip on three occasions, and the response each time was: "this was not drafted by a lawyer, it's virtually uninterpretable, and it is wildly unpredictable what the consequences of using this software are."

Why do some companies engage with it then?

Eh, all the GNU family of licenses were drafted by lawyers.

Just using copyleft software has no legal consequences; copyleft licenses kick in when you distribute the software, not when you merely use it.


I try to use paredit-mode, but it never seems to integrate well with evil/vim keybindings. I always end up disabling paredit-mode after a few hours. I'll give this a spin.


Props for somehow convincing all the foundation models to generate charts using your markup. It is guaranteed to survive a very long time now.


Release the weights or buy an ad. This doesn’t deserve front page.


Been running it since rc2. It’s insane how long this took to finally ship.


You need to buy the same exact drive with the same capacity and speed. Your raidz vdev will be as small and as slow as your smallest and slowest drive.

btrfs and the new bcachefs can do RAID with mixed drives, but I can’t trust either of them with my data yet.
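
To make the sizing rule above concrete, here's a rough sketch in Rust (my own illustration, not anything from ZFS): each member contributes only as much as the smallest drive, and parity members are subtracted.

```rust
// Rough approximation of raidz sizing: usable capacity is about
// (number of drives - parity level) * capacity of the smallest drive.
fn raidz_usable_tb(drive_sizes_tb: &[u64], parity: u64) -> u64 {
    let smallest = *drive_sizes_tb.iter().min().unwrap_or(&0);
    (drive_sizes_tb.len() as u64).saturating_sub(parity) * smallest
}

fn main() {
    // Mixed drives in raidz1: the 10 TB and 8 TB drives are effectively
    // truncated to 4 TB each, so (4 - 1) * 4 = 12 TB usable -- far less
    // than the 26 TB of raw capacity would suggest.
    let drives = [10u64, 8, 4, 4];
    println!("{} TB usable", raidz_usable_tb(&drives, 1));
}
```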


It doesn't have to be the same exact drive. Mixing drives from different manufacturers (with the same capacity) is often done to prevent correlated failures. ZFS doesn't use the whole disk, so different disks can be mixed even though drives of the same nominal size often vary slightly in capacity.


You can run raid-z across partitions to utilize the full drives, just like Synology does with their “hybrid raid” - you just shouldn’t.


> You need to buy the same exact drive

AFAIK you can add larger and faster drives, you will just not get any benefits from it.


You can get read speed benefits with faster drives, but your writes will be limited by your slowest.
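
A crude model of that write-side bottleneck (my simplification, not how ZFS actually schedules I/O): a full-stripe write isn't complete until the slowest member has written its share, so the per-member write rate is pinned to the minimum.

```rust
// Crude model: each member writes its share of a stripe in parallel,
// so the stripe completes at the pace of the slowest drive.
fn per_member_write_mb_s(member_speeds_mb_s: &[u64]) -> u64 {
    *member_speeds_mb_s.iter().min().unwrap_or(&0)
}

fn main() {
    let speeds = [250u64, 250, 180]; // one slower drive in the vdev
    println!("~{} MB/s per member", per_member_write_mb_s(&speeds)); // ~180
}
```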


Just have backups. I've used btrfs and zfs for different purposes. Never had any lost data or downtime with btrfs since 2016. I only use raid 0 and raid 1 and compression. Btrfs does not have a hungry RAM requirement.


Neither does ZFS; that’s a widely repeated red herring from people who tried to do dedup in the very early days, and from people who misunderstood how it uses RAM for caching.


Tbh the idea of keeping backups defeats the purpose of using RAIDZ (especially RAIDZ3). I don’t want to buy an LTO drive, so if I backup, it’s either buying more HDDs or S3 Glacier ($$$). I like RAIDZ so I don’t have to buy so many drives. I guess it protects you if your house burns down, but how many people do offsite backups for their personal files? And dormant, unpowered HDDs die a lot faster than live, powered HDDs.


Yes, handling your data seriously is expensive. I am talking about buying new hard drives.


Future headline: How big data helped the shoggoths get a 6x yield increase in training data tokens from pasture humans

