
I think the details are actually important to the idea. Mainly this green brag doesn't seem to add up. The battery system is crucial.

While there's a bit about needing to replace it in 10-20 years, which they put at about $83/month, that's essentially burying the ~$15,000 bill that's coming ($83/month over 15 years works out to roughly $15,000).

Also, unless their battery system is wildly over-provisioned at the moment, you can't just add a bunch of new panels. Are they selling back to the grid? I don't think so; they mention grid independence with respect to natural disasters. Do they have some sort of system to heat the water only during the day? Did they just take the nameplate capacity of the system and multiply it by 40 years? Who knows?

These "details" will make anyone buying into their big idea quite frustrated.

I also don't understand the part about chopping wood. Yes, it's probably an idyllic bit for the story, but wood is just about the worst fuel source: dangerous, carcinogenic, and polluting.

edit: yes, it's in the footnotes, they are burning wood. Getting rid of that would be the number one way to improve their impact on the earth and their community.


(not OP) I think the DMCA specifically should be repealed. We can still have DRM/copyright/etc. if enough people want it, and we could look at other systems, but the DMCA itself is awful. Repealing it doesn't make any statement about piracy.

I just went through this and they will make the process as slow as legally possible.

Also, if you were using Google Workspace, that also got transferred to them as a reseller. You have to go through a somewhat confusing process of deleting your Workspace account at Squarespace to get it to transfer back to Google. The remainder of the month you paid for at Squarespace is just lost, and you'll need to set things up again with Google. I didn't explicitly re-enable Gmail and all my mail started to bounce.


There has to be an easier way than that?


https://archive.is/Mwf7e

> FedEx said it has bar code and GPS tracking systems and issues regular reminders to drivers about personal and package safety. “This includes remaining vigilant when delivering a package and immediately reporting any unusual activity,” a FedEx spokeswoman said.

cool, shift the blame to the driver


I think linear bookshelf distance is a normal unit for talking about collections, at least as informative as the number of books. Guessing 15 meters per bookshelf from the photos, that's roughly 214 bookshelves, which doesn't sound as cool to me.


I think the idea is that you network them together if you need more, and most models can be split nicely.


For that you'd probably be better off removing one of the GPUs, and replacing it with a networking card.

The form factor problem will remain: the tinybox is 15U for compute that you'd normally expect to find in a 4U chassis.


I don't think they're intended for rack usage like that. More like for people to put under their desks... there would be no reason to build the giant case with fancy silent-ish cooling if you're going to put them next to your other jet engines.


Fully agree, and I think the tinybox is great if you put only one of them somewhere in your local office.

I just don't think it makes sense to connect multiple of them into a "cluster" to work with bigger models, as the networking bandwidth isn't good enough and you'd have to fit multiple of these big boxes into your local space. Then I might as well put up a rack in a separate room.


There's an OCP 3.0 mezzanine, so there's no need to remove a card and you'd get 200 Gbps, unless I've missed something about needing to remove a card to access it. But yeah, stacking or racking these seems less than ideal.


3kW under your desk... no need to turn on the heat in the winter!


Most models actually can't be split nicely by 6. There's a reason Nvidia builds nodes with 4 and 8 GPUs.


I don't see why 6 is inherently worse than 4 or 8; not all of the layers are exactly equal or a power of 2 in count. 2^2 and 2^3 vs. 2^1*3^1 might even give you more options.

The main issue I run into is FLOPS vs. RAM in any given card/model.


Usually you want to split each layer to run with tensor parallelism, which works optimally if you can assign each KV head to a specific GPU. All currently popular models have a power-of-2 number of KV heads.
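A minimal sketch of that divisibility constraint, with an illustrative head count I picked rather than any specific model's config:

```python
# Hypothetical sketch: with tensor parallelism, KV heads get sharded across
# the tensor-parallel group, so the head count ideally divides evenly by the
# number of GPUs. The head count below is made up for illustration.

def kv_heads_per_gpu(num_kv_heads: int, tp_degree: int) -> float:
    return num_kv_heads / tp_degree

num_kv_heads = 8  # assumed power-of-2 head count, as in the comment above

for tp in (2, 4, 6, 8):
    per_gpu = kv_heads_per_gpu(num_kv_heads, tp)
    even = per_gpu.is_integer()
    print(f"tp={tp}: {per_gpu:.2f} KV heads per GPU -> {'even' if even else 'uneven'}")

# tp=6 yields ~1.33 heads per GPU, so a 6-GPU box can't shard these heads
# evenly, while 2, 4, and 8 can.
```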


interesting, thank you for the pointers.


They need support to avoid attracting the scrutiny of Oracle auditors.

They're basically all based on the OpenJDK source, with different VMs and GC tech added in or supported to different levels.


In reality, they need support in order to point liability toward Oracle when something goes wrong. Enterprises like having the ability to shift blame: "Yes, we know it's down, we have a support ticket in with Oracle."


I ultimately gave up playing this game after spending hours testing and tuning. Each core is slightly different (on AMD; I'm sure Intel has something similar), and every tweak means a 1-24h stress test and a reboot. Then some new workload comes along and it's a kernel panic/BSOD.

Stock settings just work. Maybe I lost the silicon lottery, but I'm too tired to keep checking.


Goodreads is FAANG?

The problem isn't manufactured or conspiratorial; it's just baked into sorting so much content on so few metrics, and needing to account for whether the user is currently in the mood for something specific or something generic.


My point is that GoodReads isn't popular enough for it to be profitable to sabotage (yet). And there's still a threat of something more relevant coming along. If they actually wanted to improve discovery for something like prime video/shopping, then they could/would copy what works from GoodReads.


Goodreads is a subsidiary of Amazon.

Edit: I realize I misread your comment. Disregard!


When it's all in memory, you get to amortize the cost of the initial load, or just pay it when it's not part of the hot path. When it's segmented, you're doing that because memory is full and you need to read in all the segments you don't have. That will completely overwhelm the O(log n) of the search you still get.


I was trying to make the point that the dominant factor becomes linear instead of logarithmic, but more accurately it's O(S log N) = O(N log N) because S (number of segments) is proportional to N (number of vectors).
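A toy sketch of that scaling, under my own assumptions (a fixed segment size and one log-time probe per segment), not the parent's exact setup:

```python
# Toy comparison of the two cases above. Assumes N vectors split into
# fixed-size segments, with each segment probe costing O(log N), so the
# segmented total is O(S log N) = O(N log N). Numbers are illustrative only.
import math

SEGMENT_SIZE = 1_000  # assumed fixed segment capacity

def in_memory_cost(n: int) -> float:
    # single resident index: one O(log N) search
    return math.log2(n)

def segmented_cost(n: int) -> float:
    s = math.ceil(n / SEGMENT_SIZE)  # S grows linearly with N
    return s * math.log2(n)          # one probe per segment

for n in (10_000, 100_000, 1_000_000):
    print(f"N={n:>9}: in-memory ~{in_memory_cost(n):6.1f}  "
          f"segmented ~{segmented_cost(n):12.1f}")
```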


Ah yeah, that's what I wanted to write, but I didn't want to put words in your mouth, so I stuck to what I could be certain about. We do all this work to throw away the unneeded bits in one situation, and then when comparing it to a slightly different situation go, "huh, some of that garbage would be kinda nice here."

