
>A lot of white western adult men fantasize about being female Asian children.

wtf


> Youtube to the point where they are able to enforce Chrome standards that prevent adblocks from working.

YouTube needs to be profitable somehow, and advertising is the best way to do that. If YouTube couldn't generate revenue through advertising, what else could they do?

Video streaming is insanely expensive, which is why Google invests so heavily in new compression formats today: WebP, Brotli, AV1.

Do you just expect them to do all of this for free?


No, but with big market share comes big responsibility.

Video streaming is extremely unprofitable, sure, but in this case Google is trying to leverage Chrome's browser market share to its benefit. And you are not allowed to do that when your market share is that big.


Assuming you already know what context means in terms of LLMs, prefilling is the process of converting the current conversation into tokens and passing them through the LLM in one forward pass.
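
A minimal sketch of that step with a Hugging Face-style causal LM (the model name is just an illustrative assumption, not something from the thread):

    # Prefill sketch: tokenize the whole conversation and run one forward pass
    # over it; this also fills the KV cache that token-by-token decoding reuses.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    name = "meta-llama/Llama-2-7b-hf"  # illustrative model choice
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    conversation = "User: What does prefill mean?\nAssistant:"
    inputs = tokenizer(conversation, return_tensors="pt")

    out = model(**inputs, use_cache=True)   # the prefill pass
    past_key_values = out.past_key_values   # reused when generating new tokens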


I believe it comes from the original LLaMA paper, where they chose these sizes because each one fits a standard ML compute GPU nicely.

Model size + overhead (context length, etc.):

7B: 13 GB - fits on T4 (16 GB).

13B: 26 GB - fits on V100 (32 GB).

30B: 65 GB - fits on A100 (80 GB).

65B: 131 GB - fits on 2x A100 (160 GB).
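
The arithmetic behind those numbers is roughly two bytes per parameter in fp16 plus overhead; a quick back-of-the-envelope check (using the actual LLaMA parameter counts, which is my assumption here):

    # ~2 bytes per parameter in fp16 for the weights alone; the KV cache and
    # activations add the remaining overhead (depends on context length / batch).
    def fp16_weight_gb(params_billion):
        bytes_total = params_billion * 1e9 * 2  # 2 bytes per fp16 parameter
        return bytes_total / 1e9                # decimal GB

    for size in (6.7, 13.0, 32.5, 65.2):        # actual LLaMA parameter counts
        print(f"{size}B -> ~{fp16_weight_gb(size):.0f} GB of weights")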

That's it really.


I can't believe Gavin Belson is rejoining Hooli!


Consider the bulldog


Consider the elephant


I'm here for this comment


The bear is sticky with honey.


me: Oh cool, a project like Folding@Home but for AI compute, maybe I'll contribute as we-

> Decentralized training of INTELLECT-1 currently requires 8x H100 SXM5 GPUs.

me: and for that reason, I'm out

Also, they state that they'll later add the ability for you to contribute your own compute, but how will they solve the problem of back-propagating to all of the remote nodes contributing to the project without egregiously slow training times?


Congress: Use Shuttle engines and SRBs to build a vehicle capable of deep space transportation.

NASA: No, that's too costly.

Congress: lol ok we'll slash funding + you legally can't refuse.


If the customer demands it, they'll sell their integrity, and damn the taxpayers. I have little sympathy for this.

If this sort of continued honesty-free space program is what Congress + NASA are going to give us, we'd be better off without a manned space program.


OpenAI confirms they aren't working on or training a "GPT-5" https://techcrunch.com/2023/06/07/openai-gpt5-sam-altman/#:~....


It sounded like they weren't training it yet, but they were trying to figure out what GPT-5's structure, tooling, and training data will look like.


4 months is an eternity in AI right now. I would be shocked if they are not training a new GPT (at least in experimentation/research mode, if not full pretraining) - else what are their GPUs spinning on? That is not a capital investment you just let sit idle.

But I concede that I don't have any concrete proof, so I should moderate the certainty of my tone.


This was in June, and they've continued to improve GPT-4 since then. Surely they're building on GPT-4 to learn how to approach GPT-5.


Planes like the F-22 are inherently aerodynamically unstable because it allows them to be highly maneuverable; the stability of the aircraft is handled by the electronics. So back then, planes of that type would be able to just glide down and land, but an F-22 or any modern military jet wouldn't.
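
To make that concrete, here's a toy sketch (my own illustration, nothing to do with the actual F-22 flight software): an unstable pitch dynamic that diverges on its own, but stays level once a simple proportional-derivative loop is closed around it.

    # Toy illustration only: an unstable pitch mode (positive feedback on the
    # angle) is stabilized by a proportional-derivative controller each tick.
    def simulate(controlled, seconds=5.0, dt=0.01):
        theta, theta_dot = 0.05, 0.0            # small initial pitch disturbance (rad)
        kp, kd, instability = 40.0, 10.0, 4.0   # controller gains, unstable plant term
        for _ in range(int(seconds / dt)):
            u = -(kp * theta + kd * theta_dot) if controlled else 0.0
            theta_ddot = instability * theta + u   # grows on its own without control
            theta_dot += theta_ddot * dt
            theta += theta_dot * dt
        return theta

    print(simulate(controlled=False))  # diverges: the airframe alone is unstable
    print(simulate(controlled=True))   # ~0: the "electronics" keep it level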

