
Large numbers of Stanford students are on financial aid and/or scholarships

I’d lay a bet down with good money behind it that every kid at Stanford with wealthy parents also has or had a scholarship or two.

Not sure that the presence of a scholarship, financial aid, or even a student loan necessarily proves poverty of the student’s parents.


What a weird hypothetical

I'm unclear why we are calling these a new class of life rather than just a new kind of virus. Their shape?


I think this is the journalist playing with the idea that we don't have universally accepted definitions of "life", "virus", "viroid", "mobile genetic element", "plasmid", or the other words that describe what I view as the agents of evolutionary games. It is a kind of catchy way to raise that conversation though, eh!


Viruses have capsids, which are protein shells that envelop them. These are just bare RNA strands, which are considered viroids.


Any word on price? I can't find it at https://ai.google.dev/pricing


I've been using Gemini Flash for free through the API using Cline for VS Code. I switch between Claude and Gemini Flash, using Claude for more complicated tasks. Hope that the 2.0 model comes closer to Claude for coding.


Or… just continue using Claude?


Claude is ridiculously expensive and often subject to rate limiting.


lol, and you think Google is going to be less subject limiting?

I will pay more to not feed Google.


I said rate limiting, not subject.


I think they try to conserve costs by only using Claude when needed.


Agreed - tried some sample prompts on our data, and the rough vibe check is that Flash is now as good as the old Pro. If they keep pricing the same, this would be really promising.


£18/month

https://gemini.google/advanced/?Btc=web&Atc=owned&ztc=gemini...

then sign in with Google account and you'll see it


Oh, but I only care about API pricing


I think it is free for 1500 requests/day. See the model dropdown on https://aistudio.google.com/prompts/new_chat


What if they offered negative expense funds


average humans don't know what "column spaces" are or what "orthogonal" means


Average humans don't (usually) confidently give you answers to questions they do not know the meaning of. Nor would you ask them.


Ah hum. The discriminant is whether they know that they don't know. If they don't, they will happily spit out whatever comes to their mind.


Sure, average humans don't do that, but this is Hacker News, where it's completely normal for commenters to confidently answer questions and opine on topics they know absolutely nothing about.


And why would the "average human" count?!

"Support, the calculator gave a bad result for 345987*14569" // "Yes, well, also your average human would"

...That's why we do not ask "average humans"!
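(For what it's worth, the product in that hypothetical support ticket is trivial for a machine to get exactly right; Python integers are arbitrary precision, so this is a minimal check, not an approximation:)

```python
# Exact integer arithmetic: Python ints never overflow or round.
result = 345987 * 14569
print(result)  # 5040684603
```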


"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

So the result might not necessarily be bad, it's just that the machine _can_ detect that you entered the wrong figures! By the way, the answer is 7.


average human matters here because the OP said

> Can you please give an example of a “completely illogical statement” produced by o1 model? I suspect it would be easier to get an average human to produce an illogical statement.


> because the OP said

And the whole point is nonsensical. If you discussed whether it would be ethically acceptable to canaries it would make more sense.

"The database is losing records...!" // "Also people forget." : that remains not a good point.


Because the cost-competitive alternative to LLMs is often just ordinary humans


Following the trail as you did originally: you do not hire "ordinary humans", you hire "good ones for the job"; going for a "cost competitive" bargain can be suicidal in private enterprise and criminal in public ones.

Sticking instead to the core matter: the architecture is faulty, unsatisfactory by design, and must be fixed. We are playing with the partial results of research and getting somewhere, even some useful tools, but the idea that this is not the real thing must be clear - also since this two-plus-year-old boom brought another horribly ugly cultural degradation ("spitting out prejudice as normal").


I interpreted the op's argument to be that

> For simple tasks where we would alternatively hire only ordinary humans AIs have similar error rates.

Yes, if a task requires deep expertise or great care, the AI is a bad choice. But lots of tasks don't. And in those kinds of tasks, even ordinary humans are already too expensive to be economically viable


Sorry for the delay. If you are still there:

> But lots of tasks

Do you have good examples of tasks in which dubious verbal output could be an acceptable outcome?

By the way, I noticed:

> AI

Do not confuse LLMs with general AI. Notably, general AI was also implemented in systems where critical failures would be intolerable - i.e., made to be reliable, or part of a finally reliable process.


Yes, lots of low-importance tasks. E.g. assigning a provisional filename to an in-progress document

Checking documents for compliance with a corporate style guide


Sometimes I think projects like Cyc are like the 3n+1 problem for AI. It's so alluring.
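(For anyone unfamiliar, the 3n+1, or Collatz, problem iterates a trivially stated rule whose termination is conjectured but unproven. A minimal Python sketch, function name my own:)

```python
def collatz_steps(n: int) -> int:
    """Count 3n+1 iterations until reaching 1 (conjectured finite for all n >= 1)."""
    steps = 0
    while n != 1:
        # Halve if even; otherwise apply 3n + 1.
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
print(collatz_steps(6))  # 8
```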


So it looks like they are trying to win on speed rather than raw metric performance


Either that, or that’s just where they landed.


I'm very skeptical this technology already exists. Maybe if you vastly change the meaning of "sticker"


Nah, the only thing I believe will turn out inaccurate here is IPv6.


"PCB-with-onboard-battery-and-adhesive-backing-sticker"


The raw output value is generally irrelevant. What matters is its position in the distribution of outputs
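(One concrete reading of "position in the distribution" is a percentile rank over observed outputs; a minimal stdlib-only sketch under that assumption, names my own:)

```python
def percentile_rank(value: float, outputs: list[float]) -> float:
    """Fraction of observed outputs strictly below `value`."""
    return sum(1 for x in outputs if x < value) / len(outputs)

# The same raw value means different things in different distributions.
print(percentile_rank(0.9, [0.1, 0.5, 0.9, 1.2]))  # 0.5
print(percentile_rank(0.9, [0.1, 0.2, 0.3, 0.4]))  # 1.0
```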

