Google launched a new AI, and has already admitted at least one demo wasn't real (theverge.com)
76 points by ronron4693 on Dec 8, 2023 | hide | past | favorite | 32 comments



Google is a laughing stock at this point.


Yes ... but so is AI.

Generative AI is an attempt to mimic cognitive ability using statistics ... and it always fails at some point. Its cognitive skills are as fake as Google's demo.

AI can drive a car as long as the roadway environment matches certain expected statistical parameters.

But mismatches are inevitable and unavoidable. In the real world, stuff happens. Emergency vehicles, pedestrians, cyclists, children, illegal U-turns, etc. are all part of the real world environment that can't be predicted.

AI's naive response to unexpected conditions is to slam on the brakes and stop --- which in and of itself creates an unexpected road hazard.

There is too much "A" and not enough "I".


Generative AI isn't perfect. It's not really even very intelligent as you say.

But I'm still amazed how much you can do with this very imperfect, unintelligent "AI".

I think we're still at the very beginning of seeing all the possible applications. One stifling factor is the still enormous computation necessary, but that is no doubt going to change.


But I'm still amazed how much you can do with this very imperfect, unintelligent "AI".

I'm amazed at how little you can actually do with it.

It's entertaining --- but it really only "works" when you are willing to manually verify the results or just accept the fact that it may be spewing BS. And as you point out, the imperfect results come at a high cost which makes the economics of it questionable for many applications.

Drinking bleach doesn't cure COVID --- but AI may suggest it because it found it on the internet.


Verification of a result is often much easier than coming up with the result. Even though GPT-4 sometimes gives me BS, I am much faster at some tasks with it.


"Often" implies statistics which are not available.

One counterexample is summary verification, where you can't do better than summarizing yourself.


There are definitely things where ChatGPT doesn't help much or is worse than doing it yourself. Part of using it effectively is to get a sense of when it is effective.


The real question --- Do the benefits really justify the cost?

We don't know the answer to this yet.


> but it really only "works" when you are willing to manually verify the results or just accept the fact that it may be spewing BS

Verifying a solution is often much easier than coming up with a solution. GPT-4 has produced a lot of "hallucinations" when I asked about my programming problems, but they were easy to discount. OTOH it did help me a lot with coming up (or inspiring) solutions which would take me a long time to find otherwise.

Same thing for Dall-E / Midjourney. I'm unable to produce any usable graphics on my own, with these tools I'm able to produce interesting illustrations for my needs, even if I need to discount many results. It literally provides me with a capability I didn't have before. Notice the similar pattern - it's much easier to critique a solution (generated image) than to come up with it.

I believe there are so many applications where this holds true. You can create domain-specific chatbots helping out knowledge workers in almost any discipline.


Regarding autonomous cars, it'll only make sense once every car is in/on the network and roadways are hermetically sealed. So never.

I really hope governments get their act together and start developing/planning/testing improved public transportation. Trains are already nearly fully autonomous, and just imagine the countless benefits of having cities/towns/countries designed around something other than the wasteful privately owned car.


In other words, the autonomous transportation that is most practical and best suited for the real world is called a train.


We've had autonomous trains here in London since 1987 and trains since about 1850 but still there is the usual traffic. I think self driving cars may add something new.


> I think self driving cars may add something new.

I'll bite -- what new thing would it add?


Potential upsides include fewer road deaths if self driving gets good, and cheap robotaxis from train stations. Also maybe robotic goods delivery overnight rather than the streets being clogged with trucks and vans during the workday.


Or ditch cars and do the last mile by e-bikes and scooters.


This seems like complete misdirection from the fact that OpenAI has a better product than Google.


Their spam filter literally marks email sent by Google corporate, email the user asked for and subscribed to, as phishing.

And they can’t understand that I already said “skip” to their offer of TV shows on YouTube. Many times.

And as someone else said, I saw some of the greatest minds of my generation put to work doing ad revenue optimization.

And yet, the ads have never been anywhere close to relevant to me, or my interests, or my purchases or plans or needs at all.

Yes, laughingstock but also… it’s sad.


> And yet, the ads have never been anywhere close to relevant to me, or my interests, or my purchases or plans or needs at all.

They’re not trying to do that. They are working to maximize revenue.

I think it’s a myth that their goal is “personalized ads” as that wouldn’t make them money. They make much more money off thousands of impressions with 4% conversion than dozens with 90% conversion.

These dark patterns don’t show lack of knowledge. They just demonstrate that they don’t care and know they will eventually grind you down if they keep asking “do you want fries with that” even though they know you hate fries and are on a diet.


The point of advertising is to convince you to buy things regardless of whether you actually need them. Why try to get people to only buy things they already think they need or want? Personalized ads have always been the Trojan horse for data mining.


Personalized ads are the problem ... not the solution.


Its stock is still significantly up since the Gemini announcement -- and still not too far from its all-time high -- so the market would definitely disagree.

If you believe in wisdom of the crowds and all...


Several parts got my spidey sense tingling. They admitted the duck video was entirely fake - the model was actually fed stills of ducks, then they edited in the video.

The science YouTube guy had really strange prompts. It seemed to me like he had tried a bunch of things each time and then they spliced together the ones which worked. Which is …ok? I guess?

The one I watched this morning was the dude with the heavy Italian accent demonstrating multimodal AI by playing the AI sound clips of a woman speaking VERY CLEARLY and precisely and definitely without any sort of accent. So for that one I was like “ok I get what’s happening here” but it seemed super obvious.



If this is true, it's really, really freakin' sad. Why would they do this?


They did it for the same reason every megacorp does anything: money. Their stock price jumped over 5% when they showed this fake demo. It has since come back down a bit, but it is still up about 3.5%.


"Don't be evil" was really a warning of intent.

   The Devil went down to Google
   He was lookin' for a soul to steal
   Google was in a bind 'cause they were way behind 
   And willin' to make a deal


Firebase on the Mountain Run, boys, run! The Google’s in the house of the rising sun


Now I'm curious which part wasn't real. My best guess would be on the video interpretation


It's described in the article -- it wasn't a real time conversation, they simply took still images from videos and sent them to Gemini along with a text prompt, and then the resulting responses were trimmed for effect.


I believe they’ll get there eventually, but why demo something that so clearly doesn’t exist yet? I feel this will only serve to set false expectations on features & timelines. I just don’t see the upside. Perverse internal incentives? Straight up temp stock manipulation?


The Verge loves drama. These gotcha-style posts bring them a lot of traffic.

This doesn't need to be malicious. Google has demonstrated similar things elsewhere.

Someone wanted to do a cool video based on the stuff they are working on. The blog posts explain everything in detail if you want to look behind the curtains.


> Google launched a new AI, and has already admitted at least one demo wasn't real

They learned from Android game developers how to make a demo. /s

