I think this is mostly just a problem of not having good reasons to sell AI products to consumers in the first place.
I recently saw some Ray Ban Meta glasses ads.
One of them had a guy ask the glasses to describe what was in front of his face, and then he remarked “wow that’s accurate” (there are people skateboarding). The guy wasn’t blind. His use of the glasses made little sense.
Another ad has a young man asking his glasses how to dress for fall and then blindly following the suggestions like they’ve never dressed themselves before. It was embarrassing to watch.
A third ad has someone ask their glasses how to decorate for a disco theme party, and then they implement the very mediocre suggestions.
None of these things required AI, it’s just kind of “there”, and companies are like “idk maybe people will use our AI to like… dress themselves? or something?”
No way! There are definitely legitimate use-cases for all of the features demoed in these ads, but Apple's marketing took a darker path.
Email to boss: could've been someone who had a genuine struggle with language using it to finally get their boss to notice the effort they put into their work.
Remembering a name: could've used it to get the name and left it at them being impressed rather than making it a lie.
Summarizing email: could've used it when in a hurry at work after someone sent an extremely long email.
Video memories: could've used it for two people to share a nostalgic moment together without making it a lie.
The fact that lying is a core element of the ads is what makes them so gross to me.
The guy says he's surprised she remembers him. She could just look straight at the camera, say "what can I say, I'm very intelligent", wink, and it wouldn't be gross.
My writing ability might not be good enough to make them flashy here on HN, but I am sure that each of the scenarios could be made very flashy or moving with a bit of imagination & Apple's still-excellent production quality.
One of my favourite ads is a very simple story that is made gut-wrenching by creating an emotional connection and some great production value: https://www.youtube.com/watch?v=a2lv_Xl1e4U
If I try to watch a technical review on a smartphone, they’ll still talk about camera megapixels, screen size/brightness, corners, etc. No one talks about scroll sensitivity and jankiness, [bad] position of buttons on screen and on the frame, sensor nuances, how it feels in a pocket, can it work as a display, notification sounds and sound level separation, etc.
All marketing, including third-party reviews, focuses on absolutely basic features for abstract people who do nothing and have no problems to solve apart from taking pics of themselves and other two-digit-IQ activities. I guess it’s only logical for this imagined person to walk into a skate park and ask their glasses what those guys are doing.
> One of them had a guy ask the glasses to describe what was in front of his face, and then he remarked “wow that’s accurate” (there are people skateboarding). The guy wasn’t blind. His use of the glasses made little sense.
On the other hand, this is a pretty common scenario: user is surprised when AI gets something right :) Not sure it’s the best showcase of a product, though …
> mostly just a problem of not having good reasons
I disagree; I think this is bad ethics, and bad marketing people, working for Apple. What other explanation is there? The same people crushed musical instruments, books, and human craft-work with a hydraulic press in a recent Apple advert.
Advertising certainly can show outrageous ways to behave, and it's "okay". Calling someone and simply shouting WAZZZAAAAAAP! into the cell phone, in the famous Budweiser advert during the Super Bowl, devolves into a crew of 3-5 people shouting AAAAAAAAAAAZZZAAA into their phones, oddly. That was cute.
However, this is about enabling lying. In one ADVERT it makes a manager believe an employee is more engaged than they truly are, by rewriting their unprofessional language with the new "Professional" button, reasonably leading to a future misallocation of resources by the manager toward the irresponsible, seemingly under-skilled, or simply lazy and unethical employee.
What's worse is the effect on the people who are NOT using Apple products to lie. The employees who did not lie about their grip on written language now have to compete with AI wielded by their ill-behaved coworkers. It stratifies society into Idiocracy.
This is a TERRIFYING series of advertisements chosen by Apple.
The signaling value of knowing how to writey words good is dead. More dead even than a coffin-nail, and assuredly that of a door (thanks, Dickens).
Signaling through speaking will become even more important. Get thy children to debate club, seminar-based classes (hope you're rich!), theater, and hell, I dunno, ToastMasters Junior or whatever.
But millions of people have been using Grammarly, et al. for this for years. Managers have actually paid to deploy tools like this to their employees. The ship you're talking about has already sailed.
Nonsense. Are people who use spell checkers also lying when the computer helps them not sound like an illiterate 5th grader? What about Word’s pre-AI grammar correction?
Further, it’s pretty clear from watching the commercial that the boss is not fooled by this. The humor of the commercial is supposed to derive from the absolute contrast between the well-established and known behavior of the employee and the content of their email. Humor doesn’t always land for everyone, to be sure, but this sounds like the same sort of handwringing over tech replacing human effort we’ve seen for years. Calculators would let people bad at math sneak their way into jobs where you need math; IDEs will let people bad at coding sneak their way into places where you need to code. Now it’s “AI grammar editing” that will let people bad at writing professionally sneak into places where they need to write professionally.
I think this is one of those things where the root cause is a social problem rather than a technical one, and trying to use technical solutions is somewhat helpful at best and masking huge issues at worst. If people can't write professionally, then the proper solution is better education, perhaps some education through onboarding in the job, and/or the boss being more flexible when reading. At least the common usage of spell checkers I see don't meaningfully change the tone of the text. The LLM-powered spell checker, akin to a different human writing the email for the employee, is unacceptable. It has perverse incentives and outcomes. Minor touch ups are one thing, but at some point it becomes deception.
I think there is only deception if you believe the original words written in the original tone was the intended message (or I suppose if the intent of the communication is to demonstrate your personal ability to write in a given style). If a coworker does something extremely stupid that harms our project and I sit down at my desk and write an angry email full of invective and spit and fury, save it in my drafts and go for a walk, then come back and rewrite the email to be constructive and professional, have I been deceptive? When I started the email I certainly intended to write the things I wrote in that first draft. But sending that would have been counter productive.
I might even still think my co-worker is a flaming moron who shouldn't be allowed out of the house unsupervised. But if I know that sending that in an email isn't going to solve anything and just make things worse, am I being deceptive if I remove that sentence from the email?
Or consider an alternative scenario. I attended a conference where a speaker made a reference to the Alamo. The speaker was older, and the reference would have been the same sort of "make a stand" reference any number of speakers would have made over and over in the 90's. But after their talk, I was talking with some younger attendees, folks born after the turn of the millennium. Among them, the speaker's metaphor was a hot topic. Specifically the "yikes" factor of referencing the Alamo in any way that in any way implied the defenders should have been looked up to. The speaker's intended message was lost in the specific details of their chosen metaphor because the words which they absolutely intended to speak did not hold the same meaning for the audience to which they were spoken. If some hypothetical AI speech editor existed where you could punch in the age range of your intended audience and it would edit out metaphors and references that would land wrong with the audience, is that being deceptive or is that good editing and "reading the room"?
If you start to write an angry email, pause, and genuinely think of valid logical arguments (knowing that the previous anger may still be biasing the reasoning), that isn't deceptive. If you're masking the anger and not trying to reason calmly, which implies that you're using motivated reasoning, that is deceptive. Similarly with an (AI) speech editor, it depends on whether you're just trying to score points easily or whether you have the genuine intention to connect your experiences to your audiences' and give a thought out speech. Unfortunately, the results might be similar, but we should all aim to encourage the latter and discourage the former where we can.
Yesterday I was walking in Greece and saw a sign I couldn't read so asked my Meta glasses and it gave me a translation and short explanation quickly, which was very helpful.
But generally, yes, the uses aren't there. In the Apple AI video, the worst is that the 'more professional' text actually reads much less professional.
If someone in a business situation sends you an AI generated email (and it's obviously AI it's so easy to tell) it makes it seem like they are unable to write English properly, giving the opposite impression than intended.
> Yesterday I was walking in Greece and saw a sign I couldn't read so asked my Meta glasses and it gave me a translation and short explanation quickly, which was very helpful.
Like other attempts at AI wearables like the Rabbit and Humane pins, I think that falls under "maybe useful but why wouldn't I just do the same thing with the phone that I'm carrying anyway".
- AR is very promising for both work and play, but is worthless shit if you have to hold a device to use it. Instantly 100x better if you've got hypothetical near-future non-terrible AR glasses on most of the day.
- Half of what people use the Web for is looking up trivia that would previously have been too high-cost to bother looking up (most of the other half is posting on social media about it). Glasses further lower the cost of looking up many categories of trivia, making even more trivial pieces of knowledge cheap enough to look up. On a revealed-preference basis, this seems to be something people really like and find irritating when denied: nobody used to feel an itch when they couldn't find an answer to some fleeting trivial question that entered their mind. Once exposed to the Web, and even more so once smartphones made the Web something you carried everywhere, they do, and it'll be the same for answering questions about stuff they're looking at, once people are used to glasses.
- The use case of capturing otherwise-missed (getting out the phone is too slow) fleeting once-ever moments is going to be very compelling for people.
Yes, Google Translate has been able to do this for a while now, but I still find the experience so janky and high-friction that I usually don't do it on a whim.
Everything-translating AR glasses would have been something I'd have really appreciated on a trip to Japan, for example.
> In the Apple AI video, the worst is that the 'more professional' text actually reads much less professional.
The original text is:
> Hey J,
> Been thinking, this project might need a bit of zhuhzing. But you're the big enchilada.
> Holler back
followed by the sender's first name in lower case surrounded by flexed bicep emoji.
The AI rewritten text keeps the "Hey J,". It changes the rest to
> Upon further consideration, I believe this project may require some refinement. However, you are the most capable individual to undertake this task.
> Please let me know your thoughts.
> Best regards,
followed by the sender's first name capitalized and with no emoji.
I don't see how the second could be considered less professional than the first.
From the other comments here it seems like some people may be seeing different ads when they follow the links, possibly depending on their location. The text above is what was in the ad I get to from the link in the article. Are you getting something different?
Honestly, I wouldn’t hesitate to steal and break any Meta Ray-Bans I see on the street in real life. I don’t want to see that creepy always-recording-without-permission kind of shit normalised.
Not trying to be snarky, I think I very much understand your reaction to this constantly monitored and surveilled world we're in. I think it's arguable that these types of glasses will make it worse too.
But people have been recording without permission on their phones for several years now - what has been your reaction there?
I'm not a very confrontational person, so I tend to grumble and say nothing.
This is what always happens when you have a technology and then desperately try to find problems to solve with it. Rather than starting with a problem and then applying the most appropriate technology to solve it. The exact same thing happened with Blockchain, but "AI" has about 100x more hype behind it.
I think people today don't really understand how to use AI or how it can benefit their lives. The commercial the OP mentions, and the ad agency behind it, sound lame, as they aren't helping.
Personally, I've been enjoying wearing my Ray-Bans for the past year and have used them to do things no other glasses can, like...
- Translate: I was in Canada recently, and a sign about Jasper, the town I was in, was in French. I asked Meta to translate it; it took a pic and audibly translated it for me.
- I was in Harpers Ferry, WV, on an overlook, looking down into the town, which has a huge church whose name I didn't know, so I asked Meta what church that was over there. It took a pic and told me.
- I was in line at Hersheypark up on a platform, and my friend wondered how many people were in line, so I asked Meta; it took a pic and gave me an estimate.
As with the Apple ads, I remain unconvinced these are benefits. Translation, yes, but your last two examples feel oddly isolating and dehumanizing. Like, estimating the number of people in a line is a trivial task, and something fun to discuss and argue about with your friends! If your friend had the glasses too, do you think it would have come up in conversation at all, or would they just ask theirs?
I dunno, I'm picturing a future with everyone standing around absorbed in their own devices (not too different from today) but where we struggle to ask each other normal questions due to offloading our collective intelligence and social skills to the cloud. Destined to become Daphne totally reliant on glasses to interact with the world around us.
I'm probably overly dramatic here. You're out having fun at a park with friends! I'm just trying to imagine how future generations will interact with this tech, particularly the youth who won't know a world without it.
It enhanced my conversation: when neither person knows something you're looking at or discussing, you can now find the answer quickly via the glasses on your face.
We all use Google to find knowledge and answers, but most of us aren't pulling out our phones during conversations because that's rude. Yet it's not rude to get the answer from the glasses you're wearing on your face while still being present in the conversation you're having; in my experience, people think, wow, that's cool.
Yeah, I don't think you're still present in the conversation while taking a photo and listening to or reading an explanation of what you're looking at. You might think it's less interruptive than pulling out a phone, but your attention is on the smart glasses. People think they can multitask, too. I can see when someone just zones out during a conversation, and they don't even realize it. I'm gonna be annoyed when I see you looking at the shit coming out of your glasses.
I appreciate you might find value in these things, but each example I read just gets less and less compelling. Translation does seem mildly useful; the other things feel like information I can live with or without.
E.g. these aren't burning problems in my life. Even the translation, I'm happy to just be in the dark, or take a phone pic and translate if I really really need to.
Having everpresent AI tech on my face doesn't seem worth it if these are the kinds of problems I get solved.
Well if you use sunglasses and pull out your phone to take pics, videos and see the time ... using the glasses to do all those things (even without having your phone on you) makes a lot more sense (quicker and easier).
Quicker & easier isn't the be-all and end-all. Having distracting tech on my face is likely a net drag on my happiness and focus, given how distracting smartphones already are. It would have to be _so_ good that I'm willing to live with that extreme downside.
My problem with all three of these features (besides the terrible privacy impact that other commenters have already addressed) is: As the end user, how do you know and verify the results are actually correct? How do you know the road sign was translated correctly? How do you know that the name of the church was actually correct? How do you know that the estimate given was even close?
Your examples were all pretty low-stakes, so I guess "it doesn't matter if it's correct" is fine for them. But what if you actually ended up relying on an answer to be correct and it wasn't? Would you independently verify the correctness, or just blindly operate on bad info?
How do you know any information you get from a source that you can’t personally and independently verify is correct? How do we know a tourist guidebook is correct? How do we know our language dictionary is correct? You don’t. People will need to learn to what degree they can actually trust a given source of information and decide how much risk is worth taking. We do this all the time today, and every time new technology comes along there’s always an adjustment period as we sort out the boundaries of trust (see also people driving off roads and into ponds when GPS was new).

The danger to me is not that it can be wrong; it’s that people don’t yet understand it can be wrong. That’s not helped by AI boosters trying to use AI in inappropriate ways and situations, but it’s also not helped by AI doomers who treat every failing of the new technology as proof it’s worthless and will never be useful. Both extremes are speed-running a “boy who cried wolf” scenario for losing public trust in what they have to say.
We all seem to trust Google, as if the first search result link is the truth (I guess), yet now they are showing their AI results... are we trusting those the same way we've always trusted the first search result links?
Personally, I think those who are concerned about whether it's correct or not are not the ones to jump all in on new technology, yet they will once enough of their friends, family, and co-workers have done so. That's where they find the level of trust they seek.
If we have trusted Google all these years, then the AI that becomes the most trusted becomes the next Google. OpenAI needs to create their own smart devices (phone and/or glasses), and with that they could become the next Google. The information GPT provides me seems correct and matches Google's AI. Siri, even with 18.1, is still terrible... I just asked it what day November 9th is, and it forwarded me to a Google search (lame).
Are those glasses, with the camera always on, sticking it into other people's faces, just like Google Glass did before?
That's an idiotic toy to use around other people, and I am keeping things polite here. Unless you ask each of them if it's OK that they will be recorded and evaluated by Meta (and who knows who else).
Your convenience stops right where the rights of me and my family start. Bring it next to any small kids and expect some well-deserved non-nice feedback coming your way.
The privacy issue, in time, I'm sure Meta and/or Apple will figure out (I have some ideas to fix that)... they need to, because that's half of people's reaction to them, while the other half don't care.
I couldn't care less about other people I do not know. I am using the glasses to capture my life (not theirs), and for:
- Sunglasses
- To listen to music and take phone calls
- To take pics and videos of what Im experiencing in my life not others
- To enhance my conversations, since pulling out my phone to get an answer in a social conversation is rude... using my glasses, no one has said anything; rather, they think it's cool
> The privacy issue, in time, I'm sure Meta and/or Apple will figure out (I have some ideas to fix that)... they need to, because that's half of people's reaction to them, while the other half don't care.
They don't need to figure it out, they just have to wait.
People holding their camera phones up everywhere and (maybe! You can't know for sure!) recording stuff used to be off-putting in about the same way, but we got over it within a few years. Ditto various other socially-shitty things, like talking on a phone in public or using nearly-invisible earbuds that make it hard for people to tell where your attention is or even notice that it might be somewhere else.
I kinda wish we hadn't gotten over those things, but we did, and having watched it happen a few times I'm sure we will again for this.
The problem with "AI" is that when it is well integrated, and useful, it is often invisible to the user. Many people, for example, benefit from automatic (and instant) transcriptions in live Zoom meetings. This is pretty much complete magic -- and yet, boring. You don't notice it. You focus on the result, which you need, rather than the fact that it's "AI."
This is my current belief as well. I've designed many features with AI models, and it wasn't until we recently branded those features with an AI-personified name that users (and Wall Street) noticed.
Sounds like a repeat of blockchain hype. Biggest threat here is replacing the cultural zeitgeist of "let me google that for you" with "let me GPT that for you".
I feel the ads I've seen are precisely what you say. I will say this, though: I have a kid on the spectrum, and if I could teach her how to integrate her thoughts with the glasses, it may help her in social settings. But for others who simply already know, I, like you, don't see the value.