To me this reads like marketing fluff. A lot of the purported use cases amount to "better Siri/Alexa". It also fails to convincingly answer the most obvious question:
> The primary argument against all these AI gadgets so far has been that the smartphone exists. Why, you might ask, do I need special hardware to access all this stuff?
The proposed answer is that smartphones are too hard to use (???)
> To do almost anything on your phone, you have to take the device out of your pocket, look at it, unlock it, open an app, wait for the app to load, tap between one and 40,000 times, switch to another app, and repeat over and over again.
And an allusion to app stores being bad:
> And they’re not going to get better, not as long as the app store business model stays the way it is.
The part that's left unsaid is that none of these AI devices promise some new, open model of computing. Instead, it's a play for the same degree of lock-in as app stores, or more. The Humane Ai Pin, for example, requires an expensive ($24/mo) subscription just to use the hardware. The lock-in has simply moved to a new playing field where the incumbents don't yet have a dominant position.
Yeah. I think the first half of the article is asking the right questions and highlighting some real problems, but part way through went in a wonky direction.
Those complexities and problems are real, but the obvious value of a context-aware AI is reducing friction by hiding those complexities from the user, so they can state a goal and have the right tools to achieve it selected and/or used for them.
Forcing someone to use a half-dozen devices instead of a half-dozen apps is doing the opposite: It's increasing complexity and friction for the user, by adding extra steps between "this is my goal" and "goal accomplished".
> Smartphones are great! None of these devices will kill or replace your phone, and anyone who says otherwise is lying to you. But after so many years of using our phones, we’ve forgotten how much friction they actually contain. To do almost anything on your phone, you have to take the device out of your pocket, look at it, unlock it, open an app, wait for the app to load, tap between one and 40,000 times, switch to another app, and repeat over and over again.
I find the less I use my phone and technology, the higher quality life I have. Life seems quite nice with less data and no social media :)
Yeah, friction makes it easier to sort out what's important vs. what's not. Probably the most life-friendly device I own is my smartwatch: it's just a fancy beeper for skimming through (and not replying to) high-priority messages that make it through my strict notification controls.
Companies need to learn there's a lot of value in restraint and not acting like they're the most important thing in the world.
Given that current LLMs still struggle with confidently giving wrong information, all of these products are seriously misguided. (It doesn't help that "AI" has become such a blanket term in the last year or so; does "AI" mean "LLM", or more traditional purpose-built AI?)
It isn't clear to me that, given the nature of LLMs, we can actually solve this problem. An LLM isn't thinking critically and never will be without becoming a different technology. (Someone correct me if I'm wrong, but this seems like a fundamental problem with this type of tech.)
There have been very, very few use cases of what we are now calling "AI" that actually seem to provide any real benefit. The only one I find myself using on a daily basis is searching through and summarizing my personal notes, something the nature of an LLM is very well suited for.
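For what it's worth, that note-summarization workflow is easy to approximate. Here's a minimal sketch, assuming the OpenAI Python SDK, a folder of Markdown notes, and an `OPENAI_API_KEY` in the environment; the model name, prompt, and `~/notes` path are illustrative placeholders, not the commenter's actual setup.

```python
# Minimal sketch of LLM-based note summarization (illustrative, not a specific product's workflow).
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_notes(notes_dir: str) -> str:
    # Concatenate plain-text notes; a real setup would chunk or embed a large collection.
    notes = "\n\n".join(p.read_text() for p in Path(notes_dir).expanduser().glob("*.md"))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Summarize the user's notes into a short digest."},
            {"role": "user", "content": notes[:12000]},  # crude truncation to stay within context limits
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_notes("~/notes"))
```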
I agree. LLMs almost solve a traditional NLP task, namely text generation. Calling that intelligence gives the wrong idea about the tech's applications. There's no reasoning in the generated text; what is amazing is that the output is linguistically correct.
Prolog, ontologies, computer vision, deep learning, classifiers, etc. have all been called AI despite being very different things with their own inherent limitations. At this point, "AI" is just a label thrown at the newest cool tech.
In a world where AirPods are ubiquitous, what value does such a niche product provide? Apple could easily add the same feature to something a huge number of people already own.
The things I really wish I could use AI to do are the things I have the least trust in AI doing correctly. Things that are too easy to screw up and have the worst consequences. It's funny when the AI texts your mother that you "stopped to let a group of fucks cross the road". It's not funny if it deletes the wrong files or misconfigures a remote device's network settings.
Just like tablets became a secondary device for computing or recreation for some people, another type of device may yet find its way into our lives. I am optimistic about AR/MR glasses and AI pins as potential options; perhaps a combination of the two could also be promising.