Using Alexa as a timer is about 99% of what I do with mine. Occasionally I'll ask it the weather. The push for engagement is what is killing it for me. Just about every time I ask it for something, it follows up with suggestions for other commands. Most of my commands now end with me having to tell Alexa to shut up. I've toggled the flag in the settings to reduce this behavior, but I guess the setting is all but ignored. Very frustrating when there is a lot of potential.
In addition, and not considering privacy concerns, I've found relegating them to kids' entertainment to be a pretty good use case. Conversations go in all sorts of random directions; it misunderstands most of the intent, which can be very funny at times, and answers incorrectly, but the kids don't mind a bit and simply keep on going.
Telegram chat bots are pretty useful, because you're supposed to give them commands, not natural language. Alexa wouldn't need sophisticated software if you interacted with it through a series of limited commands instead of natural language.
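A toy sketch of the difference (made-up command names, nothing to do with Telegram's actual Bot API): with a fixed command table there is nothing to "understand", and anything that doesn't match is simply rejected.

    # Minimal command-style interface: a fixed table of commands,
    # no natural-language understanding needed. Handlers are made up.
    def set_timer(arg):
        return "Timer set for %s." % arg

    def weather(arg):
        return "(would fetch the forecast for %s here)" % arg

    COMMANDS = {"/timer": set_timer, "/weather": weather}

    def handle(message):
        name, _, arg = message.partition(" ")
        handler = COMMANDS.get(name)
        return handler(arg) if handler else "Unknown command. Try /timer or /weather."

    print(handle("/timer 10 minutes"))   # Timer set for 10 minutes.
    print(handle("play my jams"))        # Unknown command. Try /timer or /weather.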
This is why I stopped using my Google home devices. No matter what I did, any time I tried to use it, I had to endure an extra minute of dialog telling me how to use the thing. Every. Single. Time. No exceptions, no matter what command or what settings, I got these "hints" every time.
Sure, it makes sense to help your customers understand the abilities of the device, but trying to annoy them into using it more is just never going to work.
Every single time? Mine don't do that. I get the occasional suggestion, that's it. Maybe the suggestions were more frequent when my Google home devices were new and I needed to be "trained." How long did you use yours before giving up?
Asking for specific albums, songs, and playlists seems to be somewhere on the edge of Alexa's (and Siri's) capabilities and can easily get into the frustration zone.
I understand your point, but I have heard this discussed many times in person and there's seemingly a big fraction of people who just don't care what Alexa/etc listen to...the nothing to hide motif.
I haven't used Alexa in some time, but the current state of Siri leads me to believe that after a decade of heavy investment, the core technology just isn't mature enough for voice assistants to provide much value. E.g., Siri is great with a simple command like "hey Siri, play something I like," but struggles mightily to do anything beyond...
I find Alexa/Echoes great for a highly limited set of use cases. Like, I use it all the time to set timers in my kitchen (it's really nice when you've got like four timers going and when they go off they say their name).
Adding things to grocery lists is great.
We use it to set the light level of our "smart" bedroom light.
Checking weather before you go out when your hands are full of other stuff is nice. A few other things.
The grocery list is my no. 1 use of Google Assistant. The only thing that works every time.
Sometimes I ask general questions while driving and get a good answer, the kind of question that comes up while talking to another passenger in the car. Single-answer questions work well, e.g. "what is the diameter of the moon?" (don't judge, we talk about weird stuff during long drives).
Spot on. I can vouch for all those use cases plus I enjoy my Amazon Music sub on it. Everything else they've added often makes the experience for those very core things worse.
I don't know about Spotify, but you can set things up so that Apple Music is its default music service, i.e. you can pretty much play music in the same manner as if you had an Amazon subscription. It's still pretty unreliable, but you can do it.
I agree with the comments about the kitchen timers. Being able to set timers without having to wash my hands or put down what I'm holding is so nice. But beyond that and playing music, my Siri sphere gets little use.
I've used Alexa to order our regular dog food a handful of times, and it ended up being easier to just get my phone out, find it, and buy it there. If there's any background noise, Alexa struggles - TV, music, kids playing, etc. The interactions are super slow.
Voice command stuff won’t be broadly useful to people until they can damn near read your mind. I can translate my thoughts into actions far faster with buttons and a device than I can explaining it to a voice assistant. That’s the real technical challenge here.
Alexa doesn’t have to be better than Siri. It has to be better than me using my phone or a computer.
Naively, a voice assistant with the comprehension of old Wolfram Alpha could be cool:
Wolfram used to understand questions like “what’s the mass of the Earth divided by the mass of the solar system?”
I think a voice assistant that can take such natural language questions and answer would be fun — asking Alexa is less disruptive to a conversation than taking out a phone; but somehow, Wolfram has regressed and voice assistants never reached that level.
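For what it's worth, that kind of one-shot question is still exposed through Wolfram|Alpha's Short Answers API; a rough sketch below (the app ID is a placeholder, and how well it parses any given phrasing varies).

    # Asking Wolfram|Alpha a natural-language question via its
    # Short Answers API. APPID is a placeholder.
    import urllib.parse, urllib.request

    APPID = "YOUR-APP-ID"  # placeholder
    question = "mass of the Earth divided by the mass of the solar system"
    url = ("https://api.wolframalpha.com/v1/result?"
           + urllib.parse.urlencode({"appid": APPID, "i": question}))
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode())  # a short plain-text answer, if the query parses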
Yes, not being able to filter out background noise is the one thing that makes voice assistants useless.
Whenever I was on the go and tried to get Google Assistant to play me a tune on Spotify, it got it right perhaps 1 time in 20. Worse, around 15 times in 20, it would play me something completely unrelated that I had no interest in listening to. In the end, I gave up on GA; now I just stop for a minute, pull the phone out of my pocket, and manually pick what I want.
Alexa is the best voice interface for filtering out background noise by a considerable margin, in my experience. It's very flexible in what it accepts. Google is very rigid. Try asking Google Assistant to shut the fuck up. Siri isn't bad but it's too device focused.
Alexa devices have had some of the most iteration on improving the hardware and some impressive back-end systems. The integration with Audible, Amazon Music, and many other streaming systems is impressive. Part of that is because the architecture very much keeps the "brains" hosted in the cloud, allowing the AWS side and other services to be improved independently of the device.
Google is very much a "search google" and interact with Google's tools... which don't seem to be as well integrated. The "I want to add X to my calendar" is fine, but things like setting up Sirius XM to stream to Google or the Hue lighting setup was awkward (at the time) and didn't work as well when I was comparing the systems.
At one time (it no longer appears to be the case, which is disappointing), Alexa appeared to have a knowledge engine behind it. You could ask things like "what color is a light red flower" and get back "a pink flower is pink". Incidentally, asking what color a blue bird was came back with "a blue bird is blue, brown, and red", a hint that the knowledge engine worked from some system behind it rather than giving tautological responses to the other silly questions I asked. Now that comes back with "According to an Alexa Answers contributor..." Asking "Who invented the C programming language" used to respond with "Dennis Ritchie", and "what did Dennis Ritchie invent" returned "Dennis Ritchie invented the C programming language."
This Alexa knowledge engine appears to have reverted to "search Wikipedia."
Siri doesn't appear to be trying to be a general assistant and is instead focusing on HomeKit automation and special "this is what you can do with it."
For general queries, Siri used to partner with Wolfram - but this doesn't appear to be the case anymore. It was a meh integration when it was in place.
These differences in how the devices behave come from some fairly fundamental architectural decisions.
Siri is fairly self-supporting, even if limited. Alexa has its brains in AWS, and as things become less profitable, Alexa becomes less useful for those things. Google has (to me) always been an also-ran because I'm not invested in the Google ecosystem, and getting responses that just read out the first page of a Google search is boring.
I'm surprised by that. Pretty much the only thing I use Siri for is voice control of music in the car, and it nails it most times. I'd say there is a transcription error maybe 10% of the time, or less.
Reading more comments, I think in the end the problem is not so much in background noise filtering or voice recognition, but in the lack of context. Primary access to contextual information may explain why Siri gives better results.
E.g. for "play X on Spotify", GA first interprets X independently, and then asks Spotify to play it. On the other hand, when you ask Siri to "play X" (presumably on Apple Music), Siri can take into account your listening profile when interpreting X.
This may in fact be at the bottom of Apple trying to restrict integration of Spotify with Siri: The experience will be worse because of the contextual gap, and it will make Siri appear dumb.
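A toy illustration of that contextual gap (invented data and function names, not how either assistant actually resolves queries): resolving an ambiguous title against the user's listening history gives a very different answer than resolving it in isolation.

    # Context-free vs. context-aware interpretation of "play blue".
    # Catalog, history, and function names are all invented.
    LISTENING_HISTORY = ["Blue Train", "Giant Steps", "A Love Supreme"]
    CATALOG = ["Blue (Da Ba Dee)", "Blue Monday", "Blue Train"]

    def interpret_without_context(query):
        # First catalog hit wins, regardless of who is asking.
        return next(t for t in CATALOG if query.lower() in t.lower())

    def interpret_with_context(query, history):
        # Prefer a hit the user has actually listened to.
        matches = [t for t in CATALOG if query.lower() in t.lower()]
        preferred = [t for t in matches if t in history]
        return (preferred or matches)[0]

    print(interpret_without_context("blue"))                  # Blue (Da Ba Dee)
    print(interpret_with_context("blue", LISTENING_HISTORY))  # Blue Train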
For what it's worth, I find the Google Assistant better than Siri, but not much better. The functionality of the product hasn't improved much since it was delivered, and this includes the "smart" speaker hardware that they regularly refresh.
I bought a GPS with Alexa because it was over $50 cheaper than the same model without the Alexa integration. Big mistake. The device doesn't function at all without enabling Alexa, so that thwarted my A plan. Instead, Alexa is always listening and always trying to inject itself into conversations. My B plan was to never say the A word (Alexa!), but apparently there is a long list of other words that trigger it. I can always tell when I'm someplace without cell service because Alexa is quiet. North Dakota, Wyoming, and Montana are all Alexa-free zones. My C plan is to train everyone in the car to yell "Go away Alexa!!!" as loud as they can whenever it talks, and that does seem to make it abort whatever search it is running at the moment and keep it quiet for an hour.
The next time I buy a GPS I’ll spend the extra $50 to not have Alexa.
It's paired to my mobile phone, which is also the source of weather and gas prices and a secondary source of traffic data, so if I disable that connection to kill Alexa I also lose those other three.
Amazon missed the boat on their voice assistant. They could have absolutely dominated. They were first to market with a general in-home appliance with home automation integration.
They did a good job with shopping and music (and timers and junk) but even today Alexa is still far from a full personal assistant.
Amazon should've launched an email service: forward or CC alexa@amazon.com and have your assistant keep track of it. Flights, meetings, reminders, even just responding to emails. The voice-to-text transcription is excellent but so underutilized.
They introduced device to device calling, but as far as I can tell true VoIP never showed up -- why? This would be a killer feature.
Podcast and ebook support was always such a mess and clearly an afterthought.
Amazon's devices were clearly always sold at cost or at a loss. Maybe there's a universe where this is anti-competitive, but there was nothing substantial ever there.
They hired like crazy, but for what? Why doesn't my car have Alexa built-in? Why didn't they obliterate Sonos?
I wish I could use Alexa as a personal assistant, but it never really materialized.
My best guess is that leadership and product vision really, brutally fumbled here. I don't think they knew what they had, and were unprepared to harness its true potential.
>I wish I could use Alexa as a personal assistant, but it never really materialized.
And neither did Siri. Neither did Google Home. I guess neither did Nuance if you want to go further back.
It's like autonomous driving. A partial solution isn't exactly useless. But it's a long way from opening up the possibilities of, in this case, a computer that could do the tasks that even a very subpar human secretary could.
I don't think so. I've worked a lot in NLP and spoken with one of the founders of Alexa. IMO there is a lack of core science about language, which leads to crazy scaling difficulties in voice applications (e.g. Alexa had over 15,000 engineers working on it...). Until someone pushes that core science forward, the same issues will remain.
About the only thing we use it for is shopping list management for the grocery/costco/home improvement etc and reminders, timers, and alarms. We routinely run into simple usage woes.
See a yellow ring. “Alexa, give me my messages.” “You have no messages. <pause> You have one notification; would you like me to read it to you?” “Oh FFS, yes.”
Google Assistant has this with Alarms and Timers. They are different systems with some different constraints but no user on the planet would actually consider them to be different concepts.
An alarm is something you set for a specific time of day, that you might want to go off every day (eg, to wake you up for work). It is important that it have options for snoozing for a relatively short period of time.
A timer is something you set for a specific amount of time, that you might want several of at once (eg, when you're cooking), and you might want to repeat the same duration a few times (eg, as a focus/pomodoro timer), but you're unlikely to want repeated daily. It's important that it be very clear that the timer went off, but you rarely want to snooze a timer.
They are very similar concepts, but they have fairly clear and distinct use cases, and it makes sense for the software to treat them as two different things.
I get what you're saying, but Google still fucked it up. If you tell Google "set a timer for 2 minutes" it counts down 120 seconds. If you tell Google "set an alarm for 2 minutes" it creates a clock alarm at a minute boundary between 1 and 3 minutes ahead, and is unlikely to be what you want.
The point being, if you set an alarm/timer for "N minutes" it should be a countdown. If you set an alarm/timer for "3pm" it should be a clock setting. It shouldn't matter which word you use.
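Something like this toy parser is all that rule amounts to (my own sketch, obviously not Google's code): classify by the argument, not by whether the user said "alarm" or "timer".

    # "for N minutes" -> countdown; "for 3pm" -> clock alarm,
    # regardless of which word the user said.
    import re

    UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600}

    def classify(request):
        arg = request.lower().split(" for ", 1)[-1]
        m = re.match(r"(\d+)\s*(second|minute|hour)s?\b", arg)
        if m:
            return "countdown: %d seconds" % (int(m.group(1)) * UNIT_SECONDS[m.group(2)])
        if re.match(r"\d{1,2}(:\d{2})?\s*(am|pm)?$", arg):
            return "clock alarm at " + arg
        return "unrecognized"

    print(classify("set an alarm for 2 minutes"))  # countdown: 120 seconds
    print(classify("set a timer for 3pm"))         # clock alarm at 3pm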
Alexa used to directly sum times. When you ask for a timer for "1 minute 30 seconds", that's exactly the right thing.
If you are deciding or stuttering and ask for a timer for "1 minute 1 minute", it used to give you a timer for 2 minutes (probably not what you wanted). It now gives you a timer for 1 minute.
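That old summing behavior is trivial to reproduce (toy sketch, not Alexa's actual parser):

    # Sum every duration mentioned in the phrase:
    # "1 minute 30 seconds" -> 90, "1 minute 1 minute" -> 120.
    import re

    UNITS = {"second": 1, "minute": 60, "hour": 3600}

    def total_seconds(phrase):
        return sum(int(n) * UNITS[u]
                   for n, u in re.findall(r"(\d+)\s*(second|minute|hour)s?", phrase.lower()))

    print(total_seconds("1 minute 30 seconds"))  # 90
    print(total_seconds("1 minute 1 minute"))    # 120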
I think it can maybe make sense for the software to treat them as different.
But "you don't have any alarms" when I ask about my alarms but I had actually set a timer is an unacceptable response, in my opinion. Both kinds of things encode "make a noise a time in the future I specified." Differences beyond that are minor.
For a long time I've been thinking a key to these things is maintaining enough context to identify pronouns. For example, in the above question the key word is "that" or "that alarm". Being able to get a reference to the correct alarm should make it easy to answer; not being able to do so certainly makes it impossible to answer.
We have at least 8 of the things around the house and use them all the time. They're vastly superior to Google/Apple offerings for us.
It would be superduper if AMZN would work with customers to ensure Alexa solves problems that actually exist in ways that are genuinely satisfying, instead of trying to be clever or whatever it is they're doing.
The "By the way" intrusion, for example, has nearly evicted the whole concept.
The "by the way" thing is awful. I would legit pay a $50 premium for a voice assistant that only tried to do the few things that voice assistants are good at and never bothered me with Amazon's attempts to get me to use it for useless crap.
If Amazon is going to kill Alexa, they should be forced to unlock their (Linux?) firmware to avoid millions of devices going into landfills. The open-source community will come up with a replacement that can use their stellar hardware.
The really odd thing for me with the idea of it is that I've got multiple devices. I've got an Alexa device in six different rooms, which I got back in (looking up the order) 2016(!!) for $260: about $40 per device, each running for over half a decade.
For me, the price to replace these all would be a few thousand dollars.
For me, if Amazon were to discontinue Alexa (not a prediction), putting HomePods in the rooms that lack them (yes, I have Alexa and Siri sitting next to each other) would cost less than a single Mycroft.
Echo: "Okay. By the way, did you know I can do some other random thing wholly unrelated to the task you gave me, the description of which interrupts something that matters and now you've lost it because you are screaming at me to shut up, please god shut up, please never ever make suggestions?"
I have very few use cases for a voice assistant, but like them quite a lot. Setting timers and controlling the lights without having to touch anything and while moving around the house is fantastic.
Having the voice assistant actually talk back and say anything at all more than the absolutely minimum possible response is unwelcome. Getting a 'by the way' response makes me want to chuck the hardware into the pool. That whole ecosystem is on my list of things to evict and replace with a self-hosted solution, but I've had too-limited time lately to go down that rabbit hole.
Voice assistants will truly thrive when users have full control and all the data is local. We’ll never see this coming from a large player like Amazon.
With my Google Home speaker, I'll tell it to turn off the lights (which it gets right maybe 95% of the time in the first place…). But it will then say "Okay, turning off 5 lights. By the way, you can ask me how long your morning commute will take!" which is just annoying when you're trying to get to bed.
It advertises other functionality. Typically functionality that allows you to spend money with Amazon.
Siri has started telling me the company that provides its weather data every ~10th time I ask for the weather, and I feel like even that crosses a line. I do hope Apple have a metric for how many customers swear at HomePods and have someone trying to minimise that.
Sometimes, after answering your actual query, Alexa continues to blather on about something useless, starting with "By the way..." I wish I knew how to stop it, but telling her "don't say by the way anymore" returns blank stares. Maddening.
Like... Gmail? Netflix? Chrome? Android? iOS? TVs? Radio? Newspapers? Just about every line of communication in your house is some big company or another.
> No ... not like these. I can control any communication through these.
How do you control what other people say to you through emails they send you to your Gmail account?
I get the point you tried to make: Alexa is a listening device customers intentionally plant in their home. But if you do care about corporations listening in, you're missing the forest for the trees. Gmail alone is far more insidious and has the potential to reach far more personal information than Alexa.
This explains why I don't have a Gmail account or a smart speaker.
But even if I had a Gmail account, it would take some time/effort to learn my identity. Buying a smart speaker surrenders your identity and location right off the bat.
"By 2018, the division was already a money-losing pit. That year, the New York Times reported that it lost roughly $5 billion. This year, an employee familiar with the hardware team said the company is on pace to lose around $10 billion on Alexa and other devices."
Clearly anticompetitive practices. It's about time the Amazon hardware division dramatically increases prices or gets fined out of existence.
Voice assistants, chat bots etc. are all premature technologies that are dying slow deaths.
The primary reason is quality control. The way these devices are tested can never truly represent the massive variation which would impact their ability to process and parse sound. For example, the wide range of accents for a language like English. The variations in ambient noise in real world environments etc.
Beyond that, generative language models have only recently become powerful, but they need server side processing which is incredibly expensive for the majority of contexts where an AI is useful. Think of call centers. I HATE when companies try to use voice AI in call centers, thinking it's a good way to save money.
Bank Call Center Phone Cal example:
Voice AI: "tell me, how can I help?"
Me: "I'd like to request my final statements for a recently closed account."
Voice AI: "I'm not sure I heard that correctly"
Me: "Statements for a closed account"
Voice AI: "Do you want to close an account?"
Me: "Statements"
Voice AI: "I'm not sure I can help with that, let me get you to a customer care representative. Please enter or say your 16 digit account number"
What was the point of that? The vast majority of customers know how to use online banking to get information at this point. Why did you make me do this? And then, imagine I get disconnected and need to call back and go through the same process again. The bank may have saved some money (questionable, as they have already outsourced the call center to somewhere cheap anyway), but they've irked me so much I'm always ready to switch. Too bad all banks are the same where I live.
Point being, the tech is premature, unfinished, and hard to build, and it offers questionable value.
Voice AI is mostly useful in situations where I need to be handsfree. I think what SoundHound is doing makes the most sense. Sell your Voice AI as an API to manufacturers who build good quality speakers.
My problem with any voice assistant is that it doesn’t have a UI, so it’s hard to know what it’s capable of. You have to try a command and have it fail. And then you don’t know if maybe you worded your request wrong. Or you can go to the marketing website and try to memorize all the commands it understands which is too difficult. So that’s why we all just use it for timers and weather and nothing else.
So what were the 10,000 people working on? How does that staffing number compare to Google Assistant or Apple Siri?
Now that Google and Apple have offline on-device speech models, Amazon could provide offline speech recognition plus the ability to script 3rd-party APIs without going through their cloud. They could even limit that feature to Prime or other subscription, if they want to retain a surveillance business model for non-subscription users.
For people with visual or physical impairments, Echo/Alexa can be life changing, as it can control lights, sound/video, TV, internet radio, fans, microwave and landline phones. Yet, it could do so much more, if their stellar far-field microphone platform was a trustable (offline) voice interface to any home/IoT device or cloud service with an open API.
New Echo devices even include motion detection, another under-utilized capability in a sea of sketchy IoT devices with questionable firmware. Amazon already has robust security for AWS and mindshare with developers, they are missing the opportunity to offer the most secure, most hackable voice platform for home security and automation.
Have big companies forgotten the cardinal rule of platforms? First you let creativity and innovation explode, THEN you steal the best use cases and incorporate them into the platform. It doesn't work in the other direction, no matter how many PMs, whiteboards and lego-filled rooms you throw at early markets. See the Twitter API for prior lost opportunity.
Voice UX is a combination of CLI grammar and a long tail of use cases which are unique yet frequent enough to memorize commands. It's not too late to make Alexa an offline-first, developer friendly voice platform for ... everything, not just whitelisted partner devices and interfaces.
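As a rough sketch of what "offline recognition plus scriptable APIs" could look like today: local speech-to-text (Vosk is my pick here, not anything Amazon ships) driving a Home Assistant service call. The model path, entity name, and token are placeholders; nothing leaves the LAN.

    # Local speech-to-text feeding a home-automation REST call.
    import json, wave
    import urllib.request
    from vosk import Model, KaldiRecognizer

    model = Model("vosk-model-small-en-us")       # local model directory (placeholder)
    wav = wave.open("command.wav", "rb")          # assumes 16-bit mono PCM
    rec = KaldiRecognizer(model, wav.getframerate())
    while True:
        data = wav.readframes(4000)
        if not data:
            break
        rec.AcceptWaveform(data)
    text = json.loads(rec.FinalResult())["text"]  # e.g. "turn on the kitchen light"

    if "kitchen light" in text:
        req = urllib.request.Request(
            "http://homeassistant.local:8123/api/services/light/turn_on",
            data=json.dumps({"entity_id": "light.kitchen"}).encode(),
            headers={"Authorization": "Bearer YOUR-TOKEN",   # placeholder
                     "Content-Type": "application/json"})
        urllib.request.urlopen(req)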
The vast majority of people don't care one whit about online vs. offline, at least outside of niches such as mobility and visual impairments. All those companies, and doubtless others to a lesser degree, have just spent billions developing what are loosely smart-home capabilities that most people just don't care about or find useful. It's a case where solving the relatively easy half of the problem doesn't necessarily buy you much.
Yes, we know that people only care about what they can do. Right now, they don't care about what (little) they can do with Alexa.
Local-first would open up existing home automation systems, enabling new functions (e.g. composite, cross-device and cross-service actions) that are not possible today. Without removing restrictions on developers, people can't find out what they can do.
Most people don't have existing home automation systems to any significant degree and have very little interest in same. Devices are not remotely able to do the vast majority of the housekeeping cleanup/organizing/etc. things that I would like someone/something to do for me. I only have a single smart light because it doesn't have a convenient wall switch.
One could argue that Echo/Alexa was the first mass-market home automation system, in terms of the number of controllable devices. It was certainly bigger than anything that came before. But that doesn't cover niche devices, for which open interfaces are the most likely path to gaining support, e.g. HomeAssistant integrates with 1000 devices. It's not about the general ability of devices to solve your household problems, it's about whether all of your favorite devices (which you already have) can be controlled by your chosen voice platform. Same issue with "universal" remotes.
The issue is not speech to text but rather "now that you've got the text, what do you do with it?"
How do you make "play my favorite songs" to "play some music that I like" to do similar (if not identical) things? What infrastructure behind the voice assistant will need to be made accessible and how do you hook that up?
This is where Siri and Alexa work well. They've got operating system level access to the rest of their eco system. Alexa has access to all your Amazon stuff. Siri has access to all your apple stuff.
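Even a crude keyword-overlap matcher gets those two phrasings onto the same intent (toy sketch, nothing like what Alexa or Siri actually run); the hard part is, as above, what infrastructure that intent is hooked up to.

    # Map differently-worded requests onto the same intent by
    # keyword overlap. Intent names and keyword sets are made up.
    INTENTS = {
        "play_liked_music": {"play", "favorite", "songs", "music", "like"},
        "weather_report":   {"weather", "forecast", "rain", "temperature"},
    }

    def match_intent(utterance):
        words = set(utterance.lower().split())
        scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"

    print(match_intent("play my favorite songs"))       # play_liked_music
    print(match_intent("play some music that I like"))  # play_liked_music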
I'm pretty sure that speech-to-text is still a major issue. Most solutions available right now are too heavy to run on-device; there's a reason only wake-word detection is done on the device and the audio is actually transcribed in the cloud. Even the one the parent linked, OpenAI Whisper, seems to require a beefy machine to run.
When we get models/APIs that can run on a Raspberry Pi in real time, then we can say the issue is ecosystem access. Right now I'm unable to build my own custom assistant, simply because I have not yet found a speech-to-text model that can run on my low-power devices; even the one on my top-of-the-line Android phone needs to call out to Google for recognition!
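For reference, Whisper's smallest model is easy to call locally; whether it's fast enough for real-time use on a Pi is exactly the open question (sketch assuming the openai-whisper package and a local clip).

    # Transcribe a short clip with Whisper's smallest model,
    # entirely offline once the model weights are downloaded.
    import whisper

    model = whisper.load_model("tiny")        # ~39M parameters
    result = model.transcribe("command.wav")  # local file, no network call
    print(result["text"])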
If you've got an iPhone, kick it into airplane mode, open up "notes", and tap the microphone to transcribe.
It even gets "one third plus one half is five sixths" transcribed as `1/3 + 1/2 is 5/6` and "four feet two inches" becomes `4'2"` without a network connection.
So I believe it's doable... I also believe that Apple spent a bit to make it that way. Note that the voice-to-text on the device cuts down on data use (I believe; I haven't sniffed the traffic to verify) and on cloud-side processing (it's working on text rather than audio, so it's a few pennies less per request).
But beyond that... every home voice assistant that I've seen (other than Apple, Amazon, and Google) has worked on pre-programmed phrases that need to be spoken, rather than the more free-form way people speak... and again, that's even without trying to translate that into other infrastructure beyond the device.
I'd honestly be happy/impressed to have a RPi based system that can handle taking "create a reminder for next Wednesday at 9:30 am to thaw the turkey" and have that feed into other useful systems.
I'd be very impressed if I could do a "create a reminder" (what is the reminder for) "go to parents" (when should I remind you) "Day after thanksgiving" (what time on Friday should I remind you) "half past one" (is that one thirty in the morning or in the afternoon) "afternoon" -- this works with Alexa. Note the recognition of holidays, the 'wizard' style entry and the alternate time format.
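A naive single-shot version of the simpler reminder could lean on an off-the-shelf date parser; this sketch assumes the dateparser library and a very rigid phrase pattern, nothing like Alexa's wizard-style dialogue.

    # Split "create a reminder for <when> to <what>" and let
    # dateparser resolve the relative date (returns None if it can't).
    import re
    import dateparser

    utterance = "create a reminder for next Wednesday at 9:30 am to thaw the turkey"
    m = re.match(r"create a reminder for (.+?) to (.+)", utterance, re.IGNORECASE)
    when_text, what = m.group(1), m.group(2)
    when = dateparser.parse(when_text, settings={"PREFER_DATES_FROM": "future"})
    print(when, "->", what)   # e.g. 2022-11-23 09:30:00 -> thaw the turkey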
Simple: the OpenAI assistant would play songs from your Jellyfin/Plex/mpd server, hooked up with the last.fm/Discogs API to find similar artists. It's not there yet, but it is certainly possible.
If it's programmed so that the phrase "play my favorite songs", with exactly those words, triggers this sequence of calls - yes, that's quite doable.
The difficulty is making it so that "play my favorite songs", along with "play some music I like" or any of the other possible variations, does the same thing - that's where it gets difficult... at least quite a bit more difficult on an RPi.
Even beyond that, the "you need to spin up a Plex server locally on your home intranet, create an account on last.fm, and enter these values into this application" part isn't there yet. It's doable for a person with a moderate home lab and familiarity with these systems, but it's not a product you can hand to your elderly relative who is in the range of "not computer literate" to "OK with using Windows but uncomfortable writing something along the lines of `Get-Date | ConvertTo-Json` in PowerShell."
Possible? With sufficient knowledge, acceptance of specific incantations rather than intents, and troubleshooting skill - certainly.
Productized for regular use outside of the techie circles at a reasonable price? A long way from it.
"By then Alexa was getting a billion interactions per week, but most of those conversations were trivial, commands to play music or ask about the weather. That meant less opportunities to monetize. Amazon can't make money from Alexa telling you the weather — and playing music through the Echo only gives Amazon a small piece of the proceeds."
I get that Amazon doesn't make money on those things, but those are the things that I value. I might be willing to pay them to stop saying "Did you know that I can also <thing I don't care about>? Would you like to try that now?".
Having founded and flopped on two voice-based startups (sonictrade & verbl), I can attest to the difficulty of getting user adoption and the importance of being really, really good at simple but useful tasks. If Alexa gets the plug pulled, it won't be because of a small user base, but rather a failure to stop annoying the user base with recommendations... something Siri does well. And Siri on the Apple Watch is absolutely killer. It is the ultimate functionality on a wearable device, with voice search being the best utility. Command-based voice AI is still in its infancy. Summoning cars, booking flights, trading stocks, donating funds, turning on the lights, betting on sports, etc. are all doable now and improving in quality of execution daily. But 15,000 engineers working on Alexa? That's where your $10B went!
The other morning, Alexa had a yellow notification. I asked her to read it, but I was still groggy in bed, so barely listening to what she was saying.
I only registered the phrase 'do you want me to add this to your basket?' at the end. No idea what it was, but she was trying to randomly sell me stuff while I'm chilling at home.
They thought they’d get you to put an Alexa in your home (perhaps at a loss) but that it would bring you into the ecosystem and you’d buy more stuff. But most people just use it for the 1 or 2 things they bought it for like alarm clock and weather.
Like others, I find Alexa and its ilk handy for some specific tasks. Setting the alarm in my bedroom or sometimes timers in the kitchen. A quick, cursory weather forecast. Playing an album or playlist in bed or in the car, though confusion is more likely at that point. Turning the one switchless light in my bedroom off and on.
The key is that you basically need to get the exact wizard incantation right.
That's not useless for a fairly low $$ device. But it's certainly not going off and booking my travel to San Francisco for me. Or even ordering from Amazon given I probably want to check prices.
I don't actually have an issue with using Alexa etc. in general. The same data is going to be in a search engine or somewhere in any case.
There's just not much it's good for and therefore it's really hard to see thousands of highly paid engineers working on this stuff. For my uses, Alexa hasn't improved one iota for the past n years. How much money has been pissed away on salaries and other expenses in that time?
More of a meta question related to the workings of HN... I cross-posted in one of the submissions mentioned below. This one was posted 2 hours ago. Earlier posts with the same link and some comments appeared 3 hours and 5 hours ago and are also visible:
My experience is that posting something that was posted before, even sometimes within days, will link to the same post, or gets noted as a "[dupe]", sometimes within seconds, even if it's not the same page but might be the same subject or event.
Amazon succeeded in making a viable, well-maintained Fire OS that used Android Open Source Project (AOSP) as an upstream. Unfortunately, that and $4.99 will get you a venti latte. The only reward for hard work in doing that is more hard work.
There is an alternate strategy, which is to create a family of app-layer and system software that fits Google's OEM model, and become an Android OEM, more like what Samsung did (though that isn't the best example because a lot of Samsung's software is terrible). Maybe this would be a viable backstop for Amazon.
Google can throw more money at the problem because Google's upside is everything from phones (a notable Amazon fail) to TVs to cars, in addition to smart home devices.
The vibe I get from Amazon Devices is that all the innovation and drive came directly from Bezos. Not much tends to happen after he loses interest (or worse, retires).
The Kindle is another example of this situation.
> By then Alexa was getting a billion interactions per week, but most of those conversations were trivial, commands to play music or ask about the weather. That meant less opportunities to monetize.
This really cuts to the heart of the matter. I use Alexa for a number of trivial things, but if I start hearing ads, I’ll dump all my Echo devices and switch to a voice service without ads.
(I would miss some Alexa features that I haven’t been able to replicate with any other voice assistant; the form factor of the Echo Flex, the Echo Clock, the ability to set Sonos speakers as preferred output devices.)
I think we all got sold on the "possibilities" of voice assistants, then promptly realized the software is crap and reverted to using it for simple tasks like timers, weather, and music.
There's an open source project called Mycroft that does the simple tasks, and runs on real hardware you physically own and keep on your desk. It's a wild concept, and not super mature, but I think most people just want some automation anyway
I think for me it’s a privacy issue with Alexa more than anything. People I know have similar feelings about it.
I’m also not seeing a lot of need for voice assistants anymore. The ones I use at home are good for turning things off and on and setting timers for food I’m cooking. Outside of that nothing else.
Although they have never come out and said it, it's very clear that the same has happened to Google Home products. Nothing has improved in the product for well over 2 years and annoying regressions never get resolved. It's looking like another abandoned Google product.
Not really, no. Some of the oldest devices can be unlocked, but the rest use weird chips with encrypted bootloaders. It's not really possible to get them to run unsigned code.
I suspect these mass job layoffs have less to do with individual corporate balance sheets and more to do with a coordinated effort by shareholders and executives to align with the Fed's program to cut inflation by jacking up unemployment nationally:
> "Federal Reserve Chairman Jerome Powell said Wednesday it will be almost impossible for the central bank to beat inflation without hundreds of thousands of Americans losing their jobs."
That's what happens when your actual political system is oligarchic rule by a cabal of financial oligarchs, who own the vast majority of the politicians and bureaucrats.
The numbers are too small. You're not going to drive unemployment up meaningfully by laying off 10k high-end workers. You do it by raising costs throughout the entire market, so hundreds of thousands of low-end workers are laid off.
10k high-end workers at multiple tech firms adds up, getting into the 100K range:
> "The move follows recent workforce reductions by other technology companies such as Meta, Twitter, Redfin, Flyhomes, Convoy, Lyft and more as companies look to cut costs amid current economic headwinds."
There's a handful of big tech companies that are laying off ~10k each, a few more laying off in the low thousands or hundreds. All told, they might hit the 100k range. If you're in tech, you know there's a huge number of unfilled positions, so likely most of these 100k will find a new job, and won't count towards unemployment. Let's be very cautious and estimate 50k of them will remain unemployed; that's still only 10% of the number of jobs the expected recession next year will cost:
> Bank of America is projecting that the US economy will lose over 500,000 jobs in 2023.