sosodev's comments

A couple of years ago these vehicles had a similar fire risk recall due to electrical issues with the trailer hitch connector. Probably best to steer clear of Kia if you don't want your car to burn. :/


Relevant Recall: https://www.nhtsa.gov/vehicle/2020/KIA/TELLURIDE/SUV/AWD#rec...

NHTSA still gives that year, make, and model a 5 out of 5 safety rating, which probably doesn't account for this latest recall.

I'd be curious to see these recalls in the context of other years, makes, and models. Are six recalls, two of which can cause the car to catch fire, typical for a four-year-old car?


My car is a 2021 Toyota SUV and it only has one recall for a minor issue where the rear turn signals could dim or fail prematurely.


The Kia Seltos also had a recall this year for 2021-2023 cars because the AWD control unit under the driver's seat may become contaminated by a mixture of salt and moisture inside the vehicle and catch fire.


Anecdotally, I've had multiple non-Kia cars with recalls for fire hazards. It happens.


Ideally an LLM would have every text in existence in its training set, right? I doubt we’re anywhere near that now.


Ideally an LLM would have every text in existence in its training set, right?

Only if every text in existence provides good source material to factor into generating the response. Given that half of all texts are even worse than average, this seems unlikely, and Karpathy’s argument seems very reasonable to me.

The final paragraph of the original tweet, after the quote in the GP comment, raises another interesting aspect. Suppose we have enough expert-level source material in a particular field to train a useful generative AI model, and that training on it alone produces better responses than training on a larger but more average-quality data set. Is there still useful information that could be extracted from the larger set as well? How can we classify which aspects of a larger training set are worth keeping while filtering out noise that competes with higher-quality source material? It feels like progress in generative AI over the next few years might be defined more by these kinds of questions than by simply building ever larger models on ever larger sets of training data.


No, I don't think that would help. Andrej certainly doesn't seem to think so.


What about the numerous other email providers?


The numerous other email providers are... numerous. Every discussion like this ignores, to an absurd extent, how hard it is for non-tech people to gather information on these topics and make an informed choice: which email providers care about which aspects of privacy, which aspects of privacy and information security even exist, which email providers even exist, what they are doing with your data, and which parts of what they are doing are a problem...

You can't even ask tech people to make a choice for you because they all say different things.

Other domains like cars, medicine, and construction have established standards because they have recognized that individuals simply _cannot_ make an informed choice, even if they want to. I'm tempted to say that only information technology likes to call the user "unwilling" and "lazy" instead, but individuals in other domains do that too. Luckily, their established standards are mandatory, so that opinion doesn't count.


A rounding error that doesn't matter, because the recipients of any e-mail sent from those providers are likely on mailboxes backed by Google or MSFT anyway.


Yes, Compass Pathways has been synthesizing it for quite a while. It’s been used in several university studies over the past few years and we’re now seeing the results.


It seems like spam. I wonder if the content is stolen or if it’s AI generated.

This topic is real and there are lots of great publications that should be linked here instead.

Like Nature: https://www.nature.com/articles/s41386-023-01648-7.pdf



Looks like the operating system only manages browser windows; there are no native apps of any kind. Plenty of great browser apps exist, but that's still worse than a Chromebook in terms of usability.


Hell, it would be great if it were a Chromebook, because those can run Linux apps.


Aren’t most of the parts tied to the motherboard?


Each generation seems to add an additional part. I know they serialize the Face ID module, and maybe the screen; the battery shows an indicator in the settings menu that it's not OEM, but it still works. I doubt the case or volume buttons are tied to the phone, but what else is left? The antenna or the USB-C port?


The screen is locked. If you do your own replacement, you have to call Apple to get it relinked to the motherboard.


Who would the competitor be in this case? The bots hurt the TF2 community but Valve obviously earns the vast majority of their cash elsewhere.

My understanding is that the bots are run by teams of trolls on secondhand hardware that would otherwise be useless.


Yes, it’s their right to let it rot but they should be honest with their players if that is the intention.

In actuality, TF2 has a façade of support. Valve is still regularly releasing its gambling-based microtransactions and earning millions of dollars while ignoring the needs of the players.

You can say “just don’t buy it,” but one could easily assume that supporting the game financially means Valve will continue to support it.


Unfortunate assumption; the whole thing is probably automated with a skeleton crew of non-tech people, lol

