davej's comments | Hacker News

Here you go, FSD in Italy: https://youtu.be/tsE089adyoQ?si=Uo72mxf63DQNn7qG

It generalises quite well even though it’s only trained on US roads AFAIK.


> Another recent change on Tesla's website is to remove old blog posts, including a 2016 blog post in which Tesla claimed …

Perhaps unintended but this is a bit misleading. Tesla changed their blog system and didn’t migrate older posts. My initial reading of your comment was that they selectively removed some older posts which they wanted to hide.


My reading of your post is they changed systems for plausible deniability

Even more since migrating content from one CMS to another is a trivial engineering effort.

Considering they decided it was worth upgrading, yeah. Backing up, normalizing, and importing the data isn't a lot of work.

Not worth keeping the posts? No readership? Why upgrade?

Sus, as the kids say, but plausible.


For context, Helen Toner [0] was a board member of OpenAI before they tried to fire Sam Altman. In a recent interview [1], she claimed that Sam was fired by YC and implied that his firing at YC was kept quiet and that there was something underhanded about it.

[0] https://x.com/hlntnr

[1] https://link.chtbl.com/TEDAI


To be fair to Helen Toner, she was probably going off the Washington Post/WSJ articles that were discussed here 6 months ago.[0] And pg has been trying to de-sensationalize the issue ever since, often doing a pretty terrible job of it by complimenting Altman without directly denying certain statements.

The WP article implied that there was a drop in Altman's performance and hands-on presence due to multi-tasking of his other interests including OpenAI, whereas pg seems to imply that jl gave the ultimatum to Altman before there were any performance complaints.

It's also a little strange that pg doesn't mention the for-profit Worldcoin at all, which announced a 4 mil seed round a few months prior to Altman exiting YC and for which Altman was already CEO.

I'm not sure pg is aware how much he's risking, or how much he's putting Jessica's reputation at risk. He often posts touting Jessica as being a great judge of character.[1] The world is witnessing in real time just how great a character his prince really is. But at least he had the courtesy to mention that Jessica was the one that gave Altman the ultimatum.

There was something missing in his post though. He forgot to add "Sam and Paul" at the end of his statement.

[0] https://news.ycombinator.com/item?id=38378216

[1] To be fair, it's usually for determining whether the person has characteristics that make a good startup founder, like resilience or co-founder compatibility. "Having moral fiber" might be at the bottom of the list in terms of priority.


“To be fair, Helen was going off of ‘articles’ from WaPo” is some kind of defence. What kind of competence did she have if she just forwards stuff without thinking or investigating first? I would say this solidifies why she wasn’t fit for the job.


The WaPo article states unambiguously that Altman was fired from YC for dropping the ball. It apparently cites three anonymous sources from YC, not pg. Why would she bother investigating whether that was true or not when she was already fired from OpenAI? You would only know that was disputed if you were actively following pg's twitter account, or somebody quoting pg's tweets.


Because she stated it as fact. She’s too easily influenced by unverified information for someone who had such an important role dealing with data.


I read there was additional drama related to Sam leaving YC: unilaterally declaring himself Chairman of YC, including a YC blog announcement that was quickly deleted. [0]

[0] https://archive.is/Vl3VR


Of course there's additional drama and context. PG is retconning it to make himself look less incompetent and absent.


Paul Graham would have been officially retired from YC at the time. Jessica Livingston still worked full-time at YC for some years after Paul Graham hired Sam Altman to replace him as president and hired Dan Gackle to replace him as moderator. If Paul Graham had not been retired, this entire conversation wouldn't exist. His retirement is why Altman was president of YC.

Accusing Graham of being "absent" sounds silly.


What does it mean to be officially retired in the YC worldview anyway? If you have a significant ownership stake, are you ever really retired? Are major decisions not vetted by the stakeholders? YC was founded by JL and PG (I'd assume equally), and this decision is now described as a JL decision.

Anyway, there's a Hollywood movie in this drama... maybe I'll write a script using ChatGPT... :)


As a guess: It means he got to see his kids grow up instead of working 100 hours a week.

He handed off a lot of the day-to-day scut work. He didn't go "I'm just a shareholder who reads the annual report and counts my pennies from the DRIP."


And yet here he is talking about how he was making the decisions.


He was still one of the two main founders and married to the other main founder. He wasn't totally uninvolved with the company.

He still did Office Hours, at least for a time. He described that as "ten percent of what he did" and hired at least two people to divide up the other 90 percent.

I imagine he and Livingston discussed the company over breakfast/dinner and a lot of decisions were likely joint decisions privately hashed out. It's a company founded by a dating couple who later married. There is probably no clear, bright dividing line between "her" decisions and "his."


No, we're talking about Jessica Livingston making a decision. It's right there in the statement.


Lol, no. In the statement he says "we"; in the WSJ it's just his wife. The buck stops... somewhere?


Well, if you want to read it tendentiously, I guess your choices are the buck stopping with Jessica, with Paul, or with Jessica and Paul. Seems straightforward to reason about.


Honestly, I thought her TED AI interview was balanced and reasonable. I don't recall her mentioning YC, but I might have missed it.

That said, the interviewer tries to sensationalize the upcoming interview as much as possible in the intro, so I didn't love that.


But the original post says otherwise, who do I believe?


I would give more credibility to the firsthand account (PG & Jessica) rather than speculations from a fired board member.


I think that the split seems amicable, but from a 10k view “we had a convo telling Sam he couldn’t do both at once” leading to him leaving rhymes with a firing. Sometimes this stuff can be amicable!


He had a choice to either go to work the next day or not as he preferred. That isn't a firing in the usual sense of the word. As described it is an amicable end to his time at YC that was agreed on by both parties.

If people really want to describe that as "fired" there is no stopping them. But it isn't. PG is more correct than that quadrant of the backseat managers.


Paul explicitly states they wanted him to stay.

Firing implies you want somebody gone.


Paul said they'd have "been fine" with Sam staying, which is different than wanting him to stay:

> For several years [Sam] was running both YC and OpenAI, but when OpenAI announced that it was going to have a for-profit subsidiary and that Sam was going to be the CEO, we (specifically Jessica) told him that if he was going to work full-time on OpenAI, we should find someone else to run YC, and he agreed. If he'd said that he was going to find someone else to be CEO of OpenAI so that he could focus 100% on YC, we'd have been fine with that too. We didn't want him to leave, just to choose one or the other.

It's interesting that YC had to raise the issue, rather than Sam saying to YC, "Hey, I've found this other thing I want to do full-time, can we start looking for my replacement?"


and a fired board member who didn't have anything to do with YC


Jessica and by extension PG are early investors in OpenAI.

So it's not like they are impartial parties either.


No one, it's all PR game at play here, and there's no reason that anyone is being fully transparent.


I was fired from Taco Bell as a kid and I would talk trash about the management and the company to anyone who asked.

I can't imagine being fired from a company like OpenAI, being asked my thoughts about the people responsible and the company, and people taking it seriously! LOL


Apple will also introduce the "Pro" line of their M4 chips later in the year and I expect that they will improve the Neural Engine further.


Llama 3 is tuned very nicely for English answers. What is most surprising to me is that the 8B model is performing similarly to Mistral's large model and the original GPT4 model (in English answers). Easily the most efficient model currently available.


Parameter count seems to only matter for range of skills, but these smaller models can be tuned to be more than competitive with far larger models.

I suspect the future is going to be owned by lots of smaller more specific models, possibly trained by much larger models.

These smaller models have the advantage of faster and cheaper inference.


Probably why MoE models are so competitive now. Basically that idea within a single model.


I don't think MoE is the way forward. The bottleneck is memory, and MoE trades MORE memory consumption for lower inference times at a given performance level.

Before too long we're going to see architectures where a model decomposes a prompt into a DAG of LLM calls based on expertise, fans out sub-prompts then reconstitutes the answer from the embeddings they return.
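
Roughly the shape I have in mind, as a toy sketch (call_llm, the model names, and the parsing are placeholders I made up, not any real API):

  # Toy sketch of the fan-out/fan-in idea; call_llm and the model
  # names are stand-ins, not a real library.
  def call_llm(model: str, prompt: str) -> str:
      # Placeholder: imagine this sends `prompt` to the named model.
      return f"[{model} answer] result: {prompt[:40]}"

  def answer(prompt: str) -> str:
      # 1. A router model splits the prompt into "domain: question" lines.
      plan = call_llm("router", f"Split into 'domain: question' lines: {prompt}")
      subtasks = [line.split(":", 1) for line in plan.splitlines() if ":" in line]

      # 2. Fan out: each sub-question goes to a small domain-specific model.
      partials = [call_llm(f"{domain.strip()}-expert", q) for domain, q in subtasks]

      # 3. Fan in: one final call reconstitutes a single answer from the parts.
      return call_llm("synthesizer", "Combine these partial answers:\n" + "\n".join(partials))

  print(answer("What are the tax implications of selling vested RSUs?"))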


Please, what is an MoE model?



Mixture of Experts. A popular example is Mixtral.


This is misleading. Please read the actual source in context rather than just the excerpt (it's at the bottom of the blog). They are talking about AI safety and not maximizing profit.

Here's the previous sentence for context:

> “a safe AI is harder to build than an unsafe one, then by open sourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”

As an aside: it's frustrating that the parent comment gets upvoted heavily for 7 hours and nobody has bothered to read the relevant context.


I find it good that no one until now provided this bullshit lie as context. The "we need it to be closed for it to be safe" claptrap is the same reasoning people used for feudalism: You see, only certain people are great enough to use AGI responsibly/can be trusted to govern, so we cannot just give the power freely to the unwashed masses, which would use it irresponsibly. You can certainly trust us to use it to help humanity - pinky swear.


This is from an internal email, it was not written for PR. Whether he is correct or not about his concerns, it's clear that this is an honestly held belief by Ilya.


I think you're misreading the intention here. The intention of closing it up as they approach AGI is to protect against dangerous applications of the technology.

That is how I read it anyway and I don't see a reason to interpret it in a nefarious way.


Two things that jump out at me here.

First, this assumes that they will know when they approach AGI. Meaning they'll be able to reliably predict it far enough out to change how the business and/or the open models are setup. I will be very surprised if a breakthrough that creates what most would consider AGI is that predictable. By their own definition, they would need to predict when a model will be economically equivalent to or better than humans in most tasks - how can you predict that?

Second, it seems fundamentally nefarious to say they want to build AGI for the good of all, but that the AGI will be walled off and controlled entirely by OpenAI. Effectively, it will benefit us all even though we'll be entirely at the mercy of what OpenAI allows us to use. We would always be at a disadvantage and will never know what the AGI is really capable of.

This whole idea also assumes that the greater good of an AGI breakthrough is using the AGI itself rather than the science behind how they got there. I'm not sure that makes sense. It would be like developing nukes and making sure the science behind them never leaks - claiming that we're all benefiting from the nukes produced even though we never get to modify the tech for something like nuclear power.


Read the sentence before, it provides good context. I don't know if Ilya is correct, but it's a sincerely held belief.

> “a safe AI is harder to build than an unsafe one, then by open sourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”


Many people consider what OpenAI is building to be the dangerous application. They don't seem nefarious to me per se, just full of hubris, and somewhat clueless about the consequences of Altman's relationship with Microsoft. That's all it takes though. The board had these concerns and now they're gone.


"Tools for me, but not for thee."


I think the fundamental conflict here is that OpenAI was started as a counter-balance to google AI and all other future resource-rich cos that decide to pursue AI BUT at the same time they needed a socially responsible / ethical vector to piggyback off of to be able to raise money and recruit talent as a non profit.

So, they can't release science that the googles of the world can use to their advantage BUT they kind of have to because that's their whole mission.

The whole thing was sort of dead on arrival and Ilya's email dating to 2016 (!!!!) only amplifies that.


When the tools are (believed to be) more dangerous than nuclear weapons, and the "thee" is potentially irresponsible and/or antagonists, then... yes? This is a valid (and moral) position.


If so, then they shouldn’t have started down that path by refusing to open source 1.5B for a long time while citing safety concerns. It’s obvious that it never posed any kind of threat, and to date no language model has. None have even been close to threatening.

The comparison to nuclear weapons has always been mistaken.


Oh I'm talking about the ideal, not what they're actually doing.


Sadly one can’t be separated from the other. I’d agree if it was true. But there’s no evidence it ever has been.

One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.


> One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.

I mean, I'm totally with them on the fear of AI safety. I'm definitely in the "we need to be very scared of AI" camp. Actually the alien thought experiment is nice - because if we credibly believed aliens would come to earth in the next 50 years, I think there's a lot of things we would/should do differently, and I think it's hard to argue that there's no credible fear of reaching AGI within 50 years.

That said, I think OpenAI is still problematic, since they're effectively hastening the arrival of the thing they supposedly fear. :shrug:


It makes people feel mistrusted (which they are, and in general should be). It's a bit challenging to overcome that.


Ultra Wideband (UWB) is the solution for keyless entry and regulators should make it a requirement that new cars use it if they want to support keyless entry.

Tesla just rolled out an OTA update to support UWB. It uses Time-of-Flight (ToF) Measurement to calculate distance which is much more secure than simply using signal strength.
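
As a rough sketch of why ToF ranging resists relay attacks (the numbers are just illustrative, not Tesla's actual protocol): the distance falls out of the round-trip time, so a relay can only add delay and make the key look farther away, never closer.

  # Illustrative two-way ranging arithmetic (made-up numbers).
  C = 299_792_458  # speed of light, m/s

  def ranging_distance(t_round_s: float, t_reply_s: float) -> float:
      # The key echoes the car's pulse after a known reply delay; distance
      # is half the remaining time-of-flight times the speed of light.
      return C * (t_round_s - t_reply_s) / 2

  # ~33 ns of extra round-trip time corresponds to roughly 5 m.
  print(ranging_distance(t_round_s=1.033e-6, t_reply_s=1.0e-6))  # ~4.95 m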


I asked it about the number fatalities in the tornadoes in the US in December 2021 and it gave me a correct answer.

> In December 2021, a particularly devastating outbreak of tornadoes occurred in the central United States, especially impacting Kentucky. As of my last update in January 2022, the death toll from this outbreak was over 80 people, with the majority of those deaths occurring in Kentucky.

https://chat.openai.com/share/2315803e-96d5-4980-b31c-5b9377...


I think it depends on world region. When I asked the same "when is your cutoff date?" question, I got "September 2021" as a reply. They probably chose to test the US market first.


Original poster here. I’m in Ireland.


If it has the data to know that the answer is greater than 80, why not be more exact?


Because it was trained on news sources that say "Over 80 fatalities in tornado outbreak"?

I can't say I'm sure, you'd have to know the training data involved, but it is quite common for mass casualty events to have "more than" or "at least" in their subjects, along with multiple articles where the count increases over time. Remember, an LLM is not Wikipedia. If it has confidence in a more exact answer it will most likely give you that, but it's not guaranteed.


Source?


NOAA numbers at https://www.spc.noaa.gov/climo/torn/STATIJ21.txt which I've filtered to the relevant outbreak:

  #    DATE TIME-CST   COUNTIES  STATE DEATHS   A B C D  WATCH EF LOCATION
  --   ---- --------  ---------  ----- ------   -------  ----- -- --------
  08 DEC 10   1905    CRAIGHEAD     AR      1   1 - - -  WT552  4 01P
                    MISSISSIPPI     AR      1   1 - - -  WT552  4 01P
                       PEMISCOT     MO      2   2 - - -  WT552  4 01H 01V
                           LAKE     TN      3   3 - - -  WT552  4 03P
                          OBION     TN      1   1 - - -  WT552  4 01V
  09 DEC 10   1935   ST CHARLES     MO      1   1 - - -  WT553  3 01H
  10 DEC 10   2030      MADISON     IL      6   6 - - -  WT553  3 06P
  11 DEC 10   2050       GRAVES     KY     24  24 - - -  WT552  4 09M 09P
                                                                  03H 03U
                        HOPKINS     KY     15  15 - - -  WT552  4 12H 02U
                                                                  01M
                     MUHLENBERG     KY     11  11 - - -  WT554  4 07H 03M
                                                                  01P
                       CALDWELL     KY      4   4 - - -  WT552  4 02H 02M
                       MARSHALL     KY      1   1 - - -  WT552  4 01H
                         FULTON     KY      1   1 - - -  WT552  4 01M
                           LYON     KY      1   1 - - -  WT552  4 01H
  12 DEC 11   0110       WARREN     KY     16  16 - - -  WT554  3 13P 03U
  13 DEC 11   0320       TAYLOR     KY      1   1 - - -  WT561  3 01M
Total: 89

See the link for column definitions.


Firestore is very underrated by devs as a document-based DB. It works very well for a variety of use cases and is super reliable in my experience.

