> Another recent change on Tesla's website is to remove old blog posts, including a 2016 blog post in which Tesla claimed …
Perhaps unintended but this is a bit misleading. Tesla changed their blog system and didn’t migrate older posts. My initial reading of your comment was that they selectively removed some older posts which they wanted to hide.
For context, Helen Toner [0] was a board member of OpenAI at the time the board tried to fire Sam Altman. In a recent interview [1], she claimed that Sam was fired by YC, implying that the firing was kept quiet and that there was something underhanded about it.
To be fair to Helen Toner, she was probably going off the Washington Post/WSJ articles that were discussed here 6 months ago.[0] And pg has been trying to de-sensationalize the issue ever since, often doing a pretty terrible job of it by complimenting Altman without directly denying certain statements.
The WP article implied that there was a drop in Altman's performance and hands-on presence due to his multitasking across other interests, including OpenAI, whereas pg seems to imply that jl gave Altman the ultimatum before there were any performance complaints.
It's also a little strange that pg doesn't mention the for-profit Worldcoin at all, which announced a $4M seed round a few months before Altman exited YC, and for which Altman was already CEO.
I'm not sure pg is aware how much he's risking, or how much he's putting Jessica's reputation at risk. He often posts touting Jessica as being a great judge of character.[1] The world is witnessing in real time just how great a character his prince really is. But at least he had the courtesy to mention that Jessica was the one that gave Altman the ultimatum.
There was something missing in his post though. He forgot to add "Sam and Paul" at the end of his statement.
[1] To be fair, it's usually for determining whether the person has characteristics that make a good startup founder, like resilience or co-founder compatibility. "Having moral fiber" might be at the bottom of the list in terms of priority.
“To be fair, Helen was going off of ‘articles’ from WaPo” is some kind of defence. What kind of competence did she have if she just forwarded stuff without thinking or investigating first? I would say this solidifies why she wasn't fit for the job.
The WaPo article states unambiguously that Altman was fired from YC for dropping the ball. It apparently cites three anonymous sources from YC, not pg. Why would she bother investigating whether that was true or not when she was already fired from OpenAI? You would only know that was disputed if you were actively following pg's twitter account, or somebody quoting pg's tweets.
I read there was additional drama related to Sam leaving YC: he unilaterally declared himself Chairman of YC, including in a YC blog announcement that was quickly deleted. [0]
Paul Graham would have been officially retired from YC at the time. Jessica Livingston still worked full-time at YC for some years after Paul Graham hired Sam Altman to replace him as president and hired Dan Gackle to replace him as moderator. If Paul Graham had not been retired, this entire conversation wouldn't exist. His retirement is why Altman was president of YC.
What does it mean to be officially retired in the YC worldview, anyway? If you have a significant ownership stake, are you ever really retired? Are major decisions not vetted by the stakeholders? YC was founded by JL and PG (I'd assume equally), and this decision is now described as a JL decision.
Anyway, there's a Hollywood movie in this drama... maybe I'll write a script using ChatGPT... :)
As a guess: It means he got to see his kids grow up instead of working 100 hours a week.
He handed off a lot of the day-to-day scut work. He didn't go "I'm just a shareholder who reads the annual report and counts my pennies from the DRIP."
He was still one of the two main founders and married to the other main founder. He wasn't totally uninvolved with the company.
He still did Office Hours, at least for a time. He described that as "ten percent of what he did" and hired at least two people to divide up the other 90 percent.
I imagine he and Livingston discussed the company over breakfast/dinner and a lot of decisions were likely joint decisions privately hashed out. It's a company founded by a dating couple who later married. There is probably no clear, bright dividing line between "her" decisions and "his."
Well, if you want to read it tendentiously, I guess your choices are the buck stopping with Jessica, with Paul, or with Jessica and Paul. Seems straightforward to reason about.
I think that the split seems amicable, but from a 10,000-foot view, “we had a convo telling Sam he couldn't do both at once” leading to him leaving rhymes with a firing. Sometimes this stuff can be amicable!
He had a choice to either go to work the next day or not as he preferred. That isn't a firing in the usual sense of the word. As described it is an amicable end to his time at YC that was agreed on by both parties.
If people really want to describe that as "fired" there is no stopping them. But it isn't. PG is more correct than that quadrant of the backseat managers.
Paul said they'd have "been fine" with Sam staying, which is different than wanting him to stay:
> For several years [Sam] was running both YC and OpenAI, but when OpenAI announced that it was going to have a for-profit subsidiary and that Sam was going to be the CEO, we (specifically Jessica) told him that if he was going to work full-time on OpenAI, we should find someone else to run YC, and he agreed. If he'd said that he was going to find someone else to be CEO of OpenAI so that he could focus 100% on YC, we'd have been fine with that too. We didn't want him to leave, just to choose one or the other.
It's interesting that YC had to raise the issue, rather than Sam saying to YC, "Hey, I've found this other thing I want to do full-time, can we start looking for my replacement?"
I was fired from Taco Bell as a kid and I would talk trash about the management and the company to anyone who asked.
I can't imagine being fired from a company like OpenAI, being asked my thoughts about the people responsible and the company, and people taking it seriously! LOL
Llama 3 is tuned very nicely for English answers. What is most surprising to me is that the 8B model is performing similarly to Mistral's large model and the original GPT4 model (in English answers). Easily the most efficient model currently available.
I don't think MoE is the way forward. The bottleneck is memory, and MoE trades MORE memory consumption for lower inference times at a given performance level.
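To make the memory-versus-compute tradeoff concrete, here's a toy back-of-the-envelope calculation. The layer sizes and expert counts are illustrative, not taken from any real model:

```python
# Toy comparison of a dense transformer's feed-forward parameters
# vs. an MoE variant. Numbers below are illustrative only.

def dense_params(d_model, d_ff, n_layers):
    # Feed-forward parameters in a dense transformer: two projections
    # per layer (d_model -> d_ff and d_ff -> d_model).
    return n_layers * 2 * d_model * d_ff

def moe_params(d_model, d_ff, n_layers, n_experts, top_k):
    # Total parameters you must hold in memory, vs. the subset
    # actually used ("active") for each token when only top_k of
    # n_experts fire.
    total = n_layers * n_experts * 2 * d_model * d_ff
    active = n_layers * top_k * 2 * d_model * d_ff
    return total, active

d_model, d_ff, n_layers = 4096, 14336, 32
dense = dense_params(d_model, d_ff, n_layers)
total, active = moe_params(d_model, d_ff, n_layers, n_experts=8, top_k=2)

print(f"dense FF params:      {dense / 1e9:.1f}B")
print(f"MoE FF params total:  {total / 1e9:.1f}B (must fit in memory)")
print(f"MoE FF params active: {active / 1e9:.1f}B (compute per token)")
```

With 8 experts and top-2 routing, the MoE needs 8x the dense model's feed-forward weights in memory while only spending 2x the per-token compute, which is exactly the "more memory for faster inference" trade described above.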
Before too long we're going to see architectures where a model decomposes a prompt into a DAG of LLM calls based on expertise, fans out the sub-prompts, then reconstitutes the answer from the embeddings they return.
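A minimal sketch of that fan-out idea. Everything here is a placeholder (keyword routing, hash-derived fake embeddings, averaging as "reconstitution"); a real system would use an LLM for routing, real expert models, and a synthesis step:

```python
# Sketch: decompose a prompt into expert sub-prompts, fan them out,
# and reconstitute an answer vector. All components are stubs.

def route(prompt):
    # Placeholder router: pick "experts" by keyword matching.
    experts = []
    if "tax" in prompt:
        experts.append(("finance", f"Tax question: {prompt}"))
    if "rash" in prompt:
        experts.append(("medical", f"Symptom question: {prompt}"))
    return experts or [("general", prompt)]

def call_expert(name, sub_prompt):
    # Stub expert: returns a deterministic fake 4-dim "embedding".
    return [(hash((name, sub_prompt, i)) % 1000) / 1000 for i in range(4)]

def answer(prompt):
    # Fan out to every routed expert, then average the returned
    # embeddings (a real system would feed them to a synthesis model).
    embeddings = [call_expert(name, sp) for name, sp in route(prompt)]
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]

vec = answer("Can I deduct treatment for this rash on my tax return?")
print(len(vec))  # 4
```

The interesting design question is the middle layer: whether the DAG is built once up front or grown dynamically as sub-answers come back.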
This is misleading. Please read the actual source in context rather than just the excerpt (it's at the bottom of the blog). They are talking about AI safety and not maximizing profit.
Here's the previous sentence for context:
> “a safe AI is harder to build than an unsafe one, then by open sourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”
As an aside: it's frustrating that the parent comment gets upvoted heavily for 7 hours and nobody has bothered to read the relevant context.
I find it telling that no one until now provided this bullshit lie as context. The "we need it to be closed for it to be safe" claptrap is the same reasoning people used for feudalism: you see, only certain people are great enough to use AGI responsibly / can be trusted to govern, so we cannot just give the power freely to the unwashed masses, who would use it irresponsibly. You can certainly trust us to use it to help humanity - pinky swear.
This is from an internal email, it was not written for PR. Whether he is correct or not about his concerns, it's clear that this is an honestly held belief by Ilya.
I think you're misreading the intention here. The intention of closing it up as they approach AGI is to protect against dangerous applications of the technology.
That is how I read it anyway and I don't see a reason to interpret it in a nefarious way.
First, this assumes that they will know when they approach AGI. Meaning they'll be able to reliably predict it far enough out to change how the business and/or the open models are setup. I will be very surprised if a breakthrough that creates what most would consider AGI is that predictable. By their own definition, they would need to predict when a model will be economically equivalent to or better than humans in most tasks - how can you predict that?
Second, it seems fundamentally nefarious to say they want to build AGI for the good of all, but that the AGI will be walled off and controlled entirely by OpenAI. Effectively, it will benefit us all even though we'll be entirely at the mercy of what OpenAI allows us to use. We would always be at a disadvantage and will never know what the AGI is really capable of.
This whole idea also assumes that the greater good of an AGI breakthrough is using the AGI itself rather than the science behind how they got there. I'm not sure that makes sense. It would be like developing nukes and making sure the science behind them never leaks - claiming that we're all benefiting from the nukes produced even though we never get to modify the tech for something like nuclear power.
Read the sentence before, it provides good context. I don't know if Ilya is correct, but it's a sincerely held belief.
> “a safe AI is harder to build than an unsafe one, then by open sourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”
Many people consider what OpenAI is building to be the dangerous application. They don't seem nefarious to me per se, just full of hubris, and somewhat clueless about the consequences of Altman's relationship with Microsoft. That's all it takes though. The board had these concerns and now they're gone.
I think the fundamental conflict here is that OpenAI was started as a counterbalance to Google's AI efforts and all other future resource-rich companies that decide to pursue AI, BUT at the same time they needed a socially responsible / ethical vector to piggyback off of to be able to raise money and recruit talent as a non-profit.
So they can't release science that the Googles of the world can use to their advantage, BUT they kind of have to, because that's their whole mission.
The whole thing was sort of dead on arrival, and Ilya's email dating to 2016 (!!!!) only amplifies that.
When the tools are (believed to be) more dangerous than nuclear weapons, and the "thee" is potentially irresponsible and/or antagonistic, then... yes? This is a valid (and moral) position.
If so, then they shouldn’t have started down that path by refusing to open source 1.5B for a long time while citing safety concerns. It’s obvious that it never posed any kind of threat, and to date no language model has. None have even been close to threatening.
The comparison to nuclear weapons has always been mistaken.
Sadly, one can't be separated from the other. I'd agree if it were true, but there's no evidence it ever has been.
One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.
> One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.
I mean, I'm totally with them on the fear of AI safety. I'm definitely in the "we need to be very scared of AI" camp. Actually the alien thought experiment is nice - because if we credibly believed aliens would come to earth in the next 50 years, I think there's a lot of things we would/should do differently, and I think it's hard to argue that there's no credible fear of reaching AGI within 50 years.
That said, I think OpenAI is still problematic, since they're effectively hastening the arrival of the thing they supposedly fear. :shrug:
Ultra Wideband (UWB) is the solution for keyless entry and regulators should make it a requirement that new cars use it if they want to support keyless entry.
Tesla just rolled out an OTA update to support UWB. It uses time-of-flight (ToF) measurement to calculate distance, which is much more secure than simply using signal strength.
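The security win comes from physics: round-trip time bounds the distance, and a relay attacker can only add delay, which makes the key look farther away rather than closer. A toy distance-bounding calculation (the unlock threshold here is made up, not from any spec):

```python
# Distance bounding from UWB round-trip time-of-flight.
# Radio propagates at ~c, so one-way distance = c * round_trip / 2.

C = 299_792_458  # speed of light, m/s

def distance_m(round_trip_s, processing_delay_s=0.0):
    # Subtract the fob's known fixed processing delay, then halve
    # the remaining round trip to get the one-way distance.
    return C * (round_trip_s - processing_delay_s) / 2

UNLOCK_THRESHOLD_M = 2.0  # illustrative policy value

def should_unlock(round_trip_s):
    return distance_m(round_trip_s) <= UNLOCK_THRESHOLD_M

# Key 1.5 m away: round trip of ~10 ns -> unlocks.
print(should_unlock(2 * 1.5 / C))            # True
# Relay adds 200 ns of latency: apparent distance ~31.5 m -> rejected.
print(should_unlock(2 * 1.5 / C + 200e-9))   # False
```

This is why a signal-strength relay attack fails against ToF ranging: amplifying the signal doesn't shorten the flight time, and the added relay latency pushes the computed distance over the threshold.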
I asked it about the number of fatalities from the tornadoes in the US in December 2021, and it gave me a correct answer.
> In December 2021, a particularly devastating outbreak of tornadoes occurred in the central United States, especially impacting Kentucky. As of my last update in January 2022, the death toll from this outbreak was over 80 people, with the majority of those deaths occurring in Kentucky.
I think it depends on world region. When I asked the same "what is your cutoff date" question, I got "September 2021" as a reply. They probably chose to test the US market first.
Because it was trained on news sources that say "Over 80 fatalities in tornado outbreak"?
I can't say I'm sure; you'd have to know the training data involved. But it is quite common for mass-casualty events to have "more than" or "at least" in their headlines, along with multiple articles where the count increases over time. Remember, an LLM is not Wikipedia. If it is confident in a more exact answer, it will most likely give you that, but it's not guaranteed.
It generalises quite well even though it’s only trained on US roads AFAIK.