Following this out of interest. I built an AI clone app for iOS, and faced a similar challenge around looking for a business case.
We are taking it into the visualization / interview prep space, where I think the tech offers a 10x improvement over a mirror or someone's imagination, rather than trying to replace another human, where it's 10^9x worse.
Andy Rachleff (Wealthfront, Benchmark) says venture-scale startups are about developing new tech and then finding business cases for them. It's helpful as founders to pattern match, watching how it's done in AI drones, AI iOS apps, etc.
Check out The Surrender Experiment by Michael Singer. He talks about his discovery of and relationship with his inner voice / consciousness. He's also the tech founder of a public EMR startup. Super interesting read.
This week I actually did a 48 hour fast and then two 24 hour fasts. I ate between all of them, and a lot before. I personally find that hunger comes in waves. If you wait out the first wave, it doesn't continue to build. It will leave and then return, like someone interested in you at a bar.
It makes me think that our three-meals-a-day culture responds to those initial waves of hunger, and likely leads to eating more than we need to. It would make sense that there's an overall toll from that on lifespan.
I did one meal in between. I just make sure to eat a balanced meal (fruit and vegetables, proteins, some healthy carbs, healthy fats).
I'm thinking of moving to OMAD (one meal a day). I love eating big meals, and I find it helps my focus to not be thinking about eating during the day. The only challenge is not eating post-workout.
From my own experience trying to build an intelligent digital twin startup based on the breakthrough in LLMs, I agree with LeCun that LLMs are actually quite far from demonstrating the intelligence of a house cat, and I myself likely jumped the gun by trying to emulate intelligent humans at the current stage of AI.
His AI predictions remind me of Prof. Rodney Brooks (MIT, Roomba) and his similarly cautious timelines for AI development. Brooks has a very strong track record over decades of being pretty accurate with his timelines.
I would suspect any possible future AGI-like progress would be some sort of ensemble. LLMs may be a piece of the puzzle, but they aren't a single-model solution to AGI.
What if LLMs are effective enough at assisting coders that they're spending less time on SO and instead pushing more open source code, which is more valuable for everyone?
Being able (or theoretically able!) to produce more of a thing per day has never made anyone’s job easier. It just means they’re expected to meet that new output level or some unreasonable level above that.
And here I was blown away by the ignorance of many of the questions. Why no landing legs? Why is this more impressive than the other rocket landings? What’s the thingy that fell off during separation?
Sometimes I need to remind myself that other people have lives, I guess. XD
Just a programmer, though I've been interested in physics since I was a teen and did take a bachelor's degree in simulation (mainly physics).
That was long ago though, so I'm rusty; $dayjob doesn't involve any advanced math at all.
edit: To expand, the "rough spreadsheet integration" was just the Euler method[1], assuming constant acceleration within each timestep. So
v(t+dt) = v(t) + a * dt
The acceleration comes from F = ma as mentioned, where F is the thrust of the engines (newtons), m is the mass of the rocket (kg), and a is the acceleration (m/s^2). Solving for a gives a = F/m, so
v(t+dt) = v(t) + F/m(t) * dt
To make things easy I assumed the mass of the rocket was constant within each timestep, but if we take dt to be small enough it's a decent approximation. For each timestep I also updated the mass using the estimated mass flow:
m(t+dt) = m(t) - 650 * dt
I started with m(0) = 377,000 kg, v(0) = 1250 km/h = 347 m/s, and a constant force of -31,850,000 N from the engines (negative because the thrust opposes the motion).
Using dt = 0.1 seconds, I got almost exactly 4 seconds until the velocity reached zero.
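For anyone who wants to reproduce it without a spreadsheet, here's a minimal sketch of the same Euler integration in Python, using the values quoted above (the function name and defaults are mine, not from the original calculation):

```python
def time_to_zero_velocity(m0=377_000.0,      # initial mass, kg
                          v0=347.0,          # initial velocity, m/s (1250 km/h)
                          force=-31_850_000.0,  # constant engine thrust, N
                          mass_flow=650.0,   # propellant mass flow, kg/s
                          dt=0.1):           # timestep, s
    """Step v(t+dt) = v(t) + F/m(t)*dt until the velocity reaches zero."""
    t, v, m = 0.0, v0, m0
    while v > 0:
        v += force / m * dt   # a = F/m (Newton's second law)
        m -= mass_flow * dt   # m(t+dt) = m(t) - 650*dt
        t += dt
    return t

print(round(time_to_zero_velocity(), 1))  # ~4.1 s, matching the spreadsheet
```

Gravity is ignored here, exactly as in the spreadsheet version; including it would just add -9.81 m/s^2 to the acceleration each step.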
Newton's laws of mechanics are high-school physics, IIRC; my son studied them in 8th grade or so. They are really simple; an evening with Wikipedia or 3blue1brown or whatever floats your boat will give you sufficient understanding, provided you're also comfortable with high-school math.