If there is any truth to OpenAI having filters for the Rothschilds, I'd guess that OpenAI wants to steer clear of repeating, or even hallucinating additions to, conspiracy theories. I would hope so, at least.
Always interesting to compare how things work across industries, but I think comparing tech with sports is both common and problematic.
Problematic because it leads ICs to think of themselves as the quarterback throwing, or the wide receiver catching, the game-winning pass with a minute left, getting lifted into the air, renegotiating a contract for millions more, retiring early, getting inducted into the Hall of Fame, etc.
Why is this so problematic? I think it leads engineers to overvalue short-term wins (getting a particularly tricky implementation correct), sometimes at the cost of their health and wellbeing if they work nights and weekends to get it done. And no one is watching it live on the edge of their seat. Even more distressing, having junior ICs pull heroics to keep the business alive is a tremendous anti-pattern. The point of management is to make the right decisions so the organization steers clear of asking its least experienced contributors to damage their health on a regular basis (...really at all!).
A well-run org couples decision making and seniority. If an IC is making a lot of org-impacting decisions, they are probably as senior as or more senior than most line managers (or should be promoted if this happens regularly) and are comped the same way. Now, it would be nice if decision making were as transparent as a coach making the right or wrong call (the offensive coordinator calling a running play no one watching thought was a good idea). In an ideal org, failures or inaction would be more visible. If a manager is basically not making any decisions (adding no value), there are certainly management failures at several levels (maybe all the way up) keeping folks from getting upset about it. Again, ideally audibles are a very occasional exception.
Where does the money come from? In sports, the competing teams are the product. Players are kind of like features and marketing rolled into one. Companies spend a significant portion of their money on building product and marketing it. I think IC engineers are probably closer to the support staff than to the players on the field... the comparison is problematic.
And which hall of fame? The engineers in our hall of fame are the folks doing things for the first time, trailblazing, largely researchers (Turing Awards, etc.). There is no "got 4 hours of sleep for months delivering a poorly planned product and got an autoimmune disease" hall of fame.
The convo has been a bit light on examples. I think a canonical example of how to achieve this can be found in ACID databases and high-uptime mainframes. Definitely doable, but if performance and large-scale data are also concerns, doing it from scratch (e.g. starting with file handles) is probably a 5-10+ year process to write your own stable DB.
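For a concrete flavor of what the ACID guarantee buys you, here's a minimal sketch using Python's built-in sqlite3; the table, columns, and account names are made up for illustration:

    import sqlite3

    # Toy schema, invented for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
    conn.commit()

    try:
        with conn:  # opens a transaction; commits on success, rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance - 60 WHERE name = 'alice'")
            raise RuntimeError("process dies mid-transfer")  # simulated crash
            conn.execute("UPDATE accounts SET balance = balance + 60 WHERE name = 'bob'")
    except RuntimeError:
        pass

    # Atomicity: the half-finished transfer was rolled back, so no money vanished.
    print(conn.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
    # [('alice', 100), ('bob', 0)]

Getting that behavior yourself, at scale, across crashes and concurrent writers, is the multi-year part.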
There is a lot to be said for getting outside the four walls of a business (or org) to evaluate things. If a problem isn't visible outside those walls (the software isn't buggy enough to lose customers) and doesn't introduce significant future risk to the business (competitors can't out-pace you because of it), it's probably good enough. The real trick, of course, is predicting and communicating why you think one of these is true. That's an essential problem of commercial software development.
Not to pick on you specifically, but I tend to agree with other posters that testing (automated or otherwise) is just an element of programming. Like all elements it needs to be done to taste, but it's pretty essential.
A line of questioning: Do you have time to write clear code? Time for comments? Time to manually test your changes hundreds of times? Time to refactor existing code when adding new code? Time to remove dead code? Time to automate the testing while you still have the little state machine you're working on in your head? Time to add observability to spot performance issues? Time to consider your rollout plan? Time to keep on top of changes once they're in use? Etc.
Automated tests (including unit tests) are just one part of writing correct code. If you are asked, "How long is it going to take?" that implies finishing all aspects of coding required to get something correct into use. You prioritize that, not anyone else.
I fully agree with all your points. My comment was based on my observation that, even granting everything you said, "time to prepare automated tests" is usually one of the first things to shrink as unexpected events cause projects to slip on their timelines.
That's not a good thing, but it is a real thing: healthy organizations should respond by protecting that time, extending deadlines, and assessing what about their process and environment is causing planned timelines to not match reality ... but that doesn't always happen. When it doesn't happen, testing discipline can slip from (for example) "we need unit tests for complex logic modules, and integration tests for real networked service interactions, and at least 70% coverage" to "just get the happy path tested however you can so this feature makes it into the next release".
Given that this will sometimes occur, engineers should always prioritize the highest-value testing work and methodologies, meaning the tests with the best ratio of defects caught to time spent writing them. In good conditions, that means writing the most important tests first and then adding any others they feel they need. In bad conditions, this approach still ensures that the most important automated verification is present.
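To make "highest-value first" concrete, here's a toy, self-contained sketch (the function and test names are invented, not anyone's real suite): the first test guards the defect class most likely to hurt users; the happy-path test is the one that gets cut when time runs out.

    # apply_discount stands in for the "complex logic module" under test.
    def apply_discount(total: float, percent: float) -> float:
        return max(total * (1 - percent / 100), 0.0)

    # Written first: cheap, and it guards the failure mode most likely to hurt users.
    def test_discount_never_goes_negative():
        assert apply_discount(total=50.0, percent=120) == 0.0

    # Written only if time remains: broader, lower-value happy-path coverage.
    def test_typical_discount():
        assert apply_discount(total=100.0, percent=10) == 90.0

    if __name__ == "__main__":
        test_discount_never_goes_negative()
        test_typical_discount()
        print("ok")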
Take it with a grain of salt since I don't own the hardware, but I did have two points of feedback: 1) You may want a more complex/interesting thumbnail; I almost scrolled by since it looked fake/computer-generated. The third photo on the lander (red-ish foreground, blue-ish background) looks more engaging to me. 2) The lander would be a good bit more engaging if I could click to play the videos.
+1 to this. Focusing on use cases would be great. For example, I wanted to see what features were available for your sheets module.
I was willing to spend 5 minutes on a walk. Tried the web app - iOS Safari is not supported :( Downloaded the iOS app and registered. Got a totally blank app - no onboarding, no templates/samples, no obvious way to import from my existing Google Sheets to see how things scaled. I added a data source and a few fields (which felt confusing) and my walk was over.
Google Chrome can't even use its own rendering engine on iOS; you're comparing WebView implementations (which Apple will always be superior at... since they control the OS and every API entitlement).
I'm not sure delegating would have saved him either. I think his boredom with marketing shows a missing essential curiosity about things. Boredom kills!