> So the Astronauts should be able to produce working systems that pass functional tests.
Nah. This is just a new version of an old mistake, one people have been repeating for decades. People keep trying to come up with ways for people to "code" without understanding what's going on. Code generation wizards. Visual programming tools. Model driven architecture. And a bunch more.
The hard part about making software isn't writing a bit of starter code. What we saw with the "code wizard" approach is that somebody clueless could click some buttons and get something working. But then they couldn't maintain it. It just kicked the "understand what's going on" problem down the road from "I don't know how to start" to "I am now trapped in a hell of generated code and people are yelling at me".
If some particular application is so standard that it can be produced with only a shallow understanding of how software works, then the right thing isn't using ChatGPT to produce source code. It's having people with a deeper understanding produce an app that is configurable in the right ways.
Those aren't decades-old "prognostications". They are things that happened.
Could it be different this time? Maybe! But if you want to say that it will, then you have to make the argument. Or maybe you can get ChatGPT to do it for you?
I haven't tried it myself, but others have posted detailed examples where it produces rubbish. That's certainly not the kind of risk to take with safety-critical systems like those an astronaut may rely on.