Well, yes and no. This AI would be more street-smart than GPT, but only because it would grasp the concept of what "danger" actually is!
I think of it more in the sense that learning is more abstract than fact learning. From experience, we assume that there are fact learners and principle learners, but there are also mixtures of the two!
The generally accepted model holds that in order to do high-level math, for instance, you need to understand the basics, but for me many of those concepts only actually clicked in college. That did not stop me from applying them with success a lot earlier, though. Multiplication in kindergarten is fact learning too, for instance!
In Germany we also have the term "Fachidioten", which loosely translates to people who are so smart in their field that they are unable to see problems from different angles. This is more or less what I think a mega GPT model turns into, especially because of selection bias in the training data.
Validity of output (truth) can only be achieved through trust in the source, which is always relative to the context of the topic. Hence a selectively trained model will always return the data you feed it, including all its biases. Even if you have it crawl all of the internet, the Library of Alexandria, and every written word on the planet you can find, it will still return to you the generally accepted consensus.