You are correct in many ways, namely on a technical/compatibility level. Without a fundamental understanding of how AGI is structured or operates at a technical level, most safety efforts and policies are moot. If more effort had been focused on the fundamental underpinnings of AGI, and a broader-based funding mechanism had been established for those pursuing them, there would have been the possibility of steering development all along. Having not done so, in order to capture lower-hanging fruit and funding for oneself, now leaves many scrambling to align themselves with work that will no doubt be unveiled suddenly, as no spotlight or funding is giving it any notice in the short/medium term.
Also, safety becomes an easily addressable issue when the system is truly intelligent. When the systems are dumb and statistical in nature, a great deal of 'safety' work amounts to building a pseudo-intelligent control system around an otherwise dumb black box.