Selfishly, the AGI hype is good for an ML engineer like me. But I have to say there is no hope of solving a problem (especially by 2030!) that one cannot even define.
Problems of the form “create a machine that can do X” are tractable. AGI is not because no one can agree on what intelligence is.