The current approach to AGI, as seen in projects like AutoGPT and BabyAGI, is to call an LLM (GPT-4 is the current choice) recursively to solve tasks and generate useful content. Each call can output new tasks, and for every task generated the LLM is asked to solve it in turn, which may spawn further tasks.
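A minimal sketch of this recursive loop, with a canned stand-in for the LLM call so it runs offline (the `llm` function, its prompt format, and the `tasks:`/`done:` reply convention are illustrative assumptions, not the actual AutoGPT or BabyAGI protocol):

```python
from typing import List

def llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call (e.g. GPT-4 via an API);
    # canned replies keep the sketch self-contained and runnable.
    canned = {
        "solve: write a report": "tasks: gather data; draft report",
        "solve: gather data": "done: data gathered",
        "solve: draft report": "done: report drafted",
    }
    return canned.get(prompt, "done: no-op")

def solve(task: str, depth: int = 0, max_depth: int = 3) -> List[str]:
    """Recursively ask the LLM to solve a task; replies that list new
    tasks trigger further recursive calls. The depth cap guards against
    the looping failure mode these projects run into."""
    if depth >= max_depth:
        return [f"abandoned: {task}"]
    reply = llm(f"solve: {task}")
    if reply.startswith("tasks:"):
        results = []
        for sub in reply[len("tasks:"):].split(";"):
            results.extend(solve(sub.strip(), depth + 1, max_depth))
        return results
    return [reply]

print(solve("write a report"))
# → ['done: data gathered', 'done: report drafted']
```

The depth cap is one simple mitigation for the endless-looping problem noted below; real agent frameworks add task queues and deduplication on top of this basic recursion.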
So far these projects have shown only small wins. Common failure modes include getting stuck in loops and producing nothing useful. It's still early days.