You can park a lot there. No offence but I love how AGI doesn't mean anything. It used to be that AI was a goal post. Now it is AGI. We could use characters from sci-fi culture to describe milestones. In order to achieve robocop level, we must solve the instruction vs data problem.
Well, yeah… turns out that goal wasn’t a good indicator for AGI, so we re-evaluated. That’s changing your hypothesis in the face of evidence, not “moving the goalposts” in the fallacious sense.
What’s the indicator for AGI now? We are so far past the Turing Test it isn’t funny. In fact the models now are too intelligent, you would never think a human would have that much knowledge quickly about a subject you chose at random.