glguy: I actually think turning more and more processes over to AI in the long term is a good thing. There's an argument that you're already a cyborg as a tool user and a language speaker (languages aren't genetic, well, mostly anyway; there are some genetic adaptations to tonal languages), and I guess we're all technically Haskell cyborgs! But a human using an AI is already a cyborg, and it's useful to be aware of that. The question is whether we do this quickly or slowly. My opposition to the AGI nonsense is that it's a hype machine promising something that, if it actually delivers, has a high risk of going wrong, either socially or existentially. ANI, and broader meta-ANI, is already the present and is already causing social impacts; it's better to reckon with the ANI reality than the AGI future, although ANI will eventually lead people to AGI.