You say we shouldn't anthropomorphize GPT, then proceed to use an example that implicitly anthropomorphizes it anyway, just so you can deflect some broad generalizations made against it.
All of this seemingly avoids the real questions posed by either side of this debate. Is this technology actually useful to humanity? If not, how much expense is required to make it useful? And would that money be better invested directly in people?
I find solving problems and facing challenges to be the most satisfying human activity. I honestly can't comprehend the miles of annoyed apologia written in favor of 'talking' to ChatGPT.