The use of commas and the way it concludes statements are what usually give it away.
The current workplace use cases for GPT are almost worse than crypto mining in terms of wasted compute:
>manager uses GPT to make an overly long email
>readers use GPT to summarize and respond
Then on the search front:
>Microsoft and Google add these tools into their office suites
>they will then have to spend even more resources in Bing and Google Search analyzing web content to figure out whether it was written by AI
Huge amounts of wasted energy on this stuff. I'm going to assume that both Google and Microsoft will add text watermarks at some point to make their models' output easy to identify.
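For what it's worth, the watermarking proposals I've seen (e.g. the "green list" scheme from the Kirchenbauer et al. paper) are statistical, not visible marks. A toy sketch of what the detection side might look like; everything here is simplified and hypothetical:

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    # Hypothetical stand-in: hash the bigram to get a pseudorandom
    # 50/50 "green"/"red" assignment. Real schemes seed the partition
    # from the previous token ID and bias sampling toward green.
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] % 2 == 0

def watermark_z_score(tokens: list[str]) -> float:
    # Watermarked text over-represents green tokens; human text sits
    # near the 50% chance baseline. Report the deviation in std devs.
    n = len(tokens) - 1  # number of bigrams examined
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

print(watermark_z_score("the quick brown fox jumps over the lazy dog".split()))
# near 0 for unwatermarked text; a watermarked sample scores much higher
```

The point is that generation biases sampling toward the green half, so detection is just counting, which is cheap compared to running a classifier over the whole web.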
The problem is, there is value in:
A) Generating content by bot
B) Generating summaries by bot
It's just that the "lossiness" of each conversion step is going to be worrisome when it comes to the accuracy of information being transmitted. I suppose you can make the same argument when it's real humans in the chain.
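To put a toy number on the lossiness worry: if each hop in the chain (write, summarize, respond) preserves some fixed fraction of the facts, retention decays geometrically. The 90% per-hop figure below is purely an assumption:

```python
# Toy model: each hop in the chain preserves a fixed fraction p of the
# original facts, so retention after n hops is p**n.
p = 0.9  # assumed per-hop fidelity -- illustrative, not measured
for hops in (1, 2, 4, 8):
    print(f"{hops} hop(s): {p ** hops:.0%} of facts survive")
# 1 hop(s): 90% ... 8 hop(s): 43%
```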
However, my fear is that we get into this self-feedback loop of bot-written articles that are wrong in some non-obvious way being fed back into knowledge databases for AIs, which in turn are used to generate articles about the given topic, which in turn are used in summaries, etc.
I think traditionally referring back to primary sources was a way of avoiding this game of telephone, but I worry that even "primary sources" are going to start being AI-cowritten by default.
Speaking of primary sources, if you ask the chatbot to reference some facts, it might very well make up plausible-sounding sources. Maybe the reference doesn't exist at all. Maybe the reference exists but it's by a different author. Maybe the reference exists and it's the correct author, but the quote isn't found in the book at all, and to verify now you need to get ahold of the book in some form. It just seems like a chore, all to end up not entirely confident that what you have is true signal anyhow.
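Part of that chore is at least scriptable. A sketch that checks a claimed title/author pair against the public Crossref works API; this catches the "doesn't exist" and "wrong author" cases, though a fabricated quote still means tracking down the actual text:

```python
import requests

def check_reference(title: str, claimed_author: str) -> None:
    # Query Crossref for anything resembling the claimed title, then
    # compare the authors on the closest matches.
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        found = (item.get("title") or ["<untitled>"])[0]
        authors = ", ".join(
            f"{a.get('given', '')} {a.get('family', '')}".strip()
            for a in item.get("author", [])
        )
        verdict = "author matches" if claimed_author.lower() in authors.lower() else "author differs"
        print(f"{found!r} by {authors or '<no authors listed>'} ({verdict})")

check_reference("Attention Is All You Need", "Vaswani")
```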
Many moons ago when I worked in the finance sector, I noticed that a huge amount of work in the industry appeared to consist of groups of humans writing elaborate stories around a few tables of numbers, while other groups tried to extract the numbers from that text back into some more usable tabular form. It always seemed like a huge waste of human time and energy to me; best if it can be efficiently automated.
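To make that concrete, the "numbers back out of the story" half of the round trip is the obviously automatable part. A crude sketch; the regex and the sample sentence are made up for illustration, and real filings would need a proper parser:

```python
import re

# Pull "<metric> rose/fell <pct>% to $<amount> <unit>" figures out of
# prose and back into rows -- the round trip the industry does by hand.
text = ("Revenue rose 12% to $4.2 billion, while operating costs "
        "fell 3% to $1.1 billion.")
pattern = re.compile(
    r"(\w+(?:\s\w+)?)\s(?:rose|fell)\s([\d.]+)%\sto\s\$([\d.]+)\s(\w+)"
)
for metric, pct, amount, unit in pattern.findall(text):
    print(f"{metric:>15} | {pct:>4}% | ${amount} {unit}")
```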