As a person who tends to write very detailed responses and can churn out long essays quickly, one thing I’ve learned is how important it is to precede the essay with a terse summary.
“BLUF”, or “bottom line up front”. Similar to a TL;DR.
This lets someone skim the message, and it also keeps them from getting lost in the details and completely misinterpreting what I wrote.
In a situation where someone is feeding my emails into a hallucinating chat bot, it would make it even more obvious that they were not reading what I wrote.
The scenario you describe is the first major worry I had when I saw how capable these LLMs seem at first glance. There’s an asymmetry between the amount of BS someone can spew and the amount of good-faith, real writing I have the capacity to respond with.
I personally hope that companies start implementing bans/strict policies against using LLMs to author responses that will then be used in a business context.
Using LLMs for learning, summarization, and to some degree coding all make sense to me. But the purpose of email or chat is to align two or more human brains. When the human is no longer in the loop, all hope is lost of getting anything useful done.
Unfortunately I can't take credit [0], and I think I originally heard this term from a military friend. But it stuck with me, and it has definitely improved my communications.
And I wholly agree re: the last paragraph. It's surprising how often the last thing in a very long missive turns out to be a perfect summary/BLUF.