My first read of this was that they made a joke (not wise when scheduling interviews, sure, but maybe funny) by intentionally responding that way.
This is because my brain couldn't fathom what is likely the reality here: that someone was just piping your email through an AI and pasting the response back unedited and unsanitized, so the first thing you got back was just the first "part" of the AI's output.
I'm with you. The way people respond to things online since LLMs and GenAI went mainstream is baffling. So many comments along the lines of "this is AI" when there are far more ordinary explanations.
Yeah, I don't know about this specific situation, but as someone who is on the job market, is a good developer, but can come off as a little odd sometimes, I often wonder how often I roll a natural 1 on my Cha check and get perceived as an AI imposter.
That's a good point. The major LLMs are all tilted so heavily toward a weird blend of corpo-speak and underpaid third-world English-speaker influence (e.g. "delve", from common Nigerian usage) that having any quirks at all outside that register is a good sign.
Your perception of the reality is spot on. For this round I was hiring for entry level technical support and we had limited time to properly vet candidates.
Unfortunately, what we end up having to do is make some assumptions. If something seems remotely fishy, like that "Memory updated" or a typeface change (ChatGPT doesn't follow your text formatting when you paste into your email compose window), it raises a lot of eyebrows and very quickly leads to a rejection. There are other cases where a candidate's written English is flawless, but the phone interview reveals they barely understand English compared to how they correspond over email/Indeed/etc.
Mind you, this is all before we even get to the technical knowledge part of any interview.
On a related hire, I'm also in the unfortunate position of maybe having to let a new CS grad go, because it seemed like every code change and task we gave him was copy/pasted wholesale through ChatGPT. When presented with a simple code performance and optimization bug, he was completely lost on general debugging practices, which led our team to question his previous work during onboarding. Using AI isn't against company policy (see: small team with limited resources), but personally I see over-reliance on ChatGPT as much, much worse than blindly following Stack Overflow.
A friend of mine works with industrial machines and was once tasked with translating a machine's user manual, even though he doesn't speak English. I do, and I had some free time, so I helped him. As a reference, I was given the user manual for a different but similar machine.
1. The manual was mostly a bunch of phrases that were grammatically correct but didn't really convey much meaning
2. The second half of the manual talked about a different machine than the first half
3. It was full of exceptionally bad mistranslations, and to this day "trained signaturee of the employee" is our inside joke
Imagine asking ChatGPT to write a manual, except ChatGPT has Down syndrome and a heart attack, so it gives you five pages of complete bullshit. That was the real manual that shipped with a machine costing €100,000 or so. And nobody bothered to proofread it even once.
I once worked in the US for a Japanese company that had its manuals "translated" into English and then sent on for polishing. As the parent says, it would be mostly "a bunch of phrases that were grammatically correct, but didn't really convey much meaning". I couldn't spend more than an hour a day on that kind of thing; any more than that and it would start to make sense.
The candidate’s first response? “Memory updated”. That led to some laughs internally and then a clear rejection email.