On the other side of the coin: I have a kid with epilepsy. After learning about the possible effects of K448 (Mozart's Sonata for Two Pianos in D major), we keep a copy of it on all our phones, and it does seem to relax him when he is having a seizure.
I always thought it was funny that the only other piece they had found (as of 2021) with a similar audio signature was from "Yanni Live at the Acropolis": https://www.nature.com/articles/s41598-021-95922-7

Also, I found and watched the Porygon episode in the last year, and it's certainly pretty intense.
I work for a vector database company (Pinecone) and can confirm that most of the mind-blowing built-with-ChatGPT products you see launching every eight-ish hours are using the technique Steve describes: embedding internal data with an LLM, loading it into a vector database like Pinecone, then querying the vector DB for the most relevant information to add to the context window (rough sketch below). And since adding more context to each prompt increases ChatGPT costs and latencies, you really want to find the smallest, most relevant bits of context to include. In other words, search quality matters a lot.
Edit to add: This was an aside in the post but is actually a big deal... With this setup you can basically use an off-the-shelf LLM (like GPT)! No fine-tuning (and therefore no data-labeling shenanigans), no searching for an open-source equivalent (and therefore no model-hosting shenanigans), no messing around with any of that. In case you're wondering how, say, Shopify and HubSpot could launch their chatbots into production in practically a week.
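A minimal sketch of that pattern, assuming the pre-1.0 OpenAI Python client and the v2 Pinecone client (exact calls vary by version); the index name, documents, and question are made up for illustration:

    import openai
    import pinecone

    openai.api_key = "YOUR_OPENAI_KEY"
    pinecone.init(api_key="YOUR_PINECONE_KEY", environment="us-east1-gcp")
    index = pinecone.Index("internal-docs")  # hypothetical index

    def embed(text):
        # One embedding vector per input string.
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
        return resp["data"][0]["embedding"]

    # 1. Embed internal data and load it into the vector DB (done once, offline).
    docs = {"refunds": "Refunds are processed within 5 business days."}
    index.upsert(vectors=[(doc_id, embed(text), {"text": text})
                          for doc_id, text in docs.items()])

    # 2. At query time, fetch only the top-k most relevant chunks --
    #    the smaller the context, the lower the cost and latency.
    question = "How long do refunds take?"
    res = index.query(vector=embed(question), top_k=3, include_metadata=True)
    context = "\n".join(m.metadata["text"] for m in res.matches)

    # 3. Stuff the retrieved context into the prompt for an off-the-shelf LLM --
    #    no fine-tuning required.
    answer = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only this context:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    print(answer["choices"][0]["message"]["content"])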