Hacker News

GREAT QUESTION!

My theory is that since LLMs (Large Language Models) are the ultimate generalists when it comes to knowledge, they "know" a decent amount about nearly every topic and concept known to the human race. I do not believe there is a single human alive today who possesses knowledge about so many different subjects. For example, a physicist may know 100 times more about physics and mathematics than an LLM, but the LLM probably knows a decent amount about 10,000 other disciplines (like plant biology) that the physicist has little to no understanding of.

I believe this multidisciplinary ability makes LLMs uniquely qualified to pick stocks based on the millions of variables that may impact stock prices.

For example, one of the most successful strategies I deployed was when I asked GPT-4 to come up with the attributes of a hypothetical "most investable stock on the NYSE and NASDAQ based on current market conditions." Once it generated the attributes, I used another instance of GPT-4 to find me a stock that matched these attributes. It came up with NVIDIA. The actual prompt I used was more detailed, but you get the idea.
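The two-stage approach described above can be sketched roughly as follows. Note that `call_llm` here is a hypothetical stub standing in for a real chat-completion API call (e.g. to GPT-4), and its canned responses are illustrative only — not real model output or investment advice.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; returns canned text."""
    if prompt.startswith("List"):
        # Stage-1 style prompt: return illustrative attributes.
        return "dominant market share; strong revenue growth; AI demand exposure"
    # Stage-2 style prompt: return an illustrative ticker.
    return "NVDA"

def pick_stock() -> str:
    # Stage 1: ask one LLM instance for the attributes of a hypothetical
    # "most investable stock" under current market conditions.
    attributes = call_llm(
        "List the attributes of the most investable stock on the "
        "NYSE and NASDAQ based on current market conditions."
    )
    # Stage 2: ask a fresh instance to name a stock matching those attributes.
    ticker = call_llm(
        f"Name one NYSE/NASDAQ stock that best matches: {attributes}"
    )
    return ticker

print(pick_stock())  # with the stub above, prints "NVDA"
```

Running the two prompts in separate instances means the second call sees only the attribute list, not the framing of the first prompt.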




So… an LLM doesn’t actually “know” anything. It’s not actually calculating any of the variables that affect stock prices.

Have you double-checked GPT-4's work? Was the stock that best matched the pre-defined attributes really Nvidia?



