We have already witnessed episodes like https://arstechnica.com/?p=1942936 which pose a big and so far unsolved problem. Out of curiosity I tried khoj on my org-mode notes: it produced some meaningful results, but it fails to find much that a mere rg (ripgrep) finds without any special tricks. Similarly, a classic Google Search tends to produce meaningful results quickly and with far less computation than ChatGPT.
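To be concrete about what "without special tricks" means: a single rg invocation over a notes directory is often all it takes. This is just a sketch; the directory path and the query term are made-up examples, not anything from my actual setup.

```shell
# Set up a throwaway org-mode notes directory (stand-in for ~/org).
mkdir -p /tmp/org-demo
printf '* Meeting notes\nDiscussed the migration plan.\n' > /tmp/org-demo/work.org
printf '* Recipes\nPasta with garlic and olive oil.\n' > /tmp/org-demo/home.org

# Case-insensitive search limited to .org files; -l lists only matching files.
rg -i -l --glob '*.org' 'migration' /tmp/org-demo
```

That is the whole "retrieval pipeline": no indexing step, no embeddings, and it returns in milliseconds even on tens of thousands of notes.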
If that's the current bar, well, LLMs cost far too much for the results they produce. Of course things change, so at some point in the future we might get much better results, and achieving that requires research (data, experiments, and so on). But between research, released early and often, and the "Artificial Intelligence is here" of PR departments, there is a big gap in the middle.