Hacker News

I had the same issue as OP. Initially Bard seemed clueless about Gemini, then:

Me: I see. Google made an announcement today saying that Bard was now using a fine-tuned version of their "Gemini" model

Bard: That's correct! As of December 6, 2023, I am using a fine-tuned version of Google's Gemini model ...




So Bard found the blog post from Google and returned the information in it. No new information was gained.

The LLM itself does not KNOW anything.


You're arguing against a point that wasn't being made. I expect an accurate answer using the tools it has available to it. I don't care what details are trained in and which parts are Internet-accessible as long as it gets to the right answer with a user-friendly UX.

The issue is that it failed to employ chain-of-reasoning. It knows who "it" is - its initial seed prompt tells it that it is Bard. Therefore, asking it, "Are you running Gemini Pro?" should be ~equivalent to "Is Bard running Gemini Pro?", but it treated one of those as too ambiguous to answer.

Whether it needed to search the Internet or not for the answer is irrelevant.
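The equivalence described above can be sketched as a trivial normalization step: the system (seed) prompt declares the assistant's identity, so second-person self-references can be rewritten before answering. The prompt text and helper below are hypothetical illustrations, not Bard's actual internals.

```python
# Hypothetical sketch: a system prompt fixes the assistant's identity,
# so "Are you X?" can be normalized to "Is Bard X?" before answering.

SYSTEM_PROMPT = "You are Bard, a conversational AI assistant made by Google."

def resolve_self_reference(question: str, assistant_name: str = "Bard") -> str:
    """Rewrite second-person self-references using the identity
    declared in the system prompt (a toy normalization step)."""
    return (question
            .replace("Are you", f"Is {assistant_name}")
            .replace("are you", f"is {assistant_name}"))

# Messages as they might be assembled for a chat model call.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": resolve_self_reference("Are you running Gemini Pro?")},
]

print(messages[1]["content"])  # Is Bard running Gemini Pro?
```

Of course a real model resolves "you" implicitly rather than via string rewriting; the point is only that the seed prompt makes the two questions interchangeable.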



