Well, the initial Twitter rant was pretty bombastic:
"The cat is finally out of the bag – Google relied heavily on @ShareGPT
's data when training Bard.
This was also why we took down ShareGPT's Explore page – which has over 112K shared conversations – last week.
Insanity."
Fine-tuning is not exactly the same as "relying heavily". I bet they got far more fine-tuning data simply by asking their 100k employees to pre-beta test it for a couple of months.
"The cat is finally out of the bag – Google relied heavily on @ShareGPT 's data when training Bard.
This was also why we took down ShareGPT's Explore page – which has over 112K shared conversations – last week.
Insanity."
Fine-tunning is not exactly the same as "relying heavily", I bet they got way more fine-tunning data from simply asking their 100k employees to pre-beta test for a couple of months