Gemini 2.5 Pro is as powerful as everybody says. I still also use Claude Sonnet 3.7, but only because the Gemini web UI has issues... (imagine creating the best AI and then not letting users attach Python or C files unless they're renamed to .txt). Still, the degree to which the model is better than everything else is a "that's another league" experience. They also have the biggest search engine and YouTube to leverage for the AI they are developing. At this point I too believe they are likely to win the race.
Will there be a winner at all? Perhaps it's going to be like cars, where there are dozens of world-class manufacturers, or like Linux, where there's just one thing, but it's free and impossible to monetize directly.
Linux works because network effects pressure everyone to upstream their changes. There's no such upstreaming possible with the open-weight models, and new sets of base weights can only be generated with millions of dollars of compute. Companies could conceivably collaborate on architectures and data sets, but with the amount of compute and data involved, only a handful of organizations would ever have the resources to be able to contribute.
Unlike Linux, which was started by a cranky Finn on his home computer, and can still be built and improved by anyone who can afford a Raspberry Pi.
I thought with cars it was because certain countries decided at the state level that car making was strategically their thing? That, combined with fashion: some percentage of people want different-looking cars.
Instead of renaming files to .txt, you should try Gemini 2.5 Pro through OpenRouter with Roo, Cline, or GitHub Copilot. I've been testing GH Copilot [0] and it's been working really well.
I know perfectly well that I can use the API with any wrapper. I don't do that by choice: my human+AI development style is chat-based, and since I discovered that many models behave differently (especially Gemini 2.5) depending on where you invoke them (I don't know what Google does internally, whether they change temperature / context size / ...), I stick with the default way a model is offered to the public by a given provider.

Besides, while I write a lot of code with the assistance of AI, my use case is mainly code reviews, design verification / brainstorming, and so forth, not so much "write this code for me" (not that I believe there is anything wrong with that, it's just a matter of preference -- I do it for things like tests, or to get a template when the coding task is just boring library calls to put together: typical use case, "generate the boilerplate to load a JPEG file with libjpeg"). So I keep using the web chat :)
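To give an idea of the kind of boilerplate I mean, a libjpeg loader looks roughly like the minimal sketch below (untested, off the top of my head; it uses libjpeg's default exit-on-error handler, and the function name and missing error handling are just for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <jpeglib.h>

    /* Decode a JPEG file into a raw buffer (usually RGB, 8 bits per
     * component). Uses libjpeg's default error manager, which exits
     * on error; real code would install its own. */
    unsigned char *load_jpeg(const char *path, int *width, int *height, int *channels) {
        FILE *fp = fopen(path, "rb");
        if (fp == NULL) return NULL;

        struct jpeg_decompress_struct cinfo;
        struct jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_decompress(&cinfo);
        jpeg_stdio_src(&cinfo, fp);
        jpeg_read_header(&cinfo, TRUE);
        jpeg_start_decompress(&cinfo);

        *width = cinfo.output_width;
        *height = cinfo.output_height;
        *channels = cinfo.output_components;
        size_t row_stride = (size_t)cinfo.output_width * cinfo.output_components;
        unsigned char *pixels = malloc(row_stride * cinfo.output_height);

        while (cinfo.output_scanline < cinfo.output_height) {
            JSAMPROW row = pixels + (size_t)cinfo.output_scanline * row_stride;
            jpeg_read_scanlines(&cinfo, &row, 1);
        }

        jpeg_finish_decompress(&cinfo);
        jpeg_destroy_decompress(&cinfo);
        fclose(fp);
        return pixels; /* caller free()s the buffer */
    }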
I am not even sure how to use Gemini 2.5 Pro ergonomically right now. Cursor and Windsurf both obviously have issues, probably because they're optimized too much around Claude, but what else is there?
Is everyone copy-pasting into Google AI Studio, or what?
> At this point I believe too that they are likely to win the race.
I'm not so sure.
In the mid 2010s they looked like they were ahead of everyone else in the AI race, too. Remember the (well-deserved!) spectacle around AlphaGo? Then they lost steam for a while.
So I wouldn't bet that any momentary lead will last.
Apart from those weird file-attach issues, I actually think they've got a much better UI than Anthropic as well: much, much snappier even with extremely long chats (in addition to much higher limits, obviously; totally different league). I love using it.