Re: Google is Winning on Every AI Front
Let me preface this by saying that I’m sure the Gemini team is very proud of their very real success, and I’m glad for them. It’s something to be lauded — especially when the stakes around these sorts of things are so high.
What Does “Winning” Even Mean?
However, I don’t actually care. And by that I mean, “winning” is kind of a meaningless metric. There was a time when Firefox was “winning,” too — look how that turned out.
The endless box-ticking and benchmarking just don’t sell me anymore. A model could write like Hemingway and reason like Einstein — but if all I’m ever using it for is writing help, basic coding, and getting familiar with new topics, then “winning” becomes very subjective.
Google’s Track Record Still Matters
And then there’s the 800-pound gorilla in the room in the form of Alphabet (née Google). Over the last few years, I haven’t felt like Google has the consumer’s best interests at heart — to put it mildly. The ad-serving arm of the company is effectively (if not officially) in charge of products at Google. That makes it very concerning to use any of their products at all — let alone to feed their LLM private questions and thoughts that will undoubtedly be funneled into their ad system to build a larger profile of you.
Google has a lot to prove to me in order to earn my trust again — if that’s even possible.
We Already Have Excellent Models
So, good for them. I’m already quite happy with ChatGPT and Claude. Both have helped me with everything I’ve asked of them — whether it’s writing advice, code generation, or just being a sounding board for my inane rambling.
I bet most people use them in much the same way. So, sure, if you’re doing some very specific research, or need advanced reasoning or larger contexts, then maybe — for those values of “best” — Gemini is your guy.
How to “Win” in My Eyes
What would really convince me that someone is “winning” is building LLMs that aren’t covert tools for information gathering. Respecting users’ data and privacy goes a long way toward making AI more trustworthy.
Give me tools that help me inspect how the model was trained, what data it was trained on, and how my queries are processed and used. Transparency shouldn’t be an afterthought — it should be the standard.
Working With AI, Not For It
I tend to treat LLMs like people — helpful people who carry lots of knowledge that I can lean on when I’m stuck or exploring something new. I don’t need them to “serve” me. I want to work with them — to talk through a problem, build understanding, and create something useful together.
That sense of collaboration builds trust. It makes the tool better for me, and it makes me better, too.
Trust Is Important
Until then, I’ll keep choosing the tools I feel I can trust. They may not be the most advanced, but they respect me more than Google does.
And honestly? That makes all the difference.