Google’s Woke Gemini AI Underscores Threat of Big Tech Information Gatekeepers
Jarrett Stepman
Google is having a Bud Light moment.
If you’ve spent any time online—especially on social media—in the past week, you’ve probably noticed the controversy over Google’s new artificial intelligence program, “Gemini.”
Gemini is an AI tool and language model made for a general audience that can do all kinds of things, such as answer questions, generate requested images—and generally act like the wokest, smuggest person working in a university “diversity, equity, and inclusion” bureaucracy.
After the program launched in early February, people soon began noticing how some prompts produced ridiculously—and sometimes hilariously—politically correct answers.
For instance, users discovered that when they asked the program to produce images of “Vikings,” it would return mostly black- and Asian-looking people and would invariably answer with something to the effect of “here are diverse images” (emphasis mine) of Vikings, medieval knights, 16th-century European inventors, and so on.
Getting the program to produce images of Caucasian males was difficult, and in some cases nearly impossible.
These standards were not evenly applied. When I asked Gemini specifically to produce images of diverse Zulu warriors of South Africa, it only spit out images of black men and women.
When I then asked Gemini why it couldn’t produce an actually racially diverse image of Zulu warriors, it came up with the old “complexity and nuance” answer that it leans on whenever it sputters and strains to work within the ideologically rigid confines set by its programmers.
It said that the Zulus came from a generally homogeneous culture and that it struggled to depict them any other way for lack of racially diverse examples. Hmm.
The folks at Google didn’t initially seem to have much of a problem with the historically absurd “diverse” images until someone entered the prompt “German soldiers in the 1940s,” which produced a racially diverse set of Wehrmacht stormtroopers.
That, finally, was a bridge too far, and Google shut down the image generator and apologized. I would note here that there actually are examples of racially diverse German soldiers in the 1940s, but by this point you should understand Google’s game.
There are countless other examples of this program producing answers based on carefully calibrated far-left viewpoints.
When I asked it to tell me the first example of legal emancipation in the New World, it said Haiti in 1801. I then pointed out that Vermont abolished slavery in 1777 and asked why that answer wasn’t produced. It acknowledged that I was correct and fell back on the old “nuance and complexity” weasel answer.
Google’s problem isn’t just with DEI nonsense. It has a China problem, too.
When I asked it to generate images of China not ruled by communism, it refused, saying that China was historically tied to communism. It had no problem depicting the United States under communism.
When I asked if Taiwan is basically like China, but not under communism, it again turned to the old “complexity and nuance” get-out-of-answering trick.
Google’s explanation for Gemini’s absurd behavior and far-left political leanings is that the company is still working out bugs in the system and that this was just a technical problem.
Google CEO Sundar Pichai sent out a memo to employees apologizing for the controversy. He said that Google’s goal was to make its products “unbiased”:
Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.
That’s nonsense.
The Gemini launch did the world a favor and revealed just how left-wing and manipulative Google really is. One of the leaders on the project, who has since made his X (formerly Twitter) account private, is reported to have yammered online about “white privilege” and “systemic racism.” The bias was hiding in plain sight.
The consistent leftist political leanings of Gemini’s answers, which read as if they had been written by someone to the left of the average member of the San Francisco Board of Supervisors, didn’t just come out of the blue.
The AI doesn’t have a bias. The people controlling it do.
The product is basically the left-wing, Western liberal version of what I imagine China’s totalitarian information platforms look like. Everything is carefully calibrated to match the narrative the regime wants to foist on its people.
Pairing this AI with the world’s most powerful search engine—which accounts for about 90% of global search traffic—is a terrifying thought. That combination has an astounding amount of potential power to shape and cultivate the views and perceptions of people in the United States and across the globe.
The bottom line is this: Google’s Gemini AI was programmed and designed to be this way. It isn’t a technical glitch that ensured the most DEI-compliant responses to queries; it’s the ideology that clearly pervades the company—and has for many years.
Remember way back in 2017, when Google engineer James Damore was fired for circulating an internal memo arguing that the company’s diversity policies were creating an ideological echo chamber?
Gemini is the latest fruit of that echo chamber, an attempt to shape the world around extreme left-wing narratives. It’s meant to be a tool for our modern, ideologically compromised elite institutions to expunge disagreement and information that might lead to different conclusions about reality.
They will do this by carefully scrubbing and shaping the places where most people find their information. It’s the corruption and hostile takeover of a global, digital town hall.
Gemini AI is to be the left-wing gatekeeper of information and ideas. It’s your guide to ensure you stay on the politically correct path and will nudge you back every time you stray.
In a sense, I’m glad Gemini launched as horribly as it did.
First, it shows just how extreme an ideological cocoon the Big Tech world lives in if it thought its AI program wouldn’t appear biased. Second, it’s a warning of what’s to come when Googlers find ways to make their social engineering stealthier and likely more insidious.
Google may be too big to fail in the way that Bud Light did after angering its customers with its misbegotten but short-lived embrace of a transgender “influencer.”
Google’s search engine, unlike beer, is a hard-to-replicate product. But the mask has slipped, and it will be hard for Google to convince customers that it is unbiased in the increasingly competitive era of AI.
Google inadvertently turned up the heat on the frog a bit too quickly, before the pot could boil. So, in a sense, it’s a good thing that its AI started off so sloppy and absurd. We can see the company for what it really is.