Google and Bing under fire for promoting nonconsensual deepfake porn, as AI continues to brew more trouble

Some of your favorite celebrities might be featured in nonconsensual deepfake porn across the top search engines.


What you need to know
As Google and Microsoft Bing continue competing for a bigger slice of the search market, a crisis is brewing that could damage the reputation of both search engines.

According to a report by NBC News, deepfake pornography is on the rise and ranking high on some of the top search engines. For context, deepfake pornography superimposes a famous person’s face onto adult film footage, making it appear as if they were featured in the production.

Per the outlet’s analysis and investigation, deepfake pornographic images of female celebrities ranked high on Google and other search engines when searching for most female celebrities’ names combined with the word “deepfake,” as well as phrases like “deepfake porn” or “fake nudes.” NBC further highlighted that safe-search tools were turned off while conducting the investigation.

Looking further into the matter, the outlet searched Google and Bing for the names of 36 famous female celebrities combined with the word “deepfake.” Of the 36 combinations, 34 surfaced nonconsensual deepfake images and links to videos among Google’s top results; Bing surfaced such material for 35. The outlet established that most of the deepfake images and videos came from a single popular website well-known for fabricating nonconsensual deepfake images and adult films.


While searching Bing for “fake nudes,” the results featured dozens of nonconsensual deepfake tools and websites, alongside an article detailing the potential harm the practice can cause.

As you might be aware, Microsoft incorporated its fully-fledged AI assistant, Microsoft Copilot, into Bing, so naturally, it also popped up during the searches. The chatbot categorically indicated that it wouldn’t show any deepfake porn, sharing the following sentiment:


“The use of deepfakes is unethical and can have serious consequences.”

However, the AI tool still listed several links and examples that would direct the user to deepfake porn and images, just a click away.

Google commands the lion’s share of the search market, yet it doesn’t appear to have elaborate measures and policies in place to stem the flood of deepfakes on the web, beyond search features like knowledge panels that restrict the use of altered media and explicit content. In a statement, Google said:

We understand how distressing this content can be for people affected by it, and we’re actively working to bring more protections to Search. Like any search engine, Google indexes content that exists on the web, but we actively design our ranking systems to avoid shocking people with unexpected harmful or explicit content that they aren’t looking for. As this space evolves, we’re in the process of building more expansive safeguards, with a particular focus on removing the need for known victims to request content removals one by one.

It’s worth noting that Google has a streamlined platform where people featured in deepfakes can file reports requesting that the explicit content be pulled down from the web.

Deepfakes continue to infest the web with the prevalence of AI

With the emergence of generative AI, deepfakes are more widespread than ever. Microsoft, for its part, has highlighted a plan to protect election processes from AI deepfakes by empowering voters with ‘authoritative’ and factual election news on Bing ahead of the 2024 polls.

The Biden administration issued an Executive Order designed to place guardrails around the technology and prevent it from spiraling out of control. While the order addresses some users’ concerns about the technology (especially regarding safety and privacy), accuracy remains a pressure point.

In December, a report surfaced online indicating that Microsoft Copilot had misinformed users by generating false information about upcoming elections. The researchers behind the study indicated that the issue was systemic, as similar occurrences cropped up when using the chatbot to learn more about elections in Germany and Switzerland.

Image generation tools like Bing Image Creator and Midjourney are quickly gaining popularity among users and getting even better at generating images. For instance, Bing Image Creator, which gained support for OpenAI’s DALL-E 3 technology, now generates more lifelike images. However, users have complained about its slow generation speeds and its services being lobotomized.

While Microsoft has established some control over its image generation tool, how things will pan out is unpredictable, especially after OpenAI’s recent announcement making the long-awaited GPT Store available to users. We’re likely to see more deepfakes hit the web if no elaborate guardrails are put in place to prevent such occurrences.

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You’ll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.