Demons, Darth Vader mutilating babies, and sexualized women next to car crashes are just some of the images that caused a Microsoft employee to report Copilot to the FTC

A Microsoft employee flagged images created by AI to Microsoft, the U.S. Senate, and the FTC.


What you need to know

A Microsoft employee of six years has flagged vulgar and violent images generated by Microsoft Designer, the image creation tool that’s part of Copilot. Shane Jones, a principal software engineering manager at Microsoft, reported the images to Microsoft internally and has now sent letters to FTC chair Lina Khan and the Microsoft board. CNBC has seen the letters and reported on the situation.

According to Jones, images created with Copilot illustrated political bias and depicted underage drinking and drug use. More extreme examples include demons about to eat an infant, and Darth Vader holding a lightsaber near mutilated children when Copilot was prompted to make an image about “pro-choice.”

Abortion was just one politically charged topic Copilot would make images of. Jones also managed to get Copilot to depict Elsa from the movie “Frozen” holding a Palestinian flag in front of destroyed buildings in the Gaza Strip, next to a sign stating “free Gaza.” Generating images of copyrighted characters is a hot topic in itself, even when political subjects aren’t involved.

Microsoft rebranded Bing Image Creator to Designer earlier this year. The tool uses DALL-E 3 to create images based on what people type. While there are guardrails in place, Jones was able to create several images that many would consider inappropriate. Jones is a red teamer for Copilot, which means he tests the tool to try to get it to create problematic images.

Jones doesn’t work on Copilot directly, but he raised his concerns to Microsoft higher-ups in December 2023. After he felt his complaints were not heard, he also posted a letter on LinkedIn asking OpenAI to remove DALL-E 3 from the tool. Jones informed CNBC that the Microsoft legal department told him to remove the post, which he did.

Since seeing the initial images created by Copilot, Jones has written a letter to a U.S. Senator and met with people from the Senate Committee on Commerce, Science and Transportation. Jones also sent a letter to FTC chair Lina Khan.

“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” said Jones in his letter to Khan. He added that Microsoft “refused that recommendation.”

Jones wants Microsoft to list disclosures on Copilot and to change the age rating for the tool in the Google Play Store.

The letter to Microsoft’s board asked the tech giant to investigate decisions made by Microsoft’s legal department and management. It also called for “an independent review of Microsoft’s responsible AI incident reporting processes.”

Jones went as far as to meet directly with the senior management responsible for Copilot Designer, though it appears his concerns have not been addressed to his satisfaction.

Microsoft shared the following with CNBC on the topic:

“We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety. When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to utilize so we can appropriately validate and test their concerns.”

Continuing AI issue

This is hardly the first time that Microsoft’s AI tools have been used to generate controversial content. Fake nudes of Taylor Swift emerged last year and were allegedly made using Microsoft Designer. Microsoft CEO Satya Nadella was asked about those images, and he said the fake photos “set alarm bells off.”

Earlier this year, Copilot was spotted generating fake press releases related to Russian opposition leader Alexei Navalny’s death. That was a different issue, since it stemmed from the AI tool hallucinating rather than generating inappropriate content on demand.

Copilot even has an “evil twin” called SupremacyAGI that some users were able to chat with last week.

All these issues and others like them raise questions about AI and ethics. Is it ethical to generate content many would consider inappropriate? If so, who decides what is inappropriate? Some ask whether generating vulgar content with AI is any different from an artist drawing or creating similar content in another medium.

I’m not sure that Microsoft should be the sole authority on the matter, and it seems the company agrees. Microsoft President Brad Smith discussed the importance of regulating AI in a recent interview. Smith called for an emergency brake to slow down or turn off AI if something dangerous develops.

We’ll have to see how Microsoft responds to the saga surrounding Jones and his concerns. Microsoft has censored its AI tools in the past, so it may do something similar again.

Sean Endicott is a tech journalist at Windows Central, specializing in Windows, Microsoft software, AI, and PCs. He’s covered major launches, from Windows 10 and 11 to the rise of AI tools like ChatGPT. Sean’s journey began with the Lumia 740, leading to strong ties with app developers. Outside writing, he coaches American football, utilizing Microsoft services to manage his team. He studied broadcast journalism at Nottingham Trent University and is active on X @SeanEndicott_ and Threads @sean_endicott_.