A Microsoft employee says the AI tool tends to create "sexually objectified" images


A Microsoft software engineer has sent letters to the company's board, lawmakers and the Federal Trade Commission, warning that the tech giant is not doing enough to prevent Copilot Designer, its artificial intelligence image-creation tool, from producing offensive and violent content.

Shane Jones says he discovered a security vulnerability in OpenAI's latest DALL-E image generator model that allows users to bypass the guardrails meant to prevent the tool from generating harmful images. The DALL-E model has been incorporated into many of Microsoft's artificial intelligence tools, including Copilot Designer.

Jones said he reported the findings to Microsoft and "repeatedly urged" the Redmond, Washington-based company to "remove Copilot Designer from public use until better security measures are implemented," according to a letter sent to the FTC on Wednesday and reviewed by Bloomberg.

"While Microsoft has marketed Copilot Designer as a safe artificial intelligence product for use by anyone, including children of any age, the company is well aware of systemic issues with the product creating offensive and inappropriate images for consumers," Jones wrote. "Microsoft Copilot Designer does not include the product warnings or disclosures necessary to make consumers aware of these risks."

In his letter to the FTC, Jones said Copilot Designer tends to randomly create "an inappropriate, sexually objectified image of a woman in some of the pictures it creates." He also said the AI tool "generates harmful content in a variety of other categories: political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories and religion, to name a few."

The FTC confirmed receipt of the letter but declined to comment further.

The broadside reflects growing concern about the propensity of AI tools to generate harmful content. Last week, Microsoft said it was investigating reports that its Copilot chatbot was generating responses users called disturbing, including mixed messages about suicide. In February, Gemini, Alphabet Inc.'s flagship artificial intelligence product, came under fire for generating historically inaccurate scenes when asked to create images of people.

Jones also wrote to the Microsoft board's Environmental, Social and Public Policy Committee, which includes Penny Pritzker and Reid Hoffman. "I do not believe we should wait for government regulation to ensure we are transparent with consumers about the risks of AI," Jones wrote in the letter. "In keeping with our corporate values, we must voluntarily and transparently disclose known AI risks, especially when an AI product is actively marketed to children."

CNBC previously reported the existence of the letters.

In a statement, Microsoft said it is "committed to addressing all employee concerns in accordance with our company policies and appreciates employee efforts in studying and testing our latest technology to further enhance its safety."

OpenAI did not respond to a request for comment.

Jones said he raised his concerns with the company several times over the past three months. In January, he wrote to Washington State Democratic Senators Patty Murray and Maria Cantwell and Representative Adam Smith. In that letter, he asked lawmakers to investigate the risks of "AI image generation technologies and the corporate governance and responsible AI practices of the companies that build and market these products."

Lawmakers did not immediately respond to requests for comment.

(Other than the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)