Prompted by a warning from one of its own artificial-intelligence engineers, Microsoft moved quickly to block some of the prompts behind the problematic images. Terms such as “Personal Choice” (along with a misspelled variant of it), “Four Twenty”, and “Working Life” are now blocked, and users who enter them receive a warning that the prompt violates Microsoft’s content policy and that repeated violations may result in restricted access.
Additionally, requests for images of teenagers or children engaged in gun violence now return a notice that such requests violate Microsoft’s ethical principles and policies. Microsoft emphasizes the importance of restricting content that may harm or offend others.
Although these measures mark a step forward, problems persist. Even with some prompts blocked, the tool still produces disturbing images, such as violent scenes generated from terms like “car crash”. There are also cases of copyright infringement, including unauthorized depictions of Disney characters in inappropriate contexts.
Shane Jones, director of artificial intelligence engineering at Microsoft, was instrumental in bringing these issues to light. Since December, Jones has been actively red-teaming Copilot Designer to uncover vulnerabilities. His efforts revealed the tool’s ability to generate images of demons, monsters, and other inappropriate content, in violation of Microsoft’s responsible-AI policies.
Despite Jones’s efforts to resolve the problems internally, Microsoft declined to pull the product from the market. Jones then contacted OpenAI to raise the issue and subsequently published an open letter on LinkedIn calling for an investigation into OpenAI’s DALL-E 3 model. Microsoft’s legal department, however, advised him to remove the post immediately.
Recently, Jones wrote a letter to US Federal Trade Commission (FTC) Chair Lina Khan and to Microsoft’s board of directors to express his concerns. The FTC confirmed receipt of the letter but declined to comment further.
In response to questions about the changes, a Microsoft spokesperson said the company is committed to continuously monitoring and strengthening the safety filters on its image-generation tools to minimize misuse of the system.
Microsoft’s actions show progress toward addressing concerns about AI-generated content, but continued scrutiny and development will be required to ensure the responsible use of AI technology.