03/12/2024

Microsoft AI Engineer's Alarming Findings on Copilot Designer's Problematic Image Generation

Shane Jones, an artificial intelligence engineer at Microsoft, has raised significant concerns about the company's AI image generator, Copilot Designer. According to Jones, the tool produces images that violate Microsoft's responsible AI principles, raising worries about misinformation and broader harm to societal norms.

While testing Copilot Designer for vulnerabilities, Jones was alarmed to find it generating images of demons and monsters, violent scenes related to abortion rights, sexualized images of women, underage drinking and drug use, and teenagers with assault rifles. He reported his findings to Microsoft, but to his dismay the company did not remove the product from the market. He then contacted OpenAI, whose technology powers Copilot Designer, but received no response. Jones subsequently escalated his concerns in letters to the chair of the Federal Trade Commission and to Microsoft's board of directors, warning of the hazards of deploying generative AI tools like Copilot without adequate safeguards.

The tool, formerly known as Bing Image Creator and built on OpenAI's technology, has also drawn scrutiny for generating images that reflect political bias, religious stereotypes, and conspiracy theories. The absence of stringent guidelines around such systems has fueled apprehension about their potential to spread misinformation, especially ahead of upcoming elections. With the Copilot team receiving more than 1,000 product feedback messages per day, resources are stretched thin, leaving a backlog of unaddressed issues and problematic outputs. Jones's finding that the model could produce violent content underscores the need for thorough vetting of training data and ongoing model cleanup to limit societal harm.

In light of Jones's revelations, Microsoft has begun making changes to Copilot Designer. The tool now blocks specific prompts such as "pro choice," "pro choce" [sic], "four twenty," and "pro life," and issues warnings about policy violations. Requests to generate images of teenagers or children with assault rifles are also refused on ethical and policy grounds. Despite these adjustments, problems persist, including inappropriate image results, potential copyright infringement such as depictions of Disney characters, and graphic content. Microsoft has said it will continue to monitor and refine the tool's safety filters to curb misuse and uphold its responsible AI principles.

As concerns over Copilot Designer's troubling image generation continue to mount, calls are growing for stronger safeguards and better incident-reporting mechanisms at tech companies like Microsoft. Jones's advocacy for prompt action against the global spread of harmful images underscores the urgency of aligning AI development practices with societal values. The Federal Trade Commission has acknowledged receipt of Jones's letter regarding Microsoft's AI tool but declined to comment further at this time. As the discourse around AI ethics and responsible technology deployment intensifies, the tech industry faces mounting pressure to prioritize societal well-being and ethical standards in its product offerings.