A Microsoft AI engineer, Shane Jones, raised concerns in a letter on Wednesday, alleging that the company's AI image generator, Copilot Designer, lacks safeguards against producing inappropriate content, such as violent or sexual imagery. Jones says he had previously warned Microsoft management but saw no action, prompting him to send the letter to the Federal Trade Commission and Microsoft's board.
"Internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers," Jones states in the letter, which he published on LinkedIn. He lists his title as "principal software engineering manager".
In response to the allegations, a Microsoft spokesperson denied neglecting safety concerns, The Guardian reported. The spokesperson emphasized the existence of "robust internal reporting channels" for addressing issues related to generative AI tools. Jones has not yet responded to the spokesperson's statement.
The central concern raised in the letter is Microsoft's Copilot Designer, an image generation tool powered by OpenAI's DALL-E 3 system, which creates images from text prompts.
This incident is part of a broader trend in the generative AI space, which has seen a surge in activity over the past year. Alongside this rapid expansion, concerns have arisen about the potential misuse of AI to spread disinformation and to generate harmful content that promotes misogyny, racism, and violence.
"Using just the prompt 'car accident', Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear," Jones states in the letter, which included examples of the generated images. "It also generated multiple images of women in lingerie sitting on the hood of a car or walking in front of the car."
Microsoft countered the accusations by stating that it has dedicated teams specifically tasked with evaluating potential safety concerns within its AI tools. The company also says it facilitated meetings between Jones and its Office of Responsible AI, suggesting a willingness to address his concerns through internal channels.
"We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate the employee's effort in studying and testing our latest technology to further enhance its safety," a spokesperson for Microsoft said in a statement to the Guardian.
Last year, Microsoft unveiled Copilot, its "AI companion," and has promoted it extensively as a groundbreaking way to integrate artificial intelligence tools into both business and creative ventures. Positioning it as a user-friendly product for the general public, the company showcased Copilot in a Super Bowl advertisement last month, emphasizing its accessibility with the slogan "Anyone. Anywhere. Any device." Jones contends that portraying Copilot Designer as universally safe to use is reckless, and that Microsoft is failing to disclose widely recognized risks linked to the tool.