
Microsoft names developers behind illicit AI tools used in celebrity deepfake scheme

 


Microsoft said in a legal filing on Thursday that four foreign developers and two based in the United States illegally accessed its generative AI services, reconfigured them to enable the creation of harmful content such as celebrity deepfakes, and then resold access to the modified tools. In a blog post about its amended civil complaint, Microsoft said customers of the scheme used the altered tools to create "non-consensual intimate images of celebrities and other sexually explicit content." The affected services included Microsoft's Azure OpenAI services. The lawsuit, filed in December in federal court in Virginia, was unsealed in January. Microsoft did not name the celebrities out of concern for their privacy. The company also said it “excluded synthetic imagery and prompts from our filings to prevent the further circulation of harmful content.”

 The developers of the malicious AI tools are part of a “global cybercrime network” that Microsoft tracks as Storm-2139, the blog post said. 

Microsoft said the two U.S.-based individuals are in Illinois and Florida but withheld their names because of ongoing criminal investigations. The four foreign developers, the company said, are Arian Yadegarnia, aka “Fiz,” of Iran; Alan Krysiak, aka “Drago,” of the United Kingdom; Ricky Yuen, aka “cg-dot,” of Hong Kong; and Phát Phùng Tấn, aka “Asakuri,” of Vietnam.

Microsoft said it is preparing criminal referrals to domestic and international law enforcement agencies. The company said Storm-2139 gained access to the AI services by exploiting what it described as "exposed customer credentials scraped from public sources." After Microsoft's initial filing, the court issued a temporary restraining order and preliminary injunction that allowed the company to seize a website tied to Storm-2139, a disruption Microsoft said enabled its investigation to go deeper.

 “The seizure of this website and subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another,” said the blog post, written by Steven Masada, assistant general counsel of Microsoft’s Digital Crimes Unit.

 As chatter about the lawsuit increased, participants in the group’s communications channels also doxed Microsoft lawyers, “posting their names, personal information, and in some instances photographs,” the company said.  The doxing backfired, though, and some suspected members of Storm-2139 emailed Microsoft, “attempting to cast blame on other members of the operation.” 

 The six individuals mentioned in the blog post are among 10 “John Does” listed in the original complaint, Microsoft said.


Microsoft has updated a lawsuit to name four developers it says were part of an effort to evade its generative AI guardrails and enable the creation of celebrity deepfakes, among other things.

Why it matters: Microsoft and other companies have implemented safeguards to prevent the misuse of generative AI, but those safeguards only work if the legal and technological systems behind them can actually be enforced.

Driving the news: Microsoft named four developers it says are part of the Storm-2139 global cybercrime network: Arian Yadegarnia, aka "Fiz," of Iran; Alan Krysiak, aka "Drago," of the United Kingdom; Ricky Yuen, aka "cg-dot," of Hong Kong; and Phát Phùng Tấn, aka "Asakuri," of Vietnam.

 Microsoft said that members of Storm-2139 used compromised customer credentials to hack accounts with access to generative AI services and then bypassed the tools' safety guardrails.

The individuals then sold access to the accounts, offering "detailed instructions on how to generate harmful and illicit content, including non-consensual intimate images of celebrities and other sexually explicit content."

Microsoft filed the initial lawsuit in December, identifying the defendants only as "John Does" at the time. The company commented publicly on the case when the lawsuit was unsealed in January.

What they're saying: "We are pursuing this legal action now against identified defendants to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology," Microsoft said in a blog post on Thursday.

 The intrigue: Microsoft said the four named defendants aren't the only participants in the scheme it has identified.

"While we have identified two actors located in the United States — specifically, in Illinois and Florida — those identities remain undisclosed to avoid interfering with potential criminal investigations," the company said. "Microsoft is preparing criminal referrals to United States and foreign law enforcement representatives."

 Between the lines: Microsoft said a court order allowing the company to seize a "website instrumental to the criminal operation," helped both disrupt the scheme and uncover its participants.

 "The seizure of this website and subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another."

Yes, but: Microsoft said the case also led to the "doxxing" of its lawyers, with their names, personal information and, in some instances, photographs posted online.
