
A scam in northern China that used sophisticated "deepfake" technology to convince a man to transfer money to a supposed friend has raised concerns about the potential for artificial intelligence (AI) techniques to facilitate financial crimes.
China has been tightening its scrutiny of such technologies and applications amid a spike in AI-driven fraud, mostly involving the manipulation of voice and facial data, and in January it adopted new rules to offer legal protection to victims.
Police in Baotou, a city in Inner Mongolia, said the fraudster used AI-powered face-swapping technology to impersonate a friend of the victim during a video call and persuade him to transfer CNY 4.3 million (roughly $622,000, or Rs. 5,15,52,000).
The man transferred the money in the belief that his friend needed to make a deposit during a bidding process, the police said in a statement released on Saturday.
He realized he had been duped only after the friend said he knew nothing about the situation, the police said, adding that they had recovered most of the stolen funds and were working to trace the rest.
The incident sparked debate on the microblogging platform Weibo about threats to online privacy and security, with the hashtag "#AI scams are exploding across the country" drawing more than 120 million views on Monday.