San Diego State University researchers have used artificial intelligence (AI) to uncover and combat bitcoin giveaway scams proliferating on Twitter. Their automated tool, known as GiveawayScamHunter, identified 95,111 scam Lists created by 87,617 Twitter accounts between June 2022 and June 2023.
Using GiveawayScamHunter, the researchers extracted website and wallet addresses from the scams, identifying 327 domains linked to giveaway scams and 121 previously unreported cryptocurrency wallet addresses involved in these fraudulent activities. The tool also shed light on the inner workings of these frauds, including the scammers’ modus operandi and an estimate of the number of victims caught during the one-year monitoring period.
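The paper does not publish its extraction pipeline, but pulling domains and wallet addresses out of raw tweet text is commonly done with pattern matching. The sketch below is an assumed, minimal version: the regexes cover only illustrative formats (a URL's hostname, Ethereum-style `0x` addresses, and legacy Bitcoin addresses), not every address type a real system would need.

```python
import re

# Illustrative patterns only -- the study's actual extraction rules are not
# public, so these regexes are assumptions covering common formats.
DOMAIN_RE = re.compile(r"https?://([A-Za-z0-9.-]+\.[A-Za-z]{2,})")
ETH_RE = re.compile(r"\b0x[a-fA-F0-9]{40}\b")                 # Ethereum-style address
BTC_RE = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")   # legacy Bitcoin address

def extract_artifacts(text: str) -> dict:
    """Pull candidate scam domains and wallet addresses out of raw tweet text."""
    return {
        "domains": DOMAIN_RE.findall(text),
        "eth_wallets": ETH_RE.findall(text),
        "btc_wallets": BTC_RE.findall(text),
    }

tweet = ("Claim your prize at https://free-giveaway.example "
         "and send gas to 0x" + "ab" * 20)
print(extract_artifacts(tweet))
```

Deduplicating the extracted addresses across many tweets is what lets a study like this count distinct scam domains and wallets.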
The investigation’s results highlight a troubling reality: the researchers estimate that more than 365 people lost over $870,000 to these crypto frauds during the study period.
The investigation also revealed a worrying pattern: because Twitter Lists are open to anyone, fraudsters have learned to exploit this networking feature effectively.
By training a natural language processing model on data from previously known scams, the team discovered approximately 100,000 instances of giveaway scam Lists and collected information on previously unknown fraudulent websites and wallets.
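The study does not specify which NLP model it used, so the following is only an assumed baseline for the general approach: train a text classifier on labeled examples of known scam Lists versus benign Lists, then apply it to new text. The toy data and the minimal Naive Bayes implementation here are both illustrative, not the researchers’ method.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Minimal bag-of-words Naive Bayes classifier (illustrative sketch)."""

    def fit(self, texts, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for counts in self.word_counts.values() for w in counts}
        return self

    def predict(self, text):
        scores = {}
        total_docs = sum(self.class_counts.values())
        for c in self.classes:
            total_words = sum(self.word_counts[c].values())
            # log prior + Laplace-smoothed log likelihood of each token
            score = math.log(self.class_counts[c] / total_docs)
            for w in tokenize(text):
                score += math.log(
                    (self.word_counts[c][w] + 1) / (total_words + len(self.vocab))
                )
            scores[c] = score
        return max(scores, key=scores.get)

# Toy stand-ins for known scam / benign Twitter List names (assumptions).
scam = ["elon 5000 btc giveaway claim now",
        "free crypto airdrop double your eth",
        "limited bitcoin giveaway verify wallet"]
benign = ["weekly book club reading list",
          "favorite hiking trails",
          "tech journalists to follow"]
clf = NaiveBayes().fit(scam + benign, ["scam"] * 3 + ["benign"] * 3)
print(clf.predict("claim your free btc giveaway"))  # → scam
```

A production system would use a far larger labeled corpus and a stronger model, but the workflow, fit on known scams, then score unseen text, is the same.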
The findings of the study have compelled the researchers to share their knowledge with Twitter and the larger cryptocurrency community, along with the pertinent account handles, domain names, and wallet addresses.
Despite these efforts, the study found that, as of its publication on August 10, a sizable 43.9% of the accounts linked to the frauds remained active. The analysis underscores the urgency of addressing spam accounts and stopping their spread on Twitter.
The use of AI for scam detection comes amid a troubling trend in which bad actors and fraudsters increasingly use AI-driven tools to devise new deception schemes. These tools can rapidly inflate a scammer’s following, manufacturing a false appearance of authority and popularity. By generating phony accounts and interactions, these tactics sustain the illusion of authenticity.
Additionally, fraudsters engage prospective victims through AI-driven chatbots or virtual assistants, offering financial advice, promoting fake tokens and initial coin offerings, or pitching attractive high-yield investment opportunities. AI may also undermine the concept of “social proof of work,” which judges a project’s legitimacy by the size and persistence of its online following.
A frightening example of AI-driven scamming is the “pig butchering” strategy, in which AI-powered personas befriend vulnerable people, often the elderly, before defrauding them.