Google and Facebook failed to remove online scam adverts on their platforms after fraud victims reported them, according to consumer watchdog Which?
Google had failed to remove 34% of the scam adverts reported to it, compared with 26% at Facebook, the study indicated.
Both companies said they removed fraudulent adverts, which are banned on their platforms.
But Which? said a more proactive approach was needed.
The report also indicated:
- 15% of those surveyed had fallen victim to a scam advert and reported it
- of these, 27% had been on Facebook and 19% on Google
- 43% of victims did not report the scam to the technology companies
On Facebook, the biggest reason users did not report the scam was that they doubted anything would be done.
On Google, it was because the fraud victims did not know how to report the scam. Which? researchers said Google's reporting process was complex and difficult to understand.
“The combination of inaction from online platforms when scam ads are reported, low reporting levels by scam victims and the ease with which advertisers can post new fraudulent adverts even after the original ad has been removed suggests that online platforms need to take a far more proactive approach to prevent fraudulent content from reaching potential victims in the first place,” Which? said.
And it has launched a free scam-alert service to warn consumers of the latest tactics used by fraudsters.
“There is no doubt that tech giants, regulators and the government need to go to greater lengths to prevent scams from flourishing,” Adam French, consumer rights expert at Which?, said.
“Online platforms must be given a legal responsibility to identify, remove and prevent fake and fraudulent content on their sites… and the government needs to act now.”
A Facebook representative said: "Fraudulent activity is not allowed on Facebook and we have taken action on a number of pages reported to us by Which?"
Google, meanwhile, said it had removed or blocked more than 3.1 billion ads for violating policies.
“We’re constantly reviewing ads, sites and accounts to ensure they comply with our policies,” the company added.
“We have strict policies that govern the kinds of ads that we allow to run on our platform.
“We enforce those policies vigorously, and if we find ads that are in violation, we remove them.
“We utilize a mix of automated systems and human review to enforce our policies.”
There are so many rules governing what you can advertise on radio, television and in print that by comparison the internet is a Wild West.
Facebook and Google do have rules about what can and cannot be advertised on their platforms – but they are businesses of scale and it would cost them money to check every ad before it goes live.
So they don’t bother.
Reactive moderation is a game of whack-a-mole that leaves consumers vulnerable to scams on platforms they think are trustworthy.
On top of that, a third of the victims surveyed by Which? said they did not bother reporting scam ads because they thought Facebook would not remove them.
And they are right to be skeptical.
On Facebook and Instagram, one company has been using videos and photos of me to sell a face mask it claims I am modelling – but that is impossible, because the mask in those photos is one I made myself.
Facebook lets you report an ad as “misleading” but does not let you explain why. And since the company in question is selling some sort of face mask, its moderators have let the ad stay up.
Google, meanwhile, will not tell you whether it has taken any action on your report – and its ads remain littered with companies that break the search giant’s own rules.
Little wonder consumer groups are now asking for the technology giants to face regulation.