Which? surveyed 2,000 UK consumers in February and found nearly 15% had been victims of online scams.
The consumer group says the findings suggest that the reactive approach to fraudulent content taken by online platforms is not fit for purpose.
However, Google and Facebook have defended their record on cracking down on scams, pointing to the substantial resources they devote to weeding out scammers.
Which? found that a third (34%) of victims who reported an advert that led to a scam on Google said the advert was not taken down by the search engine.
A quarter (26%) of victims who reported an advert on Facebook that resulted in them being scammed said the advert was not removed by the social media site.
Which? says flaws with the current “reactive approaches” to tackling online scams make a “clear case” for online platforms to be given legal responsibility for preventing fake and fraudulent adverts from appearing on their sites.
Which? wants the government to include content that leads to online scams in its Online Safety Bill.
Of those who said they had fallen victim to a scam as a result of an advert on a search engine or social media, more than a quarter (27%) said they had fallen for a fraudulent advert they saw on Facebook and one in five (19%) said a scam targeted them through Google adverts. Three per cent said they had been tricked by an advert on Twitter.
More than two in five (43%) scam victims conned by an advert they saw online, via a search engine or social media ad, said they did not report the scam to the platform hosting it. The biggest reason for not reporting a scam advert to Facebook was that victims did not think the platform would do anything about it or take it down. For Google, the main reason for not reporting the scam ad was that the victim did not know how to do so, a reason given by a third (32%) of victims.
Adam French, consumer rights expert at Which?, said: “Our latest research has exposed significant flaws with the reactive approach taken by tech giants including Google and Facebook in response to the reporting of fraudulent content – leaving victims worryingly exposed to scams.
“Online platforms must be given a legal responsibility to identify, remove and prevent fake and fraudulent content on their sites. The case for including scams in the Online Safety Bill is overwhelming and the government needs to act now.”
Which? has recently launched a free scam alert service to help consumers.
In a response to Which?, Google said it was constantly reviewing ads and websites and had blocked or removed over 3.1bn ads for violating its policies. It said it encouraged people to flag "bad actors" via its support tool, which can be found by searching "How to report bad ads on Google".
Facebook said it had taken action to remove a number of pages reported to it by Which?. It said it has a 35,000-strong team of safety and security experts who work alongside AI to "proactively identify and remove this content". It said its teams disable "billions" of fake accounts every year and that it has donated £3m to Citizens Advice to deliver a UK Scam Action Programme.
A Twitter spokesperson said the organisation took “robust enforcement action” when it identified violations of its rules.
• Opinium conducted an online survey for Which? of 2,000 nationally representative UK adults aged 18+ in February. Of those surveyed, 298 people said they had fallen victim to a scam through an ad on either a search engine or social media and had reported it to the company.