Rob's Notes 32: Post-Click Scam Ad Notifications
Is this the next battleground for big tech platforms?
Crypto investment scams are an epidemic. There are many documented cases where they start with an ad and then continue in encrypted group or direct-message chats. But platforms can do more than they do today to curb them. When an ad has been live for some time and the platform later determines it is fraudulent and takes it down, the platform should notify the users who clicked on it. In this note I’ll explain how and why. [a]
Understanding why notification matters requires examining how these scams work. Crypto investment scams rarely involve instant theft. Instead, they follow multi-stage patterns over days or weeks. In one variant, a victim clicks a convincing deepfake ad, creates an account on a professional-looking ‘investment’ site, and gets contacted by a “customer service representative” who builds trust. Small initial investments appear to generate returns. They may even be able to make small withdrawals. Only later, when the victim tries to withdraw larger amounts, do endless fees and obstacles emerge. By then, some have lost everything.
The scammers usually don’t get an immediate payoff. Throughout this timeline (which can span days to months), there’s a window for intervention. A simple notification stating “The investment ad you clicked has been verified as a scam. Do not send money to any platform accessed through this ad” could stop the fraud mid-stream.
This is technically possible
Platforms know who clicked on each ad, and you can get a sense of this yourself. For example, Meta’s Facebook app has a feature called “Recent Ad Activity” that shows ads you’ve clicked or have ‘saved’ for later, and Google’s “My Activity” includes information about ads you’ve clicked on. So knowing which users to notify is trivial; the real questions are where to notify people and how to explain these notifications.
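To make the mechanics concrete, here is a minimal sketch in Python of the lookup-and-notify step. The record shape, function names, and 90-day lookback window are my own assumptions for illustration, not any platform’s actual schema or API; the point is only that the data needed for this already exists.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical click record; platforms already keep logs like this
# (it is what powers features such as "Recent Ad Activity" / "My Activity").
@dataclass
class AdClick:
    user_id: str
    ad_id: str
    clicked_at: datetime

def users_to_notify(clicks: list[AdClick], flagged_ad_id: str,
                    lookback_days: int = 90) -> set[str]:
    """Users who clicked the flagged ad within the lookback window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=lookback_days)
    return {c.user_id for c in clicks
            if c.ad_id == flagged_ad_id and c.clicked_at >= cutoff}

def queue_scam_notifications(user_ids: set[str], ad_id: str) -> None:
    """Stand-in for a real notification pipeline (push, email, in-app)."""
    for uid in sorted(user_ids):
        print(f"notify {uid}: the investment ad ({ad_id}) you clicked has been "
              "removed as a scam; do not send money to any site reached through it")

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    clicks = [
        AdClick("user_a", "ad_123", now - timedelta(days=3)),
        AdClick("user_b", "ad_456", now - timedelta(days=1)),
    ]
    # Suppose ad_123 is re-reviewed after going live and found fraudulent.
    queue_scam_notifications(users_to_notify(clicks, "ad_123"), "ad_123")
```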
For context: ads may change approval state during the time they are running. Some degree of ad review, whether automated (machine learning and AI), human, or both, typically happens before ads run, but ads may also be “re-reviewed” over time. As Meta says, “ads may be reviewed again, including after they are live”. People who see ads they think are problematic sometimes report them as scams, spam or other things, which may trigger some of these additional reviews or other limitations (although companies are generally opaque about exactly what these procedures are). [b]
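One way to frame why re-review matters for this proposal: the event that should trigger post-click notifications is the specific transition from “approved and live” to “removed as fraudulent” after clicks have already accumulated. A hypothetical sketch of that trigger follows; the state names are mine, not Meta’s or Google’s.

```python
from enum import Enum, auto

# Hypothetical review states; platforms' real internal states are not public,
# so this only illustrates the lifecycle described above.
class ReviewState(Enum):
    PENDING_REVIEW = auto()
    APPROVED_LIVE = auto()
    REJECTED_PRE_RUN = auto()
    REMOVED_AFTER_LIVE = auto()  # the case this note is about

def on_review_decision(ad_id: str, old: ReviewState, new: ReviewState) -> None:
    """Only the live-to-removed transition should fan out post-click notices."""
    if old is ReviewState.APPROVED_LIVE and new is ReviewState.REMOVED_AFTER_LIVE:
        # In a real system this would enqueue the lookup-and-notify job
        # sketched earlier, scoped to users who clicked while the ad was live.
        print(f"ad {ad_id}: removed after going live -> notify past clickers")

on_review_decision("ad_123", ReviewState.APPROVED_LIVE, ReviewState.REMOVED_AFTER_LIVE)
```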
Today, when platforms take down ads or ad accounts that violate their policies, for example AI-generated deepfakes of celebrities promoting fake crypto investments, the removal happens quietly. As an aside, I don’t recall seeing any reports of platforms returning the money they make from these ads, regardless of whether the ads are eventually taken down.
Some precedents exist for these notifications
Social media platforms wouldn’t be breaking new ground, although the nature of these notices and how quickly they’d need to deliver them might themselves be new. All 50 US states require companies to notify users affected by a data breach, typically within 30-60 days of discovering it, even if the breach occurred months earlier. The principle here is that when a company’s systems fail to protect users (whether through technical vulnerabilities or inadequate screening), the company must notify those affected. Banks routinely notify customers about fraudulent transactions detected days or weeks after they occurred, and they must investigate and notify far faster than the data-breach timelines above; many proactively alert users when fraud detection systems flag past suspicious activity.
These and other precedents like product recalls share key elements: retroactive contact about past interactions, maintained user records, time gaps between events and notifications, legal mandates despite operational costs, and prioritization of public safety over corporate convenience. If banks can notify millions of people about fraud and companies must disclose breaches, platforms can notify users about scam ads they’ve taken down.
(Whether the companies are flagging and taking down enough of these scam ads is another question, which I won’t get into here).
A notification system would provide immediate victim protection, catching people mid-scam before they send more money. It could also create better platform incentives: if notifications are required, fraud prevention becomes a visible priority rather than something swept quietly under the rug.
There are costs and challenges
The user experience concerns that platforms will raise are legitimate. False positives could damage legitimate businesses and trigger lawsuits of various flavors. Alert fatigue might desensitize users, making warnings counterproductive. If this data were shared with law enforcement, privacy issues could arise from revealing what ads people clicked. Determining what constitutes a “scam” is always a moving target. A notification requirement could also create a counter-incentive for platforms to take down fewer suspected scams, but again, that’s part of a different and larger problem. And third parties have only limited access to platforms’ ad corpora, so there is little real visibility into how many scams exist.
Clearly, how this could be implemented and what these regulations or policies should look like requires deeper discussion, and that discussion needs to include people who have worked on, currently work on, and care about the growing problem of scams.
But pretending this problem doesn’t exist, or that any fix is “too hard” or “too easy to game”, is no longer an option.
Notes/References:
[a] Recent examples: Times of India, The Guardian, WSJ, Daily Mail, Times of India
[b] Meta “About Ads in Review” help page
Corrections: I incorrectly numbered this note as 31 originally; sorry!

