AI Safety: Google and Meta Should Start With Their Own Ads (Rob's Notes 10)
Or will they instead ship flashy features before they use AI to protect people?
Last week Meta CEO Mark Zuckerberg shared that the company is “building massive compute infrastructure to support [its] future roadmap,” spending billions of dollars on ~350,000 Nvidia H100s to build out the next generation of AI tools and products. Google is spending $8 billion a quarter on R&D “driven overwhelmingly by AI compute and related technical infrastructure”, per CNBC.
Despite these massive levels of investment, I worry about these companies repeating past patterns and prioritizing cool and flashy products that use these technologies before using them to protect users.
Neither company has (yet) been forthcoming about how it plans to use powerful new large language models (“LLMs”) to slow the spread of scams and deceptive advertising on its platforms. Google and Meta are the two biggest walled gardens in media, so it’s fair for all of us to ask how those walls will be defended and maintained.
“Any consumer paying $20/month can now upload a screenshot of an ad to ChatGPT-4 and get a very good assessment of whether it’s likely a scam” (BELOW)
In 2018, my team at Meta built the company’s Ad Library, making Meta the first company to make all of its active ads visible to the public. We also addressed researchers’ complaints about the robustness of the mechanisms (like APIs) that allow independent assessment of our ads enforcement at scale, especially for political ads. While there has rightfully been strong media attention on political ad transparency, deceptive ads have received far less coverage. Google followed Meta in building ads transparency tools (and both added features as required by the EU’s Digital Services Act), but neither company yet publishes numbers on how prevalent policy-violating or scam ads are.
With that data caveat, the scam ad problem appears to be growing for both companies in a world of AI-powered deepfakes. 404 Media reported this month that “Joe Rogan, Taylor Swift, Andrew Tate, Steve Harvey, The Rock, and Oprah have been cloned by AI for scam ads that, in total, have been viewed more than 195 million times on YouTube”. I recently forwarded reports to Meta from UK users about a rash of deepfake ads featuring Elon Musk, Rishi Sunak and others; Meta took those down, but many similar ads persist today and are easy to find, with several sent to me just this week.
Google and Meta have historically used a combination of automated and human review to take down ads that violate their ads policies, including those trying to deceive or scam people. But AI has greatly improved over the last 18 months! Any consumer paying $20/month can now upload a screenshot of an ad to ChatGPT-4 and get a very good assessment of whether it’s likely a scam, an assessment far better than much of the technology currently reviewing millions of ads daily. One has to imagine that with scaled access to technology like this, and billions being spent to improve their own tools, the companies could do a lot more than they are doing today to stop bad ads.
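To make that concrete, here is a minimal sketch of what such a consumer-grade check looks like programmatically, using the OpenAI Python SDK. The model name, prompt, and output format are my own illustrative choices, not anything Google or Meta has said they use; the point is simply that an off-the-shelf LLM can already produce a structured scam assessment from a single ad screenshot.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative review prompt; a real platform's ad policies would be far more detailed.
REVIEW_PROMPT = (
    "You are reviewing an advertisement screenshot for likely scams: fake celebrity "
    "endorsements, impersonation of brands or public figures, impossible financial "
    "returns, phishing, or other deceptive claims. Respond with a short JSON object: "
    '{"likely_scam": true or false, "confidence": 0 to 1, "reasons": [...]}.'
)

def assess_ad_screenshot(image_url: str) -> str:
    """Send one ad screenshot to a vision-capable model and return its assessment."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model; the name here is illustrative
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": REVIEW_PROMPT},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical screenshot URL for demonstration only.
    print(assess_ad_screenshot("https://example.com/ad-screenshot.png"))
```

At platform scale this would sit behind cheaper classifiers, sampling, and human review rather than running on every impression, but the capability gap described above is real.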
Google’s president of public policy, Kent Walker, shared survey data last week stating consumers believe “better security” is one of the top three most important AI applications. He also stated that “people want AI development that is innovative, responsible, and anchored on strong public-private partnerships”. I agree on both counts.
These companies will argue that this is a highly adversarial problem, with financially motivated bad actors attacking their platforms 24/7, and that no technology is a panacea. This is true: these actors will evolve their approaches and can often defeat detection, especially when they can easily test those approaches against off-the-shelf AI tools themselves. Even so, not doing all they can to use AI to prevent harm to users, while simultaneously spending billions of dollars in ad revenue to further strengthen their own AI, strikes me as problematic. This is one area where the walls need to be higher than they currently are.
I know the teams working on these problems don’t want scam ads, and will do everything in their power to protect people from them. They may even have some LLM ad review efforts in the works already, but it’s time for Google and Meta to explain to all of us, in detail, how they’ll use next-generation AI to protect people even if it costs them ad revenue.
Whether or not using such AI reduces their revenue in the short term, the perception that Google and Meta profit from deceptive ads their own technology could ferret out will have a far worse long-term impact.
—
Rob Leathern led the product team for business integrity at Facebook (now Meta) from 2017 to 2020, and was VP of Product for Privacy/Security at Google from 2021 to 2023. He started Trust2.ai to analyze and improve how AI affects trust, privacy and safety.