Rob’s Notes 36: My First Guest Post (Jonathan Bellack)
There’s a logical reason that Meta (and other big platforms) always underfund safety
Hi readers - Rob here. Jonathan and I worked together at Google as part of the same product leadership group in a team called Privacy, Safety and Security. He writes about these topics and other issues at Platformocracy that are near and dear to my heart, so we’ve traded newsletters for the week. You can see my piece this week by heading over to Platformocracy!
Hello, friends of Rob’s Notes. I’m Jonathan Bellack, author of Platformocracy, a free weekly newsletter that advocates for more democracy in our online lives. I’ve been working on Internet technology for over 30 years, including fifteen years at Google as a product management leader in online advertising and counter-abuse.
Rob and I are guest-posting on each other’s platform this week as part of our (horrified) reaction to the revelation that 10% of Meta’s revenues come from scams and other illegal content. Last week Rob asked whether Meta has cut safety and security spending by 30% and hoped that they would spend more again. I wish they would, too, but I seriously doubt it.
The Budget Is Not Enough
To understand why big platforms constantly under-invest in safety, you need to understand that when you’re as profitable and enormous as Meta or Google, decision-making gets really weird, because money is effectively infinite. Everyone at the company knows this, so everyone is always asking for more of the infinite pie – hiring more staff, spending more on contractors, using more computing resources, etc.
This hunger starts at the bottom and flows uphill. Every team of 5 engineers knows they could do more with a sixth. Every product team of 50 has an exciting vision that would get them to 100 people with some targeted investment. Every business line of a thousand people can explain in detail why additional strategic spending is critical to defend the company’s market position and future opportunities.
This hunger isn’t limited to the business side. Every support function also knows that it could be the world leader in marketing, customer service, or cafeteria food, if the company would just fund it for greatness. And the functions responsible for handling problems – legal, policy, integrity/safety – can paint exquisite scenarios of doom and destruction if they aren’t staffed up to meet the ever-growing threat environment.
In fifteen years at Google, I can’t remember any team at any level ever showing up to a strategic planning meeting without a request for growth and a case for absolute urgency and importance.
If you’re a senior executive sitting on top of this, it all starts to blur together. Planning discussions sound like a bunch of baby birds squawking for mom’s latest worm. If you control access to more money than the average nation-state, why not just give everyone whatever they’re asking for? [Meta’s annual earnings would make it the 60th largest country in the world by GDP. Microsoft: 56. Alphabet: 44. Apple: 43. Amazon: 26.]
Dr. No, You Can’t Hire Two Thousand People In One Month
Even so, as an executive, you still have to figure out who gets some of the infinite money and who doesn’t. If you gave everyone everything they asked for, you’d be creating a perverse incentive for people to ask for stupid stuff because they know they’d get it. [I used to joke about proposing to build a 50-foot bronze statue of my SVP on the Mountain View campus. I knew the company could afford it.]
Even insanely rich companies have constraints. There’s a limit on how fast you can hire and train qualified people. Data centers take time to build. And most of all, it’s hard for the human brain to keep up at big-tech scale. No single executive can understand the fine details of every resource request, especially when it comes from a specialist team wielding a pile of domain-specific jargon.
OKRs Are Forever
When there’s tons of money, weird constraints, and too much complexity to track, the executive shortcut is to make teams normalize everything into some kind of comparable metric. Often this takes the shape of the (in)famous OKRs (Objectives and Key Results).
If you already have customers, your OKR can be the promise of growth through new revenues and increased profit. If you’re an experimental project, you can spin dreams of the next billion-dollar market (or fear of falling behind competitors). If you’re legal, policy (regulation), or security, you can point to the terrible things that have happened to companies that underestimated the risks. There’s a whole Wikipedia page for big companies driven out of business by fraud and scandal.
Safety teams tend to fall behind in the race for funding because they can’t express their work in OKRs that matter to executives. There are no standards for what “safe enough” looks like. Warnings about high levels of fraud, harassment, and scams end up sounding like the same squawking as all the other hungry baby birds. Elon Musk slashed X’s trust and safety team, and while usage did shrink by 30-50%, the platform didn’t die, which suggests there’s a viable minimum spend. Might as well give the worm to that new-business bird in the corner who has the potential to grow up big and fat.
Casino Royale
This decision-making flow is internally logical, but it’s a huge bet on externalities – that society will continue to pay the price of all these harms, so the company doesn’t have to. It’s like we are back in the 50s and 60s, when companies like General Electric and General Motors were dumping huge amounts of chemicals and paint into the Hudson River.
Water pollution didn’t start to decline until the passage of the Clean Water Act in 1972, which introduced permitting and wastewater treatment requirements. This isn’t a perfect analogy, because online abuse involves human behavior, which can’t be regulated and purified like dangerous chemicals. But the broader point stands: government intervention may be required to force platforms to bear the cost of their risky wager on underinvesting in safety.
I believe this explains why the one thing platforms seem sensitive to when it comes to safety is bad press. It’s not just the immediate embarrassment. Executives are afraid that if the criticism continues and grows, it could generate popular demand for regulation. That would bring the casino game to a screeching halt.