Meta rolls out new AI content enforcement systems while reducing reliance on third-party vendors
Meta on Thursday announced that it’s starting to roll out more advanced AI systems to handle content enforcement as it plans to cut back on third-party vendors. Tasks related to content enforcement include catching and removing content about terrorism, child exploitation, drugs, fraud, and scams.
The company says it will deploy these more advanced AI systems across its apps once they consistently outperform its current content enforcement methods. At the same time, it will reduce its reliance on third-party vendors for content enforcement.
“While we’ll still have people who review content, these systems will be able to take on work that’s better-suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drug sales or scams,” Meta explained in a blog post.
Meta believes these AI systems can detect more violations with greater accuracy, better prevent scams, respond more quickly to real-world events, and reduce over-enforcement.
The company says early tests of the AI systems have been promising, as they can detect twice as much violating adult sexual solicitation content as its review teams, while also reducing the error rate by more than 60%. It also says the systems can identify and prevent more impersonation accounts involving celebrities and other high-profile individuals, as well as help stop account takeovers by detecting signals such as logins from new locations, password changes, or edits made to a profile.
Additionally, Meta says the systems can identify and mitigate around 5,000 scam attempts per day, in which scammers try to trick people into giving away their login details.
“Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high-impact decisions,” Meta wrote in the blog post. “For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement.”

The move comes as Meta has loosened its content moderation rules over the past year or so, following President Donald Trump's return to office. Last year, the company ended its third-party fact-checking program in favor of an X-like Community Notes model. It also lifted restrictions around “topics that are part of mainstream discourse” and said users would be encouraged to take a “personalized” approach to political content.
It also comes as Meta and other Big Tech companies face several lawsuits seeking to hold social media giants accountable for harming children and young users.
Meta also announced Thursday that it’s launching a Meta AI support assistant that will give users access to 24/7 support. The assistant is rolling out globally in the Facebook and Instagram apps for iOS and Android, and within the Help Center on Facebook and Instagram on desktop.
Aisha is a consumer news reporter at TechCrunch. Prior to joining the publication in 2021, she was a telecom reporter at MobileSyrup. Aisha holds an honours bachelor’s degree from the University of Toronto and a master’s degree in journalism from Western University.
You can contact or verify outreach from Aisha by emailing aisha@techcrunch.com or via encrypted message at aisha_malik.01 on Signal.