For graphic violence, Facebook's technology accounted for 86 percent of the reports.
Guy Rosen, Facebook's vice president of product management, said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content.
The release of the report, the first time the company has made such data public, comes on the heels of a series of other first-ever efforts at transparency following the Cambridge Analytica scandal, Facebook's subsequent apologies, and Mark Zuckerberg's many hours of testimony on Capitol Hill. "The rate at which we can do this is high for some violations, meaning we find and flag most content before users do," the company said. Facebook estimates that out of every 10,000 pieces of content viewed, seven to nine contained material that violated its pornography and nudity rules. But a recent report from the Washington Post found that Facebook's facial-recognition technology may be limited in how effectively it can catch fake accounts, as the tool does not yet scan a photo against all of the images posted by the site's 2.2 billion users.
Most of the content was found and flagged before users had a chance to spot it and alert the platform.
Several categories of violating content outlined in Facebook's moderation guidelines, including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches, and copyright infringement, are not included in the report.
While Facebook uses what it calls "detection technology" to root out offending posts and profiles, the software has difficulty detecting hate speech.
However, when it came to hate speech, the company's technology flagged only around 38 percent of the posts it took action on, and Facebook notes it has more work to do there.
Facebook took action on 2.5 million pieces of content over hate speech, but does not have view figures, as it is still "developing measurement methods for this violation type". The inaugural report was meant to "help our teams understand what is happening" on the site, Rosen said. The company currently estimates that 3 to 4 percent of active Facebook accounts between October 2017 and March 2018 were fake. Artificial intelligence, for example, is not yet good enough to determine whether someone is pushing hate or describing something that happened to them in order to raise awareness of the issue.
The report did not directly cover the spread of false news, which Facebook has previously said it is trying to stamp out by increasing transparency around who buys political ads, strengthening enforcement, and making it harder for so-called "clickbait" to show up in users' feeds. And more generally, as the company has explained, its detection technology needs large amounts of training data to recognize meaningful patterns of behavior, which it often lacks in less widely used languages or for cases that are not often reported.
Facebook took action on 1.9 million pieces of content over terrorist propaganda.