The polarization and tensions that run through society are reflected in social networks. Platforms like Facebook mirror those discussions: with more than 3 billion people using Facebook's apps every month, everything good, bad and ugly in our societies will find a place to be expressed. That places an enormous responsibility on Facebook and other social media companies to decide what content is acceptable.

Facebook has come under fire in recent weeks for its decision to leave controversial posts from U.S. President Donald Trump on the platform, and has also been questioned by individuals and by companies that advertise on our services about how we approach hate speech. I want to be clear: Facebook does not benefit from hate. Billions of people use Facebook and Instagram because they have good experiences on our apps. Our community doesn't want to see hateful content, our advertisers don't want to see it, and neither do we. There is no incentive for us to do anything other than remove it.

More than 100 billion messages are posted on our services every day. That is all of us talking, sharing our lives, our opinions, our hopes and our experiences. Among those billions of interactions, a small fraction is hateful content. When we find such posts on Facebook and Instagram, we have zero tolerance and remove them.

When content does not qualify as hate speech and does not fall under other policies aimed at preventing harm or voter suppression, we lean toward free expression, because ultimately the best way to counter offensive, divisive and damaging speech is with more conversation. Exposing it to the light is better than hiding it in the shadows.

Unfortunately, zero tolerance does not mean zero occurrences. Given the volume of content published on our platforms every day, eradicating hate is like looking for a needle in a haystack. We invest billions of dollars each year in people and technology to keep our platforms safe.
We have tripled our safety and security team to its current 35,000 people. We are pioneers in applying artificial intelligence to remove hateful content at scale. And we have made progress. A recent European Commission report found that Facebook reviewed 95.7% of hate speech reports in less than 24 hours, faster than YouTube and Twitter. Last month, we reported that we removed almost 90% of the hate speech we found before anyone reported it to us, a 24% increase in our effectiveness in just over two years. We acted on 9.6 million pieces of content in the first quarter of 2020, up from 5.7 million in the previous quarter. And 99% of the ISIS and Al Qaeda content we remove is taken down before anyone reports it.

We are improving, but we are not complacent. That's why we recently announced new policies and products to ensure that everyone can stay safe, stay informed and, ultimately, use their voice where it matters most: at the ballot box. This includes launching the largest voter information campaign in U.S. history, an initiative that aims to register four million voters. We have also updated policies designed to combat voter suppression and hate speech. Many of these changes are the direct result of input from the civil rights community, and we will continue to work with them and with other experts to adjust our policies and address new risks as they arise.

It is understandable and necessary to focus on hate speech and other harmful content on social media, but it is worth remembering that the vast majority of the conversations that take place there are positive. We saw this when the coronavirus pandemic broke out. Billions of people used Facebook to stay connected while physically apart: grandparents and grandchildren, brothers and sisters, friends and neighbors. And more than that, people came together to help one another.
Thousands upon thousands of local groups were formed, and millions of people organized to help the most vulnerable in their communities, or to celebrate and support health workers. And when businesses had to close their doors to the public, for many, Facebook was a lifeline. More than 160 million businesses use Facebook's free tools to reach customers, and many used them to stay afloat, helping to preserve jobs and livelihoods.

Facebook also helped people find accurate, reliable health information. We directed more than 2 billion people on Facebook and Instagram to information from the World Health Organization and other public health authorities, and more than 350 million people clicked through to see that content.

It's worth remembering, too, that when the darkest things are happening in our society, social media gives people a tool to shed light: to show the world what is happening, to organize against hate and come together, and for millions around the world to express their solidarity. We've seen it countless times, and we're seeing it right now with the Black Lives Matter movement.

We may never be able to eradicate hate on Facebook entirely, but we are constantly getting better at stopping it.