Hateful Activity Policy

Any independent social media* platform that seeks to be considered by The Coca‑Cola Company as a viable partner must, at a minimum, meet the following criteria, or be committed to meeting them and thereby be acting in good faith:

1. Maintain & Evolve Community Guidelines regarding Hateful Activity: 

- Social media platforms must clearly define hateful activity** and provide clear community guidelines that protect all groups equally from hateful activity, as well as clearly defined consequences for violations, up to and including termination of service. Going forward, it is expected that, through industry engagement, common definitions will be accepted by all social media platforms.

2. Specialized, Equal & Local Guideline Enforcement backed by Service Level Agreement (SLA):

- Use a combination of machine-based review, specialized human review, and an appeals process that work together to enforce policies equally across all users and reflect global best practices as defined by and agreed to through GARM.

- Ensure and demonstrate that machine-based systems and policies are designed by bodies with appropriate diversity.

- Ensure local human reviewers understand the cultural, social, and political history and context in each locality.

- Provide service level agreements both to the platform user community and to the advertiser base.

3. Robust Governance:

- Establish an independent, cross-industry audit protocol under which platforms engage approved, expert third-party auditors to assess performance against our policy expectations. Where violations are discovered, remediation and follow-up audits must be put in place to achieve compliance.

- Independently audited transparency reporting results will be proactively provided to TCCC at least bi-annually, with the reported outcomes to include:

- Assessment of the absolute amount of hateful activity on the platform (posts, impressions & unique impressions)

- Assessment of the relative amount of hateful activity (as a percent of all activity: posts, impressions & unique impressions)

- Incidence of hateful activity directed at different communities (e.g., percentage of content removed for anti-LGBTQ+, antisemitic, anti-Muslim, and anti-immigrant hate)

- Assessment of resolution timing (timeline to be determined by industry standard)

- Assessment of progress in making it easier for users to report hateful activity

- Going forward, it is expected that, through industry engagement, common methodologies for calculating and reporting will be accepted by all social media platforms.

4. Monetization of Hateful Activity:

- Platforms will grant advertisers adequate control*** to proactively reduce the risk of any advertising adjacency to hateful activity (e.g., the ability to blocklist sites).

- ***Adequate control to be defined by GARM and accepted as a common, harmonized definition by all social media platforms.

*We define Social Media as digital spaces that are primarily conversation-driven, supported by user-generated content. This includes photo-, video- & gaming-centric platforms that are significantly driven by user contribution and conversation.

**Hateful activity is currently defined according to the Recommended Internet Company Corporate Policies and Terms of Service to Reduce Hateful Activities.
