Meta is facing a new call to pay reparations to the Rohingya people for Facebook’s role in inciting ethnic violence in Myanmar.
A new report from Amnesty International – providing what it calls a “first in-depth human rights analysis” of the role played by Meta (aka Facebook) in the atrocities perpetrated against the Rohingya in 2017 – has found the tech giant’s contribution was not merely that of a “passive, neutral platform” which responded inadequately to a major crisis, as the company has sought to claim, but rather that Facebook’s core business model – behavioral advertising – was responsible for actively inciting hatred for profit.
“Meta’s content-shaping algorithms proactively amplified and promoted content on the Facebook platform which incited violence, hatred and discrimination against the Rohingya,” Amnesty concludes, pointing to the company’s tracking-based business model – aka “invasive profiling and targeted advertising” – which it says thrives on “inflammatory, divisive and harmful content”; a dynamic that implicates Facebook in actively stoking violence against the Rohingya as a result of its prioritization of engagement for profit.
UN human rights investigators warned in 2018 that Facebook was helping to spread hate speech and violence against Myanmar’s Muslim minority. The tech giant later admitted that it was “too slow to prevent misinformation and hate” from spreading on its platform. However, it has not accepted the accusation that its use of algorithms designed to maximize engagement was a potent driver of ethnic violence – a consequence of its ad systems’ preference for amplifying polarization and outrage, which led the platform to optimize for hate speech.
Amnesty says its report is based on interviews with Rohingya refugees, former Meta staff, civil society groups and other subject matter experts, as well as new evidence gleaned from documents leaked by Facebook whistleblower Frances Haugen last year – aka the Facebook Papers – which it says provides “a shocking new understanding of the true nature and extent of Meta’s contribution to harms suffered by the Rohingya”.
“This evidence shows that the core content-shaping algorithms powering the Facebook platform – including its News Feed, ranking and recommendation features – all actively amplify and distribute content which incites violence and discrimination, and deliver this content directly to the people most likely to act upon such incitement,” it writes in an executive summary of the 74-page report.
“As a result, content moderation alone is inherently inadequate as a solution to algorithmically-amplified harms,” it continues. “Internal Meta documents recognize these limitations, with one document from July 2019 stating, ‘we only take action against approximately 2% of the hate speech on the platform’. Another document reveals that at least some Meta staff recognized the limitations of content moderation. As one internal memo dated December 2019 reads: ‘We are never going to remove everything harmful from a communications medium used by so many, but we can at least do the best we can to stop magnifying harmful content by giving it unnatural distribution.’
“This report further reveals that Meta has long been aware of the risks associated with its algorithms, yet failed to act appropriately in response. Internal studies stretching back to as early as 2012 have consistently indicated that Meta’s content-shaping algorithms could result in serious real-world harms. In 2016, before the 2017 atrocities in northern Rakhine State, internal Meta research clearly recognized that ‘[o]ur recommender systems compound the problem of extremism’. These internal studies could and should have triggered Meta to implement effective measures to mitigate the human rights risks associated with its algorithms, but the company repeatedly failed to act.”
“Relentless pursuit of profit”
Amnesty claims the Facebook Papers also show that Meta continued to ignore the risks generated by its content-shaping algorithms in “the relentless pursuit of profit” – with its summary citing an internal memo dated August 2019 in which a former Meta employee writes: “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and the family of apps are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.”
“Amnesty International’s analysis shows how Meta’s reckless content-shaping algorithms and business practices facilitated and enabled discrimination and violence against the Rohingya,” the report continues. “Meta’s algorithms directly contributed to harm by amplifying harmful anti-Rohingya content, including calls for hatred against the Rohingya. They also indirectly contributed to real-world violence against the Rohingya, including violations of the right to life, the right to be free from torture, and the right to adequate housing, by enabling, facilitating and incentivizing the actions of the Myanmar military. Furthermore, Meta utterly failed to engage in appropriate human rights due diligence in respect of its operations in Myanmar ahead of the 2017 atrocities. This analysis leaves little room for doubt: Meta substantially contributed to adverse human rights impacts suffered by the Rohingya and has a responsibility to provide survivors with an effective remedy.”
Meta has resisted calls to pay reparations to the (at least) hundreds of thousands of Rohingya refugees forced to flee the country since August 2017 amid a campaign of violence, rape and murder perpetrated by Myanmar’s military junta – and it faces a class action lawsuit from Rohingya refugees who are suing the company in the US and UK, seeking billions in damages for its role in inciting the genocide.
Amnesty has added its voice to calls for Meta to pay reparations to refugees.
Its report notes that Meta has previously rejected requests for support from Rohingya refugee groups – such as groups in Cox’s Bazar, Bangladesh, which asked it to fund a $1 million education project in the camps – saying: “Facebook doesn’t directly engage in philanthropic activities.”
“Meta’s portrayal of the Rohingya communities’ pursuit of remedy as a request for charity portrays a deeply flawed understanding of the company’s human rights responsibilities,” Amnesty writes in the report, adding: “Despite its partial acknowledgment that it played a role in the 2017 violence against the Rohingya, Meta has to date failed to provide an effective remedy to affected Rohingya communities.”
Making a series of recommendations in the report, Amnesty calls on Meta to work with survivors and the civil society organizations supporting them to provide “an effective remedy to affected Rohingya communities” – including fully funding the education program requested by the Rohingya communities who are parties to a complaint against the company under the OECD Guidelines for Multinational Enterprises, filed via the Irish National Contact Point.
Amnesty is also calling on Meta to undertake ongoing human rights due diligence on the impacts of its business model and algorithms, and to cease the collection of “invasive personal data which undermines the right to privacy and threatens a range of human rights,” as its report puts it – urging it to end the practice of tracking-based advertising and adopt less harmful alternatives, such as contextual advertising.
It also calls on regulators and lawmakers who oversee Meta’s operations in the US and EU to ban tracking-based targeted advertising that relies on “invasive” practices or the processing of personal data; and to regulate tech companies so that content-shaping algorithms are not profiling-based by default and instead require opt-in (rather than opt-out) consent – with opt-in consent being “freely given, specific, informed and unambiguous” – echoing calls from some EU lawmakers.
Meta has been contacted for a response to the Amnesty report. A company spokesperson sent this statement – attributed to Rafael Frankel, Director of Public Policy for Emerging Markets, Meta APAC:
“Meta stands in solidarity with the international community and supports efforts to hold the Tatmadaw accountable for its crimes against the Rohingya people. To that end, we have made voluntary, lawful data disclosures to the UN’s Investigative Mechanism on Myanmar and to The Gambia, and are also currently participating in the OECD complaint process. Our safety and integrity work in Myanmar remains guided by feedback from local civil society organizations and international institutions, including the UN Fact-Finding Mission on Myanmar; the human rights impact assessment we commissioned in 2018; as well as our ongoing human rights risk management.”
Amnesty’s report also warns that the findings of what it dubs “Meta’s flagrant disregard for human rights” are not only relevant to Rohingya survivors – as it says the company’s platforms risk contributing to “serious human rights abuses again”.
“Already, from Ethiopia to India and other regions affected by conflict and ethnic violence, Meta represents a real and present danger to human rights. Urgent, wide-ranging reforms are needed to ensure Meta’s history with the Rohingya does not repeat itself elsewhere,” it adds.