Manushya Foundation

On Meta’s human rights public relations report


From exploiting AI advancements at the expense of human rights to manipulating regional regulatory frameworks, we dissect how Meta’s commitments often mask deeper, systemic issues. This carousel breaks down the facade of compliance, highlighting the urgent need for accountability and stringent regulations. Swipe through to uncover the truth about the impact of big tech on real lives and our environment.


Meta has just released its human rights report for 2023, covering 1 January to 31 December 2023 inclusive. The report is structured into a handful of subchapters, including sections on ‘AI’, risk management, issues, stakeholder engagement, transparency and remedy, and something they call ‘looking forward’. As per usual, the report seems to serve primarily as a compliance document designed to fulfil regulatory obligations rather than effect real change. Substance-wise, the report has nothing to offer except corporate jargon and the co-optation of human rights concepts to give the impression of progressive intentions. This blog aims to dissect Meta's 2023 Human Rights Report and look behind the glossy facade of corporate speak.


On AI

Prior to the week of the report’s release, Reuters published a news report detailing how Meta Platforms will start training its ‘AI’ models using public content shared by adults on the two major social networking sites it owns: Facebook and Instagram. In the same month, Meta also admitted to scraping every adult Australian user’s public photos and posts to train its ‘AI’, but unlike the way it ran this absurdity in the EU, it did not offer Australians any opt-out. Meta did not address any of this in its so-called human rights report. In fact, the whole section on ‘AI’ reads like it was written by PR practitioners whose main goal is to sell us on one idea: that ‘AI’ models are ‘powerful tool[s] for advancing human rights’, a portrayal that I find both naive and disingenuous.


This short blog will not go into specific cases of how the current ‘AI’ boom has led to shrinking democratic spaces, but examples in India, Bangladesh, Pakistan and Indonesia abound. Not to mention how the ‘AI’ boom has been fuelling the rise of digital sweatshops, where workers, mostly from Global Majority countries such as Kenya and the Philippines, are paid less than 2 dollars per hour to label content. And it did not stop there: Meta went on to fire dozens of content moderators in Kenya who attempted to unionise. These workers, tasked with reviewing graphic and often deeply traumatising content on Facebook, were subsequently blacklisted from reapplying for similar positions with another contractor, Majorel, after Meta switched firms. The output data from these moderators is then used to train machine learning models that enhance systems primarily aimed at Western consumers, ostensibly to make these technologies “safer”.


Another often-overlooked consequence of these AI models is the immense energy consumption required by the power-hungry processors that fuel them. Research from the University of Massachusetts Amherst found that the carbon footprint of training a single large language model can reach roughly 272,155 kg of CO2 emissions. Big Tech’s obsession with ‘AI’ has created a burgeoning demand for the construction of data centres and chip manufacturing facilities, especially in regions of the Global Majority. And because these data centres require significant computational power that generates considerable heat, they consume enormous volumes of water for cooling, literally sucking dry the water supplies that local communities depend on for survival.


Now, Meta can argue that these cases were not included in the report because they fall outside its coverage period, but evidence shows that this is not the case. Meta released Llama 2 in July 2023, which it described as ‘open’ in the report. The use of the word ‘open’ here, and in the subsequent press releases regarding Llama 3, would require another blog of its own. But one thing is for sure: nothing about the Llama licences makes them open source. The training data for both LLMs was never publicly released. According to Meta, Llama 2 was pretrained on publicly available online data sources, whilst Llama 3 was pretrained on over 15T tokens, all collected from publicly available sources. Meta is not the only one being opaque about this. If you remember the WSJ interview with the OpenAI CTO, you will remember how she grimaced after being asked whether OpenAI had used videos from YouTube. I do not expect any of these companies to release the training data they used for their large language models, because that would open a can of legal problems for them. Meta’s lawyers, for instance, warned the company about the legal repercussions of using copyrighted material to train its models.


This reinforces my earlier point about why this so-called human rights report is nothing but a press release that attempts to paint a rosy picture of Meta’s commitments while conveniently glossing over the darker aspects of its operations that have significant human rights implications. The selective transparency and regional inconsistencies in user consent practices (such as the stark differences in how users in Australia and the EU are treated) raise a pressing question: where do we, in Southeast Asia, stand? Given that our governments are unlikely to champion our data rights as vigorously as those in the EU (not that the latter are without their flaws), the risk to our privacy and rights is even more acute. This regional disparity in data protection and user consent highlights a more disturbing trend: tech giants like Meta can, and do, exploit weaker regulatory frameworks in some regions to sidestep the stringent compliance obligations they would otherwise have to meet in other jurisdictions. This tells us how companies ethics-wash their policies, deciding how to act based on what will provoke the least backlash.


Deflecting responsibility

The section on ‘risk management’ reeks of posturing and selectivity. The invocation of the UN Convention on the Rights of the Child (CRC) as the backbone of Meta's "Best Interests of the Child Framework" is tokenistic at best, especially when set against the company's commercial priorities. A platform that is fundamentally driven by user engagement and data monetisation cannot prioritise the well-being of adults, let alone children. Meta recently introduced ‘teen accounts’ that “will limit who can contact teens and the content they see, and help ensure their time is well spent.” It sounds good on paper, but what Meta is actually doing here is deflecting responsibility to Apple and Google. Meta is pushing for mobile platform providers to enforce app installation approvals, which effectively offloads the burden of safety measures onto other companies. While Apple and Google do indeed have a crucial role in managing the ecosystems their platforms support, it is imperative for app developers, particularly those like Meta whose apps are used by millions of children worldwide, to take primary responsibility for the safety features within their own products. About time you own your responsibility, Meta. Stop deflecting and start owning the consequences of your business model.


Censorship and content moderation

Meta’s content policy has been under fire for quite some time now, especially for the complicity and failures that directly contributed to the Rohingya genocide and to the atrocities against Ethiopia’s Tigrayan community. More recently, following the events of 7 October 2023 and the ensuing war in Gaza, Meta has once again proven itself incapable of adhering to its own standards and promises by further censoring Palestinian voices. Meta's pattern of policy enforcement has been overly restrictive against pro-Palestine content, whether through deletion or shadowbanning. Back in 2021, Meta commissioned Business for Social Responsibility (BSR) to conduct a rapid human rights due diligence exercise. The BSR report found that “Meta’s actions in May 2021 appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.” However, events since October 2023 have shown Meta’s insufficient follow-through on its prior commitments. If anything, they have exposed a worsening crisis that highlights the company’s capacity (or lack thereof) to enact impactful changes to its content moderation strategies.


In response, Meta has introduced updates, as noted in its ‘Meta Update: Israel and Palestine Human Rights Due Diligence’ report. The most notable of these was the replacement of the term 'praise' with 'glorification' in its content restriction policy. But these definitions remain overly broad and subject to varying interpretations based on cultural, social and political contexts. This is made even more problematic by the requirement for users to "clearly indicate their intent.” Expecting users to pre-emptively clarify their positions is unrealistic and places an undue burden on those already vulnerable to suppression. Meta’s updates do little to address the deeper structural flaws in a content moderation strategy that continues to operate with a heavy-handed approach.

Another example is Meta’s decision to ban the term ‘zionist’. This decision is anchored in the conflation of legitimate criticism of a political ideology with hate speech, a conflation that serves to immunise the Israeli government from legitimate scrutiny under the guise of preventing hate speech. By labelling all critical uses of "zionist" as hate speech (read Ilan Pappe’s Ten Myths About Israel) without sufficient contextual differentiation, there is a significant risk of branding scholars, human rights activists and critics of Israeli policies as antisemitic. Not only does this approach stifle necessary dialogue, it can also be exploited to deflect from genuine human rights discussions. If criticisms of a political nature are too readily classified as attacks on Jewish identity, then discussions of human rights abuses, international law and the humanitarian impacts of the Israeli occupation could be unjustly censored.


Final thoughts

I should end this short blog by making clear that the arguments made here are not exhaustive. A full counter-report would be needed to scrutinise every point made in Meta’s report. Yet, if there's one critical takeaway for you, the reader, it's this: Meta's 2023 human rights report is an exemplary case of corporate doublespeak, artfully crafted to masquerade compliance as commitment. But this should not surprise us, especially coming from a company that profits immensely from the very practices that pose risks to human rights. And this didn’t just come from us. The International Trade Union Confederation named Meta as one of the main culprits in facilitating the spread of harmful ideologies worldwide, particularly in weaponising its platforms for the dissemination of far-right propaganda. Meta's aggressive lobbying, which has squandered 8 million euros in the EU alone, circumvents accountability to ensure its profit machine steamrolls over any democratic control or oversight. This is a deliberate assault on democracy. And at the core of these issues is Meta's relentless drive for profit.


This business model incentivises invasive data practices and the commodification of personal information. The real issue at hand is not just the individual failures in AI deployment, content moderation or crisis management, grave as these are; the problem is the overarching business model that drives them. Until Meta confronts this root cause, every human rights report it issues will be nothing more than a smokescreen hiding the unpalatable truth of its operations. Meta operates with the arrogance of a quasi-state, thriving on an architecture of surveillance capitalism that exploits users with impunity. It is imperative, now more than ever, that robust and enforceable regulations are implemented to curb the pervasive influence of all Big Tech companies. These measures are crucial to dismantle their overreach and ensure they are held accountable for their impact on society and individual freedoms.


