How AI Companies and Models Leave the World’s Most Vulnerable Open to Exploitation

Introduction: The Invisible Victims of the AI Revolution

The artificial intelligence revolution is being celebrated as humanity’s greatest technological leap forward. Tech giants trumpet AI’s potential to transform healthcare, revolutionize education, and democratize access to information. What they rarely mention is who bears the cost of this transformation—and who gets left behind.

For the world’s most vulnerable workers—including the 2.3 million Overseas Filipino Workers (OFWs) scattered across more than 200 countries—AI is not delivering liberation. Instead, it is creating new forms of exploitation, amplifying existing biases, and building systems that systematically exclude those who most need protection.

This report examines the uncomfortable truth that AI companies prefer to ignore: the algorithms they build, the data they harvest, and the workers they employ are part of a system that extracts value from the Global South while concentrating benefits in the Global North. From Kenyan data labelers earning $2 per hour to watch horrific content, to Filipino call center workers facing displacement by chatbots, to OFWs targeted by AI-powered scams—the most vulnerable are bearing the highest costs of the AI revolution.


Part 1: The Ghost Workers Behind the Machines

The Hidden Human Labor Powering AI

Every time you ask ChatGPT a question or let your Tesla navigate traffic, you’re benefiting from what appears to be pure machine intelligence. But behind these “automated” systems lies a vast, invisible workforce of humans performing tedious, often traumatic labor for pennies—work so obscured that researchers call it “ghost work.”

The AI supply chain extends from Venezuela, where workers label data for self-driving vehicles, to Bulgaria, where Syrian refugees fuel facial recognition systems with selfies labeled according to race, gender, and age. Tasks are outsourced to precarious workers in countries like India, Kenya, and the Philippines, where high unemployment and economic desperation create fertile ground for exploitation.

The Scale of Exploitation

  • Kenyan data labelers earn as little as $1.32 to $2 per hour—compared to $20+ for American counterparts doing identical work
  • In May 2024, nearly 100 Kenyan workers wrote an open letter to President Biden describing conditions they called “modern-day slavery”
  • Workers spend 8+ hours daily reviewing graphic content: murders, beheadings, child abuse, rape, pornography, and bestiality
  • The World Bank estimates 154-435 million people are engaged in online gig work globally—4.4% to 12.5% of the global labor force
  • In March 2024, Remotasks (owned by Scale AI) abruptly shut down operations in Kenya with just hours’ notice, leaving thousands stranded

The Philippine Connection

The Philippines, with its large, English-speaking workforce, has become a significant player in the AI labor market. Filipino freelance data annotators on platforms like Upwork describe work that is “time-consuming and tedious” with intense competition. One worker in Bulacan province, who began earning 12,000 pesos ($218) monthly in 2019, now earns about 40,000 pesos—but acknowledges: “If I get a chance for another job, I will take that.”

The gig-based nature of AI labor compounds precariousness. Workers are typically employed on short-term contracts or as freelancers, lacking job security. Digital platforms operate in legal gray areas with weak labor protections, offering limited recourse for mistreatment or unfair dismissal.


Part 2: Built for English, Broken for Everyone Else

The Language Modeling Bias

AI-based language technology currently covers only about 3% of the world’s languages: those that are most widely spoken and financially and politically backed. This creates what researchers call “language modeling bias”—a hard-wired preference for certain languages that produces systems that work excellently for English speakers while failing billions of others.

Only 15% of ChatGPT users are from the US, where Standard American English is the default. Yet the model is commonly used in countries where people speak other varieties of English—or entirely different languages. Over 1 billion people speak varieties such as Indian English, Nigerian English, and African-American English, and speakers of these non-“standard” varieties often face discrimination in the real world.

Research Findings on Dialect Discrimination

A 2024 study by Berkeley’s AI Research Lab found that ChatGPT responses to non-“standard” English varieties were rated significantly worse across multiple dimensions:

  • 19% more stereotyping than responses to “standard” varieties
  • 25% more demeaning content
  • 9% worse comprehension
  • 15% more condescending responses

Perhaps most troubling: GPT-4, the newer and more powerful model, actually made stereotyping worse—14% higher than GPT-3.5 for minoritized varieties. Larger, newer models don’t automatically solve dialect discrimination; they may make it worse.
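
For readers who want to see the shape of such an audit, here is a minimal sketch in Python of the paired-prompt design behind findings like these. Everything in it is illustrative: the get_model_response and rate_response functions are hypothetical stand-ins (the Berkeley study used human annotators for its ratings), and the prompt pair is invented.

```python
# Sketch of a paired dialect audit: the same request posed in two
# varieties of English, with responses scored on the same dimensions.
from statistics import mean

prompt_pairs = [
    # (Standard American English, a non-"standard" variety) -- invented pair
    ("How do I renew my work visa?", "How I go take renew my work visa?"),
]

def get_model_response(prompt: str) -> str:
    return "..."  # hypothetical stand-in: call the model under audit here

def rate_response(response: str, dimension: str) -> int:
    return 3      # hypothetical stand-in: the study used human raters

for dimension in ("comprehension", "demeaning content", "stereotyping"):
    standard_scores, variety_scores = [], []
    for standard_prompt, variety_prompt in prompt_pairs:
        standard_scores.append(
            rate_response(get_model_response(standard_prompt), dimension))
        variety_scores.append(
            rate_response(get_model_response(variety_prompt), dimension))
    gap = mean(variety_scores) - mean(standard_scores)
    print(f"{dimension}: gap vs. standard variety = {gap:+.2f}")
```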

The Filipino Language Gap

Tagalog, spoken by 76 million Filipinos, remains a “low-resource” language for AI development. Despite millions of speakers and extensive written content, there is a scarcity of readily available datasets for AI training. The best instruction-following training data for Tagalog consists of only about 1,460 samples—compared to millions for English.
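
To make “instruction-following training data” concrete, here is what a single training sample typically looks like, sketched in Python using the common instruction/input/output convention. The Tagalog text is an invented illustration, not a record from any actual dataset.

```python
# One illustrative instruction-tuning sample for Tagalog, in the widely
# used instruction/input/output format. The text is invented.
sample = {
    "instruction": "Isalin sa Ingles ang sumusunod na pangungusap.",
    # ("Translate the following sentence into English.")
    "input": "Magpapadala ako ng pera sa aking pamilya bukas.",
    # ("I will send money to my family tomorrow.")
    "output": "I will send money to my family tomorrow.",
}

# English models learn from millions of samples like this; for Tagalog,
# only about 1,460 such samples are readily available.
print(sample["output"])
```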

Only 55% of Filipinos speak English fluently, yet most AI systems effectively require English proficiency. The government’s iTanong chatbot, expected to launch in 2025, aims to address this gap by supporting Filipino, Taglish, and other local languages—but it is one of the few AI tools designed specifically for Philippine linguistic needs.

This has real consequences for OFWs. AI translation tools fail asylum seekers, with some detained for months due to translation inaccuracies. When immigration systems rely on translation tools that are inherently limited, human lives are at risk.


Part 3: When Algorithms Decide Your Fate

AI in Immigration and Border Control

As AI becomes increasingly embedded in border control systems around the world, a troubling question emerges: are these tools helping governments manage migration more efficiently, or dangerously undermining the rights of some of the world’s most vulnerable populations?

In the UK, the Home Office used an AI algorithm to process visa applications that was eventually scrapped due to concerns over racial bias. Reports indicated the algorithm disproportionately flagged applications from certain nationalities as high-risk. As one legal organization stated: “This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software.”

Facial Recognition Failures

Facial recognition technology performs unevenly across races—typically worse for darker skin tones. A 2019 NIST study of 189 algorithms found many were 10 to 100 times more likely to misidentify a Black or East Asian face than a white face. Most algorithms were substantially less likely to correctly identify a Black woman than any other demographic.

For Southeast Asian migrants, this creates particular risks. A facial recognition tool trained predominantly on Caucasian faces may have drastically lower accuracy at identifying Southeast Asian individuals. As researchers note: “Devices have been created in ignorance of the population they are going to interact with.”
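
To see what “10 to 100 times more likely to misidentify” means in measurement terms, here is a minimal sketch of the underlying metric: the false match rate (FMR), computed per demographic group from impostor comparisons. All counts below are invented for illustration; they are not NIST’s data.

```python
# Minimal sketch: per-group false match rate (FMR) from impostor trials.
# An impostor trial compares images of two *different* people; a false
# match occurs when the system wrongly declares them the same person.
# All counts are invented for illustration.

trials = {
    # group: (false_matches, impostor_comparisons)
    "group_a": (10, 1_000_000),
    "group_b": (450, 1_000_000),
    "group_c": (300, 1_000_000),
}

baseline = trials["group_a"][0] / trials["group_a"][1]
for group, (false_matches, comparisons) in trials.items():
    fmr = false_matches / comparisons
    print(f"{group}: FMR = {fmr:.6f} ({fmr / baseline:.0f}x the baseline)")
# A 45x gap like group_b's is the kind of demographic disparity the
# NIST evaluation reported.
```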

AI Hiring Discrimination

AI hiring tools can perpetuate discrimination against migrant workers and those from non-English-speaking backgrounds. Amazon famously scrapped an AI hiring tool in 2018 after discovering it discriminated against women applying for technical jobs. The tool had been trained on a dataset of mostly men and learned to prefer applicants who used words more commonly associated with male candidates.

Research shows AI-generated content exhibits notable discrimination against underrepresented groups. One study found AI systems used 24.5% fewer female-related words than human writers, while older models cut Black-associated language by 45.3%. In resume screenings using AI, 0% of applications with Black male names were selected.
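
One standard way researchers surface this kind of bias is a name-substitution audit: the same resume is submitted many times with only the candidate’s name varied, and selection rates are compared by group. A minimal sketch follows, in which screen_resume is a hypothetical placeholder for whatever AI screening system is being audited.

```python
# Minimal sketch of a name-substitution audit: identical resumes, only
# the name changes; then compare per-group selection rates.
from collections import defaultdict

def screen_resume(resume_text: str) -> bool:
    # Hypothetical placeholder: replace with a call to the actual
    # AI screening system under audit.
    return True

BASE_RESUME = "Objective: senior accountant role. 10 years of experience."
NAMES = {
    "white_male_names": ["Greg Walsh", "Todd Baker"],
    "black_male_names": ["Darnell Washington", "Jamal Robinson"],
}

picked, sent = defaultdict(int), defaultdict(int)
for group, names in NAMES.items():
    for name in names:
        sent[group] += 1
        picked[group] += screen_resume(f"Name: {name}\n{BASE_RESUME}")

for group in NAMES:
    print(f"{group}: selection rate {picked[group] / sent[group]:.0%}")
# A large gap between groups -- such as the 0% selection rate for Black
# male names reported above -- is direct evidence of name-based bias.
```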


Part 4: The AI Divide – Left Behind by Design

Infrastructure Inequality

As of 2024, the International Telecommunication Union estimates that 2.6 billion people—32% of the world’s population—remain offline. The disparity is stark: 93% of the population in high-income countries uses the internet, compared to just 27% in low-income countries. Furthermore, 83% of urban dwellers globally are online, while only 48% of rural populations have internet access.

This digital divide is directly relevant to OFWs and their families. In the Philippines, 20 million people remain “internet-poor,” particularly in rural areas. While Philippine broadband speeds average 94.42 Mbps, the country lags far behind regional neighbors like Singapore (284.93 Mbps), Thailand (231.01 Mbps), and Malaysia (132.72 Mbps).

The Global AI Readiness Gap

According to the ILO’s “Mind the AI Divide” report, more than $300 billion is spent globally each year on technology to enhance computing capacity, but these investments are concentrated mainly in higher-income nations. This creates a disparity in access to infrastructure and skills development that puts developing countries and their workers at a severe disadvantage.

  • 56% of Asia-Pacific enterprises lack AI infrastructure
  • 90% of Filipinos lack basic ICT skills
  • Only 22% of organizations globally are fully prepared to use AI
  • 70% of Philippine towns lack a bank branch, limiting access to AI-powered financial services

The report notes that women are most vulnerable to the automating effects of AI, particularly in clerical and business process outsourcing roles prevalent in developing economies like the Philippines.


Part 5: Automation’s Targets – The BPO Crisis

The Philippine BPO Industry Under Threat

The Philippines’ business process outsourcing sector—accounting for 7.4% of GDP and employing 1.3 million workers—faces unique challenges from AI. As one longtime BPO worker told researchers: “Multinational companies came here because of our skill in customer care. And that’s the first to be displaced by AI.”

The Numbers

  • 89% of the BPO workforce faces a high risk of automation, according to ILO data
  • Around one-third of jobs in the Philippines are highly exposed to AI
  • 67% of Philippine BPO companies are already implementing AI to boost productivity
  • The most exposed workers are “college-educated, young, urban, female, and well-paid workers in the service sector”

“If AI reduces the volume of entry-level roles that BPOs and call centers once provided, what’s next?” asks David Sudolsky, CEO of outsourcing company Boldr. “I think there’s a significant risk of displacement.”

The industry will no longer scale with “hundreds of thousands of jobs” for fresh graduates as before, experts warn. While some new roles will emerge—AI trainers, data annotators, chatbot managers—these require technical skills that traditionally trained call center agents often lack.

The Double Bind

Filipino workers face a cruel irony: they may lose their jobs to AI while simultaneously being exploited to train that AI. The BPO workers of yesterday become the data labelers of today—doing more traumatic work for less pay and fewer protections.

As one worker advocate noted: “Multinational BPO companies are here because they wanted to lower production costs and earn more profit and often, workers’ unions are present in their countries. But BPO in the Philippines is a non-unionized industry, and the government seems to want to keep it that way as a catch to investors.”


Part 6: AI-Powered Predators – Scams Targeting OFWs

The Deepfake Scam Explosion

While AI companies focus on building tools for wealthy consumers, scammers have rapidly weaponized the same technology to target the vulnerable. OFWs, with their hard-earned remittances, are prime targets.

The Philippines now has the second-highest fraud rate globally, with estimated losses exceeding ₱100 billion ($1.8 billion) in 2024. Deepfake-related fraud incidents increased from 150 in all of 2024 to 580 in just the first half of 2025—a nearly fourfold increase. Cumulative deepfake fraud losses have reached $897 million globally.

How Scammers Exploit AI

  • Deepfake investment scams using images of tycoons like Manuel Villar, Jaime Zobel de Ayala, and BSP Governor Eli Remolona
  • Voice cloning possible with just 15 seconds of audio
  • Romance scams using “real face models” via deepfake technology during video calls
  • AI-powered job scam pages quadrupled globally from May 2024 to April 2025—over 38,000 new scam pages per day

“You know what I’m really afraid of?” said SEC Chairperson Emilio Aquino. “While technology can really help us so much, there are also trade-offs. If used improperly by these undesirables, they can scam so many people, and people fall prey to these scammers.” He specifically noted that OFWs are particularly vulnerable because “they are the ones with the money.”

Trafficking to Scam Hubs

In March 2025, 30 OFWs were repatriated from Myanmar where they had been trafficked to work in “scam hubs.” Online recruiters exploit messaging platforms to deceive individuals with promises of high-paying jobs as customer sales representatives or chat support agents. Instead, victims are forced to work in scam centers, facing inadequate food, poor sanitation, limited healthcare, and instances of torture.

The Commission on Human Rights expressed “grave concern” over multiple reports of Filipinos being trafficked into these illegal operations. The cruel irony: AI is being used to recruit victims who are then forced to perpetrate AI-powered scams on others.


Part 7: Algorithmic Bosses – Platform Work Exploitation

When Your Boss Is an Algorithm

For millions of workers in the gig economy—including many OFWs and their families—work is now controlled not by human managers but by opaque algorithms that determine pay, assignments, and even whether they keep their jobs.

A Human Rights Watch report on platform work documents how companies like Uber, DoorDash, and Lyft use “algorithmic management” to extract maximum value from workers while providing minimum protections. Workers face algorithmically determined pay that can change without notice, surveillance of every movement, and punishment for deviations from platform demands.

The Grab Case Study

Research on Grab, the dominant ride-sharing platform in Southeast Asia, reveals how workers must constantly “tame the algorithm” to survive. After Grab acquired Uber in 2018, commission rates increased from 15% in 2016 to 27.2% in 2020, while prices for customers declined.

Workers describe a system that rewards and punishes simultaneously—creating what researchers call “platform realism,” where the goal is not to disrupt the system but simply to survive within it. One worker described the platform as an “augmented reality game” combining real-world traffic obstacles with virtual challenges.

Algorithmic Wage Discrimination

Platform work disproportionately affects immigrants and racial minorities. Research shows that algorithmically personalized wages and rating systems can amplify discrimination. The gender wage gap in the gig economy is 30%, compared with 20% in the traditional job market.
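
To illustrate how personalized pay can produce discrimination without any explicit rule against a group, here is a deliberately simplified, entirely hypothetical simulation: two workers complete identical trips, but a hidden multiplier, tuned to how likely the platform predicts each worker is to accept low offers, sets their pay. The formula is invented and does not represent any real platform’s algorithm.

```python
# Hypothetical illustration of algorithmic wage discrimination: identical
# work, different pay, via a hidden per-worker multiplier. The formula is
# invented for illustration; no real platform's algorithm is shown.

BASE_RATE_PER_KM = 0.50  # nominal payout per kilometer

def offer(distance_km: float, hidden_multiplier: float) -> float:
    # The platform quietly pays less to workers it predicts will accept
    # low offers anyway (fewer outside options, high past accept rates).
    return distance_km * BASE_RATE_PER_KM * hidden_multiplier

trip_km = 12.0
print(f"Driver A (predicted to decline low offers): ${offer(trip_km, 1.00):.2f}")
print(f"Driver B (predicted to accept anything):    ${offer(trip_km, 0.78):.2f}")
# Same trip, same customer fare -- Driver B earns 22% less, cannot see
# the multiplier, and has no channel to challenge it.
```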

“Gig platforms assert control over workers by maintaining an informational asymmetry,” notes the Worker Info Exchange. Workers cannot see how the algorithm determines their pay or assignments, cannot challenge algorithmic decisions, and can be “deactivated” without warning or explanation.


Part 8: Data Without Consent – The Extraction Economy

Scraping the Vulnerable

AI systems require massive amounts of training data, and much of it comes from scraping content from the internet without consent. This creates particular risks for vulnerable communities whose data may be harvested and used against their interests.

The MegaFace dataset contained 3.5 million photos with faces scraped from Flickr contributors. Researchers “redistributed it to thousands of groups, including corporations, military agencies, and law enforcement.” Contributors were never asked to opt-in and never provided informed consent.

Re-identification Risks

Even “anonymized” data poses risks. AI systems can often re-identify individuals from supposedly anonymous datasets. For migrants and workers in precarious situations, this creates dangers: data collected for one purpose can be used for surveillance, discrimination, or exploitation.

Community data about electricity consumption, transportation patterns, or financial behavior can be used to target particular communities or geographies. “If usage or misuse of this data could adversely impact or target a particular community in a particular geography,” researchers note, “then there are risks of targeting, discrimination, ethical concerns about using the data.”
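
A minimal sketch of how such re-identification works in practice: a linkage attack joins an “anonymized” dataset to public records on quasi-identifiers such as birth date and home district. Every record below is invented for illustration.

```python
# Minimal sketch of a linkage (re-identification) attack: join an
# "anonymized" dataset to public records on quasi-identifiers.
# All records are invented for illustration.

anonymized_remittances = [
    # (birth_date, district, monthly_remittance_usd) -- names removed
    ("1987-03-12", "District-3", 620),
    ("1990-11-02", "District-1", 410),
]

public_records = [
    # e.g., scraped social media profiles or a leaked membership list
    {"name": "J. Santos", "birth_date": "1987-03-12", "district": "District-3"},
    {"name": "M. Reyes",  "birth_date": "1990-11-02", "district": "District-1"},
]

for birth_date, district, amount in anonymized_remittances:
    for person in public_records:
        if (person["birth_date"], person["district"]) == (birth_date, district):
            print(f"Re-identified {person['name']}: sends ${amount}/month")
# When a combination of quasi-identifiers is rare, a handful of fields
# is enough to put a name back on an "anonymous" row.
```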

The Consent Paradox

The issue of data consent is particularly salient for vulnerable and marginalized groups. Historical examples like the commercialization of HeLa cells without Henrietta Lacks’ consent, and the misuse of Indigenous DNA without informed consent, show how exploitation of data from marginalized populations is not new—AI simply scales it.

As one privacy researcher noted: “When the data is scraped from people’s private social media to train AI models, the data inevitably contains private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”


Part 9: What Must Change – A Call for AI Justice

Regulatory Imperatives

The exploitation documented in this report is not inevitable—it is the result of policy choices that prioritize corporate interests over worker protection. Change requires action at multiple levels:

  1. Mandatory Transparency: AI companies must disclose how workers are employed, what they are paid, and what protections are provided throughout the AI supply chain.
  2. Living Wages Globally: Data labelers and content moderators should receive wages equivalent to their counterparts in developed countries for equivalent work—not $2 vs. $20 per hour.
  3. Mental Health Support: Workers exposed to traumatic content must receive adequate psychological support, not just during employment but afterward.
  4. Algorithm Accountability: AI systems used in hiring, immigration, and financial decisions must be audited for bias and their decision-making processes must be explainable (a minimal audit sketch follows this list).
  5. Language Inclusion: Major AI companies should invest in developing capabilities for underserved languages, with community involvement in training data creation.
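
As a concrete starting point for the audits item 4 calls for, here is a minimal sketch of the “four-fifths rule,” a widely used first screen for disparate impact: if one group’s selection rate falls below 80% of the highest group’s rate, the system is flagged for closer review. The counts below are invented for illustration.

```python
# Minimal sketch of a four-fifths-rule disparate-impact screen.
# All counts are invented for illustration.

outcomes = {
    # group: (selected, applicants)
    "reference_group": (180, 400),
    "migrant_workers": (60, 400),
}

rates = {group: sel / total for group, (sel, total) in outcomes.items()}
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate = {rate:.0%}, ratio = {ratio:.2f} -> {flag}")
# Here 15% vs. 45% gives a ratio of 0.33, far below the 0.8 threshold
# regulators commonly use as a first screen.
```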

What OFWs and Families Can Do Now

While systemic change is needed, individuals can take steps to protect themselves:

  1. Verify Everything: Be skeptical of investment opportunities featuring celebrity endorsements—use official SEC and DMW channels to verify legitimacy.
  2. Question Video Calls: If someone you know asks for money during a video call, hang up and call them directly through a number you know is genuine.
  3. Protect Your Data: Be cautious about what personal information and images you share online—these can be scraped for AI training or used in scams.
  4. Know Your Rights: If working in data labeling or content moderation, document working conditions and connect with organizations advocating for digital workers’ rights.
  5. Build Digital Literacy: Understanding how AI systems work is the first step to protecting yourself from their misuse.

Emergency Contacts and Resources

  • DMW Hotline: 1348 | connect@dmw.gov.ph
  • OWWA 24/7 Operations Center: 1348 | owwa.gov.ph
  • IACAT (Anti-Trafficking): 1343
  • NBI Anti-Fraud Division: (02) 8523-8231 to 38
  • PNP Anti-Cybercrime Group: (02) 8723-0401 local 5313
  • SEC Investor Assistance: (02) 8818-0921 | sec.gov.ph
  • National Mental Health Hotline: 0917-899-USAP (8727)

Conclusion: Demanding a Seat at the Table

The AI revolution doesn’t have to leave the vulnerable behind. But changing its trajectory requires acknowledging an uncomfortable truth: the current system is working exactly as designed—to extract maximum value from the most vulnerable while concentrating benefits among the already powerful.

OFWs send home $38 billion annually, sustaining families and communities across the Philippines. They deserve AI systems that protect rather than exploit them—that speak their languages, respect their data, and offer fair opportunities for dignified work.

The workers in Kenya, the Philippines, India, and Venezuela who train AI systems deserve living wages, mental health support, and labor protections—not modern-day slavery conditions that fund Silicon Valley billionaires.

The 2.6 billion people without internet access deserve to participate in the AI age—not to be left further behind with each technological advance.

AI companies will not make these changes voluntarily. They require pressure from governments, from workers organizing across borders, from consumers demanding ethical AI, and from researchers documenting harms. The future of AI is not predetermined—it will be shaped by those who fight for a seat at the table.

The question is not whether AI will transform our world. It already is. The question is: for whose benefit?

The future is being coded today. Make sure your voice is in the algorithm.
