
Racist by Design: How Algorithms Facilitate Racial Capitalism in Advertising

Image credit: Analytics Vidhya

Algorithms don’t just personalise our ads; they categorise us. In doing so, they reproduce the same systems of racial inequality embedded in society.

These systems don’t malfunction when they produce biased outcomes. Instead, they are functioning as designed and using data to optimise for profit in ways that reinforce racial hierarchies.

Ruha Benjamin


It’s something I’ve started to notice more as I scroll: the way algorithms seem to already know who we are, and what we’re worth to advertisers. It can feel invisible, but that’s exactly the point.

In the world of digital advertising, machine learning systems are trained on existing data: historical patterns of behaviour, demographic information, and purchase histories, all of which are already shaped by racial capitalism. That term, coined by Cedric Robinson, describes how economic systems have long exploited racialised groups for profit. Algorithms inherit that logic, using race as a sorting mechanism to decide who sees what and who is excluded. I’ve noticed this in small ways myself, comparing the ads I’m served with those my friends from different backgrounds receive. It’s subtle, but it’s there. And it harms whole communities.
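To make that sorting mechanism concrete, here is a minimal, hypothetical sketch in Python. The data, feature names, and numbers are all invented for illustration; it is not any platform’s actual pipeline. It shows how a standard click-prediction model can learn racial sorting through a proxy feature, even when race is never recorded as an input.

```python
# Hypothetical sketch: a click-prediction model trained on skewed history.
# All data and numbers below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Historical segregation means a proxy feature (a coarse "postcode group")
# correlates with race in the training data, even though race is never stored.
postcode_group = rng.integers(0, 2, size=n)  # 0 and 1 stand in for two areas

# Past engagement data is skewed: users in group 0 were historically shown
# (and therefore clicked) housing ads far more often than users in group 1.
clicked_housing_ad = rng.binomial(1, np.where(postcode_group == 0, 0.30, 0.05))

# Train an ordinary click-prediction model on that history.
# Race is not a feature; the proxy is enough.
X = postcode_group.reshape(-1, 1)
model = LogisticRegression().fit(X, clicked_housing_ad)

# The model now "sorts" users: one area is scored as a much better audience
# for housing ads, so the historical pattern becomes the future delivery pattern.
print(model.predict_proba([[0], [1]])[:, 1])  # roughly [0.30, 0.05]
```

The point of the toy example is simply that the model’s “optimal” scores are nothing more than the historical skew fed back as a prediction.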

Research has demonstrated that Facebook’s ad delivery system could produce discriminatory outcomes even when advertisers didn’t explicitly target by race. Adverts for housing were more likely to be shown to white users, while adverts for low-wage jobs or debt relief appeared more frequently to Black and Latinx users. The bias wasn’t in the targeting inputs but in the optimisation process: Facebook’s algorithm simply learned which users were more likely to click and reinforced stereotypes to maximise engagement. This made me realise that the issue isn’t just a few biased decisions. The entire system might be designed in a way that produces unfair outcomes.
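As an illustration of that optimisation step, here is a hypothetical sketch, with invented click-through numbers and group labels, of how an engagement-maximising objective can skew who sees which ads even when the advertiser targets no one in particular. It is not Facebook’s actual delivery logic, just the shape of the incentive.

```python
# Hypothetical sketch of engagement-maximising ad delivery: each ad is routed
# to whichever audience group has the highest predicted click-through rate.
# The CTR values and group names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    predicted_ctr: dict  # predicted click-through rate per audience group

ads = [
    Ad("housing listing", {"group_a": 0.08, "group_b": 0.02}),
    Ad("payday loan",     {"group_a": 0.01, "group_b": 0.06}),
]

def deliver(ad: Ad, groups: list) -> str:
    """Pick the audience group that maximises expected clicks for this ad."""
    return max(groups, key=lambda g: ad.predicted_ctr[g])

for ad in ads:
    print(ad.name, "->", deliver(ad, ["group_a", "group_b"]))
# housing listing -> group_a
# payday loan     -> group_b
```

Nothing in the sketch mentions race, yet if the predicted click-through rates mirror a racially skewed history, the “optimal” allocation simply reproduces it.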

This is not a neutral process. It’s what Benjamin refers to as the New Jim Code: a system in which discrimination is automated under the appearance of objectivity.

Instead of human prejudice, we now have statistical prejudice, which is just as effective and often harder to detect. When advertising systems treat race as a performance metric, racial inequality becomes a feature, not a flaw.

That’s the part that unsettles me most: the idea that these systems are working exactly as intended.

Beyond social media, these dynamics play out across Google search, predictive policing tools, facial recognition, and recruitment platforms. Safiya Noble found that Google searches for terms associated with Black girls and women often returned pornographic or demeaning results. This is a direct reflection of how search engines index the web and assign relevance based on existing social biases. Whiteness is often rewarded, while Blackness is commodified or erased.

In advertising, this leads to a hierarchy of audiences. White, affluent users are shown investment adverts, property listings, and educational tools; POC users are shown payday loans, prison documentaries, and policing content. These patterns both shape and reflect reality, narrowing the opportunities and representations offered to different groups.

What makes algorithmic racism so deceptive is its invisibility. Users rarely know what they’re not being shown, or why, and platforms offer little transparency. While some companies have introduced “ethical AI” initiatives, these often amount to PR exercises: minor tweaks that avoid addressing the profit-driven logic underpinning the entire system.

Ultimately, the problem is not just biased data or flawed code. It’s an economic model that rewards discrimination. As long as racial sorting generates engagement and revenue, algorithms will continue to entrench inequality, all under the guise of efficiency.
