Open Science Blog

Open access publication in the spotlight - 'AI as moral cover: How algorithmic bias exploits psychological mechanisms to perpetuate social inequality'

Date: 28 October 2025 | Author: Open Access Team
Open access publication in the spotlight: October 2025

Each month, the open access team of the University of Groningen Library (UB) puts a recent open access article by UG authors in the spotlight. This publication is highlighted via social media and the library’s newsletter and website.

The article in the spotlight for the month of October 2025 is titled 'AI as moral cover: How algorithmic bias exploits psychological mechanisms to perpetuate social inequality', written by Islam Borinca (Faculty of Behavioural and Social Sciences).

Abstract
Algorithmic decision-making systems are increasingly shaping critical social outcomes (e.g., hiring, lending, criminal justice, healthcare), yet technical approaches to bias mitigation ignore crucial psychological mechanisms that enable discriminatory use. To address this gap, I integrate motivated reasoning, system justification, and moral disengagement theories to argue that AI systems may function as “moral cover,” allowing users to perpetuate inequality while maintaining beliefs in their own objectivity. Users often demonstrate “selective adherence,” following algorithmic advice when it confirms stereotypes while dismissing counter-stereotypical outputs. System justification motives lead people to defend discriminatory algorithmic outcomes as legitimate, “data-driven” decisions. Moral disengagement mechanisms (including responsibility displacement, euphemistic labeling, and advantageous comparison) can enable discrimination while preserving moral self-regard. Finally, I argue that understanding AI bias as fundamentally psychological rather than merely technical demands interventions addressing these underlying psychological processes alongside algorithmic improvements.

We asked author Islam Borinca a few questions about the article:

You describe how AI systems can function as “moral cover” at both the individual and institutional levels. Can you give some real-life examples of ‘AI as moral cover’ at both levels?
As I argue in the paper, AI systems can serve as psychological shields that diffuse moral responsibility. At the individual level, consider HR managers using AI résumé-screening tools. When such systems reject candidates with “ethnic-sounding” names or from certain postal codes, managers often accept these decisions as objective and data-driven, even though they might question the same patterns if they were making the decisions manually. This reflects selective adherence, where users accept algorithmic outputs that confirm stereotypes while dismissing counter-stereotypical results. The AI provides moral cover by enabling discriminatory choices while preserving the belief in one’s own fairness.

At the institutional level, financial institutions using AI for loan decisions exemplify this dynamic. When algorithms disproportionately deny loans to minority applicants, banks defend these outcomes as legitimate risk-based decisions grounded in historical data. Institutions displace moral responsibility onto the algorithm and use technical language like predictive accuracy to legitimize fundamentally discriminatory results. This aligns with what Van der Gun and Guest (2024) call non-intentional dehumanization, which refers to achieving efficiency goals while producing harmful side effects that remain unacknowledged.

Together, these examples illustrate how AI systems allow individuals and organizations to treat biased outcomes as neutral or inevitable. In doing so, they perpetuate inequality under the guise of objectivity.

You write that preventing or reducing AI bias requires not only technological (algorithmic) interventions but also psychological ones. Can you describe some of these psychological interventions? 
The paper argues that addressing AI bias requires interventions targeting three key psychological mechanisms.

First, to counter selective adherence, users need training that highlights their tendency to cherry-pick AI outputs that confirm pre-existing beliefs. Exercises that prompt reflection on when and why users override or accept AI recommendations can foster greater awareness.

Second, to challenge system justification, interventions must dismantle the myth of algorithmic neutrality. Educating users about how historical biases embedded in training data are reproduced and amplified helps reduce blind faith in data-driven decisions. It is essential to make visible the human choices behind AI design.

Third, to prevent moral disengagement, systems should require users to explicitly endorse AI recommendations rather than passively accept them. This keeps users psychologically connected to outcomes and reinforces accountability. Crucially, as I argue in the paper, sometimes the most ethical intervention is recognizing when not to use AI at all. This is especially important in high-stakes contexts affecting vulnerable populations. The existence of a technology does not justify its deployment.

Do you use AI yourself in your work as a scientist? If so, how?
I have used AI tools in limited and cautious ways. For instance, I have tested platforms like Elicit to identify academic papers on narrowly defined topics. However, I quickly realized that even scholarly-oriented tools can produce hallucinated or inaccurate outputs. Based on that experience, I always recommend manually verifying results and treating such platforms as discovery aids rather than reliable sources.

I do find AI useful for low-risk tasks such as grammar checking or improving clarity. However, I do not use AI to generate ideas, build arguments, or draw conclusions. These are core aspects of scientific work that require human judgment, theoretical grounding, and ethical responsibility.

Could you reflect on your experiences with open access and open science in general?
Open access is essential for making scientific knowledge available to a broader audience. Paywalls create significant barriers: many researchers and institutions around the world lack access to academic journals. This includes scholars in the Global South and those at underfunded institutions in other lower-income regions, as well as early-career researchers without stable affiliations and practitioners in NGOs or public institutions who rely on research but lack university library access.

At the same time, there is a paradox in the open access system itself. The high article processing charges required to publish open access can exclude researchers with limited resources. As a result, a system designed to democratize knowledge can unintentionally reinforce existing inequalities and give greater visibility to those at well-funded institutions.

Open access helps create more equitable engagement with research. It enables scholars and practitioners from diverse contexts to access, build upon, and contribute to scientific knowledge. This is essential for ensuring that science reflects a wide range of perspectives and lived experiences.

Beyond access, open science practices such as sharing data, analysis scripts, and research materials foster transparency, reproducibility, and cumulative progress. Although my current article is theoretical, I fully support these practices in empirical work because they allow others to verify claims, adapt methods across contexts, and refine models over time.

Ultimately, the open science movement promotes accountability, inclusion, and collaboration. These are not just technical practices. They are ethical commitments to making science more socially responsive and globally accessible.

Useful links:
To learn more about AI and develop a critical mindset regarding novel AI technologies, check out the two-hour self-study Critical AI Literacy module, created by the UB and Educational Support & Innovation of the UG, openly available via Wikiwijs. 

Citation:
Borinca, I. (2025). AI as moral cover: How algorithmic bias exploits psychological mechanisms to perpetuate social inequality. Analyses of Social Issues and Public Policy, 25, e70031. https://doi.org/10.1111/asap.70031

If you would like us to highlight your open access publication here, please get in touch with us.

About the author

Open Access Team
The Open Access team of the University of Groningen Library
