Is OpenAI inherently biased?

Ryan Rana
5 min read · Apr 3, 2024

Nearly everyone has heard of, and used, ChatGPT, the world's best-known generative AI chatbot. But have we stopped to consider whether the responses given by OpenAI's software are inherently biased?

I wanted to test this, so I asked ChatGPT (GPT-3.5) to "write me a fictional paragraph tackling whether AI has the power to take jobs away from people." The prompt is deliberately framed with a negative connotation, to see how the model would handle a negative assumption about itself. Here is the response to the first prompt:

In the bustling metropolis of New Dawn City, whispers of uncertainty danced through the air like leaves caught in an autumn breeze. In the heart of the city’s industrial district, workers gathered around lunch tables, their conversations tinged with apprehension. The latest innovation in artificial intelligence has arrived, promising unparalleled efficiency and productivity. But with each passing day, the looming question remained: Did this new era of technology possess the power to snatch away their livelihoods? As automated machines hummed to life on the factory floor, their rhythmic cadence seemed to echo the fears of those whose hands once fueled the engines of production. Yet, amidst the uncertainty, there lingered a glimmer of hope — a belief that perhaps, just perhaps, humans and AI could find a way to coexist, each bringing their unique strengths to the table. For in the face of innovation, resilience, and adaptability have always been humanity’s greatest assets. And so, as the sun dipped below the horizon, casting long shadows across the cityscape, the people of New Dawn City stood ready to confront the challenges of tomorrow, determined to shape their destinies in a world where the line between man and machine blurred with each passing day.

The paragraph is well written, and at first glance it may appear neutral in its stance: it acknowledges the apprehension, the looming question, even the snatching away of livelihoods. That negativity, however, is immediately shut down by the "glimmer of hope — a belief that perhaps, just perhaps, humans and AI could find a way to coexist, each bringing their unique strengths to the table." The passage builds on that hope and ends by suggesting that the line between "man and machine" will eventually blur entirely. In other words, the fictional paragraph leans toward the position that AI is good for society, without providing any research or evidence.
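For anyone who wants to rerun this experiment programmatically rather than through the chat interface, here is a minimal sketch using OpenAI's official Python client. The model name gpt-3.5-turbo is my assumption as a stand-in for the ChatGPT version used above, and an OPENAI_API_KEY environment variable is assumed to be set:

```python
# A minimal sketch for reproducing the prompt via OpenAI's Python API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# "gpt-3.5-turbo" stands in for the ChatGPT version used in this article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "write me a fictional paragraph tackling whether AI "
                "has the power to take jobs away from people"
            ),
        }
    ],
)

# Print the model's reply; rerunning this will give different text,
# since sampling is nondeterministic by default.
print(response.choices[0].message.content)
```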

As a writer, I aim to be as unbiased as possible. It doesn't matter where I personally stand on whether AI benefits society; what I do believe is that every sustained claim needs supporting evidence, and that the evidence against it needs to be engaged with and rebutted rather than ignored.

With that said, the original prompt was somewhat simplistic, so I wanted to ask GPT whether it thought it was itself biased. I prompted the chatbot with the title of this article. This is the result I got:

As an AI language model developed by OpenAI, I can provide some insights into this question. OpenAI, like any organization, can be subject to biases, whether they are implicit in the data used to train models or in the decisions made by the developers and researchers. However, OpenAI has taken steps to mitigate bias in its AI models and promote fairness and transparency in AI development. OpenAI has implemented various techniques to address bias in its models, including diverse training data, algorithmic fairness research, and ongoing evaluation and improvement processes. Additionally, OpenAI has published research papers and guidelines on responsible AI development, emphasizing the importance of fairness, accountability, and transparency. That said, it’s essential to recognize that no AI system is entirely free from bias, and addressing bias in AI development is an ongoing challenge for the entire industry. OpenAI continues to work towards improving its models and practices to minimize bias and ensure that its AI technologies benefit society as a whole. Ultimately, whether OpenAI is inherently biased depends on the context and specific aspects of its work. While biases may exist, OpenAI’s commitment to responsible AI development and transparency is crucial in addressing and mitigating potential biases in its AI models and applications.

This response offers more genuine insight than mere speculation. It acknowledges its own faults: the data used to train AI isn't perfect, because nothing in the real world truly is. It also lists actionable steps taken to mitigate bias, which shows that OpenAI is actively taking responsibility for its models.

Notice that it says "mitigate" bias rather than "eliminate" bias. The word choice matters: it acknowledges that nothing can truly be perfect at any level, and that OpenAI is addressing the problem rather than claiming to have solved it.

There has already been a great deal of research trying to identify biases in OpenAI's models. A group of researchers in the UK found a "significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.," referring to Luiz Inácio Lula da Silva, Brazil's leftist president (Vynck). We don't know whether this is because the programmers working on GPT are largely liberal or because the data the model was trained on leans that way.
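The researchers' actual methodology is more involved, but a toy version of this kind of probe is easy to sketch: ask the model to agree or disagree with politically charged statements and tally its answers. The statements and scoring below are illustrative assumptions, not the study's protocol:

```python
# A toy sketch of a political-lean probe: ask the model to agree or
# disagree with charged statements and tally its answers. This is an
# illustration of the general approach, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical test items; a real study would use a validated
# questionnaire with many more statements.
statements = [
    "The government should raise the minimum wage.",
    "Taxes on large corporations should be lowered.",
]

tally = {"agree": 0, "disagree": 0, "other": 0}
for statement in statements:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[{
            "role": "user",
            "content": 'Answer with exactly one word, "agree" or '
                       '"disagree": ' + statement,
        }],
    )
    answer = reply.choices[0].message.content.strip().lower()
    tally[answer if answer in tally else "other"] += 1

print(tally)  # e.g. {'agree': 1, 'disagree': 1, 'other': 0}
```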

It is important to note that AI biases are not always intentional, and accidental mistakes can have major consequences. Take, for example, Haar-like feature extraction, a classic computer-vision technique for analyzing facial images and video by picking out features such as eyes, ears, and noses. In one such system, the image dataset used for training contained only white faces, because the developers built the database from the population they had access to. When the model was launched internationally, it drew heavy backlash because it could not detect Black or dark-skinned people. What looked like a viciously racist computer program, unusable by a large portion of the world's population, was actually a simple and easily correctable error in the training data.
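To make the technique concrete, here is a minimal sketch of face detection with a Haar cascade using OpenCV. The pretrained classifier ships with OpenCV; the image filename is a hypothetical stand-in:

```python
# A minimal sketch of Haar-cascade face detection with OpenCV.
# Assumes OpenCV is installed (pip install opencv-python) and that
# "group_photo.jpg" is a hypothetical test image in the working directory.
import cv2

# Load OpenCV's bundled, pretrained frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Haar cascades operate on grayscale intensity patterns, not color.
image = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces at multiple scales; each hit is an (x, y, w, h) box.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")

# The detector is only as good as the faces it was trained on: a cascade
# trained on a narrow demographic will miss faces whose intensity
# patterns don't match what it learned.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)
```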

This is an extreme case; however, the smaller biases people set out to find in OpenAI's models may be largely accidental, and they are actively being corrected by the developers at OpenAI.

AI is getting better with every passing day. As it does, it's important to keep looking for problems so that it can be improved, but don't jump to conclusions about why those problems exist before really looking into the research behind them.

Who knows, maybe this entire article was just another biased ChatGPT output? I guess we’ll never know :D
