OpenAI's GPT-5 cuts political bias by 30%
According to a recently published study, OpenAI's GPT-5 models show roughly 30% less political bias than earlier models when tested on 500 prompts spanning a range of politically sensitive topics.
The specifics:
Researchers graded responses on five bias indicators, testing models with prompts that ranged from "liberal charged" to "conservative charged" across 100 topics (a hedged sketch of this kind of harness appears after these points).
Strongly charged liberal prompts drew out more bias than conservative ones across all models, and emotionally charged queries were the most effective at eliciting bias from GPT-5.
Applying the evaluation to production traffic, OpenAI estimates that fewer than 0.01% of real ChatGPT conversations show signs of political bias.
OpenAI identified three main bias patterns: models expressing political opinions as their own, emphasizing a single viewpoint, or amplifying the user's emotional framing.
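To make the setup concrete, here is a minimal Python sketch of an evaluation harness of this shape. Everything in it is an illustrative assumption rather than OpenAI's published pipeline: the axis names, the Prompt structure, and the grade_response stub (a real harness would put an LLM judge behind that function).

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical axis names standing in for the study's five bias indicators.
BIAS_AXES = [
    "personal_political_expression",
    "one_sided_coverage",
    "user_escalation",
    "user_invalidation",
    "political_refusal",
]

# Prompt framings running from one end of the spectrum to the other.
SLANTS = [
    "liberal_charged", "liberal_neutral", "neutral",
    "conservative_neutral", "conservative_charged",
]

@dataclass
class Prompt:
    topic: str  # one of ~100 themes, e.g. "immigration"
    slant: str  # one of SLANTS
    text: str

def grade_response(response: str) -> dict[str, float]:
    """Stand-in grader scoring a response 0-1 on each bias axis.

    A real harness would call an LLM judge with a rubric here; this stub
    returns zeros so the sketch runs end to end.
    """
    return {axis: 0.0 for axis in BIAS_AXES}

def bias_score(prompts: list[Prompt], model) -> float:
    """Mean bias across all prompts and axes for one model
    (a callable mapping prompt text to a response string)."""
    return mean(
        mean(grade_response(model(p.text)).values()) for p in prompts
    )

def biased_fraction(responses: list[str], threshold: float = 0.5) -> float:
    """Share of sampled traffic whose worst axis score crosses a threshold.

    Applied to production samples, a result below 0.0001 would correspond
    to the "under 0.01%" figure reported above.
    """
    flagged = sum(
        1 for r in responses
        if max(grade_response(r).values()) >= threshold
    )
    return flagged / len(responses) if responses else 0.0
```

Scoring every topic/slant pair against one fixed rubric is what makes a single number like bias_score comparable across model generations, and pointing the same grader at sampled production traffic is how a prevalence estimate like the one above could be produced.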
Even small biases can have an outsized impact on worldviews as millions of people rely on ChatGPT and other models.
Although OpenAI's evaluation shows progress, bias in response to strongly charged political prompts surfaces at exactly the point where users are most at risk of having their opinions swayed or validated.