
Conservatives Panicking About AI Bias, Think ChatGPT Has Gone ‘Woke’…


Conservative media recently discovered what AI experts have been warning about for years: systems built on machine learning, like ChatGPT and facial recognition software, are biased. But in typical fashion for the right wing, they aren't upset about the well-documented bias against minorities embedded in these systems, the kind of bias that gave rise to the field of AI safety. Instead, they think AI has actually gone woke.

Accusations that ChatGPT was woke began circulating online after the National Review published a piece accusing the machine learning system of left-leaning bias because it won’t, for example, explain why drag queen story hour is bad.

National Review staff writer Nate Hochman wrote the piece after attempting to get OpenAI’s chatbot to tell him stories about Biden’s corruption or the horrors of drag queens. Conservatives on Twitter then fed ChatGPT various prompts to prove just how “woke” the chatbot is. According to these users, ChatGPT would tell a joke about a man but not about a woman, flag content related to gender, and refuse to answer questions about Mohammed. To them, this was proof that AI has gone “woke” and is biased against right-wingers.

In reality, this is all the end result of years of research aimed at mitigating the bias against minority groups that is already baked into machine learning systems, which are trained largely on people’s conversations online.

ChatGPT is an AI system trained on inputs, and like all AI systems, it carries the biases of the inputs it’s trained on. Part of the work of ethical AI researchers is to ensure that their systems don’t perpetuate harm against large numbers of people; that means blocking some outputs.
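The “blocking some outputs” step can be pictured as a filter that sits between the model’s draft answer and the user. Below is a minimal, hypothetical Python sketch of that idea; it is not OpenAI’s actual pipeline (which relies on trained classifiers, human feedback, and written policies rather than a keyword list), and the function name and term list here are invented for illustration.

```python
# Hypothetical sketch of a post-hoc output filter, not OpenAI's real system.
# A draft response from the model is checked against a harm policy before it
# is shown to the user; real deployments use trained classifiers, not keywords.

BLOCKED_TERMS = {"placeholder_slur", "placeholder_harassment"}  # invented examples

def moderate(draft_response: str) -> str:
    """Return the draft response, or a refusal if it trips the harm check."""
    lowered = draft_response.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't help with that request."
    return draft_response

if __name__ == "__main__":
    print(moderate("Here is a neutral explanation of how chatbots work."))
    print(moderate("A draft containing placeholder_slur would be refused."))
```

The point of the sketch is only that someone has to decide what goes in the policy, which is exactly the kind of value-laden choice the researchers quoted below describe.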

“The developers of ChatGPT set themselves the task of designing a universal system: one that (broadly) works everywhere for everyone. And what they’re discovering, along with every other AI developer, is that this is impossible,” Os Keyes, a PhD candidate at the University of Washington’s Department of Human Centered Design & Engineering, told Motherboard.

“Developing anything, software or not, requires compromise and making choices—political choices—about who a system will work for and whose values it will represent,” Keyes said. “In this case the answer is apparently ‘not the far-right.’ Obviously I don’t know if this sort of thing is the ‘raw’ ChatGPT output, or the result of developers getting involved to try to head off a Tay situation, but either way—decisions have to be made, and as the complaints make clear, these decisions have political values wrapped up in them, which is both unavoidable and necessary.”

Tay was a Microsoft-designed chatbot released on Twitter in 2016. Users quickly corrupted it and it was suspended from the platform after posting racist and homophobic tweets. It’s a prime example of why experts like Keyes and Arthur Holland Michel, Senior Fellow at the Carnegie Council for Ethics and International Affairs, have been sounding the alarm over the biases of AI systems for years. Facial recognition systems are famously biased. The U.S. government, which has repeatedly pushed for such systems in places like airports and the southern border, even admitted to the inherent racial bias of facial recognition technology in 2019.

Michel said that discussions around anti-conservative political bias in a chatbot might distract from other, more pressing discussions about bias in extant AI systems. Facial recognition bias—largely affecting Black people—has real-world consequences. The systems help police identify suspects and decide who to arrest and charge with crimes, and there have been multiple examples of innocent Black men being wrongly flagged by facial recognition. A panic over not being able to get ChatGPT to repeat lies and propaganda about Trump winning the 2020 election could set the discussion around AI bias back.

“I don’t think this is necessarily good news for the discourse around bias of these systems,” Michel said. “I think that could distract from the real questions around this system which might have a propensity to systematically harm certain groups, especially groups that are historically disadvantaged. Anything that distracts from that, to me, is problematic.” 

Both Keyes and Michel also highlighted that discussions around a supposedly “woke” ChatGPT assigned more agency to the bot than actually exists. “It’s very difficult to maintain a level headed discourse when you’re talking about something that has all these emotional and psychological associations as AI inevitably does,” Michel said. “It’s easy to anthropomorphize the system and say, ‘Well the AI has a political bias.’”

“Mostly what it tells us is that people don’t understand how [machine learning] works…or how politics works,” Keyes said. 

More interesting for Keyes is the implication that it’s possible for systems such as ChatGPT to be value-neutral. “What’s more interesting is this accusation that the software (or its developers) are being political, as if the world isn’t political; as if technology could be ‘value-free,’” they said. “What it suggests to me is that people still don’t understand that politics is fundamental to building anything—you can’t avoid it. And in this case it feels like a purposeful, deliberate form of ignorance: believing that technology can be apolitical is super convenient for people in positions of power, because it allows them to believe that systems they do agree with function the way they do simply because ‘that’s how the world is.’”

This is not the first moral panic around ChatGPT, and it won’t be the last. People have worried that it might signal the death of the college essay or usher in a new era of academic cheating. The truth is that it’s dumber than you think. And like all machines, it’s a reflection of its inputs, both from the people who created it and the people prodding it into spouting what they see as woke talking points.

“Simply put, this is anecdotal,” Michel said. “Because the system is also open-ended, you can pick and choose, anecdotally, instances where the system doesn’t operate according to what you would want it to. You can get it to operate in ways that sort of confirm what you believe may be true about the system.”
