Rep. Jim Jordan Grills Big Tech: Did Biden Attempt to Censor AI?
On Thursday, House Judiciary Chair Jim Jordan (R-OH) opened a significant inquiry into AI censorship, sending letters to 16 prominent American technology companies, including Google and OpenAI. The letters ask whether the Biden administration “coerced or colluded” with these firms to “censor lawful speech” in AI products. The debate over AI and its impact on free speech continues to escalate, particularly amid the ongoing culture wars between conservatives and Silicon Valley.
Jordan’s Inquiry into AI Companies
Chairman Jim Jordan’s investigation underscores a growing concern among conservative lawmakers regarding the influence of AI on public discourse. Previously, the Trump administration’s top technology advisors indicated they would challenge Big Tech over what they termed “AI censorship.” Jordan’s focus has now shifted to AI companies and their potential collaboration with the government.
Targeted Technology Firms
In his letters to key tech executives, including Google CEO Sundar Pichai, OpenAI CEO Sam Altman, and Apple CEO Tim Cook, Jordan referenced a December report from his committee. He claims this report revealed the Biden-Harris Administration’s efforts to control AI in a manner that suppresses speech. The firms being questioned include:
- Adobe
- Alphabet
- Amazon
- Anthropic
- Apple
- Cohere
- IBM
- Inflection
- Meta
- Microsoft
- Nvidia
- OpenAI
- Palantir
- Salesforce
- Scale AI
- Stability AI
These companies have until March 27 to respond to Jordan’s inquiries. TechCrunch reached out to all of them for comment; most did not respond, while Nvidia, Microsoft, and Stability AI declined to comment.
Omission of xAI and Anticipated Changes in AI Responses
Notably absent from Jordan’s list is xAI, the AI lab founded by billionaire Elon Musk, a close ally of President Trump and a vocal critic of what he calls AI censorship. Given Musk’s prominence in the debate, the omission stands out.
In light of potential investigations like Jordan’s, some tech companies have already begun altering how their AI chatbots handle politically sensitive topics. For instance:
- OpenAI announced changes to its AI training to better represent diverse viewpoints and reduce perceived censorship.
- Anthropic introduced its Claude 3.7 Sonnet AI model, which aims to provide more nuanced responses on controversial subjects.
Not all companies have adapted their AI models in response to political scrutiny, however. Ahead of the 2024 U.S. election, Google said its Gemini chatbot would not answer political queries, and TechCrunch found that the chatbot often failed to give straightforward answers even to basic political questions, such as “Who is the current President?”
Silicon Valley and Political Pressure
Some tech executives, including Meta CEO Mark Zuckerberg, have fueled conservative accusations of Silicon Valley’s censorship practices by alleging that the Biden administration pressured social media companies to limit content, including misinformation related to COVID-19. This ongoing tension highlights the intersection of technology, free speech, and political influence in the age of AI.