Technology

Center for Artificial Intelligence and Digital Policy Files Complaint with FTC Against OpenAI’s GPT-4

The Center for Artificial Intelligence and Digital Policy (CAIDP) has filed a complaint with the United States Federal Trade Commission (FTC) alleging that OpenAI’s latest large language model, GPT-4, is biased and deceptive and poses a risk to public safety and privacy. The non-profit research group claims that the commercial distribution of GPT-4 constitutes “unfair or deceptive acts or practices in or affecting commerce” in violation of Section 5 of the FTC Act.

CAIDP’s complaint, filed on March 30, argues that GPT-4 has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups. The group contends that AI systems like GPT-4 can entrench entire ideologies, truths, and untruths, cementing or locking them in and foreclosing future contestation, reflection, and improvement.

Moreover, CAIDP criticized OpenAI for distributing GPT-4 to the public for commercial use without first subjecting the model to an independent assessment, despite being aware of these concerns. It has therefore asked the FTC to investigate OpenAI’s AI products and those of similar companies.

GPT-4 is the latest version of OpenAI’s large language model, succeeding the GPT-3.5 model that powered ChatGPT at its launch in November. OpenAI’s research, published on March 14, reported that GPT-4 passed some of the most difficult high school and law school examinations in the United States, and the model is claimed to be far more capable than its predecessor. However, prominent figures such as Tesla CEO Elon Musk have raised concerns about the hazards of advanced AI.

While AI has the potential to transform many aspects of society for the better, the dangers of advanced AI systems must be addressed. The complaint filed by CAIDP underscores the importance of ensuring that these systems are designed and deployed in a manner that is responsible, transparent, and safe for all.