
Using AI in Content Moderation Could Help Sustain 'Health' of Digital Platforms - OpenAI

MOSCOW (Sputnik) - The use of artificial intelligence in content moderation could offer "a more positive vision" of the future of digital platforms and play a crucial role in sustaining their "health," OpenAI, the creator of the AI-powered chatbot ChatGPT, said Tuesday.
"We're exploring the use of LLMs [large language models] to address these challenges. Our large language models like GPT-4 can understand and generate natural language, making them applicable to content moderation. The models can make moderation judgments based on policy guidelines provided to them ... We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators," the company said in a release.
The company said that content moderation "plays a crucial role in sustaining the health of digital platforms," adding that the use of AI-powered moderation results in "much faster iteration on policy changes, reducing the cycle from months to hours." OpenAI noted that GPT-4 can "interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling."
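As a rough illustration of the approach OpenAI describes, policy-based labeling with GPT-4 might look like the following Python sketch. The policy text, label set, and moderate helper are hypothetical examples, not OpenAI's actual moderation prompts; the sketch assumes the official OpenAI Python client and a valid API key.

# Minimal sketch: asking GPT-4 to label content against a platform-specific policy.
# The policy wording, labels, and sample content are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """\
Label the user content with exactly one of: ALLOW, FLAG, REMOVE.
- REMOVE: direct threats or calls to violence.
- FLAG: borderline or ambiguous cases that need human review.
- ALLOW: everything else.
Answer with the label only."""

def moderate(content: str) -> str:
    """Return the model's moderation label for a piece of user content."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
        temperature=0,  # keep labeling as deterministic as possible
    )
    return response.choices[0].message.content.strip()

print(moderate("I love this community!"))  # expected: ALLOW

Because the policy lives in the prompt rather than in a trained classifier, updating the rules is a matter of editing the policy text, which is the kind of rapid iteration on policy changes the company describes.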
ChatGPT gained popularity after its launch in November 2022, acquiring its first million users in less than a week. In late January, Microsoft said it would invest "billions of dollars" in OpenAI. In March, OpenAI introduced GPT-4, a new multimodal AI model capable of processing both text and image inputs and solving complex problems with greater accuracy.
The model has received mixed reviews for its ability to mimic human conversation and generate original text from user prompts. Some have praised it for professional applications, such as code development, while others have criticized its potential for misuse, such as students using it to write essays.