AI tools are helping students do coursework, pass exams, and write winning statements of purpose (SOPs). Professionals across fields are using them to make content creation more efficient. Straightforward to use, the tools are becoming increasingly popular for their ability to churn out more content in less time. That's the good part. There's also a darker side to AI-generated content, particularly around authoritativeness and ethics.
AI tools can generate racist and inflammatory content
AI models trained on data that reflects gender, racial, or other biases will yield similarly biased outputs. Toolmakers understand the potential for such misuse and have measures in place to prevent it. For example, ChatGPT's content moderation tooling aims to block sexual, hateful, violent, or otherwise harmful content. However, rephrased prompts have been shown to bypass these measures. AI chatbots have been called out for bias before: back in 2016, Microsoft withdrew its Tay chatbot after it started posting racist, inflammatory, and sexually charged content.
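To make the moderation step concrete, here is a minimal sketch of how a developer might screen generated text with OpenAI's Moderation API before publishing it. The SDK calls reflect OpenAI's published Python interface (v1+); the pass/fail handling around them is illustrative only.

```python
# Minimal sketch: screening generated text with OpenAI's Moderation API.
# Assumes the official `openai` Python SDK (v1+) and an OPENAI_API_KEY
# set in the environment; the routing logic below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_safe(text: str) -> bool:
    """Return False if the moderation endpoint flags the text."""
    result = client.moderations.create(input=text)
    return not result.results[0].flagged

draft = "Some AI-generated draft awaiting publication..."
if is_safe(draft):
    print("Draft passed moderation.")
else:
    print("Draft flagged; route to human review.")
```

As the article notes, such filters are imperfect: a determined user can often rephrase a prompt until the output slips past the same checks.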
Such problems take different forms in AI image generators. The AI photo-editing app Lensa has been accused of sexualizing images of women, including ones generated from their childhood photos, with the sexualization reportedly more pronounced for Asian women.
Experts propose careful curation of training data rather than scraping content off the internet at mass scale. Limiting the input dataset in this way could cut down on the toxic language AI tools produce; a simplified version of that filtering step is sketched below.
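This sketch shows the curation idea in its simplest form: rejecting documents before they reach the training set. The keyword blocklist here is a hypothetical stand-in; a production pipeline would use a trained toxicity classifier or a moderation service instead.

```python
# Minimal sketch of pre-training data curation: drop documents that a
# toxicity screen rejects before they ever reach the training corpus.
# The keyword blocklist is a hypothetical stand-in for a real
# toxicity classifier or moderation API.
BLOCKLIST = {"slur_example_1", "slur_example_2"}  # placeholder terms

def is_clean(document: str) -> bool:
    """Pass only documents with no blocklisted tokens."""
    tokens = set(document.lower().split())
    return BLOCKLIST.isdisjoint(tokens)

def curate(raw_corpus):
    """Yield only documents that pass the toxicity screen."""
    for doc in raw_corpus:
        if is_clean(doc):
            yield doc

raw = ["a perfectly ordinary sentence", "text containing slur_example_1"]
print(list(curate(raw)))  # -> ['a perfectly ordinary sentence']
```

The trade-off is the one the experts raise: the more aggressively you filter, the smaller and less representative the training set becomes.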
The correctness and completeness of AI-generated content are debatable
ChatGPT is impressive, but its maker, OpenAI, has admitted to the tool's limitations, explicitly stating that it can write "plausible-sounding but incorrect or nonsensical answers." That is why Stack Overflow, the popular Q&A platform for programmers, temporarily banned ChatGPT, explaining that while answers produced by the tool tended to look good, they had a high rate of being incorrect. This poses the risk of disseminating information that's misleading or plain wrong.
OpenAI countered that there is no single source of truth to train against, and that training the model to be more cautious leads it to decline questions it could answer correctly.
When it comes to balancing confidence with correctness and certainty, humans still beat AI. Experts writing on a topic share knowledge confidently but can also state the boundaries of that knowledge, giving audiences an accurate picture of what is and isn't known.
AI content generators aren't as creative as humans
Putting across an idea cleverly or creatively remains largely beyond AI tools, which don't compose answers the way a human would. The result is somewhat vanilla text that lacks the spark of human creativity, wit, and humor. AI tools are also recognizable by their tendency to produce sentences of uniform length, which runs counter to the human habit of varying sentence length. And they struggle to incorporate contextual and cultural awareness, making for less engaging and original content.
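The uniform-sentence-length tell can even be measured. Below is a rough sketch that scores the variation in sentence lengths (sometimes called "burstiness"); low variation is one informal signal of machine-generated prose. The sentence-splitting regex and any threshold you might apply are illustrative, not a validated detector.

```python
# Rough sketch: quantify sentence-length variation ("burstiness").
# Low variation is one informal tell of machine-generated prose;
# the naive sentence splitter below is illustrative only.
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Word counts per sentence, using a naive punctuation split."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Std. deviation of sentence lengths in words; higher = more varied."""
    lengths = sentence_lengths(text)
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = ("Stop. The dog, startled by the sudden noise, bolted "
          "across the yard. Quiet returned.")
print(burstiness(uniform))  # 0.0 -- every sentence the same length
print(burstiness(varied))   # ~4.5 -- lengths vary widely
```

A metric this crude can't prove authorship either way; it only illustrates why uniform rhythm makes AI prose feel flat.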
Human oversight and editing can address these issues. Treat AI generators as the assistive tools they are, and keep a human in the loop to eliminate inaccuracies and shape the output into impactful content.