Is AI content helping or hurting your website?
New AI content tools are popping up left and right. These tools can help you write a social post or a whole blog post, answer any question, or even create a brand-new image! That's great when you're short on time or need some inspiration. But the big question is: are AI tools actually helping us? Should we be using them to create our content? This blog post discusses AI content and why we should be mindful of how we use it.
AI-generated content is skyrocketing
AI, or Artificial Intelligence, refers to systems that perform tasks that typically require human intelligence, such as perception, learning, reasoning, problem-solving, and decision-making. And right now, there’s an explosion of AI tools in all shapes and sizes.
The widespread adoption of AI-powered content generators makes it easier than ever to produce content quickly and at scale. With just a few clicks, anyone can generate a half-hearted, generic article that a person must then edit to match their business's tone of voice and check for factual accuracy.
It’s easy to get swept up in all the excitement and generate lots of content using these shiny new tools. There is, however, something we can’t and shouldn’t ignore when using these AIs. It might not surprise you that using an AI tool to create your content results in content that looks a lot like everyone else’s. For one, that isn’t great for your SEO.
It also leads to a much bigger issue that affects all of us: this content often isn’t diverse and inclusive at all. It’s created by AIs that were trained on biased content, content that is often written by the same type of person. Let’s dive into this and find out what can be done!
One of the main concerns with AI-generated content is the lack of originality and authenticity. While algorithms can mimic the style and tone of existing content, they can’t replace the creativity and originality of real people.
AI-generated content often lacks nuance, depth, and originality, which can harm the credibility and reputation of a brand. Moreover, using AI-generated content can perpetuate stereotypes, bias, and exclusionary practices, as algorithms tend to replicate existing patterns and preferences.
Because AI content tools make it so easy to create content, it’s now easier than ever to produce the same content as everyone else. And if everyone uses the same AI to create content, no one is creating anything new. We will, in effect, create an echo chamber with no new thoughts or ideas coming in. This leads to a narrow and non-inclusive view of the world.
François Chollet shared a delightful tweet with his thoughts on AI content.
Related to this, Maggie Harrison at Futurism wrote an interesting article about ChatGPT essentially being an automated mansplaining machine. Having just this one, far-from-inclusive point of view of the world hurts society in so many ways. It doesn’t account for the vast diversity of people and perspectives in our world. Nor does it champion groups of people that have often been neglected and marginalized in the past.
AI training sets have a bias
The Large Language Models (LLMs) that power the likes of Google Bard, Microsoft’s Bing assistant, and OpenAI’s ChatGPT are trained on content from today’s internet. And while most people would like to believe that the internet is diverse and inclusive, it has some very questionable corners.
We should strive for a world that’s much more inclusive than it is today. Using public internet forums to train your AIs may not be the best idea; in recent years, this has led to AIs becoming racist and biased.
A few examples
Training AIs on the internet of today comes with multiple inclusion problems. The AIs themselves become racist, sexist, or ableist because the content they are trained on is racist, sexist, or ableist. Let’s look at a couple of examples.
Amazon’s AI hiring debacle
Take, for example, Amazon’s hiring AI. They developed this tool as the “holy grail” of hiring to help them find the right people for the job. Amazon used ten years’ worth of mostly male resumes to train the AI. Of course, this is a reflection of the tech industry overall, but it also means that the tool became sexist.
Amazon may not have intended to create a sexist AI, but because the data it had been fed was skewed towards male hires, the tool thought it was doing the right thing. An AI will always be biased if the data it’s trained on is biased.
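To see why skewed training data inevitably produces a skewed model, here’s a deliberately oversimplified sketch in Python. This is not Amazon’s actual system; the hire counts and the scoring logic are made up purely to illustrate how any model that rewards similarity to past hires inherits the skew of those hires.

```python
from collections import Counter

# Hypothetical, made-up numbers purely for illustration:
# ten years of historical hires, heavily skewed towards men.
past_hires = ["male"] * 90 + ["female"] * 10

# "Training" here is just counting what past hires looked like.
hire_counts = Counter(past_hires)
total = sum(hire_counts.values())

def score(candidate_gender: str) -> float:
    """Score a candidate by how closely they resemble past hires."""
    return hire_counts.get(candidate_gender, 0) / total

print(score("male"))    # 0.9 -> favored, simply because past hires were mostly men
print(score("female"))  # 0.1 -> penalized, although gender says nothing about ability
```

The toy model never sees the word “sexist” anywhere; it just optimizes for “looks like what worked before,” and that is exactly how historical bias gets baked in.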
Image creation can also be problematic
Another example of AIs going racist is AI image creation. If you want to generate an image of a biracial couple holding hands, it’s not uncommon to see that the generators give back predominantly white people. In July 2022, Dall-E 2 updated its tool to “more accurately reflect the diversity of the world’s population.” Unfortunately, it still produces photos that are far from diverse. It would only show people of color when the word “poor” was added to the prompt.
This isn’t just limited to people of color; the LGBTQI+ community also fell prey to these non-inclusive images. Of course, the tools can adjust their systems, but we still have a long way to go to reflect the world we live in. As Zoe Larkin (Levity) writes in a blog post on AI bias: “Unfortunately, AI is not safe from the tendencies of human prejudice. It can assist humans in making more impartial decisions, but only if we work diligently to ensure fairness in AI systems.”
Don’t forget about the human edit
To some extent, it’s acceptable to use AI tools as shortcuts. Today, however, content creators often use them without considering the data that fed the AIs. This reinforces and expands echo chambers, contributes to the creation of near-identical content, and leads to the production of racist and non-inclusive images.
As a result, content creators need to be more aware of the data and algorithms used by AI tools to ensure that their content is authentic, diverse, and inclusive and does not perpetuate stereotypes or exclusionary practices.
A lot of online content is not representative
Amazon’s hiring tool and Dall-E 2 are just a couple of examples of AI systems going rogue. And it’s not strange that AI content tools go the same way, because the internet is filled with content written by English-speaking, mediocre, white cis men.
For example, a study by the Oxford Internet Institute found that (mostly male) editors in the Western world made most of the contributions to Wikipedia, creating a skewed worldview.
Even if this group is part of your target audience, it’s not the only audience. People from other backgrounds, with all kinds of experiences, currently make up just a small percentage of the voices being heard.
If we want to break the cycle of endlessly repeating the same content, we need to get better at writing and creating more inclusive content. That way, we can train the AIs of the future on a more inclusive and diverse internet.
Make today’s content better for the future
Try not to be that person in the meeting who just repeats what others have said. Produce content in your own voice and make it accessible to the broadest possible audience. All of this makes for a better internet for everyone.
Communicate appropriately with the audience that you are trying to reach. When you’re writing inclusively, you, my friend, are helping to create content that will make the internet of the future a better place.
Be aware of your own bias
It’s not only AIs that have this bias; we all have unconscious biases that we are trying to unlearn. That’s what got us here in the first place. We all need to do better and write more inclusive content. Only by taking the time to write inclusively will we shape today’s internet. This, in turn, means we can train the AI tools of the future on more inclusive and less derogatory language.
That’s a big responsibility, we know. And this is not something that will change overnight; it will take time. We’ll undoubtedly get it wrong. But, by making an effort now to create diverse and inclusive content, we’ll start the ball rolling to a better internet.
So, what can we do?
You can use AI tools as part of your content creation process, but you must do a human edit before hitting publish. Be critical of the content that rolls out of the AI tool: fact-check it and make the much-needed adjustments. Don’t just adjust the tone of voice; also check the content for diversity and inclusivity. Pinpoint anything problematic and improve it to the point where anyone can relate to it and you’re comfortable with it.
How to make your content more inclusive
It can be hard to know where to start. That’s where tools can help get you on the path to a more inclusive and diverse internet. Take, for example, our inclusive language analysis in Yoast SEO. This analysis helps you spot when you may have unconsciously used a term that is not inclusive or is, in fact, racist, sexist, or ableist.
Much like our readability analysis, it looks through your text for words from our database that are racist, sexist, non-inclusive or derogatory. It will help you become aware of those non-inclusive words and phrases. You’ll get feedback and proper alternatives that can improve your content to ensure that site visitors feel spoken to. With just a few small steps in the right direction, we can all hopefully make the world and the web a more inclusive and diverse place for future generations. And for future AIs.
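For the technically curious, the general idea behind a word-list based check can be sketched in a few lines of Python. This is not Yoast SEO’s actual code; the term list and suggested alternatives below are hypothetical examples chosen only to show the mechanism: scan the text for flagged terms and return feedback with an alternative for each hit.

```python
import re

# Hypothetical term list and alternatives, purely for illustration.
NON_INCLUSIVE_TERMS = {
    "chairman": "chairperson",
    "manpower": "workforce",
    "whitelist": "allowlist",
    "blacklist": "blocklist",
}

def check_inclusive_language(text: str) -> list[dict]:
    """Flag non-inclusive terms in a text and suggest an alternative for each."""
    feedback = []
    for term, alternative in NON_INCLUSIVE_TERMS.items():
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            feedback.append({
                "term": match.group(0),
                "position": match.start(),
                "suggestion": alternative,
            })
    # Return the feedback in reading order.
    return sorted(feedback, key=lambda item: item["position"])

print(check_inclusive_language("We need more manpower; ask the chairman."))
# [{'term': 'manpower', 'position': 13, 'suggestion': 'workforce'},
#  {'term': 'chairman', 'position': 31, 'suggestion': 'chairperson'}]
```

A real analysis adds context awareness, explanations, and a much larger, carefully curated database, but the core feedback loop is the same: flag the term, explain why, and offer a better alternative.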
Source: Yoast.com