Assessing Platforms’ Attempts to Curb Misinformation

We live in a time when social media platforms play an important role in our everyday lives, and people increasingly turn to them for news and information. As social media grows more popular, traditional sources of information, such as television news and newspapers, are faltering. However, while social media platforms can be a quick and convenient way to stay up to date on current events, they are also an ideal vehicle for misinformation, because anyone can share anything, including inaccurate information.

Telling the difference between accurate and misleading information can be challenging because misinformation is often convincing and made to look real. Furthermore, most platforms have few safeguards against misinformation, since the majority of it is treated as protected speech. In the rare cases where misinformation is deemed harmful, some platforms will step in and moderate the content, but this is not the norm. On most platforms, users are left to judge for themselves whether something is true or false, which is why so much misinformation circulates daily.

While most forms of misinformation are allowed under the right to free expression and appear to be poorly regulated on most platforms, some platforms do have mechanisms in place to shield their users from certain types of it. TikTok is one such platform: a popular app that lets users create and share content, mostly short videos, as well as host “live” streams and comment on posts. TikTok content ranges from cooking videos and pranks to more serious topics like current affairs and conspiracy theories. Although TikTok is stricter than many other platforms, particularly regarding nudity and certain language, its minimum age is thirteen, which exposes many young users to misinformation daily. Even so, TikTok maintains policies intended to limit the volume and types of misinformation and protect users from harm.
In this post, we will look at two social networks’ misinformation policies (if they have any) and evaluate how effective they are at combating harmful misinformation. We will examine TikTok and Twitter because both are incredibly popular networks with millions of daily users, and they take different approaches to the restrictions and regulations their users must follow.

As noted above, TikTok is one of the stricter social media platforms: it prohibits the use of specific terms and phrases, as well as nudity and violence, and is quick to remove anything containing these elements. Even so, misinformation can still make its way onto the app, because individuals can post their opinions, conspiracy theories, or firsthand accounts, none of which is necessarily factual. TikTok’s community guidelines state that the platform does not allow inaccurate or misleading content that may cause harm to people or society, regardless of intent. TikTok defines severe harm as “physical, psychological, or societal harm, and property damage.”

TikTok’s community guidelines additionally state that these rules do not cover commercial or reputational harm, nor do they prohibit simple inaccuracies or misconceptions. The platform also says it uses fact-checkers to verify the accuracy of content. In my own experience on TikTok, I have seen misinformation in the form of fake or exaggerated stories, conspiracy theories (some of which are frequently debunked), and false advertisements (products presented as more useful than they are), but I have not encountered truly harmful misinformation on the platform. While intense conspiracy theories do circulate on TikTok, particularly those alleging a corrupt and secretive government, I have never seen anything particularly damaging there. I occasionally come across something that must have slipped past TikTok’s moderators, but such posts are usually removed quickly. Overall, TikTok does a fantastic job of shielding its users from explicit content and damaging falsehoods. Rather than enforcing stricter guidelines to combat misinformation, TikTok should reverse course and relax the restrictions that prevent people from sharing information or discussing their experiences simply because certain words are used.

Twitter, now known as X, is another prominent social networking platform where users can share and comment on material. Unlike TikTok, X lets people publish nearly whatever they want. The platform contains clips of people fighting or being shot, advertisements for explicit material, and plenty of profanity. If something cannot be posted on another platform, such as TikTok or Instagram, people turn to X to post and view content that would be restricted practically everywhere else. While X is significantly less restrictive than other apps, any content that could be considered explicit or sensitive carries a warning you must click through before viewing it, and that warning appears on practically all content on the app, which says a lot. Furthermore, while X was still known as Twitter, the site used to display warnings on posts that might contain misinformation. These labels were especially common during the 2020 election and the COVID-19 outbreak, when misinformation was at its peak. Recently, however, I have not seen these misinformation warnings applied at all, even on material that looks and sounds suspicious.

X’s media policy states that “synthetic, manipulated, or out-of-context media” is prohibited on the site, but it does not explicitly address misinformation more broadly. The policy defines deceptive and manipulated content as “misleading media”: media that is significantly distorted, fabricated, or altered; media that is shared to deceive; and media that may cause public confusion or harm. X warns that sharing content designated as misleading media may result in the post being labeled as misleading or deleted, or the account being restricted.

When I read X’s policy, I wasn’t surprised by how casual the company is about the spread of misinformation on its site. As a user, I frequently encounter misinformation, bullying, inappropriate content, and bizarre conspiracy theories; most other media sites look tame in comparison, especially given their stricter limits. To counteract misinformation, X should reinstate its warning labels, informing users that the content they are viewing may not be entirely accurate and that they should do additional research. If X does not intend to restrict the content that users share on the platform, it should at least warn them about potential misinformation.
