Racism has existed for centuries, and for as long as people see differences in one another, it will persist. Recently, however, several incidents involving racism directed at players of colour have taken place in football.
In recent years, and now especially during the pandemic, a number of incidents have exposed the true reality and scale of racism online. With fans unable to attend sporting events, many have turned to online platforms to make comments and remarks, and so-called keyboard warriors have thrived.
As fun and interactive as social media can be, it most certainly has a dark side. It gives everyone a voice that can be heard by anyone, and a freedom of expression many never had before, letting users build communities of like-minded people who share the same views.
This sounds promising when considering its impact on issues such as body image and racial inequality. When it is used to spread racism and hate, however, it becomes hostile and harmful. Players have had to take matters into their own hands: over a year ago they took part in the ‘#Enough’ campaign, boycotting social media for 24 hours in protest against racial abuse. But is social media itself to blame for online racist abuse?
Should social media do more?
The FA has called upon social media companies and governments to act quickly against online abuse and racism. Many have claimed that social media companies are too slow to act on comments of this kind on their platforms, raising questions about whether their regulations and policies are strict enough to guard against hateful and abusive behaviour online. With smart devices becoming ever more capable and nearly universal, policing individual users is currently impossible. For this reason, new measures need to be put in place, as the current system is simply not working.
Newly proposed government policies would introduce further measures to regulate internet companies, such as social media sites, that fail to protect their users adequately. The government believes it cannot allow the leaders of these big tech companies to simply look the other way when they should be taking responsibility for abuse conducted on their own platforms.
(Image Source: BBC)
The graph above clearly shows how different types of discrimination have grown over the years, with racism coming out on top.
Social media platforms such as Facebook have admitted that more could be done to protect players. Facebook has taken the initiative of working with Kick It Out on a fan reporting and education initiative designed to prevent unwanted comments on Instagram.
It has also tripled the size of its safety team, which is thought to have proactively taken action on millions of pieces of hate speech. However, given how large these companies are and the number of users they serve, shown below, it could be argued that more needs to be done to keep users safe.
(Image Source: Statista)
Many tech companies argue that implementing policies and regulations on social media has been hard because they face other pressing concerns, including monitoring their vast, largely unregulated platforms for activities such as terrorism and paedophilia.
They argue that these issues take greater priority, and that efforts to curb the spread of misinformation could suffer while they battle hate speech and racism.
Many authorities and individuals are now calling for an end to online anonymity. Many of those who choose to make abusive comments online do so because they can hide behind a screen. If this identifying information were made visible to platform authorities, it would be easier to target and tackle these individuals: not in a way that would let others attack them for their comments, but so that they could no longer create fake accounts and post racist abuse.