The French Parliament began debating a bill on Wednesday regarding online speech and harassment, which aims to bring tech laws up to date with the country’s position on hate speech and incitement to violence.
The bill was first announced in February by President Emmanuel Macron, who said that “incisive, concrete” action was needed to tackle such hate speech, whether online or offline.
In France, public provocation to hatred and justification of or incitement to terrorism, among other acts, are crimes punishable by law. Until now, however, these laws have not been applied to social media, which is where Macron wants to take the fight.
The French President initially proposed the measure after noticing an increase in anti-Semitic attacks and extremist language online. With the bill, Macron hopes to push tech companies to take more responsibility for what happens on their platforms.
Germany was the first country to put this type of law into effect, back in 2018. It allows the government to fine Facebook up to 50 million euros if the company fails to remove illegal posts within 24 hours. This week, Germany used the law to fine the social media platform 2 million euros for failing to comply with it.
Critics said the bill would do little more than further censor online speech, while also giving social networks too much power to decide what counts as illegal content, thus stifling free expression.
The French bill, which will soon move to the Senate, states quite clearly that companies must remove any content that incites hatred based on race or religion, as well as child pornography.
If companies fail to remove such content within a pre-established time frame, they will face fines of up to 1.25 million euros.
Social media platforms have come under fire a lot lately and have faced continuous pressure from politicians to step up their efforts against hate speech and the distribution of extremist material.
The terrorist attacks in Christchurch, New Zealand, in which 51 people were killed and dozens injured, only added fuel to the fire: the gunman live-streamed the shootings via Facebook Live, allowing the footage to be recorded and shared all over the internet.
The Facebook Live stream was neither flagged nor interrupted at any point during the attack.
This week, Facebook announced that it will roll out new policies specifically designed to stop hate speech across its platform and that the company will be “listening to feedback from the civil rights community and address the important issues they’ve raised so Facebook can better protect and promote the civil rights of everyone who uses our services.”