Facebook-owned chat platform WhatsApp will take legal action against users who send bulk messages, beginning December 7, 2019.
The company said it hopes to put a stop to abuse of the service by spammers, who send huge numbers of messages to particular users.
Such practices are banned under its terms of service, and users can be banned for contravening them. But the company has now pledged to pursue other punishments too.
In an update to the FAQ section of its website, WhatsApp said that from December 7 this year it will consider taking legal action against anyone deemed to be using the platform for activities such as “bulk or automated messaging.”
The firm said its service is meant to be used for private messaging, or by companies interacting with customers via its dedicated Business app, and that it will not tolerate its use for spam.
“WhatsApp was designed for private messaging, so we’ve taken action to prevent bulk messaging and enforce limits on how WhatsApp can be used,” the firm stated.
“We’ve also stepped up our ability to identify accounts that misuse WhatsApp, which helps us ban two million accounts globally per month.”
The messaging service currently has more than 1.5 billion active users. A Guardian check showed that, as of January, 41 per cent of Nigeria’s population used WhatsApp, with Instagram, YouTube, and Facebook Messenger far behind at just 25 per cent and 24 per cent of the population.
The updated page on the firm’s website on “unauthorised usage of WhatsApp” said the company is “committed to reinforcing the private nature of our platform and keeping users safe from abuse” and that it will use all the resources at its disposal to prevent abuse of its terms of service.
“Beginning on December 7 2019, WhatsApp will take legal action against those we determine are engaged in or assisting others in abuse that violates our terms of service, such as automated or bulk messaging,” the information page states.
WhatsApp warned that it would also consider legal action even if that decision is based on “information solely available to us off our platform”.
Earlier this year, WhatsApp released a white paper which warned of automated and spam messages being used to spread “problematic content”.
It said efforts to stop this behaviour are “particularly important during elections where certain groups may attempt to send messages at scale”.
Earlier this year, the company confirmed it would be limiting the number of times users can forward any single message to five in an attempt to stop false information spreading on the platform.
This feature was initially tested in India in 2018 after a string of mob attacks in the country were blamed on fake reports which were said to have spread via the app.