Only artificial intelligence (AI) can keep social media firms in business as governments move to tackle online harm, threatening compliance costs, fines and even jail terms for executives.
Stricter regulation of social media companies is now high on the agenda of many national governments. In the UK, plans are in place to create a statutory duty of care towards social media users, enforced by a new independent regulator with powerful sanctions.
Firms will be held to account if they fail to tackle a comprehensive list of perceived online harms and abuses. Initial details of the proposed legislation are set out in the UK Government’s Online Harms White Paper which is under consultation until July.
The new obligations cover a range of activities, from those that are illegal to those falling under the broader duty of care, which extends breaches to include publishing content relating to harmful behaviours even if the activity itself is legal.
Companies failing to comply could see individual executives facing criminal prosecution, fines, disqualification from directorships and even jail. Offending firms would also be subject to substantial financial penalties calibrated to the size of the business, while the most extreme corporate sanctions for the worst offenders would see sites blocked from search engines and UK ISPs, effectively putting them out of business.
Australia also recently passed legislation that could imprison executives if their platforms stream real violence, as occurred with the recent mosque shootings in neighbouring New Zealand.
With huge volumes of online content created each day, it is difficult to see social media firms and online community services continuing to operate unless they automate the process of content control. Automation, through AI and machine learning, is thus essential. Firms cannot afford a laissez-faire attitude that expects communities to self-police; nor can they afford the legions of human workers needed to review all current and historical online content. Investment in technology to review posts, images and videos numbering in the hundreds of billions will be essential. Only AI and machine learning can handle such volumes in a user-friendly way; social media users, online shopping reviewers, bloggers and vloggers will not tolerate their posts being vetted by a committee before publication. Firms that fail to make these investments are unlikely to be able to sustain their social media model.
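The triage logic behind such automation can be sketched simply: score each post, publish the clear cases automatically, and route only the borderline ones to human reviewers. The keyword list and thresholds below are illustrative placeholders, not a real harm-detection model, which in practice would be a trained classifier.

```python
# A minimal moderation gate: each post is scored and either published,
# blocked, or queued for human review, so that only a small fraction of
# content ever needs manual attention.

BLOCKED_TERMS = {"harmterm1", "harmterm2"}  # placeholder for a trained model's vocabulary


def harm_score(text: str) -> float:
    """Return a crude harm score in [0, 1] based on flagged-term density."""
    words = text.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for word in words if word in BLOCKED_TERMS)
    return flagged / len(words)


def moderate(text: str, block_at: float = 0.5, review_at: float = 0.1) -> str:
    """Route a post: 'publish', 'review' (human queue), or 'block'."""
    score = harm_score(text)
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "review"
    return "publish"
```

The key design point is the middle band: automation handles the bulk of traffic at machine speed, while anything the model is unsure about falls through to humans, whose decisions can then feed back into retraining.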
The good news for such firms is that the technology to tackle these problems is developing fast. Natural Language Processing (NLP) has made significant progress in recent years, with many good open-source and commercial tools available. Image processing, and in particular video processing, remains harder because of the volume of data and the greater complexity and range of content to detect. But with technology already in place to identify copyright infringement, it should be easier to prevent the proliferation of previously recognised content. An excellent opportunity therefore exists for innovative firms that can develop these technologies at scale, since to date the tech giants have largely failed to do so.
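Blocking previously recognised content works by fingerprint lookup, much like the hash-matching used in copyright enforcement. Production systems use perceptual hashes that survive re-encoding and cropping; the sketch below substitutes an exact SHA-256 digest purely to keep the example self-contained.

```python
# Sketch of re-upload detection for previously recognised banned content.
# Real systems use perceptual hashing (robust to re-encoding); an exact
# SHA-256 digest stands in here as a simplified fingerprint.
import hashlib

known_bad_hashes = set()


def fingerprint(data: bytes) -> str:
    """Compute a content fingerprint (exact digest in this sketch)."""
    return hashlib.sha256(data).hexdigest()


def register_banned(data: bytes) -> None:
    """Add confirmed harmful content to the block list."""
    known_bad_hashes.add(fingerprint(data))


def is_reupload(data: bytes) -> bool:
    """True if an upload matches previously recognised banned content."""
    return fingerprint(data) in known_bad_hashes
```

Because the lookup is a constant-time set membership test, the check scales to billions of uploads far more cheaply than classifying each one from scratch.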
To operate at the right scale, companies need efficient, optimised back-end NLP and machine learning environments, such as those provided by Verne Global. These facilitate the training of social media analytics models and AI at industrial scale, allowing companies to scan and review content correctly and to incorporate feedback on any exceptions quickly.