How AI can keep your executives out of jail

Only artificial intelligence (AI) can keep social media firms from shutting their doors. Costs, fines and even jail terms for executives loom as governments move to tackle online harm.

Stricter regulation of social media companies is now high on the agenda of many national governments. In the UK, plans are in place to create a statutory duty of care toward social media users, and a new independent regulator with powerful sanctions is to be established.

Firms will be held to account if they fail to tackle a comprehensive list of perceived online harms and abuses. Initial details of the proposed legislation are set out in the UK Government’s Online Harms White Paper, which is open for consultation until July.

The new obligations cover a range of activities, from those that are illegal to those falling under the wider duty of care, which extends breaches to include publishing content relating to harmful behaviours even if the activity itself is legal.

Companies failing to comply could see individual executives facing criminal prosecution, with sanctions including fines, disqualification from directorships and even jail. Offending firms would also face substantial financial penalties calibrated to the size of the business, while the most extreme corporate sanctions would include blocking sites from search engines and UK ISPs, effectively putting the worst offenders out of business.

Australia has also recently passed legislation under which executives could be imprisoned if their platforms are used to stream real-world violence, as happened during the recent mosque shootings in neighbouring New Zealand.

With huge volumes of online content created each day, it is difficult to see how social media firms and providers of online community services can continue to operate unless they automate content control. Automation, through AI and machine learning, is thus essential. Firms cannot afford a laissez-faire attitude that expects communities to police themselves, nor can they afford the legions of human workers that would be needed to review all new and historical online content. Investment in technology that can review posts, images and videos numbering in the hundreds of billions will be essential. Only AI and machine learning can handle such volumes in a user-friendly way; social media users, online shopping reviewers, bloggers and vloggers won’t tolerate posts being reviewed by a committee before publication. Firms that don’t make these investments are unlikely to be able to sustain their social media model.
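To make the scale problem concrete, here is a rough back-of-envelope estimate. Every number in it is an illustrative assumption, not a figure from the White Paper or from any platform's reporting:

```python
# Back-of-envelope estimate of the human review burden (illustrative assumptions only).
items_per_year = 200_000_000_000            # assumed items (posts, images, videos) to review
seconds_per_item = 10                       # assumed average human review time per item
reviewer_seconds_per_year = 220 * 8 * 3600  # ~220 working days of 8 hours each

reviewers_needed = items_per_year * seconds_per_item / reviewer_seconds_per_year
print(f"Full-time reviewers needed: {reviewers_needed:,.0f}")
# => roughly 316,000 full-time reviewers, before any backlog of historical content
```

Even with generous assumptions, a purely manual approach quickly reaches headcounts that no firm could sustain.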

The good news for such firms is that the technology to tackle these problems is developing fast. There has been significant progress in Natural Language Processing (NLP) in recent years, with many good open-source and commercial tools available. Image processing, and in particular video processing, remains harder because of the volume of data and the greater complexity and range of content to detect. But since technologies already exist to identify copyright infringement, preventing the proliferation of previously recognised content should be more straightforward. An excellent opportunity therefore exists for innovative firms that can develop these technologies at scale, since to date the tech giants have largely failed to do so.
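As a minimal sketch of what today's open-source tooling already allows, the example below flags potentially harmful text with a pre-trained classifier and checks an image against previously recognised content using a perceptual hash. The model name, thresholds and the empty hash set are illustrative assumptions, not a production moderation pipeline:

```python
# Minimal content-screening sketch built from open-source tools (illustrative only).
# Assumed choices: the "unitary/toxic-bert" model, the 0.9 score threshold and the
# Hamming-distance cut-off of 5; KNOWN_HASHES stands in for a real database of
# previously recognised harmful images.
from transformers import pipeline
from PIL import Image
import imagehash

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def flag_text(post: str, threshold: float = 0.9) -> bool:
    """Return True if the post should be held for human review."""
    result = toxicity(post)[0]           # e.g. {"label": "toxic", "score": 0.97}
    return result["label"] == "toxic" and result["score"] >= threshold

KNOWN_HASHES: set[imagehash.ImageHash] = set()  # hashes of previously recognised content

def flag_image(path: str, max_distance: int = 5) -> bool:
    """Return True if the image is perceptually close to known harmful content."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)
```

The same pattern used for copyright matching applies here: once an item has been recognised once, cheap hash comparisons can stop it spreading again without re-running expensive models.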

To operate at the right scale, companies need efficient, optimised back-end NLP and machine learning environments, such as those provided by Verne Global. These facilitate the training of social media analytics and AI models at industrial scale, allowing companies to scan and review content accurately and to incorporate feedback on any exceptions quickly.
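One way that feedback loop might look in practice is sketched below; the thresholds and the harm-score interface are hypothetical, intended only to show how uncertain items can be diverted to human reviewers whose decisions become labelled data for the next training run:

```python
# Hypothetical human-in-the-loop routing sketch (all names and thresholds assumed).
review_queue: list[tuple[str, float]] = []   # items awaiting human review

def route(post: str, harm_score: float, low: float = 0.2, high: float = 0.9) -> str:
    """Route a post based on an assumed model confidence score in [0, 1]."""
    if harm_score >= high:
        return "block"                        # confident detection: remove automatically
    if harm_score <= low:
        return "publish"                      # confident clear: publish without delay
    review_queue.append((post, harm_score))   # uncertain: hold for human review; the
    return "hold"                             # reviewer's decision is logged as a new
                                              # training label for the next run
```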




Written by Vasilis Kapsalis


Vas is Verne Global's Director of Deep Learning and HPC Solutions. He brings a wealth of experience from the global technology sector, with detailed knowledge of Deep Learning, Big Data and HPC, as well as consultancy skills in IoT and digital transformation.

