Wolf in Open Source clothing

In an interesting Medium article, Andrew Leonard wrote about how Amazon may be starting to compete with some of its Open Source software partners. Andrew’s article delved into the specifics of the case involving Elastic and its Elasticsearch open source software. Elastic has been happy to offer Elasticsearch in its Open Source form on the AWS platform, and many customers were happy to consume Elastic’s capabilities that way.

Now comes the rub. Elastic, like any good commercially driven organisation working with Open Source, adjusted its model so that it began to charge for the premium components in its offering. There is nothing inherently wrong with this practice, and it can be seen as positive for the Open Source movement: a thriving ecosystem of commercial offerings built on Open Source technology makes it easier for enterprise customers to adopt the technology and increases innovation. It is also a sensible business approach; many early Open Source commercial models relied on providing software support alone, which proved a hard way to grind out a business. If someone is providing genuine innovation, and they haven’t taken Open Source code under, say, the GPL, then it is not unreasonable for them to charge for it.

On the opposite side of the argument, Open Source adherents would see charging for such technology as anathema, against the principles of the Open Source movement. Interestingly, or conveniently, depending on your point of view, this seems to be the view taken by Amazon. In what may be seen as a move of Open Source evangelism, Amazon decided it didn’t like Elastic’s approach. Its response was to develop replacement versions of Elastic’s premium products and rent those out on its platform - effectively killing Elastic’s cloud business.

AI and technology startups often run the risk of building up too much reliance on the software, services and APIs of the hyperscalers, leaving them unable to run either their software or their business independently. The danger is not only that they might get locked into a specific platform, but also that they fail to develop their own genuine IP, as developers take the lazy route with pre-packaged services.

Often a great, safer and lower cost alternative to the hyperscale clouds is to look at the diverse market of focused cloud service providers. Many are committed to Open Source and, by the nature of their businesses, offer open platforms built using technologies such as OpenStack. Doing so allows software companies to retain control and portability of their code, with the ability to deliver it both as a cloud offering and as an on-premises capability, without getting locked into specific clouds and APIs. It also means your developers will need to build real IP. This last point is vital; you are not a real AI company, and cannot justify AI valuations, if you are consuming someone else’s AI via an API. The issue is magnified with the advent of serverless computing, where there is far greater potential for API lock-in unless proper standardisation comes about. At present Amazon's Lambda has some integration with OpenAPI 3.0, and Google is championing Knative, which codifies best practices shared by successful real-world Kubernetes-based frameworks across public clouds and private/third party data centers.
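To make the portability point concrete, here is a minimal sketch - an illustration only, not a description of any particular provider's stack - of keeping object storage access behind a configurable, S3-compatible endpoint so the same code can run against AWS, an OpenStack-based provider, or an on-premises object store. The endpoint URL, environment variable names and bucket are hypothetical placeholders.

```python
# Illustrative sketch: target any S3-compatible endpoint via configuration,
# rather than hard-coding a single cloud's service into the application.
# Endpoint, credential variable names and bucket below are placeholders.
import os
import boto3


def make_storage_client():
    # If S3_ENDPOINT_URL is unset, boto3 talks to AWS S3 itself; pointing it
    # at another S3-compatible store (e.g. Ceph RGW, MinIO, or an
    # OpenStack-based provider) requires no code changes, only configuration.
    return boto3.client(
        "s3",
        endpoint_url=os.environ.get("S3_ENDPOINT_URL"),
        aws_access_key_id=os.environ["STORAGE_ACCESS_KEY"],
        aws_secret_access_key=os.environ["STORAGE_SECRET_KEY"],
    )


def upload_model(path: str, bucket: str, key: str) -> None:
    """Upload a trained model artefact to whichever object store is configured."""
    client = make_storage_client()
    client.upload_file(path, bucket, key)


if __name__ == "__main__":
    upload_model("model.pt", bucket="my-models", key="v1/model.pt")
```

The point is design discipline rather than any particular library: keep the provider-specific details in configuration, and the code - and your IP - stays portable between clouds and on-premises deployments.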

At Verne Global, we think of ourselves as offering equal and fair commercial opportunity. Yes, we run a commercial business and this grows through offering excellent service to satisfied customers, but we also want and need our customers to succeed.


Written by Vasilis Kapsalis


Vas is Verne Global's Director of Deep Learning and HPC Solutions. He comes with a wealth of experience from the global technology sector, with detailed knowledge in Deep Learning, Big Data and HPC, as well as consultancy skills in IoT and digital transformation.

