Nvidia’s GPU Technology Conference (GTC) in San Jose always marks the start of spring. Although the daffodils haven’t yet broken ground in Boston at this time of year, the days are warming and the snow is becoming less frequent. In San Jose, spring was in full swing, with sunny days and temperatures above 20°C following a winter of record rains. The GTC conference addresses all things GPU and, increasingly, their AI use cases. Over the past few years attendance has ramped from a few thousand to around 9,000, and the largest grouping of attendees appears to be applying GPUs to AI deep neural network training and inference.
As a French citizen living in London, with the dark Brexit cloud looming over us all, I am pleased to see some things moving in the right direction in my beloved home country - such as the massive investments the Macron government has made to push tech innovation throughout the country and to deliver the upgrades it so desperately needed.
Sometimes the combination of networking at a trade show and catching an insightful presentation provides valuable insight into market dynamics. Earlier this year I attended the HPC and Quantum Computing show in London and, following this, watched Addison Snell’s (CEO of Intersect360 Research) “The New HPC” presentation from the Stanford HPC Conference. Both confirmed the HPC suspicions I have garnered over the last year.
BeeGFS is a parallel file system suitable for High Performance Computing (HPC), with a proven track record in the scalable storage solutions space. In this blog, hosted by Verne Global, we explore how the different components of BeeGFS are pieced together and how we have incorporated them into an Ansible role for a seamless storage cluster deployment experience.
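To give a flavour of what role-based deployment looks like, a typical Ansible setup reduces cluster bring-up to a short playbook. The fragment below is an illustrative sketch only - the role name `beegfs` and every variable shown are assumptions for the example, not the actual interface of the role described in the post:

```yaml
# Hypothetical playbook sketch: role name, host group and variable
# names are illustrative, not the real role's interface.
- hosts: storage_nodes
  become: true
  roles:
    - role: beegfs
      vars:
        beegfs_mgmt_host: mgmt01      # management service host (assumed variable)
        beegfs_enable_meta: true      # run a metadata service on these nodes
        beegfs_enable_storage: true   # run a storage service on these nodes
```

Running `ansible-playbook -i inventory site.yml` against a playbook like this would then configure the management, metadata and storage services consistently across every node in the inventory.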
Generally, trade shows follow the sun and the tourists to popular vacation destinations - everyone loves a conference in San Diego or Orlando! The recently rebranded NeurIPS (formerly NIPS) took a different road this year and visited Montreal in early December. Montreal is one of my favourite cities, but early December is the season for cold, cloudy weather and occasional freezing rain. Here's a quick rundown of my experiences at the conference.
Today, data scientists and machine learning engineers can implement systems that tackle discrete tasks with a very high degree of success. Results in fields such as image processing to support cancer diagnosis can be extremely accurate, with the best performing algorithms even exceeding experienced clinicians for certain categories of cancer diagnosis.
Business historians might one day see 2018 as a pivotal year. We are in the midst of an AI revolution, with more and more data being processed by algorithms that will help us to make better decisions or simply make the decisions for us. But the collection and exploitation of this data is not without costs and historians might view this year as the year when society began to realise that.
The FT's recent article “EU bankers step up warnings on threat from big tech” highlights moves by the European Union (EU) to provide a level playing field in the European finance sector. Undoubtedly, EU financial institutions are aware of the threat that the global tech giants pose to existing business models. This concern may have been exacerbated recently by the new Open Banking regulations, which, as the article suggests, may allow the easy identification and cherry-picking of the most lucrative parts of the business.
SC18 here in Dallas is proving once again to be a fascinating melting pot of HPC insights and observations, and it's intriguing to see the continuing convergence of AI with the supercomputing ecosystem. Along these lines I started to think about the movement towards 'Explainable AI'. Being able to explain and understand how models work when making predictions about the real world is a fundamental tenet of science. Whether solving equations in a dynamic system for precise answers or using statistical analysis to examine a distribution of events, the results sought from these methods are intended to increase our clarity and knowledge of how the world works.
SC18 gets underway in Dallas, Texas, today, kicking off a week of panels, workshops and tutorials. This is the 30th year of the conference, which is properly known as the International Conference for High Performance Computing, Networking, Storage and Analysis. It's an exhausting schedule to navigate and even experienced attendees will have to do some diligent research to pick out the highlights. Here are mine...
SC: The big show with an international HPC audience celebrates its 30th year in 2018. It’s the World Cup of supercomputing, and now it’s more than “just” supercomputing: advancements in data analytics have made topics like artificial intelligence (AI), including machine learning and deep learning, the stars of the show. Here's what I am looking forward to seeing in Dallas...
Last week I was privileged to be part of our AI and HPC Field Trip to Iceland. The goal of the trip was to share insights and observations around the evolution of AI deep neural network (DNN) training, and to tour our HPC-optimised data center. The attendees spanned DNN veterans like Eric Sigler of OpenAI, large enterprise data science leaders like Pardeep Bassi of Liverpool Victoria Insurance (LVE), and start-up pioneers like Max Gerer of e-bot7.
Deep learning is a current hot topic in HPC, and I am sure it will be one of the key themes at SC18. Artificial intelligence, machine learning, and deep learning are often discussed together, though in reality machine learning is a type of AI, and deep learning is a subset of machine learning. In this article, we will define deep learning and its industrial applications and, more specifically, the benefits of scale - from big data to high-performance computing - for the successful implementation of deep learning.