Generally, trade shows follow the sun and the tourists to popular vacation destinations. Everyone loves a conference in San Diego or Orlando! The recently rebranded NeurIPS (formerly NIPS) took a different road this year and visited Montreal in early December. Montreal is one of my favourite cities, but early December there is the season for cold, cloudy weather and intermittent freezing rain. Here's a quick rundown of my experiences at the conference.
Thinking back, maybe it was a stroke of genius to host NeurIPS in a chilly location - hardly anyone skipped the conference sessions to go outside! Even the normally exuberant McGill students were focused on their finals and looking somewhat pensive. With NeurIPS in town, Montreal took the opportunity to showcase the tremendous amount of grassroots AI development taking place there, and the interesting, entrepreneurial start-ups utilising deep neural network technologies that are sprouting up in every corner of the city. It was an excellent, thought-provoking few days in Quebec.
So on to the conference itself - NeurIPS was the thirty-second 'Conference on Neural Information Processing Systems' – the technical heart of AI. I felt rather undressed with my degree in Electrical and Electronic Engineering! Like other AI events, the conference has been growing exponentially and boasted over 8,000 international attendees this year. Perhaps 90% were academics, both students and staff. The enterprise attendees were either research boffins, compute equipment vendors or, surprisingly, recruiters trying to lock in the next generation of hot data scientists and machine learning engineers.
This is the only AI event I've attended where I've seen more than one quant hedge fund. Quant funds are usually solitary, secretive creatures, so seeing fourteen or more recruiting next to one another was a bit of a shock.
Garnering admission to the conference was a trick in itself - all the conference passes sold out within the first eleven minutes! I managed with an exhibit hall pass, which was ideal considering my differential equation maths is rather rusty.
There is often a dichotomy between the compute resources needed for academic versus enterprise AI. Many of the papers presented at NeurIPS focused on the mathematical techniques that can help DNN training converge on a robust solution; for that purpose, validating the results perhaps a hundred times is enough, and it is often done on a workstation or a high-powered GPU-augmented server. Enterprise AI, by contrast, often requires training DNNs on millions of images of all types, in all lighting conditions, to produce a robust DNN fit for a production product. This typically requires a large HPC compute cluster with hundreds or thousands of CPUs and GPUs.
I spent a fun day visiting all the booths in the exhibit hall, trying multiple ways to convince the recruiters staffing them to introduce me to their IT management. The jury is still out on whether I succeeded.
Most attendees were very interested in the sessions and less so in the exhibits, but during the breaks and the evening socials the exhibits were mobbed. The booths revealed some interesting and orthogonal things about their owners. Bosch has created an internal AI team that acts as a consulting group to guide point development projects and refresh its next generation of automotive technology – very smart.
The financial services companies are driving AI into a multitude of areas, from program trading algorithm development to data-mined news feeds to complex risk assessments. However, I was truly surprised that about 10% of the attendees were from Google or DeepMind. The other hyperscalers were also well represented, but not to this extent. Tom Diethe, from Amazon's retail AI team in Cambridge, is applying AI to sticky problems in supply chain management and really making a difference.
After the exhibits I worked the show app to set up interesting one-on-one meetings during the breaks, which resulted in some fascinating discussions. Sizigi Studios are applying DNN technology to automate the coloring of animated video; they have the potential to accelerate and repatriate much of the manual animation post-design work from East Asia to the USA. Perhaps a better, more entrepreneurial approach to making America great again than simply trade tariffs!
Lightmatter, a Boston, USA company, is developing photonic processors to accelerate low-power computation. I had a great chat with two of their physicists, learning which aspects of DNN training compute can be synthesized in optics and, perhaps more importantly, which cannot. It appears we are still some way from implementing non-linear functions in optics, but overall I was impressed with what feels like a modern-day reincarnation of analog computing at light speed.
I also encountered my first machine vision application that likely doesn't need retraining once in production. This small custom application uses a phone to read the final electric meter reading before a smart meter is installed, and performs QA on the manual final reading taken by the technician. The machine vision is currently 97% accurate, which is more than enough to identify the process and equipment failures that lead to large-scale final-reading errors. Once retrained, the DNN would likely improve only slightly, so the company elected to focus on its next puzzle.
All in all, my three days at NeurIPS were well worth the trouble, and I was yearning to stay longer and avoid the red-eye to London for far less technical discussions. I can't wait for NeurIPS 2019 in Vancouver.
Following our first AI and HPC Field Trip to Iceland in October, we're pleased to announce that we will be running a second Field Trip in early 2019. If you're working in the field of intensive AI and deep neural network development and would like to be considered for attendance, please get in touch - firstname.lastname@example.org