Challenges of Scale in Deep Learning

Oxford e-Research Centre
October 18, 2017 - 13:00 to 14:00
Conference Room (278)

7 Keble Road, Oxford OX1 3QG

  • Seminar
  • No booking required
  • Open to all
  • Many-Core Series
  • Lunch provided

Adam Grzywaczewski from NVIDIA will present a seminar entitled:

 Challenges of Scale in Deep Learning


Deep learning algorithms require substantial computational resources and were made possible largely by the exponential growth described by Moore's law. Although this is common knowledge, very few people appreciate just how much compute is required for real-life problems, such as those involved in the development of a self-driving car. This requirement frequently exceeds not only the capability of a single GPU but also that of a single multi-GPU system, leading to training times of months, if not years. As a consequence, it is frequently critical to scale to tens, if not hundreds, of GPUs to achieve reasonable training times.
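To give a sense of why this scale matters, here is a minimal back-of-envelope sketch in Python. All workload and throughput numbers are hypothetical assumptions for illustration, not figures from the talk; the efficiency factor crudely models the imperfect scaling the talk addresses.

```python
# Illustrative estimate of multi-GPU training time.
# All numbers below are hypothetical assumptions, not figures from the talk.

def training_days(total_exaflops: float, gpu_tflops: float,
                  n_gpus: int, scaling_efficiency: float = 0.9) -> float:
    """Days needed to perform `total_exaflops` (10^18 FLOPs) of training
    work on `n_gpus` GPUs, each sustaining `gpu_tflops` TFLOP/s, with
    imperfect multi-GPU scaling modelled by `scaling_efficiency`."""
    effective_tflops = gpu_tflops * n_gpus * scaling_efficiency
    seconds = total_exaflops * 1e18 / (effective_tflops * 1e12)
    return seconds / 86400  # seconds per day

# A hypothetical workload of 1000 exaFLOPs on GPUs sustaining 15 TFLOP/s:
single = training_days(1000, 15, 1, scaling_efficiency=1.0)  # roughly 2 years
cluster = training_days(1000, 15, 64)                        # roughly 2 weeks
```

Even with a 10% scaling loss, moving from one GPU to 64 turns a multi-year training run into a matter of weeks, which is why scaling across many GPUs is often the only practical option.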

This talk will provide an overview of the hardware-, software-, and algorithm-related challenges of achieving this required scale, and of ways to address them.

About the speaker

Adam Grzywaczewski is a deep learning solution architect at NVIDIA, where his primary responsibility is to support a wide range of customers in the delivery of their deep learning solutions. Adam is an applied research scientist specializing in machine learning, with a background in deep learning and system architecture. Previously, he was responsible for building up the UK government's machine-learning capabilities while at Capgemini, and he worked in the Jaguar Land Rover Research Centre, where he was responsible for a variety of internal and external projects and contributed to the self-learning car portfolio.