September 2018

Deep learning with Multi-GPUs

Distributed deep learning is a hot topic because, depending on the problem and the data you are working with, it can dramatically reduce training time compared to a single GPU. However, modifying your single-GPU program to run on multiple GPUs is not always straightforward! In this post, I'm going to talk about training deep learning models on multiple GPUs: why it is challenging, which factors matter when making deep learning distributed, and finally which libraries/frameworks make this process easier. This post is based on the content of the course "Fundamentals of Deep Learning for Multi-GPUs", offered by NVIDIA, which I recently took.

GPUs are the platform that has made deep learning accessible. Multiple GPUs are used to speed up computation further, or in cases where the memory of a single GPU is not sufficient. The first question is: do you need to use multiple GPUs, or is it enough to train your deep learning models on a single GPU? It
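To make the idea concrete, here is a minimal, framework-free sketch of synchronous data parallelism, the most common multi-GPU training scheme: each device gets a shard of the batch, computes a gradient on its shard, and the gradients are averaged before the weight update. Everything here (the `local_gradient` function, the toy least-squares model, the learning rate) is a hypothetical stand-in for illustration, not code from the NVIDIA course; real frameworks provide this machinery through their distributed APIs.

```python
# Conceptual sketch of synchronous data-parallel training.
# Pure Python stand-in: the "devices" run sequentially here, but the
# structure (shard -> local gradient -> all-reduce average -> update)
# mirrors what multi-GPU frameworks do in parallel.

def local_gradient(w, shard):
    # Hypothetical per-device gradient for a 1-D least-squares model:
    # loss = mean((w*x - y)^2) over the shard, so dL/dw = mean(2*x*(w*x - y)).
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_devices, lr=0.01):
    # Split the global batch into one shard per simulated device.
    shards = [batch[i::num_devices] for i in range(num_devices)]
    # Each device computes a gradient on its own shard.
    grads = [local_gradient(w, s) for s in shards]
    # "All-reduce": average the per-device gradients, then apply the update.
    avg_grad = sum(grads) / num_devices
    return w - lr * avg_grad

# Toy data drawn from y = 3x; training should recover w close to 3.
batch = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, num_devices=4)
print(round(w, 2))
```

The key design point this sketch illustrates is that synchronous data parallelism leaves the mathematics of the update unchanged: averaging the shard gradients reproduces (up to shard-size effects) the gradient of the full batch, which is why a single-GPU model can in principle be scaled out without changing the algorithm, only the plumbing.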