Taking advantage of multiple GPUs to train on large images
TL;DR
Model parallelism should be used when a model cannot be trained on a single GPU due to memory constraints. This technique splits the model across multiple GPUs and executes each part on a different accelerator. In this way, huge models can be split up and made to fit in memory.
Historically, this technique has been complex to implement: splitting the model evenly, assigning each part to the right GPU, and executing the resulting pipeline efficiently are not easy to achieve.
Eisen 0.0.5 implements model parallelism in one line of code, making it a no-brainer for anyone using more than one GPU and struggling with memory issues.
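To illustrate the idea behind model parallelism, here is a minimal hand-rolled sketch in plain PyTorch (not Eisen's API): the model is split into two stages, each placed on its own device, and activations are moved between devices in the forward pass. The `SplitModel` class and device names are illustrative; the sketch falls back to CPU when fewer than two GPUs are available so it runs anywhere.

```python
import torch
import torch.nn as nn

# Pick two devices; fall back to CPU when GPUs are unavailable (illustrative only).
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 1 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

class SplitModel(nn.Module):
    """Hypothetical model split across two devices (manual model parallelism)."""
    def __init__(self):
        super().__init__()
        # First half of the network lives on dev0...
        self.stage1 = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU()).to(dev0)
        # ...second half lives on dev1.
        self.stage2 = nn.Conv2d(8, 1, 3, padding=1).to(dev1)

    def forward(self, x):
        x = self.stage1(x.to(dev0))
        # Move intermediate activations to the second device between stages.
        return self.stage2(x.to(dev1))

model = SplitModel()
out = model(torch.randn(2, 1, 64, 64))
print(out.shape)
```

Each stage's parameters consume memory only on its own device, which is what lets a model too large for one GPU fit across several. Eisen's one-line wrapper automates this splitting and placement so you don't have to write it by hand.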

#python #medical-imaging #deep-learning #computer-vision #pytorch
