Best practice - CUDA installed inside Docker container or on host?

I have an ML project in Python that I need to dockerize.
The project needs TensorFlow and CUDA.
The application has to be deployed on approximately 200 machines.

Currently, I have pulled the TensorFlow 1.2.0 image from Docker Hub and built my application around it, with CUDA 8.0 installed on the host machine.
If a future version forces me to upgrade beyond CUDA 8.0, I will have to repeat that upgrade on all 200 machines.

So, are there any disadvantages to installing CUDA inside my Docker image compared to having CUDA installed on the host machine?
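
For context, this is roughly the image-bundled alternative I'm considering (a sketch only; the `tensorflow/tensorflow:1.2.0-gpu` tag on Docker Hub already bundles CUDA 8.0 and cuDNN inside the image, and `requirements.txt` / `app.py` are placeholders for my actual project files):

```dockerfile
# Sketch: bundle CUDA inside the image instead of relying on the host install.
# The -gpu tag is built on nvidia/cuda and already contains CUDA 8.0 + cuDNN,
# so each host would only need the NVIDIA driver and nvidia-docker.
FROM tensorflow/tensorflow:1.2.0-gpu

# Install the project's Python dependencies
# (requirements.txt is a placeholder for my real dependency list).
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Copy the application code (app.py is a placeholder entry point).
COPY . /app
WORKDIR /app

CMD ["python", "app.py"]
```

On each host this would then be started with `nvidia-docker run my-image`, since nvidia-docker mounts the host's driver into the container at run time. With this setup a CUDA upgrade would only mean rebuilding and redistributing the image rather than touching all 200 hosts, as long as the host drivers stay compatible.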