5 minutes to #DeepLearning on #Azure #DataScience VM with #Docker #NVidiaDigits #Caffe–CPU version

Quick post following the previous one: if you can't use a GPU for any reason, you can still use the code & Docker images below to do the same, though training models will take a lot longer.

Started from the DIGITS CPU Docker image here (amazing work on these Docker files by Kai Arulkumaran) and just added volume configuration for DIGITS data & jobs.

So, on Docker/Ubuntu, you can use:

docker run --rm --name digits-cpu -d -v $PWD/digits-data/:/data -v $PWD/digits-jobs:/jobs -p 5000:5000 rquintino/digits-cpu && \

docker exec -it digits-cpu python -m digits.download_data mnist /data/mnist


On Docker/Windows:

docker run --rm --name digits-cpu -d -v "%cd:\=/%/digits-data":/data -v "%cd:\=/%/digits-jobs":/jobs -p 5000:5000 rquintino/digits-cpu

start http://localhost:5000

docker exec -it digits-cpu python -m digits.download_data mnist /data/mnist

One issue you'll probably hit with Docker for Windows data volumes is performance. There are some known performance issues with certain file operations on Windows bind mounts, which the MNIST prepare script (60k small images) is likely triggering.
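A possible workaround sketch (untested assumption on my part, not something from the DIGITS docs): prepare the dataset inside the container's own filesystem, which avoids the slow Windows bind mount, and only copy the results out to the host once at the end.

```shell
# Download & prepare MNIST inside the container filesystem (fast),
# instead of writing 60k small files through the Windows bind mount.
docker exec -it digits-cpu python -m digits.download_data mnist /tmp/mnist

# Copy the prepared dataset out to the host in a single operation.
docker cp digits-cpu:/tmp/mnist ./digits-data/mnist
```

The trade-off is that `/tmp/mnist` lives in the container layer, so it disappears when the container is removed; `docker cp` persists it to the host afterwards.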

CPU Train Performance

Using the GPU on the Azure Data Science VM (NC series), a full 30-epoch training run takes around 1-2 minutes; using the CPU version above I had to wait ~19 minutes. So 5 minutes won't be enough!



Image on Docker Hub / GitHub




5 minutes to #DeepLearning on #Azure #DataScience GPUs VM with #Docker #NvidiaDigits #Caffe

Shared previously on Twitter, but for the record: it's really very easy to get a glimpse of deep learning on the new GPU-powered Linux Azure Data Science VMs. No code required, aside from the shell/nvidia-docker command lines provided below.


We'll be using NVIDIA DIGITS, which wraps the Caffe deep learning framework and does all the heavy lifting for you.

If you don't have an Azure subscription you can get a trial easily.



Create the DataScience Linux VM

Be sure to select HDD and an NC series VM (with K80 GPUs).


Wait a few moments until it’s available for remote connections

Connect to the Linux VM using SSH (ex: putty, mobaxterm)

You can run dsvm-more-info for a guide to what's available; pretty much everything you need is there.


Start Digits Docker container with nvidia-docker, download & prepare mnist dataset

shell code: (also available here)

sudo usermod -aG docker $USER && \

mkdir ~/data && \

mkdir ~/digits-jobs && \

nvidia-docker run --name digits -d -v ~/data:/data -v ~/digits-jobs:/jobs -p 5000:5000 nvidia/digits && \

docker exec -it digits python -m digits.download_data mnist /data/mnist

this will:

  1. add the current user to the docker group, so you don't need sudo constantly to use docker
  2. create data and digits-jobs host folders for persisting data and DIGITS jobs outside of the container
  3. run the DIGITS container (GPU-based by default, exposing DIGITS port 5000, via nvidia-docker so the container can use the host GPUs)
  4. execute a Python script inside the container to download & prepare the MNIST dataset

God, I love Docker…! (thank you Pedro Sousa, I owe you for life!)


SSH Tunnel for digits http/5000

Now, DIGITS has no real authentication, so I suggest you configure an SSH port tunnel (don't expose this endpoint!). That way you can open http://localhost:5000 on your laptop and reach the remote DIGITS endpoint running in the Docker container on the Azure Data Science VM host.
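As a minimal sketch (assuming your VM login is `azureuser`; replace `<vm-ip>` with your VM's public IP), the tunnel can be opened from your laptop like this:

```shell
# Forward local port 5000 to port 5000 on the VM over SSH.
# -N: don't run a remote command, just keep the tunnel open
# -L: local-port:destination-host:destination-port (as seen from the VM)
ssh -N -L 5000:localhost:5000 azureuser@<vm-ip>
```

With the tunnel open, http://localhost:5000 on your laptop reaches DIGITS on the VM; on Windows, PuTTY's Connection > SSH > Tunnels settings achieve the same.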

Open Nvidia digits on http://localhost:5000

You may have to wait a few seconds, and then should open on your local browser.

You can follow the guide here, no code needed: Getting Started (NVidia Digits) 

No more code is required to train your first deep convolutional network on GPUs and get ~99% accuracy on the MNIST handwritten digits dataset. It should take only a few minutes.

Note: when creating the dataset, use /data/mnist/train as the path for the MNIST training data; you don't need to download the MNIST data again.


Some additional screens

Exploring dataset images


Training default LeNet network


Network config


Digits built-in network visualizer


GPU training (1 tesla K80)



done, ~1 min 37 seconds


testing trained model/epoch


it works!


activations, filter weights,…



Persisted mnist data and digits jobs on host vm

so you can stop the Docker container if needed, later start a completely new one, and continue where you left off


Having shown the DIGITS no-code approach here, the truly inspiring part is getting to know how these convolutional neural networks work and how they learn the filters on their own (they learn the $”%#@#”"&% reusable filters/weights! then how to use them to build other filters… and so on). Absolutely amazing.

If you're interested, a few good sessions:

Tensorflow and deep learning – without a PhD by Martin Görner 

How Deep Neural Networks Work, by Brandon Rohrer

Take care!


#SqlPort upcoming CTP hands on workshop–Cloud #MachineLearning & #DataMining using #AzureML 29 Nov Lisbon

It's already this Saturday, the 29th, that I'll be hosting the first AzureML hands-on workshop in Portugal. It has been a pleasant surprise to see it filling up rather quickly, thanks in large part to the restless SQLPort community team (particularly Niko & Paulo's amazing work… allowing me to focus fully on the workshop itself… more pressure for sure… no excuses!).

I'm rather curious to see how it goes, as there will surely be an interesting mix of profiles attending this workshop. I hope that contributes to making this a very interactive, personally demanding, but enjoyable day!

The event is organized by SQLPort, and all resulting revenue will go to community support & upcoming events.

Check out the workshop agenda below or on the event page here. The event will take place at Microsoft Portugal.

Also disclosing that every participant will be granted access to my own personal notes & best highlights from some of my favorite data books… (got to love the Amazon/Kindle platform), all of which I'm reviewing right now ;).

  • Data Science for Business: What You Need to Know About Data Mining and Data-Analytic Thinking
  • The Signal and the Noise: Why So Many Predictions Fail-but Some Don't
  • Naked Statistics: Stripping the Dread from the Data

See you there!


CTP – Community Technology Preview 2014 Edition – Cloud Machine Learning & Data Mining using AzureML


Saturday, 29 November 2014, 09:00 to 17:30 (WET)

Lisbon, Portugal



A full one-day hands-on workshop, including an introductory course on the machine learning process, use cases, models & algorithms, plus how to model, test & deploy machine learning solutions with the new Microsoft AzureML cloud service.


  • Understand Machine Learning & Data Science Fundamentals
  • Model & Deploy Machine Learning Solutions with AzureML, ML Studio & AzureML Web Services

Attendee pre-requisites/profile

  • Data Analysts & Enthusiasts, Business Intelligence Professionals, Data Scientists & Machine Learning practitioners


  • Introduction to Data Science and Machine Learning
  • Machine Learning Use Cases, Models & Workflow
  • Introduction to Azure Machine Learning & ML Studio
    • Data Cleansing & Transformation
    • Data Analysis & Visualization
    • Machine Learning Models: Classification, Regression, Clustering & Text Mining
    • Model Tuning, Scoring & Evaluation
    • Using R in Azure ML
    • Publishing Models as Web Services
    • Using Excel with Azure ML
  • Books & Resources for AzureML, Data Science & Machine Learning