“Mom, I need more money to buy a CUDA-enabled GPU!” — a practical guide to machine learning with AMD GPUs.

Dewyan Thilakasiri
3 min read · Jun 2, 2021

When I built my PC, I was super excited and pumped up with the hype. I ordered all of the components from a local store and, until the lot arrived, I thought I was going to explode from the excitement. My main objectives were to play some old games and do some machine learning. Little did I know that AMD GPUs don’t support CUDA. All of that hype washed away before I even realised it. Then the GPU shortage came, and I couldn’t find a decent GPU to switch to. So basically, at the moment, I’m stuck with my good old AMD RX 570 4GB.

Nevertheless, I couldn’t stay away from ML, and tbh gaming feels a lot cooler on an AMD GPU compared to Nvidia. And after all, hey! AMD is cheap, so who am I to complain? Still, I had to look for alternatives to CUDA, and as someone who uses Windows as their primary OS, I had to dual boot a Linux distro and install ROCm. It was quite a headache for me, though. Since I was so familiar with Windows and its GUI, it was pretty hard for me to get along with Ubuntu.

When I say “install”, it is not just clicking the Next button until something happens, as you do on Windows. You have to find the correct repo, then find the other repos that match it, and so on. For example, ML uses TensorFlow, right? Well, did you know that TF ships different builds: a CPU build, a GPU build and even a TPU build? What’s even worse is that the official packages don’t support ROCm (not yet, anyway), and the sad part is the versioning: the packages are named with a ton of decimal points and, yeah, you have to type each of them into the terminal one by one.
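If you do go down the ROCm road, here is a minimal sketch of what the install and sanity check might look like from Python, assuming AMD’s separate tensorflow-rocm builds and a card that ROCm actually supports; the version number below is only an illustration, you have to pick the one that matches your ROCm release (which is exactly the decimal-point juggling I’m moaning about):

```python
# The ROCm build of TensorFlow is a separate package; pin a version that
# matches your installed ROCm stack (2.4.1 here is only an example):
#   pip3 install tensorflow-rocm==2.4.1
import tensorflow as tf

# On a working ROCm setup with a supported card, the GPU shows up here.
print("TensorFlow:", tf.__version__)
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
```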

I used ROCm with OpenCL as the backend, and it turned out to be an excellent alternative to CUDA. The results were pretty amazing, tbh. Yet I didn’t want to be stuck in Ubuntu all the time while all my favourite programs were on Windows. So again, I went looking for an alternative, and that’s when I found Google Colab. I know I’m late to the party, but hey! It’s impressive, right? What makes Colab special is that it gives you not just CPUs and GPUs but also a TPU, built around fresh MXUs and VPUs. The VPU handles the general float32 and int32 computations, while the MXU churns through the matrix maths in mixed precision: 16-bit bfloat16 inputs with 32-bit float accumulation. Isn’t that just amazing?!
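If you want to poke at that TPU yourself, here is a rough sketch of how you could grab it from a Colab notebook with TensorFlow 2.x and switch Keras to the bfloat16 mixed-precision policy the MXU is built for; it assumes the TPU runtime is selected and falls back to whatever else the notebook has:

```python
import tensorflow as tf

# Assumes a Colab notebook with "TPU" chosen under Runtime -> Change runtime type.
try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
    print("Running on a TPU with", strategy.num_replicas_in_sync, "cores")

    # Keep activations in bfloat16 so the MXU does the matrix maths in
    # mixed precision while the variables stay in float32.
    tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")
except (ValueError, tf.errors.NotFoundError):
    # No TPU attached: fall back to whatever device the runtime has.
    strategy = tf.distribute.get_strategy()
    print("No TPU found, using the default strategy")
```

Model building then goes inside `with strategy.scope():` so the layers and their variables actually land on the TPU cores.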

So my point is this: why spend more and more money upgrading your rig when you can access all of that in the cloud? If you have an AMD GPU with no CUDA support, you have two ways to go. Switch to Linux, install ROCm and match up all those PyTorch versions and TF nightly versions and whatnot; that is the hard way. Or simply open your browser and head over to Google Colab, where you get a fairly capable machine for free, and of course, you can upgrade your cloud computer for a very reasonable price.
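And checking what Colab actually handed you takes only a couple of lines; nothing here is specific to my setup, it just lists whatever accelerator the runtime attached:

```python
import tensorflow as tf

# Lists whichever devices the Colab runtime attached: the CPU is always
# there, plus a GPU if you picked one under Runtime -> Change runtime type.
print("CPUs:", tf.config.list_physical_devices("CPU"))
print("GPUs:", tf.config.list_physical_devices("GPU"))
```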

