Today, I was in need of a GPU cloud instance to run an experiment related to my final year project (FYP). From the early stages of my FYP, I’ve worked with multiple GPU instances, both cloud and local. So, I thought of sharing my honest opinion on each of these options and how easy (or frustrating) it was to get things up and running.
Here are the GPU platforms I’ve worked with, listed in the order of how easy they were for me to set up and use:
Runpod.io
Local machine with a dedicated GPU
Google Cloud Platform (GCP)
Apart from these, I’ve also spent thousands of GPU hours on Kaggle and Google Colab. However, I’m not including them in this comparison since they’re notebook-based environments that let you run code on GPUs, but with some major limitations.
The biggest downside is the session time limit. Kaggle allows a maximum of 12 continuous hours per session. On Google Colab, neither the free tier nor Colab Pro supports background execution, where you close the browser tab and let things keep running. Colab Pro+ does allow background execution for up to 24 hours, but even that wasn’t enough for our needs. Some of our experiments had to run continuously for several days, which made these notebook environments unsuitable. That’s why I had to go for more persistent GPU platforms.
Let’s start the comparison.
Runpod.io
This has been by far the most convenient and affordable option I’ve used. It’s genuinely great value for the money. After entering my billing details, it literally took just two clicks to configure the instance, and another two to start it and get in. Compared to my experience with GCP, this felt effortless. It also had multiple GPU options to choose from.

By default, it comes with an Ubuntu template that has PyTorch and CUDA pre-installed, which obviously saves a lot of time. Once the instance is running, you can connect to it via SSH directly from your browser. The best part? It has a clean, user-friendly interface that shows real-time resource usage and the hourly rate of credit consumption.
Local machine with a dedicated GPU
The Computer Science and Engineering Department of the University of Moratuwa provides access to a few NVIDIA RTX 4000-series GPUs for students working on their FYPs. We managed to get access to one of these machines, which was quite helpful, though not without its challenges.
Whenever we needed a GPU, we had to request access and wait in a queue until a slot became available. Once approved, the department provided SSH access to the machine. However, it was only accessible through the university’s internal network. Connecting from an external network required a VPN, and configuring that was an absolute nightmare: the VPN setup documentation was ancient and outdated, and even the IT helpdesk wasn’t entirely sure how to get it working. In the end, we had no choice but to connect through the university’s WiFi whenever we needed to use the GPU.

Another limitation was power outages. When the department lost electricity, which, in Sri Lanka, isn’t exactly a rare event, the machine would shut down and any ongoing experiment would be lost. Despite these drawbacks, the biggest advantage was that it was completely free.
When requesting the machine, we could also specify the operating system (OS) we wanted. We went with Ubuntu, and after installing the necessary dependencies, everything was ready to go. Despite the few inconveniences, we managed to make the most out of this opportunity.
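Since power cuts were the main way we lost work on this machine, the workaround we leaned on was periodic checkpointing, so a crash only cost the work since the last save. Here is a minimal sketch in plain Python; the file name and epoch count are made up for illustration, and a real PyTorch run would save model and optimizer state with `torch.save` instead:

```python
import json
import os

# Hypothetical checkpoint path -- adjust per experiment.
CKPT_PATH = "checkpoint.json"

def save_checkpoint(state, path=CKPT_PATH):
    """Write training state to disk atomically, so a crash
    mid-write can't leave a corrupted checkpoint behind."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX

def load_checkpoint(path=CKPT_PATH):
    """Resume from the last saved state, or start fresh."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"epoch": 0}

# A training loop that survives restarts: after a power cut,
# rerunning the script picks up from the last completed epoch.
state = load_checkpoint()
for epoch in range(state["epoch"], 10):
    # ... run one epoch of the experiment here ...
    save_checkpoint({"epoch": epoch + 1})
```

The atomic-rename trick matters here: writing the checkpoint in place would risk a half-written file if the power died at exactly the wrong moment.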
Google Cloud Platform (GCP)

Well, this has been my least favourite option so far. Despite the high cost, setting it up and using it was quite frustrating, especially for someone trying to run their first-ever experiment on a cloud instance. Honestly, it took me almost a full week to fully understand how things worked and get a running instance ready for our experiments. Even the documentation wasn’t as beginner-friendly as I had hoped.
The first step was to create an instance with the required resources. Although GCP offers extensive customization options for almost every setting imaginable, that level of flexibility can be overwhelming for someone new to the platform. After a fair amount of trial and error, I managed to get an instance up and running.
Next came the network configuration. To access Jupyter notebooks via my browser, I had to set up custom firewall rules, and that was another layer of confusion. After all that effort, the instance was finally ready, but the whole process felt unnecessarily complicated for what should’ve been a straightforward setup.
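For anyone attempting the same setup, the firewall step looked roughly like the sketch below. The rule name, instance name, tag, and port are all placeholders, and exposing Jupyter to the internet this way is insecure without further restrictions; the SSH tunnel at the end sidesteps the firewall rule entirely and is what I’d suggest trying first:

```shell
# Placeholder names: "jupyter-allow", "my-gpu-instance", and the
# "jupyter" tag are illustrative, not anything GCP provides.
gcloud compute firewall-rules create jupyter-allow \
    --allow=tcp:8888 \
    --target-tags=jupyter \
    --source-ranges=YOUR_IP/32   # restrict to your own IP, not 0.0.0.0/0

# Attach the tag so the rule applies to the instance.
gcloud compute instances add-tags my-gpu-instance --tags=jupyter

# Alternative: skip the firewall rule and tunnel over SSH instead,
# then open http://localhost:8888 in your browser.
gcloud compute ssh my-gpu-instance -- -L 8888:localhost:8888
```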
Conclusion
If I had to pick just one of these options, the choice is probably obvious by now. Runpod.io easily takes the top spot. For me, it came down to the two things that matter most:
It’s affordable.
It’s incredibly easy to use.
Of course, depending on your own situation, another option might make more sense. But based on my firsthand experience, Runpod.io offered the best balance between performance, cost, and convenience. Hopefully, this breakdown helps you pick the right GPU platform for your own experiments.