A training step takes 38 ms on an Nvidia K40, which I bought for $100.
On Google Colab, a training step takes 21 ms (I don't remember which GPU it was).
Colab is not expensive, but long training runs there are annoying because the connection is likely to drop.
I'm willing to compromise speed in favor of ease of development and early testing on a local machine.
If needed, the final model can always be trained in the cloud on a beefy GPU.
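For step timings like the ones above, a minimal sketch of how to measure them, assuming a hypothetical `train_step()` stand-in for the real forward/backward pass:

```python
import time

def train_step():
    # Hypothetical stand-in for a real training step; replace with your
    # framework's forward/backward pass. Here: a cheap CPU-bound loop.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

# Warm up once (GPU frameworks often compile kernels or allocate memory
# on the first call), then average several runs for a stable estimate.
train_step()
n_runs = 10
start = time.perf_counter()
for _ in range(n_runs):
    train_step()
elapsed_ms = (time.perf_counter() - start) * 1000 / n_runs
print(f"avg step time: {elapsed_ms:.1f} ms")
```

Note that with GPU frameworks the device works asynchronously, so you would also need to synchronize (e.g. wait for the result) before reading the clock, or the measurement only captures kernel launch time.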