It could have been a good option for a data center, but not for an office desk.
My next idea was to try liquid cooling. I got a GPU mounting bracket from NZXT and a Corsair fan from eBay.
The end result looks great and is quiet enough.
My noise baseline is the fridge. My laptop and the GPU enclosure are quieter than the fridge.
I had a broken Titan card and decided to use a fan from there.
I replaced the heat sink and glued the fan on. The result worked, but it was still loud.
To mitigate the issue, I got a fan controller that slowed the fan down when the card was cool.
Initially I used a fan attached to the side of the card.
It did keep the card cool, but it was very loud. Too loud to get any work done.
I'm using the card in an external enclosure. In theory, I could have gotten the required airflow, but I decided to go an alternative way.
A training step on a CPU in an intel/intel-optimized-tensorflow-avx512 container takes 138 ms.
It is slower than my old GPU, but it might be fast enough to get the first version of a model.
A training step takes 38 ms on an Nvidia K40, which I got for $100.
On Google's Colab, a training step takes 21 ms. (I don't remember which GPU I used.)
Colab is not expensive, but it is annoying for long training runs, as the connection is likely to drop.
I'm willing to compromise speed in favor of ease of development and early testing on a local machine.
If needed, the final model can always be trained in the cloud on a beefy GPU.
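For what it's worth, step timings like the ones above can be measured with a small library-agnostic harness. This is just a sketch; `model.train_on_batch` in the usage comment is a hypothetical stand-in for whatever your actual training step is:

```python
import time

def time_step(step_fn, warmup=3, iters=20):
    """Average wall-clock time of one call to step_fn, in milliseconds."""
    for _ in range(warmup):  # let caches / GPU queues / JIT settle
        step_fn()
    start = time.perf_counter()
    for _ in range(iters):
        step_fn()
    return (time.perf_counter() - start) / iters * 1000.0

# Hypothetical usage with a Keras model:
#   ms = time_step(lambda: model.train_on_batch(x_batch, y_batch))
#   print(f"{ms:.0f} ms per training step")
```

Warm-up iterations matter: the first few steps often include one-off costs (graph compilation, memory allocation) that would skew the average.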
The conda environment.yml I used to build the environment is here: https://gist.github.com/dimazest/40571dcec7de84601abdfe7b12445040
LD_LIBRARY_PATH might need to be redefined; I used this command to fire up a notebook:
LD_LIBRARY_PATH="$CONDA_PREFIX/lib":$LD_LIBRARY_PATH jupyter-notebook
@dima you can block certain keywords like "RT", "Retweet", "Twitter" and "Birdsite" to filter out at least some content.
I got an old but cheap Nvidia GPU (a K40) to play with deep learning while I'm searching for a reasonably priced modern card.
To my surprise, the i7-1165G7 CPU (4 cores, 8 threads) is about twice as fast as the GPU at classifying images with a CNN.
Is that something one would expect? Did CPUs get better recently? Is the GPU I got too slow?
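Part of the answer might be raw CPU throughput: a recent CPU with AVX-512 and a good BLAS can get surprisingly close to a 2013-era Kepler card on dense math. A rough way to gauge the CPU side is a dense matmul benchmark (a sketch only; the numbers will vary by machine and BLAS build):

```python
import time
import numpy as np

def matmul_gflops(n=1024, iters=5):
    """Rough FP32 throughput estimate from an n x n dense matmul."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    elapsed = (time.perf_counter() - start) / iters
    # one n x n matmul is roughly 2 * n^3 floating point operations
    return 2 * n**3 / elapsed / 1e9

print(f"~{matmul_gflops():.0f} GFLOP/s on this CPU")
```

It is also worth double-checking that the framework actually placed the model on the GPU; a CNN that silently falls back to the CPU is a common cause of "the GPU is slow".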
Computer science, computational linguistics, running, swimming, photography.