
I got an old but cheap NVIDIA GPU (a K40) to play with deep learning while I'm searching for a reasonably priced modern card.

To my surprise, the i7-1165G7 CPU (4 cores / 8 threads) is about twice as fast as the GPU at classifying images with a CNN.

Is this something one would expect? Did CPUs get better recently, or is the GPU I got just too slow?


To be fair, ai-benchmark assigned a score of 5330 to the GPU and 1397 to the CPU.
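For context, this is roughly how such scores are obtained; a minimal sketch assuming the ai-benchmark package from PyPI, since the exact settings behind the quoted numbers aren't stated in the thread:

from ai_benchmark import AIBenchmark

# Runs a fixed suite of TensorFlow models (inference and training) on
# whatever device TensorFlow picks up, then reports a device score.
benchmark = AIBenchmark()
results = benchmark.run()  # run_inference() / run_training() test one side only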

@dima I don't know how old the K40 is, but that seems very strange. Usually even weak GPUs are at least 5x faster than a CPU.

@maltimore That's what I would expect. The K40 was released in 2013.

It was inference that was quicker on the CPU; training is slower.

I have two possible explanations. It's a 4-core/8-thread CPU with DDR4-3200 memory.
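To make the comparison concrete, here is a minimal timing sketch of the kind of measurement being discussed, assuming TensorFlow/Keras; the model (MobileNetV2) and batch size are illustrative assumptions, not the actual setup:

import time
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in CNN
images = np.random.rand(32, 224, 224, 3).astype("float32")

def time_inference(device, n_runs=20):
    with tf.device(device):
        model(images[:1], training=False)  # warm-up / kernel setup
        start = time.perf_counter()
        for _ in range(n_runs):
            model(images, training=False)
        return (time.perf_counter() - start) / n_runs

print("CPU:", time_inference("/CPU:0"), "s per batch")
print("GPU:", time_inference("/GPU:0"), "s per batch")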

@dima @maltimore Ice Lake CPUs have Deep Learning Boost, which added some AVX-512 operations that are particularly helpful for deep learning. Maybe TensorFlow is using those?
en.m.wikipedia.org/wiki/DL_Boo
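A quick sanity check (Linux only, and it only shows what the CPU supports, not what TensorFlow actually uses) is to look for the AVX-512 VNNI flag that DL Boost adds:

# Read the CPU feature flags from /proc/cpuinfo.
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()

print("avx512f    :", "avx512f" in flags)      # AVX-512 foundation
print("avx512_vnni:", "avx512_vnni" in flags)  # DL Boost integer VNNI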

@ksteimel @maltimore Maybe. It would be pretty cool if the TensorFlow from conda-forge were compiled with these extensions.

@dima @maltimore According to this page intel.com/content/www/us/en/de, the Anaconda version of TensorFlow is compiled with support for Intel's oneDNN library. Maybe I'll try out TensorFlow inference on my 7900X and see if it's similarly fast.
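One way to check whether the oneDNN path is actually being taken; a sketch assuming a oneDNN-enabled TensorFlow build, with the environment variables set before the first TensorFlow import:

import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"  # request the oneDNN-backed ops
os.environ["ONEDNN_VERBOSE"] = "1"         # oneDNN logs each primitive it executes

import numpy as np
import tensorflow as tf

# Any small CNN will do; the point is the log output, not the result.
model = tf.keras.applications.MobileNetV2(weights=None)
model(np.random.rand(1, 224, 224, 3).astype("float32"), training=False)
# Log lines like "onednn_verbose,exec,cpu,convolution,...,avx512_core..."
# show which instruction set the convolutions are dispatched to.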
