Slide 26 of 82
jgrace

Is there a limit to a GPU's computing resources? Is it upper bounded by power, as we saw earlier in the CPU context? Or does it depend more on the application being data parallel, so that the hardware doesn't have to search for independent operations in the program to execute in parallel? Is there an upper bound on data-parallel work, or could we theoretically just keep adding cores to GPUs?
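To be concrete about what I mean by "data parallel," here's a minimal sketch (my own toy example, not from the lecture) where every thread's work is independent by construction, so the hardware never has to search the instruction stream for independent operations:

```cuda
#include <cuda_runtime.h>

// Each thread scales exactly one element. Independence is explicit in how
// the work is expressed, so more SMs simply run more blocks concurrently.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] = a * x[i];
}

int main() {
    const int n = 1 << 20;                        // 1M elements (arbitrary size)
    float *x;
    cudaMalloc(&x, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);  // more SMs = more blocks in flight
    cudaDeviceSynchronize();
    cudaFree(x);
    return 0;
}
```

So my question is really whether anything other than power stops you from scaling this kind of workload by just adding SMs.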

kevtan

@jgrace I'm curious about these questions as well! I think power draw is definitely a concern. For instance, the new NVIDIA RTX 3080 GPU draws 320 W, whereas the AMD Ryzen™ 9 5950X draws only about 105 W. In other words, high-end GPUs already use far more power than CPUs!

haiyuem

@jgrace it's bounded by many things, and power is definitely one of them. Other factors limiting the number of compute units include the size of the "supporting" units for data movement, e.g. the L2 cache, crossbar, and framebuffer, which already occupy huge sections of the chip, and moving data around is really costly. We also need to consider the performance/$ trade-off, because the applications running on GPUs are not infinitely large. So the number of SMs (compute units) on each GPU is carefully chosen to suit the target applications it is designed to run.
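To put a rough number on the data-movement point (my own back-of-envelope estimate with approximate specs, not from the slide): a streaming kernel like SAXPY moves 12 bytes of DRAM traffic per 2 FLOPs, so on a part with roughly 760 GB/s of memory bandwidth it tops out around 127 GFLOP/s, far below a ~30 TFLOP/s FP32 peak. Adding more SMs would just leave them idle waiting on memory.

```cuda
// SAXPY: y[i] = a * x[i] + y[i]
// Per element: read 8 B (x[i], y[i]) + write 4 B (y[i]) = 12 B of traffic for 2 FLOPs.
// Arithmetic intensity = 2 / 12 ≈ 0.17 FLOP/byte.
// Attainable rate ≈ 0.17 FLOP/byte * ~760 GB/s ≈ 127 GFLOP/s,
// which is nowhere near a ~30 TFLOP/s FP32 peak, so for kernels like this
// the memory system, not the SM count, is the bottleneck.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}
```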
