tspint

CPU and GPU code run in different address spaces. This means we need to allocate memory separately and move data between the two. This is an example of message passing!
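For concreteness, here's a minimal sketch of what that looks like with the CUDA runtime API: allocate on the host, allocate on the device, and copy explicitly in each direction (buffer names and sizes here are just illustrative):

```cpp
#include <cuda_runtime.h>
#include <cstdlib>

int main() {
    const int N = 1 << 20;
    size_t bytes = N * sizeof(float);

    // Allocation in the CPU (host) address space.
    float* h_data = (float*)malloc(bytes);
    for (int i = 0; i < N; i++) h_data[i] = 1.0f;

    // Allocation in the GPU (device) address space.
    float* d_data = nullptr;
    cudaMalloc((void**)&d_data, bytes);

    // Data must be moved explicitly between the two address spaces.
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);
    // ... launch kernels that operate on d_data ...
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_data);
    free(h_data);
    return 0;
}
```

The pointers h_data and d_data are never dereferenceable from the "other side" — the copies are the messages.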

SebL

What's the bandwidth of CPU–GPU communication? I'm not sure whether this transfer speed might be the bottleneck for performance.

l-henken

@SebL I think GPUs can communicate with CPUs through DMA, much like other peripherals. After some bus enumeration and CPU-side setup, writes to a special region (some defined I/O segment) of CPU memory can be read by the GPU device. If this is the case, then the bandwidth of the communication is limited by both the CPU memory bandwidth and the bus bandwidth (think PCI, PCIe, or another bus protocol).
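A rough way to see that number on your own machine is to time a cudaMemcpy with CUDA events, using pinned (page-locked) host memory so the DMA engine can transfer directly over the bus. This is just a back-of-the-envelope sketch; the buffer size is arbitrary and error checking is omitted:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = 256ull << 20;  // 256 MiB test buffer

    float *h_buf = nullptr, *d_buf = nullptr;
    cudaHostAlloc((void**)&h_buf, bytes, cudaHostAllocDefault);  // pinned host memory
    cudaMalloc((void**)&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time a single host->device transfer.
    cudaEventRecord(start);
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host->Device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}
```

With pageable (ordinary malloc'd) memory the measured number is typically lower, since the driver has to stage the data through a pinned buffer first.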

zecheng

Is it possible for the "CUDA device" in the slide to be multiple GPUs?
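For context, a program can query how many CUDA devices are present and select among them with cudaSetDevice; subsequent allocations and kernel launches then target the selected GPU. A small illustrative sketch (not from the slide):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Found %d CUDA device(s)\n", count);

    for (int dev = 0; dev < count; dev++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("  device %d: %s\n", dev, prop.name);

        cudaSetDevice(dev);           // make this GPU the current device
        void* d_ptr = nullptr;
        cudaMalloc(&d_ptr, 1 << 20);  // this allocation lives on device 'dev'
        cudaFree(d_ptr);
    }
    return 0;
}
```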
