Slide 52 of 82
haiyuem

From the slide, it seems that the shared memory space is allocated dynamically based on a size the user specifies in the CUDA code, rather than being a pre-defined amount. A rough sketch of the two allocation forms is below.
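For what it's worth, CUDA supports both: a statically sized `__shared__` array declared with a compile-time constant inside the kernel, and a dynamically sized one declared as an unsized `extern __shared__` array whose byte count is supplied as the third launch-configuration argument. A minimal sketch (the kernel names and sizes here are made up for illustration):

```cuda
#define BLOCK_SIZE 256

// Static allocation: size must be a compile-time constant.
__global__ void scan_static(const float* in, float* out) {
    __shared__ float buf[BLOCK_SIZE];          // fixed 256 floats per block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    buf[threadIdx.x] = in[i];
    __syncthreads();
    out[i] = buf[threadIdx.x];
}

// Dynamic allocation: the array is declared unsized; its actual size is
// taken from the third <<<...>>> launch parameter at run time.
__global__ void scan_dynamic(const float* in, float* out) {
    extern __shared__ float buf[];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    buf[threadIdx.x] = in[i];
    __syncthreads();
    out[i] = buf[threadIdx.x];
}

// Launch site:
//   scan_static<<<numBlocks, BLOCK_SIZE>>>(d_in, d_out);
//   scan_dynamic<<<numBlocks, threadsPerBlock,
//                  threadsPerBlock * sizeof(float)>>>(d_in, d_out);
```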

pintos

@haiyuem I think the shared memory is carved out of L1 cache storage (http://cs149.stanford.edu/fall20/lecture/gpuarch/slide_59).
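If that's right, it also explains why the runtime lets you hint at how the on-chip storage should be split per kernel. A small sketch using the runtime API, reusing the hypothetical `scan_dynamic` kernel from the sketch above (the hardware may ignore the hint on some architectures):

```cuda
#include <cuda_runtime.h>

// Forward declaration of the illustrative kernel from the earlier sketch.
__global__ void scan_dynamic(const float* in, float* out);

// Ask the runtime to favor a larger shared-memory carveout over L1 cache
// when this kernel runs; this is only a preference, not a guarantee.
void configure_carveout() {
    cudaFuncSetCacheConfig(scan_dynamic, cudaFuncCachePreferShared);
}
```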
