From the slide, it seems like the shared memory space is dynamically allocated based on the amount the user specifies in CUDA code, rather than a pre-defined amount.
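For reference, here's a minimal sketch of how that specification happens in CUDA (kernel names and the 256-thread size are just illustrative, not from the slide): a static `__shared__` declaration fixes the size at compile time, while `extern __shared__` plus the third launch parameter lets the host pick the size at launch time.

```cuda
#include <cuda_runtime.h>

// Static shared memory: size is baked into the kernel at compile time.
__global__ void staticShared(float* out) {
    __shared__ float buf[256];              // 256 * 4 B = 1 KB per block, fixed
    int i = threadIdx.x;
    buf[i] = (float)i;
    __syncthreads();
    out[i] = buf[i];
}

// Dynamic shared memory: size comes from the third launch parameter,
// so the host decides it at kernel launch time.
__global__ void dynamicShared(float* out) {
    extern __shared__ float dynBuf[];       // sized by the launch configuration
    int i = threadIdx.x;
    dynBuf[i] = (float)i;
    __syncthreads();
    out[i] = dynBuf[i];
}

int main() {
    float* d_out;
    cudaMalloc(&d_out, 256 * sizeof(float));

    staticShared<<<1, 256>>>(d_out);

    size_t shmemBytes = 256 * sizeof(float); // chosen by the host at launch
    dynamicShared<<<1, 256, shmemBytes>>>(d_out);

    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```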
pintos
@haiyuem I think the shared memory is carved out of L1 cache storage (http://cs149.stanford.edu/fall20/lecture/gpuarch/slide_59).
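If that's right, then the split between L1 and shared memory isn't something the kernel code fully controls; as far as I know, the runtime API only exposes hints about the preferred split, and the hardware picks an actual configuration. A rough sketch of those hints (kernel name and sizes are made up for illustration):

```cuda
#include <cuda_runtime.h>

__global__ void myKernel(float* out) {
    __shared__ float buf[256];              // uses some of the on-chip storage
    int i = threadIdx.x;
    buf[i] = (float)i;
    __syncthreads();
    out[i] = buf[i];
}

int main() {
    // Device-wide hint: prefer a larger shared-memory carveout over L1.
    cudaDeviceSetCacheConfig(cudaFuncCachePreferShared);

    // Per-kernel hint: for this kernel, prefer more L1 instead.
    cudaFuncSetCacheConfig(myKernel, cudaFuncCachePreferL1);

    float* d_out;
    cudaMalloc(&d_out, 256 * sizeof(float));
    myKernel<<<1, 256>>>(d_out);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```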