swkonz

Is it fair to think about vectorization as not really running "in parallel" at all? I find it easier to think of vectorized operations as just a single operation rather than as a parallel process. In reality it really is just a single operation; the hardware that handles that particular operation just happens to operate on many data elements at once, unlike the traditional model where the hardware operates on a single data element at a time.

rmjones

@swkonz I can see the validity in viewing it that way. In my experience, it's actually been somewhat helpful to think of vectorization as exploiting some level of parallelism (even if it's not at the process or thread level). For example, when I write NumPy code I make a dedicated effort to use array-wide or matrix-wide operations rather than looping over each individual element, knowing that the vectorized ops will exploit some form of parallelism (before this class I didn't know what was actually going on under the hood!) to give dramatic speedups over the sequential, element-by-element version. A rough sketch of what I mean is below.
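
As a rough sketch (the array names and sizes here are just made up for illustration), the difference between the element-by-element loop and the array-wide version looks something like this:

```python
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Sequential-style version: an explicit Python loop over each element.
c_loop = np.empty_like(a)
for i in range(a.shape[0]):
    c_loop[i] = a[i] + b[i]

# Vectorized version: a single array-wide operation that NumPy can map
# onto SIMD hardware (and that avoids per-element interpreter overhead).
c_vec = a + b

assert np.allclose(c_loop, c_vec)
```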

l-henken

@swkonz The term "lane" helps me think about vectorization as a parallel execution method. It is not "thread context level parallelism"; it is more like "data level parallelism" or "logical thread level parallelism". There's a toy sketch of the lane idea below.
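
To make the lane idea concrete, here's a toy illustration (plain Python, no real SIMD, and the function name is just made up): one logical "vector instruction" per chunk, with each lane applying the same operation to its own element in lockstep.

```python
def vector_add(a, b, lanes=4):
    """Pretend 'lanes'-wide vector add: one logical instruction per chunk,
    with each lane handling one element of that chunk."""
    assert len(a) == len(b)
    out = []
    for start in range(0, len(a), lanes):
        # One "vector instruction": every lane performs the same add at once.
        chunk = [a[start + i] + b[start + i]
                 for i in range(lanes) if start + i < len(a)]
        out.extend(chunk)
    return out

print(vector_add([1, 2, 3, 4, 5, 6, 7, 8],
                 [10, 20, 30, 40, 50, 60, 70, 80]))
# [11, 22, 33, 44, 55, 66, 77, 88]
```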
