Slide 57 of 88
sasukelover420

On typical architectures, is it required that all execution contexts/threads on a core come from the same process (i.e., the same PCB)? I would imagine that requirement would dramatically speed up memory mapping and permission checking, but does it hold in practice? If it doesn't, cores could be used much more flexibly, since threads from multiple processes could be scheduled on each core.

wooloo

I understand why hiding latency is preferred for hardware multi-threading, but why does having many small contexts increase the ability to hide latency compared to one big context?

jchen

@wooloo, I think one of the biggest sources of latency in software multithreading is the cost of a context switch: every thread has its own execution state, so when the OS switches from thread A to thread B, it must save the context/state of thread A and load the context of thread B. This can include updating registers, the TLB, and other state a thread needs to run properly. As a result, context switching can be really expensive: if you constantly switch between two threads on a single-core, single-context processor, it might take far longer to complete both tasks than it would to just run them sequentially.
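That tradeoff can be sketched with a toy cost model (all numbers here are made-up illustrations, not measurements of any real OS or hardware):

```python
# Toy cost model of software context switching (assumed, illustrative
# numbers). Two tasks each need `work` time units of compute; every
# switch costs `switch_cost` units (saving registers, refilling the
# TLB, and so on).

def sequential_time(work=1000, switch_cost=50):
    # Run task A to completion, switch once, then run task B.
    return 2 * work + switch_cost

def interleaved_time(work=1000, quantum=1, switch_cost=50):
    # Alternate between the two tasks every `quantum` time units.
    switches = max(2 * work // quantum - 1, 0)
    return 2 * work + switches * switch_cost

print(sequential_time())               # 2050
print(interleaved_time(quantum=1))     # 101950: switch cost dominates
print(interleaved_time(quantum=1000))  # 2050: same as sequential
```

With a tiny quantum the switch overhead dwarfs the useful work, which is exactly the "far longer than running sequentially" case above.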

One way hardware can reduce the cost of context switches is to store multiple execution contexts on the processor itself. Then we no longer pay to save and load thread state on every switch; instead, I'm guessing the processor needs some other mechanism to decide which execution context it should be running at any given moment.
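This also answers @wooloo's original question: the point of many resident contexts is that whenever the running thread stalls on memory, the core can switch (for free) to some other context that still has work, so more contexts mean fewer cycles where everything is stalled. A toy cycle-level simulation, under assumed parameters (zero-cost switching between resident contexts, fixed compute and stall lengths; not a model of any real core):

```python
# Toy simulation of latency hiding via hardware multithreading
# (assumed parameters, not real hardware). Each resident thread
# alternates `work` compute cycles with a `stall`-cycle memory wait.
# Switching between resident contexts costs nothing; the core sits
# idle only when every context is waiting on memory.

def simulate(contexts, work=2, stall=8, phases=2):
    """Return (total_cycles, busy_cycles) to finish all threads."""
    threads = [{"compute": work, "stall": 0, "phases": phases}
               for _ in range(contexts)]
    cycles = busy = 0
    while any(t["phases"] for t in threads):
        cycles += 1
        for t in threads:              # memory proceeds in the background
            if t["stall"]:
                t["stall"] -= 1
        runnable = next((t for t in threads
                         if t["phases"] and not t["stall"]), None)
        if runnable:                   # issue one compute cycle
            busy += 1
            runnable["compute"] -= 1
            if runnable["compute"] == 0:
                runnable["phases"] -= 1
                runnable["compute"] = work
                if runnable["phases"]:
                    runnable["stall"] = stall  # next memory request
    return cycles, busy

for n in (1, 2, 4):
    total, busy = simulate(n)
    print(f"{n} context(s): utilization {busy}/{total}")
```

With one context the core idles through every stall; with four, the stalls of one thread are mostly covered by the compute of the others, so utilization climbs toward 100%.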
