Slide 14 of 88
yhgkm

What is "Out-of-order control logic" and "Fancy branch predictor"?

nickbowman

Not sure about "fancy branch predictor," but "out-of-order control logic" is the part of the processor hardware responsible for identifying independent instructions in a program (finding places where ILP applies), so it can figure out which instructions can be run in parallel across the fetch/decode units and ALUs.
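
To make "identifying independent instructions" concrete, here is a toy software illustration (not real hardware, and the representation of an instruction as a destination register plus a set of source registers is my own simplification): two instructions can issue in parallel when neither one reads or writes a register the other writes.

```python
# Toy sketch: a simplified dependence check like the one out-of-order
# control logic performs. An "instruction" here is (dest_reg, source_regs).
def independent(a, b):
    da, sa = a
    db, sb = b
    # No write-write conflict, and neither reads the other's destination
    # (no RAW or WAR hazard between the two).
    return da != db and da not in sb and db not in sa

# r2 = r0 + r1 and r5 = r3 + r4 touch disjoint registers: can run in parallel.
assert independent(("r2", {"r0", "r1"}), ("r5", {"r3", "r4"}))

# r2 = r0 + r1 followed by r3 = r2 + r1 has a RAW dependence on r2:
# the second instruction must wait for the first.
assert not independent(("r2", {"r0", "r1"}), ("r3", {"r2", "r1"}))
```

Real out-of-order logic does this across a window of many in-flight instructions every cycle, which is why it costs significant silicon area and power.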

yhgkm

Got it. That makes sense. Thank you!

teapot

A branch predictor, in general, predicts whether a branch will be taken or not before the outcome is actually known. In modern pipelined processors the pipeline can be quite deep, so it is extremely helpful if the instruction fetch unit has a better way of predicting which way the instruction stream will go than random guessing.

kasia4

The study of "fancy branch predictors" is ever ongoing. The simplest predictor is a random guess. You can start to improve accuracy by keeping a counter that estimates how likely a branch is to be taken based on its past few outcomes. You can then store more information, like noting that a branch is taken every 3rd time or follows some other regular pattern (a two-level adaptive predictor). An even "fancier" example uses information about past branch outcomes to make future predictions with a perceptron model, as in neural branch prediction. There are always trade-offs among accuracy, latency, etc. You can read more about them here: Branch Predictors
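
The perceptron idea mentioned above can be sketched in a few lines: keep one signed weight per bit of recent branch history and predict from the dot product. This is a minimal single-branch toy, not a hardware design; the names, the history length, and the training threshold `THETA` are illustrative choices of mine.

```python
# Toy perceptron branch predictor: predict taken iff the weighted sum of
# recent outcomes (encoded as +1 taken / -1 not taken) is non-negative.
HISTORY_LEN = 8
THETA = 14  # training threshold; real designs derive it from HISTORY_LEN

class PerceptronPredictor:
    def __init__(self):
        self.weights = [0] * (HISTORY_LEN + 1)  # index 0 is the bias weight
        self.history = [1] * HISTORY_LEN        # +1 = taken, -1 = not taken

    def output(self):
        # Bias plus dot product of weights with the history register.
        return self.weights[0] + sum(
            w * h for w, h in zip(self.weights[1:], self.history))

    def predict(self):
        return self.output() >= 0

    def update(self, taken):
        t = 1 if taken else -1
        y = self.output()
        # Train only on a misprediction or a low-confidence output.
        if (y >= 0) != taken or abs(y) <= THETA:
            self.weights[0] += t
            for i in range(HISTORY_LEN):
                self.weights[i + 1] += t * self.history[i]
        # Shift the newest outcome into the history register.
        self.history = [t] + self.history[:-1]
```

After training on a consistently not-taken branch, for example, the weights drive the output negative and the predictor settles on "not taken." The latency concern raised later in this thread comes from computing that dot product quickly enough in hardware.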

swkonz

To expand on the branch prediction topic a bit more, branch prediction:

1. Is implemented in hardware, so there is a trade-off between predictor complexity and silicon area.
2. Can be quite accurate! This means we can get a considerable speedup by using branch prediction rather than inserting bubbles into the execution pipeline.

If you think about the structure of code, most branches are biased toward being taken or not taken the majority of the time (i.e., if we always predict the same way for a given branch, we will likely have pretty good accuracy). That being said, you can get very reasonable prediction accuracy with pretty simple algorithms. For instance, the 2-bit prediction buffer scheme uses a small cache to track the outcomes of a particular branch, and uses that state to decide whether to predict taken or not.
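
The 2-bit scheme described above can be sketched as a small table of saturating counters indexed by the branch's address. This is a hedged toy version: the class name, the 16-entry table size, and the PC-indexing detail are my own illustrative choices, not from the lecture.

```python
# Toy 2-bit saturating-counter predictor with a small branch-history table
# indexed by the low bits of the branch PC. Each counter holds a state in
# 0..3: 0-1 mean "predict not taken", 2-3 mean "predict taken".
BHT_BITS = 4  # 16 entries; real predictors use thousands

class TwoBitPredictor:
    def __init__(self):
        # Start every entry at 1 ("weakly not taken").
        self.table = [1] * (1 << BHT_BITS)

    def _index(self, pc):
        return pc & ((1 << BHT_BITS) - 1)

    def predict(self, pc):
        return self.table[self._index(pc)] >= 2

    def update(self, pc, taken):
        # Saturating increment/decrement toward the actual outcome.
        i = self._index(pc)
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)
```

The two bits give the predictor hysteresis: a heavily-biased branch (like a loop back-edge) that deviates once does not immediately flip the prediction, which is exactly the "most branches are biased one way" observation above.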

suninhouse

What would be the requirements and trade-offs on the chip itself when incorporating machine-learning based branch predictors?

kevtan

@suninhouse I am by no means an expert on neural branch prediction, but a quick Google search for "the drawbacks of neural branch prediction" turned up this article: https://www.microarch.org/micro36/html/pdf/jimenez-FastPath.pdf. The abstract says "...neural prediction remains impractical because its superior accuracy over conventional predictors is not enough to offset the cost imposed by its high latency...", so I think the main problem with neural branch prediction is that it takes too long to compute.

gleb

@kevtan I wonder how much this compute problem could be mitigated by having a dedicated on-chip accelerator in charge of branch prediction.
