Given a loss, one way to adjust the weights would be to randomly pick new values and observe how the loss changes. However, this trial-and-error search is extremely inefficient, which is why we use algorithms such as gradient descent (following slides) to adjust network parameters.
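To make the contrast concrete, here is a minimal sketch (illustrative, not from the slides) of gradient descent on a toy one-parameter loss; the loss function, starting weight, and learning rate are all assumptions chosen for the example.

```python
# Gradient descent on a toy quadratic loss L(w) = (w - 3)^2,
# whose gradient is dL/dw = 2 * (w - 3). Instead of guessing new
# weights at random, each step moves w against the gradient, so
# the loss shrinks steadily toward its minimum.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0    # arbitrary starting weight (assumed)
lr = 0.1   # learning rate / step size (assumed hyperparameter)
for step in range(100):
    w -= lr * grad(w)   # update rule: w <- w - lr * dL/dw

print(round(w, 4))  # w converges toward 3.0, the minimizer of the loss
```

Each update is guaranteed (for a small enough learning rate) to reduce the loss, whereas a random perturbation only helps by chance; this is the efficiency gap the slide refers to.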