Gradient descent is an optimization algorithm used in machine learning to find a minimum of a cost function. It is an iterative algorithm that adjusts the parameters of a model in the direction of steepest decrease of the cost. At each iteration, the gradient of the cost function with respect to the model parameters is computed, and the parameters are updated in the direction of the negative gradient: θ ← θ − α∇J(θ), where α is the learning rate. The learning rate controls the step size of each update; too large a value can overshoot the minimum, while too small a value slows convergence. The algorithm continues to iterate until the cost function converges to a minimum or a stopping criterion is reached, such as a maximum number of iterations. The result of the optimization is the set of parameters that (approximately) minimizes the cost function.
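As a minimal sketch, the update rule above might look like the following in Python. The function names, the example cost J(θ) = (θ − 3)², and the hyperparameter values are illustrative choices, not part of the lecture material.

```python
import numpy as np

def gradient_descent(grad, theta0, learning_rate=0.1, max_iters=1000, tol=1e-8):
    """Minimize a cost function given its gradient `grad`.

    Stops when the update step becomes smaller than `tol` or after
    `max_iters` iterations (the stopping criteria described above).
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iters):
        step = learning_rate * grad(theta)   # step size scaled by the learning rate
        theta = theta - step                 # move against the gradient (steepest descent)
        if np.linalg.norm(step) < tol:       # convergence check on the update magnitude
            break
    return theta

# Example: minimize J(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
theta_min = gradient_descent(grad=lambda t: 2 * (t - 3), theta0=np.array([0.0]))
print(theta_min)  # approaches [3.0], the minimizer of this cost
```

Passing the gradient as a function keeps the routine generic: the same loop works for any differentiable cost once its gradient is supplied.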
Note: the previously scheduled Multiclass classification lecture has been rescheduled due to limited attendance at its prerequisite lecture.