SGD, for example, computes each gradient update on a small, randomly selected subset (mini-batch) of the data rather than the full dataset, which makes each update far cheaper and injects noise into the optimization trajectory that can help the model escape shallow local minima.
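To make this concrete, here is a minimal sketch of mini-batch SGD for linear regression in NumPy. The synthetic dataset, learning rate, batch size, and epoch count are illustrative assumptions, not details from the original text:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative synthetic data: y = X @ w_true + noise.
n_samples, n_features = 1_000, 5
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.1 * rng.normal(size=n_samples)

# Mini-batch SGD on mean-squared error.
w = np.zeros(n_features)                      # parameters to learn
lr, batch_size, n_epochs = 0.05, 32, 20       # assumed hyperparameters

for epoch in range(n_epochs):
    perm = rng.permutation(n_samples)         # reshuffle once per epoch
    for start in range(0, n_samples, batch_size):
        idx = perm[start:start + batch_size]  # random subset of the data
        Xb, yb = X[idx], y[idx]
        # Gradient of the MSE on the mini-batch only, not the full dataset:
        # grad = (2/m) * Xb^T (Xb w - yb), where m is the batch size.
        grad = (2.0 / len(idx)) * Xb.T @ (Xb @ w - yb)
        w -= lr * grad                        # noisy but cheap update

print("recovered weights:", w)
print("true weights:     ", w_true)
```

Because each update touches only `batch_size` rows of `X`, an epoch costs the same as one full-batch gradient step, yet performs many parameter updates; the batch-to-batch variance in the gradient is the noise that can nudge the iterates out of poor local minima.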