machine-learning, optimization, calculus

Gradient Descent: Walking Down the Mountain

2026-05-20
Ryocantmath
Python 3.9
.ipynb

## The Heart of Learning

At its core, training a neural network is just an optimization problem. We want to find the weights that minimize the error (loss). Imagine standing on top of a mountain in thick fog. You want to get to the bottom. What do you do?
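In symbols, writing θ for the weights and J(θ) for the loss (standard notation, introduced here for clarity), the goal is

$$\theta^{*} = \arg\min_{\theta} J(\theta).$$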

## The Algorithm

  1. Look around and find the steepest slope downwards.
  2. Take a small step in that direction.
  3. Repeat until you hit the bottom (a minimum; for the bumpy loss surfaces of real networks this is usually a local minimum, not necessarily the global one).

This "steepest slope" is the negative gradient.

## Python Implementation

Here is a simple from-scratch implementation using NumPy, applied to a linear model with a squared-error loss:

import numpy as np

def gradient_descent(x, y, learning_rate=0.01, iterations=1000):
    m = len(y)                     # number of training examples
    theta = np.zeros(x.shape[1])   # initial weights, one per feature (column of x)

    for _ in range(iterations):
        prediction = np.dot(x, theta)   # current model output
        error = prediction - y          # residuals

        # Gradient of the squared-error loss: (1/m) * X^T (X theta - y)
        gradient = (1 / m) * np.dot(x.T, error)

        # Step against the gradient, scaled by the learning rate
        theta = theta - learning_rate * gradient

    return theta
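As a quick sanity check, here is a hypothetical usage sketch (the synthetic data, seed, and hyperparameters below are illustrative, not part of the original notebook). The targets are drawn from a known line y ≈ 4 + 3x with noise, and a bias column of ones is prepended to x so that theta[0] can learn the intercept:

# Hypothetical usage sketch on synthetic data (values are illustrative)
rng = np.random.default_rng(0)
x_raw = 2 * rng.random(100)                     # 100 inputs in [0, 2)
y = 4 + 3 * x_raw + rng.normal(0, 0.5, 100)     # noisy targets from a known line
X = np.column_stack([np.ones(100), x_raw])      # bias column + feature column
theta = gradient_descent(X, y, learning_rate=0.1, iterations=2000)
print(theta)  # expected to land close to [4, 3]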
