Back to LSTMs Hub

LSTMs from Scratch

Part 1: The Math of Gated Recurrence

Introduction

Standard RNNs cannot learn long-range dependencies. Gradients decay exponentially during backpropagation through time, so the network has no way to connect events separated by more than a handful of steps.

LSTMs (Hochreiter & Schmidhuber, 1997) fix this with a single architectural idea: gated memory cells that use additive updates instead of multiplicative ones.

This is Part 1 of three. Here we cover the math -- the cell state, the three gates, and why gradients survive across hundreds of time steps. Part 2 implements everything in PyTorch. Part 3 trains on a long-range task and visualizes what the gates learn.

The Core Insight: Additive Memory

The central piece of an LSTM is the cell state $c_t$. It runs through the entire sequence and gets modified only through gates -- learned, differentiable switches that add or remove information.

The cell state update is additive:

$$ c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t $$

This matters because addition distributes gradients without scaling them. In a vanilla RNN, hidden states pass through repeated matrix multiplications, which either shrink or blow up gradients. The additive cell state sidesteps that entirely.

Think of the cell state as a conveyor belt running through time. At each step, the forget gate can selectively erase dimensions and the input gate can write new values, but the underlying transport mechanism is additive. No matrix is repeatedly multiplied into the signal. This is what makes LSTMs fundamentally different from every prior recurrent architecture -- the gradient does not need to survive a gauntlet of multiplicative transformations to reach early time steps.

The Three Gates

Forget Gate

Decides what to discard from the cell state:

$$ f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f) $$

Outputs values in $(0, 1)$. A value near 0 erases that dimension; near 1 keeps it intact. In practice, the forget gate bias is often initialized to 1.0 rather than zero. This ensures the gate starts near saturation ($\sigma(1) \approx 0.73$), defaulting to "remember" rather than "forget." Without this initialization trick, the cell state can leak too aggressively in early training, degrading long-range gradient flow before the network has a chance to learn what to retain.

Input Gate

Controls what new information enters the cell state:

\begin{align*} i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i) \\ \tilde{c}_t &= \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c) \end{align*}

$i_t$ selects which dimensions to write to, and $\tilde{c}_t$ proposes the candidate values.

Output Gate

Determines what the cell exposes as its output:

\begin{align*} o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o) \\ h_t &= o_t \odot \tanh(c_t) \end{align*}

The Cell State Update

Putting the gates together:

$$ c_t = \underbrace{f_t \odot c_{t-1}}_{\text{keep old}} + \underbrace{i_t \odot \tilde{c}_t}_{\text{add new}} $$
$$ h_t = o_t \odot \tanh(c_t) $$

Why LSTMs Don't Vanish

During backpropagation through time, the gradient of the loss with respect to the cell state satisfies:

$$ \frac{\partial c_t}{\partial c_{t-1}} = f_t + \text{(other terms)} $$

When the forget gate is close to 1, the gradient passes through nearly unchanged. The network can learn to hold $f_t \approx 1$ whenever long-range information matters, creating what Hochreiter and Schmidhuber called the "Constant Error Carousel" -- error signals that propagate backward for hundreds or thousands of steps without decaying.

In our own experiments, this is exactly what we observed. Training a 2-layer LSTM with 64 hidden units (204,290 total parameters) on a synthetic long-range dependency task, the model reached 90.50% test accuracy by epoch 5 and peaked at 95.70% by epoch 17. The forget gate visualizations confirmed the theory: for sequence positions that carried long-range information, the gate activations clustered near 1.0, holding the cell state open as a gradient highway. For irrelevant positions, the gate dropped toward 0, clearing noise from the conveyor belt. The network learned when to remember and when to forget -- and that learned selectivity is what separates LSTMs from vanilla RNNs.

Comparison to RNNs

Next: From Math to Code

That covers the mathematical foundation -- three gates controlling a dedicated additive memory cell. In Part 2, we implement these equations in pure PyTorch, building from a single LSTMCell up through multi-layer stacks, bidirectional processing, sequence classification, and the encoder-decoder Seq2Seq architecture.