Last update: February 27, 2026

The Physics of the Universe in 50 Equations: User Guide

The 50 fundamental equations that describe the Universe

The Universe in Equations: Understanding Physics Through Examples

Mathematics is the universal language through which the Universe tells its story. Each equation is a window open to the deep reality of things, whether it is the trajectory of a planet, the expansion of galaxies, the bubbling void of the Big Bang, or the dynamics of populations in biology.

1687 — Newton's Second Law: Force Accelerates, Mass Resists, Motion Changes

Isaac Newton (1643-1727) formulated in 1687, in his Principia Mathematica, this equation of deceptive simplicity. It summarizes in three symbols a revolutionary idea: motion does not need explanation, only its change does. Newton knew how to measure force, mass, and acceleration, but the intimate nature of these three quantities remained mysterious. He knew that when a force is applied, a mass resists it, and motion is born: \[ \Large \vec{F} = m\,\vec{a} \] \( \vec{F} = \text{force (N)}\), \( m = \text{mass (kg)}\), \( \vec{a} = \text{acceleration (m/s²)}\)

What the Equation Says

The equation tells us that to move the world, a force is needed. It does not say why an object is in motion, but what makes it accelerate or decelerate. This law unifies all everyday cases, from absolute rest to the fastest projectiles: wherever motion accelerates or curves, it is the same law that applies.
The supermarket cart: push an empty cart and it moves with a simple push. Fill it with water bottles and the same push barely moves it. Mass resists change in motion.
The kick of a ball: the harder you kick (force), the faster the ball moves (acceleration). A ball filled with water (large mass) will barely move. Mass resists, motion changes.
The truck and the car: a truck loaded with sand and a small car are stopped at the same red light. When the light turns green, the car darts like an arrow, the truck struggles to start. Same force (the engine pushing), the greater the mass, the lower the acceleration.
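The truck-and-car example can be put into numbers. A minimal sketch, with illustrative (assumed) masses and push force, showing how the same force produces very different accelerations:

```python
def acceleration(force_n, mass_kg):
    """Newton's second law solved for acceleration: a = F / m."""
    return force_n / mass_kg

push = 150.0            # N, the same push for both vehicles (assumed)
car_mass = 1_000.0      # kg (assumed)
truck_mass = 10_000.0   # kg (assumed)

a_car = acceleration(push, car_mass)      # 0.15 m/s²
a_truck = acceleration(push, truck_mass)  # 0.015 m/s²: ten times the mass, one tenth the acceleration
```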

1687 — Newton's Third Law: Action and Reaction Are Inseparable

Isaac Newton (1643-1727) formulated in 1687, in his Principia Mathematica, this law, which seems self-evident yet governs everything. When one body exerts a force on another, the second exerts a force on the first, equal in magnitude and opposite in direction: \[ \Large \vec{F}_{A\to B} = -\vec{F}_{B\to A} \] \( \vec{F}_{A\to B} = \text{force exerted by A on B}\), \( \vec{F}_{B\to A} = \text{force exerted by B on A}\), \( \text{The two forces are always simultaneous}\)

What the Equation Says

Forces always come in pairs: for every action, there is an equal and opposite reaction. Nothing in nature acts alone; no force can exist in isolation.
Hand against the wall: when you push against a wall, the wall exerts an equal and opposite force on you. That's why you don't go through it. The wall resists exactly as much as you push.
The apple and the Earth: when the Earth attracts an apple, the apple attracts the Earth with a force of the same intensity. The immense mass of the Earth makes its movement imperceptible, but the symmetry of the forces is absolute. The apple does indeed move the Earth, infinitely little.
The rocket taking off: gases are ejected backward at high speed, and the rocket is propelled forward by an equal reaction force. It moves forward because it pushes something else in the opposite direction.
Walking: when you walk, your foot exerts a backward force on the ground, and the ground exerts a forward force on you. It is this push from the ground that makes you move forward.
The helicopter in flight: it pushes air downward with its blades, and the air pushes the helicopter upward with an equal force. It stays in the air because it creates a downward wind.

1687 — Law of Universal Gravitation: Every Mass Attracts Every Other Mass

Isaac Newton (1643-1727) formulated in 1687, in his Principia Mathematica, the law uniting two masses by a force proportional to their product and inversely proportional to the square of their distance: \[ \Large F = G \frac{m_1 m_2}{r^2} \] \( m_1, m_2 = \text{masses of the two bodies (kg)}\), \( r = \text{distance between the bodies (m)}\), \( G = 6.674 \times 10^{-11} \text{ N·m²·kg}^{-2} = \text{gravitational constant}\)

What the Equation Says

This law states that the same force, gravity, acts on all scales. It is a simple but dizzying truth: two masses, wherever they are in the universe, attract each other.
The tides: the imprint of the Moon on the oceans. Twice a day, the sea water rises, obeying the silent call of our satellite. The Moon pulls the ocean, and the entire Earth trembles under this gentle pull.
The planets: a dance on orbits traced by this single force. Jupiter, Saturn, Mars, Venus, all revolve around the Sun, held by an invisible thread. No rope, no contact, just the attraction that curves and holds them.
The stars: they die crushed by their own weight. When their fire goes out, nothing opposes gravity. The star collapses on itself, until it becomes a white dwarf, a neutron star, or a black hole, defeated by its own mass.
The entire universe: it is structured into galaxies under the effect of this silent attraction. Clouds of gas aggregate, stars are born, galaxies rotate. Everywhere, gravity weaves the cosmic web, patiently assembling matter.
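To get a feel for the orders of magnitude, here is a short sketch (textbook values for the Earth's mass and radius, a 200 g apple) that evaluates Newton's formula for the apple example:

```python
G = 6.674e-11  # N·m²·kg⁻², gravitational constant

def gravity(m1, m2, r):
    """Newton's law of universal gravitation: F = G·m1·m2 / r²."""
    return G * m1 * m2 / r**2

earth_mass = 5.972e24    # kg
apple_mass = 0.2         # kg (assumed)
earth_radius = 6.371e6   # m

F_apple = gravity(earth_mass, apple_mass, earth_radius)  # ≈ 1.96 N
# By the third law the apple pulls the Earth with the same ~2 N,
# but the Earth's acceleration a = F/m is ~3e-25 m/s²: imperceptible.
```

The result matches the familiar weight formula: \(F \approx m\,g \approx 0.2 \times 9.81 \approx 1.96\) N.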

1738 — Bernoulli's Equation: When the Fluid Accelerates, the Pressure Drops

Daniel Bernoulli (1700-1782) established in 1738 a fundamental relationship between pressure, velocity, and height of a flowing fluid. He shows that in a fluid, these three quantities are linked by a constant: \[ \Large P + \frac{1}{2}\rho v^2 + \rho g h = \text{constant} \] \( P = \text{pressure (Pa)}\), \( \rho = \text{fluid density (kg/m³)}\), \( v = \text{flow velocity (m/s)}\), \( g = \text{acceleration due to gravity}\), \( h = \text{height (m)}\)

What the Equation Says

This equation shows a counterintuitive exchange: when a fluid accelerates, its pressure drops. Wherever a fluid flows, velocity and pressure dance together; one cannot increase without the other decreasing.
The airplane wing: air moves faster over the top of a wing than underneath. The pressure drops above the wing, while it remains higher below. This pressure difference sucks the wing upward: the plane takes off.
In a river that narrows: the water accelerates in the bottleneck, and its pressure decreases. When you compress a solid, you increase the pressure. But a moving fluid behaves differently: it exchanges its pressure for speed.
When the wind encounters obstacles: it is forced to rush between buildings, it accelerates like a river in gorges. This acceleration is accompanied by a local pressure drop that makes windows vibrate, doors slam, and in the most violent gusts, tears off tiles. The narrower the passage, the faster the wind accelerates, the more the pressure drops.
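The wing example in numbers: a minimal sketch with assumed (not measured) airflow speeds, showing how a modest speed difference already yields a useful pressure difference at constant height:

```python
rho_air = 1.225  # kg/m³, air density at sea level

def pressure_drop(v_slow, v_fast, rho):
    """Bernoulli at constant height: P1 - P2 = ½·ρ·(v2² - v1²)."""
    return 0.5 * rho * (v_fast**2 - v_slow**2)

# Illustrative wing speeds (assumed, not from real airfoil data):
dp = pressure_drop(60.0, 70.0, rho_air)  # ≈ 796 Pa of lift per m² of wing
```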

1746-1750 — Wave Equation: The Wave Curves, the Wave Accelerates, the Wave Propagates

Jean le Rond d'Alembert (1717-1783) established in 1746 the equation governing the vibration of vibrating strings, the first mathematical formulation of a wave phenomenon. Leonhard Euler (1707-1783) generalized this equation in 1750 to sound waves and fluids. The wave equation describes how a disturbance propagates in space and time, whether it is a vibrating string, a traveling sound, or a deforming wave: \[ \Large \frac{\partial^2 u}{\partial t^2} = v^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} \right) \] \(u = \text{amplitude of the wave (m)},\) \(t = \text{time (s)},\) \(v = \text{propagation speed in the medium (m/s)},\) \(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = \text{sum of curvatures in the three directions of space}\)

What the Equation Says

The wave equation expresses a universal principle: a deformation does not stay in place, it travels. Whether it is a plucked string, an air compression, or a wave on the surface of water, the shape of the deformation determines how it propagates. The speed at which it travels depends on the medium (string, air, water). What changes from one wave to another is the nature of \(u\) and the propagation speed \(v\) in the medium.
The guitar string: plucked, it deforms. The deformation travels along the string, reflects at the ends, and produces a sound; this back-and-forth motion propagates at ~100-150 m/s.
Sound in the air: when you speak, your vocal cords compress the air. These compressions and rarefactions of the air propagate to your listener's ear at 340 m/s.
Waves on the surface of water: throw a stone into a pond. The ripples of water move away at a speed of ~0.5 to 1 m/s.
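The propagation can be sketched numerically: a minimal 1D finite-difference scheme for the string (all grid values are illustrative assumptions, not a production solver), where a central "pluck" splits and travels toward both fixed ends:

```python
# 1D wave equation u_tt = v²·u_xx, explicit finite differences.
N, steps = 100, 50
v, dx, dt = 1.0, 1.0, 1.0          # Courant number c = v·dt/dx = 1 (stable)
c2 = (v * dt / dx) ** 2

u_prev = [0.0] * N
u = [0.0] * N
u[N // 2] = 1.0                    # initial "pluck" in the middle
u_prev[N // 2] = 1.0               # string starts at rest

for _ in range(steps):
    u_next = [0.0] * N             # fixed ends: u[0] = u[N-1] = 0
    for i in range(1, N - 1):
        u_next[i] = 2*u[i] - u_prev[i] + c2*(u[i+1] - 2*u[i] + u[i-1])
    u_prev, u = u, u_next
# After 50 steps the disturbance has spread far from the center
# in both directions: the deformation does not stay in place.
```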

1755 — Euler's Equation: Nature Finds the Most Economical Path

Pierre-Louis Moreau de Maupertuis (1698-1759) stated in 1744 a bold principle: nature is economical, it always chooses the path that minimizes a certain "action".
Leonhard Euler (1707-1783) sought to give a mathematical form to this intuition and, in 1755, discovered an equation that allows deducing the motion of a system from two quantities: the vis viva \(T\) (related to motion) and the force function \(V\) (related to position): \[ \Large \frac{d}{dt}\left(\frac{\partial T}{\partial \dot{q}}\right) + \frac{\partial V}{\partial q} = 0 \] \( q = \text{coordinate (position, angle...)}\), \( \dot{q} = \text{velocity}\), \( T = \text{vis viva (half the product of mass by the square of velocity)}\), \( V = \text{force function (dependent on position)}\), \( dt = \text{elementary instant (s)}\)

What the Equation Says

Nature balances two quantities: the vis viva (what the system does, it moves, it has speed) and the position force (what it could do, it is at height, it has potential).
Joseph-Louis Lagrange (1736-1813) would later unify these two terms into a single function (T−V). A path that is too fast spends too much vis viva; a path that is too slow accumulates too much potential. Nature finds at every moment the perfect balance between the two.
Today, vis viva is called kinetic energy and the force function is called potential energy. They are unified into a single entity called the Lagrangian: \(\mathcal{L} = T - V\).
A swinging pendulum: the vis viva is large when it passes quickly at the bottom, zero when it stops at the top. Its force function is related to its height: the higher it goes, the more it increases. The motion results from the permanent balance between these two quantities.
A ball thrown into the air: at the top, it is slow but high, all its energy is "in reserve". At the bottom, it is fast but close to the ground, all its energy is "in action". Nature constantly negotiates between the two.
Light bending through a prism: in air, it travels fast; in glass, it slows down. Light itself obeys this economy, it "chooses" the angle that minimizes its travel time.
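The light example can be tested directly. A brute-force sketch (assumed geometry and a refractive index of 1.5 for the glass) that minimizes the total travel time over all possible crossing points of the air-glass surface, and recovers Snell's law of refraction as a by-product:

```python
import math

c = 3e8                              # m/s, speed of light in vacuum
v_air, v_glass = c, c / 1.5          # glass slows light by its index 1.5

def travel_time(x):
    """Time from A = (0, 1) in air, through surface point (x, 0),
    to B = (1, -1) in glass (coordinates in meters, assumed)."""
    t_air = math.hypot(x, 1.0) / v_air
    t_glass = math.hypot(1.0 - x, 1.0) / v_glass
    return t_air + t_glass

# Brute-force minimization over 10,001 candidate crossing points:
best_x = min((i / 10_000 for i in range(10_001)), key=travel_time)
# At best_x, sin(angle in air) / sin(angle in glass) ≈ 1.5: Snell's law
# emerges from nothing but "minimize the travel time".
```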

1785 — Coulomb's Law: Gravitation, Electricity Version

Charles-Augustin de Coulomb (1736-1806) established in 1785, through torsion experiments, the fundamental law of electrostatics. Structurally identical to Newton's law of gravitation, the Coulomb force is \(10^{36}\) times more intense than gravity at the atomic scale. \[ \Large F = k_e \frac{q_1 q_2}{r^2} \] \( q_1, q_2 = \text{electric charges (C)}\), \( r = \text{distance between the charges (m)}\), \( k_e \approx 8.99 \times 10^9 \text{ N·m²·C}^{-2} = \text{Coulomb constant}\)

What the Equation Says

It is Coulomb's law that keeps electrons around nuclei, allows the formation of chemical bonds, and gives matter its consistency, hardness, and electrical properties.
Two rubbed balloons: the farther apart they are, the more the force between their charges collapses; if you double the distance, the force is divided by four.
A hair standing up after rubbing a balloon: a few displaced charges are enough to overcome the entire terrestrial gravity, so intense is the Coulomb force at short distances.
A hydrogen atom: a proton, an electron, and between them Coulomb's law, nothing else. It is this equation alone that sets the size of the atom, its energy, and the light it emits.
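The claimed intensity gap between electricity and gravity can be checked in a few lines (standard constants; the distance cancels out because both forces follow \(1/r^2\)):

```python
k_e = 8.99e9      # N·m²·C⁻², Coulomb constant
G   = 6.674e-11   # N·m²·kg⁻², gravitational constant
e   = 1.602e-19   # C, elementary charge
m_p = 1.673e-27   # kg, proton mass

# Ratio of electric repulsion to gravitational attraction between
# two protons: since both laws go as 1/r², the distance r cancels.
ratio = (k_e * e**2) / (G * m_p**2)   # ≈ 1.2 × 10³⁶
```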

1822 — Heat Diffusion Equation: Heat Seeks Balance

Joseph Fourier (1768-1830) published in 1822 his analytical theory of heat, describing thermal propagation in a medium. This equation describes how temperature differences gradually disappear until thermal equilibrium: \[ \Large \frac{\partial T}{\partial t} = \alpha \nabla^2 T \] \( T = \text{temperature (K or °C)}\), \( t = \text{time (s)}\), \( \alpha = \text{thermal diffusivity of the material (m²/s)}\)

What the Equation Says

Any temperature difference is doomed to disappear. The greater the difference, the faster the heat transfers, until the inevitable equilibrium. To solve it, Fourier had to invent an entirely new mathematical tool: decomposing any curve into a sum of sinusoids called Fourier series.
A pot removed from the fire: it cools quickly at first, then more and more slowly; the gap with the ambient air decreases, and so does the strength of the transfer.
A metal bar heated at one end: the heat progresses, spreads, becomes uniform; Fourier's equation traces this thermal front exactly, centimeter by centimeter.
The Earth itself: the oceans, the atmosphere, the poles, and the equator constantly exchange their heat. Modern climate models solve, on a planetary scale, this same equation.
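The heated-bar example can be sketched with the simplest possible scheme: a 1D bar, one end held hot, explicit finite differences (all grid numbers are illustrative assumptions):

```python
# Minimal 1D sketch of ∂T/∂t = α·∇²T with explicit finite differences.
# Grid spacing, time step and material are folded into r = α·dt/dx²,
# kept below 0.5 for numerical stability (illustrative choice).
N, steps, r = 50, 500, 0.25

T = [100.0] + [20.0] * (N - 1)       # bar at 20 °C, left end held at 100 °C
for _ in range(steps):
    T_new = T[:]                     # both ends kept fixed (100 °C and 20 °C)
    for i in range(1, N - 1):
        T_new[i] = T[i] + r * (T[i+1] - 2*T[i] + T[i-1])
    T = T_new
# The thermal front has advanced: the profile now decreases smoothly
# from the hot end toward the cold end, centimeter by centimeter.
```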

1822 — Fourier Transform: Every Complex Signal Is a Sum of Simple Waves

Jean-Baptiste Joseph Fourier (1768-1830) published in 1822 his Analytical Theory of Heat, where he stated that any function (even discontinuous) can be decomposed into a sum of sines and cosines. The equation impresses with its form, but its meaning is simple: the symbol \(\int\) is just a continuous sum, and \(e^{-2\pi i x \xi}\) is just a sinusoidal wave. It therefore adds the contributions of all the frequencies present in a signal: \[ \Large \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x) e^{-2\pi i x \xi} \, dx \] \(\displaystyle \hat{f}(\xi) = \text{representation of the signal in the frequency space}\), \(f(x) = \text{original signal}\), \(\xi = \text{frequency}\), \(e^{-2\pi i x \xi} = \text{complex sinusoidal wave}\)

What the Equation Says

Any wave form, no matter how complex, is only the sum of pure waves that add up, each with its own frequency and amplitude. The Fourier transform is like an inverted recipe: from the cake (the sum of waves), we retrieve the list of ingredients (the frequencies) and their quantities. It is also the prism that reveals the hidden rainbow in white light.
Music and the equalizer: when you look at the light bars of an equalizer on a hi-fi system, you see in real time the Fourier transform of the music. Each bar represents the intensity of a particular temporal frequency (bass, midrange, treble).
JPEG compression: an image is a complex two-dimensional spatial signal. The Fourier transform (or rather its variant, the discrete cosine transform) allows removing details that the eye perceives poorly, to compress the image without apparent loss of quality.
Medical MRI: magnetic resonance imaging uses the Fourier transform to reconstruct images of the human body from radiofrequency signals emitted by hydrogen atoms.
Voice recognition: when you speak to your phone, it analyzes your voice using the Fourier transform to identify the characteristic frequencies of each sound, and thus recognize your words.
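The "inverted recipe" can be demonstrated in a few lines. This sketch uses a naive discrete Fourier transform (deliberately not an FFT, for clarity) on a signal built from two hidden pure waves; the spectrum reveals exactly those two frequencies:

```python
import cmath, math

def dft(signal):
    """Naive discrete Fourier transform: X[k] = Σ x[n]·e^(-2πi·k·n/N)."""
    N = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A "complex" signal that is secretly two pure waves (3 and 7 cycles
# per window), sampled 64 times (illustrative values):
N = 64
x = [math.sin(2*math.pi*3*n/N) + 0.5*math.sin(2*math.pi*7*n/N)
     for n in range(N)]

spectrum = [abs(X) for X in dft(x)]
peaks = sorted(range(N // 2), key=lambda k: -spectrum[k])[:2]
# The two largest bins sit exactly at the hidden frequencies: 3 and 7.
```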

1822-1845 — Navier-Stokes Equations: The User Manual for Liquids and Gases

Claude-Louis Navier (1785-1836) published in 1822 the first equations describing the motion of viscous fluids, based on the work of Leonhard Euler (1707-1783), who had already established the equations for perfect fluids (without viscosity) in 1757. George Gabriel Stokes (1819-1903) reformulated and generalized these equations between 1845 and 1850. For a fluid in motion, this equation (actually four equations in one) plays the role that \(F = ma\) holds for a ball: it expresses, at each point, the conservation of mass and momentum. \[ \Large \rho \left( \frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v} \right) = -\nabla p + \eta \nabla^2 \mathbf{v} + \mathbf{f} \] \(\rho = \text{fluid density (kg/m³)},\) \(\mathbf{v} = \text{fluid velocity (m/s)},\) \(p = \text{pressure (Pa)},\) \(\eta = \text{dynamic viscosity (Pa·s)},\) \(\mathbf{f} = \text{external forces (gravity, etc.) (N/m³)}\)

What the Equations Say

The Navier-Stokes equations balance, for each drop of fluid, what makes it move and what holds it back. On the left, its acceleration. On the right, three actors: the push of pressure differences, the brake of viscosity that rubs against its neighbors, and external forces like gravity that pull or lift it.
River water encountering a rock: in front of the rock, the water slows down and its pressure increases (term \(-\nabla p\)). On the sides, it accelerates (term \(\mathbf{v} \cdot \nabla \mathbf{v}\)). Behind, eddies are born: viscosity (\(\eta \nabla^2 \mathbf{v}\)) dissipates energy and creates these swirling motions. Each ripple tells a term of the equation.
Smoke rising from a cigarette: the hot smoke, less dense than air, is pushed upward (term \(\mathbf{f}\) which includes gravity and buoyancy). It first rises in a smooth stream, a balance between this push and the viscosity that slows it down. Then, suddenly, it starts to swirl. It is the term \(\mathbf{v} \cdot \nabla \mathbf{v}\) that takes over: the speed self-sustains and creates turbulence.
Cold honey: it flows in thick, smooth ribbons. Its viscosity (\(\eta\)) is so strong that it crushes all other terms. Honey shows us the regime where internal friction dominates.
The cup of tea: when you stir a spoon in a cup of tea, the liquid starts to move. Stop stirring, and the tea continues a bit on its own (inertia), but the tea leaves gather in the center. Why? Viscosity slows the liquid near the walls, creating a pressure gradient (\(-\nabla p\)) that pushes the leaves inward. Each term of Navier-Stokes is at work before your eyes.
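One way to see which term of Navier-Stokes "wins" is the Reynolds number, the ratio of the inertial term \(\rho\,\mathbf{v}\cdot\nabla\mathbf{v}\) to the viscous term \(\eta\nabla^2\mathbf{v}\). A sketch with assumed, order-of-magnitude values for honey and river water:

```python
def reynolds(rho, v, L, eta):
    """Reynolds number Re = ρ·v·L/η: inertial term vs viscous term."""
    return rho * v * L / eta

# Illustrative orders of magnitude (assumed values, not measurements):
re_honey = reynolds(1400.0, 0.05, 0.01, 10.0)   # ≈ 0.07 → viscosity dominates
re_river = reynolds(1000.0, 1.0, 1.0, 1e-3)     # ≈ 1e6  → turbulence dominates
```

Re ≪ 1 is the honey regime (smooth, thick ribbons); Re ≫ 1 is the river regime (eddies behind the rock).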

1827 — Ohm's Law: Voltage, Current, and Resistance, the Electrical Trio

Georg Simon Ohm (1789-1854) published in 1827 a fundamental relationship that unites voltage, current, and resistance in an electrical circuit. He discovered that for a current to flow, there must be a voltage that pushes and a resistance that yields: \[ \Large U = R \cdot I \] \( U = \text{electrical voltage (volts, V)}\), \( R = \text{resistance (ohms, Ω)}\), \( I = \text{current intensity (amperes, A)}\)

What the Equation Says

Ohm discovered that electricity behaves like water in a river. Voltage is the slope that makes it flow. Resistance is the narrowness of the bed. Current is the flow that passes. Ohm highlights a permanent compromise: the greater the resistance, the more voltage is needed to pass the same current. Conversely, at a fixed voltage, a stronger resistance allows less current to pass.
An incandescent bulb: its tungsten filament offers such resistance that the current passing through it heats it until it emits light without melting.
An electric heater: its resistance is chosen so that at the mains voltage, the current produces just the right amount of heat. Ohm's law tells how.
A fuse: a thin wire, calibrated to melt if the current exceeds a threshold. When the intensity doubles, the heat quadruples: the wire melts, the circuit is cut.
The human body: when dry, its resistance is high, current passes poorly. When wet, it drops, and the slightest current becomes dangerous. Ohm's law explains why water and electricity are such a bad combination.
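The dry/wet body example in numbers, with commonly quoted approximate resistance values (assumptions for illustration, not safety data):

```python
def current(voltage_v, resistance_ohm):
    """Ohm's law solved for current: I = U / R."""
    return voltage_v / resistance_ohm

mains = 230.0                        # V, European mains voltage (assumed)
i_dry = current(mains, 100_000.0)    # dry skin ~100 kΩ → 2.3 mA, barely felt
i_wet = current(mains, 1_000.0)      # wet skin   ~1 kΩ → 230 mA, dangerous
```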

1829 — Kinetic Energy: The Energy Hidden in Everything That Moves

Gaspard-Gustave Coriolis (1792-1843) published in 1829 his book On the Calculation of the Effect of Machines. He took up the idea of Gottfried Wilhelm Leibniz (1646-1716) on "vis viva" (\(mv^2\)), but added the factor \(\frac{1}{2}\) to harmonize it with the notion of mechanical work. He named this quantity kinetic energy, thus formalizing the energy that a body possesses solely by virtue of its speed.

\[ \Large E_c = \frac{1}{2} m v^2 \] \( E_c = \text{kinetic energy (joules, J)}\), \( m = \text{mass of the body (kg)}\), \( v = \text{speed (m/s)}\)

What the Equation Says

This apparently simple formula carries a formidable consequence: doubling your speed means quadrupling your energy. Everywhere an object is in motion, on the road, in sports, in industry, in the cosmos, no motion is an exception; this squared law is merciless.
A car at 50 km/h: its energy is moderate, the brakes suffice. At 100 km/h, it stores four times more energy; the stopping distance is not twice, but four times longer.
A rifle bullet at 900 m/s: its speed is 30 times that of a tennis ball, so at equal mass its energy is 900 times greater. The square transforms a speed difference into an energy abyss.
A meteorite: its mass is large, its speed phenomenal. The squared energy becomes that of a nuclear bomb. The crater accounts for this energy.
A hammer: the faster you strike, the deeper it drives the nail. But its mass also counts; a light hammer thrown very fast can equal a heavy hammer thrown gently. The equation tells this balance.
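The car example, checked directly. A minimal sketch with an assumed mass (the ratio is independent of it), confirming that doubling the speed quadruples the energy:

```python
def kinetic_energy(m, v):
    """E_c = ½·m·v² (m in kg, v in m/s, result in joules)."""
    return 0.5 * m * v**2

car = 1200.0                             # kg (assumed)
e_50 = kinetic_energy(car, 50 / 3.6)     # 50 km/h converted to m/s
e_100 = kinetic_energy(car, 100 / 3.6)   # 100 km/h converted to m/s
ratio = e_100 / e_50                     # doubling the speed quadruples the energy
```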

1831 — Faraday's Law: Electromagnetic Induction Is Born

Michael Faraday (1791-1867), a genius of experimentation, discovered in 1831 a fundamental phenomenon: a changing magnetic field generates an electric current. In 1820, Hans Christian Ørsted (1777-1851) had shown that a direct current (from his battery) deflected the needle of his compass. Faraday proved that the reverse exists in nature. This brilliant intuition took mathematical form with Franz Ernst Neumann (1798-1895), who, in 1845, established the quantitative relationship. The minus sign, added by Heinrich Lenz (1804-1865), gives the law its deep meaning (if the field increases, the current tends to decrease it; if it decreases, the current tends to increase it): \[ \Large \mathcal{E} = -\frac{\Delta \Phi}{\Delta t} \] \( \mathcal{E} = \text{induced electromotive force (V)}\), \( \Phi = \text{magnetic flux (Wb)}\), \( t = \text{time (s)}\)

What the Equation Says

Faraday highlighted a hidden reciprocity in nature. One creates the other, and when the other moves, it recreates the first:
Direct current → Constant magnetic field
Variable magnetic field → Induced current
A magnet approaching a coil: the magnetic flux varies, a current appears. The magnet moves away, the current changes direction. The lamp connected to the coil lights up with each movement.
An alternator in a power plant: a magnet rotates in front of coils, the field varies continuously, the current gushes out. All the electricity in the grid is born from this law.
A transformer, two coils facing each other: the alternating current in the first creates a varying field, which induces a current in the second. The voltage can go up or down depending on the turns.
An electric guitar: the metal string vibrates in front of a magnet, the magnetic flux varies, a current is born in the coil. This signal, amplified, becomes the sound you hear.
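The magnet-and-coil example in numbers, with illustrative (assumed) flux values, showing how Lenz's minus sign reverses the current when the magnet's motion reverses:

```python
def emf(delta_flux_wb, delta_t_s):
    """Faraday's law with Lenz's sign: ℰ = -ΔΦ/Δt."""
    return -delta_flux_wb / delta_t_s

# Magnet approaching a coil (illustrative numbers): the flux rises
# by 0.02 Wb in 0.1 s, so the induced EMF opposes the increase.
e_in = emf(+0.02, 0.1)    # -0.2 V while the magnet approaches
e_out = emf(-0.02, 0.1)   # +0.2 V while it moves away: the current reverses
```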

1833 — Hamilton's Equations: Total Energy Is Enough to Describe All Motion

William Rowan Hamilton (1805-1865) reformulated in 1833 mechanics in such a profound way that it still illuminates quantum physics today. Where Joseph-Louis Lagrange (1736-1813) described motion from positions and velocities, Hamilton introduced an inseparable duo: position \(q\) and momentum \(p\). A single function, the Hamiltonian H (generally the total energy of the system), contains all the dynamics: \[ \Large \dot{q} = \frac{\partial H}{\partial p} \quad \text{;} \quad \dot{p} = -\frac{\partial H}{\partial q} \] \(\dot{q} = \text{velocity},\) \(\dot{p} = \text{rate of change of momentum},\) \(q = \text{position (m)},\) \(p = \text{momentum (kg·m/s)},\) \(H(q,p) = \text{total energy (J)}\)

What the Equations Say

These two equations reveal a hidden symmetry: position and momentum are two sides of the same coin. Position is the height of the wave at a given moment (1 m above calm) and momentum is the reserve of motion accumulated by the wave (its mass and speed). A slow truck and a fast tennis ball can have the same reserve. The Hamiltonian says how the reserve of motion advances the position, and how the position, by changing, empties or fills this reserve. The two generate each other, like the height and speed of a wave.
A skater on a hilly ice surface: the altitude at each point represents the Hamiltonian. At each location, two pieces of information are inscribed in the relief: the slope in one direction tells how fast the skater will slide; the slope in the other direction, inverted, tells whether the skater is pushed up or down. The skater only needs this relief map for all their motion to unfold, without any other law to know.
A ball rolling in a bowl: the very shape of the bowl (the Hamiltonian) determines everything. The local slope tells the ball to accelerate or slow down, and the curvature tells how its trajectory will turn. The ball obeys nothing else but the shape of the bowl that contains it.
The oak and the acorn: all the future is already contained in the acorn, it only remains to let it unfold over time. Total energy is this acorn. It is enough to predict, for all future times, the position and speed of each particle, at every moment, in every place, in the smallest details of their movements.
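Hamilton's two equations can be stepped forward directly on a computer. A minimal sketch for a mass on a spring, \(H(q,p) = p^2/2m + \tfrac{1}{2}kq^2\), integrated with the symplectic Euler method (a simple discretization choice, with illustrative parameters), where the total energy stays essentially constant over many oscillations:

```python
m, k, dt = 1.0, 1.0, 0.01          # mass, spring constant, time step (assumed)
q, p = 1.0, 0.0                    # start stretched by 1 m, at rest

def H(q, p):
    """Total energy of the oscillator: kinetic + potential."""
    return p*p / (2*m) + 0.5 * k * q*q

E0 = H(q, p)
for _ in range(10_000):            # roughly 16 oscillation periods
    p -= k * q * dt                # dp/dt = -∂H/∂q = -k·q
    q += p / m * dt                # dq/dt = +∂H/∂p = p/m
# q and p keep trading back and forth, and H(q, p) stays near E0:
# the "acorn" (the Hamiltonian) really does contain all the motion.
```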

1834 — Ideal Gas Law: Pressure, Volume, and Temperature of a Gas Are Related

Robert Boyle (1627-1691) established in 1662 that for a fixed temperature, the pressure and volume of a gas vary inversely. Edme Mariotte (1620-1684) independently discovered the same law in France. A century later, Jacques Charles (1746-1823) and then Joseph Louis Gay-Lussac (1778-1850) showed that the volume of a gas increases with its temperature. Amedeo Avogadro (1776-1856) added in 1811 that the volume is proportional to the amount of substance. It was Émile Clapeyron (1799-1864) who, in 1834, synthesized these discoveries into a single universal equation of ideal gases: \[ \Large PV = nRT \] \(P = \text{pressure (Pa)}\), \(V = \text{volume (m³)}\), \(n = \text{amount of substance (mol)}\), \(R = 8.314 \text{ J·mol}^{-1}\text{·K}^{-1} = \text{ideal gas constant}\), \(T = \text{temperature (K)}\)

What the Equation Says

This law states that pressure, volume, and temperature are one. You cannot change one without affecting the others, just as you cannot squeeze a sponge without water coming out.
The bicycle pump: when you push the piston, you reduce the volume. The pressure increases, and the compressed air eventually inflates the tire.
The pressure cooker: heat a gas, its temperature rises. At constant volume (the cooker is closed), the pressure increases dangerously. That's why a valve releases the excess before everything explodes.
The balloon that flies away: a helium-filled balloon rises because helium is lighter than air. As it climbs, the outside pressure decreases; the gas inside expands and the volume increases, until the envelope bursts if the balloon rises too high.
Breathing: your lungs are volumes that change. When the diaphragm lowers and the ribs spread, the volume of the thoracic cage increases, the pressure decreases, and the outside air enters (inhalation). When the diaphragm rises and the ribs tighten, the volume decreases, the pressure increases, and the air is expelled (exhalation).
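The pressure-cooker example in numbers: a sketch with assumed contents (0.2 mol of gas sealed in a 5 L vessel), showing that at constant volume the pressure rises in direct proportion to the absolute temperature:

```python
R = 8.314  # J·mol⁻¹·K⁻¹, ideal gas constant

def pressure(n, T, V):
    """Ideal gas law solved for pressure: P = nRT/V."""
    return n * R * T / V

V = 0.005                          # m³, a sealed 5 L cooker (assumed)
p_cold = pressure(0.2, 293.0, V)   # at room temperature (20 °C)
p_hot = pressure(0.2, 393.0, V)    # heated to 120 °C: same V, same n
# The pressure has risen by exactly the temperature ratio 393/293.
```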

1841 — Joule's Law: Every Electric Current Heats Its Path

James Prescott Joule (1818-1889) established in 1841 the relationship between the electric current flowing through a conductor and the heat that results from it. This law is a direct consequence of Ohm's law: by combining \(U = RI\) and \(P = UI\), we obtain: \[ \Large P = R \cdot I^2 \] \( P = \text{thermal power (watts, W)}\), \( R = \text{resistance of the conductor (ohms, Ω)}\), \( I = \text{current intensity (amperes, A)}\)

What the Equation Says

Current never passes through a conductor without leaving heat behind. The heat produced does not depend only on the current, but on its square. Doubling the current quadruples the heating of the path, and thus the heat to dissipate.
An incandescent bulb: its tungsten filament offers such resistance that the current passing through it heats it until it emits light. But not too much current, otherwise it melts.
An electric heater: its resistance is calculated so that at the mains voltage, the current produces just the desired heat. Joule's law tells how.
A fuse: a thin wire, calibrated to melt if the current exceeds a threshold. When the intensity doubles, the heat quadruples: the wire melts, the circuit is cut.
High-voltage lines: to transport electricity over long distances without losing too much energy as heat, the voltage is increased and the current is decreased. Because the heat lost grows with the square of the current.
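The high-voltage-line example can be made concrete. A sketch with assumed numbers (1 MW delivered through a 10 Ω line): raising the voltage tenfold cuts the current tenfold, and the Joule losses a hundredfold:

```python
def line_loss(power_w, voltage_v, resistance_ohm):
    """Joule heating in a transmission line: I = P/U, then P_loss = R·I²."""
    current = power_w / voltage_v
    return resistance_ohm * current**2

# Sending 1 MW through a 10 Ω line (illustrative, assumed values):
loss_low = line_loss(1e6, 10_000.0, 10.0)     # 100 A → 100 kW lost as heat
loss_high = line_loss(1e6, 100_000.0, 10.0)   # 10 A  →   1 kW lost as heat
```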

1847 — First Law of Thermodynamics: Energy Transforms

James Prescott Joule (1818-1889) experimentally established as early as 1843 the equivalence between work and heat, before Julius Robert von Mayer (1814-1878) formulated the general principle in 1847. It was Hermann von Helmholtz (1821-1894) who, in the same year, gave it the universal mathematical formulation. The first law states that the change in the internal energy of a system is equal to the sum of the heat received and the work done on it. It is the formalization of the adage Nothing is lost, nothing is created, everything is transformed applied to energy: \[ \Large \Delta U = Q + W \] \(\displaystyle \Delta U = \text{change in the internal energy of the system (J)}\), \(Q = \text{heat received by the system (J)}\), \(W = \text{work received by the system (J)}\)

What the Equation Says

Energy is a universal currency of exchange. Heat, motion, electricity are just different forms of the same quantity. Total energy is conserved; it simply changes appearance.
Braking a car: when you brake, the kinetic energy of the car is transferred to the brakes as work (W). This work increases the internal energy of the brakes (ΔU), which manifests as an increase in their temperature; (Q) is the heat dissipated into the ambient air. A tiny part of this work also serves to wear down the brake pads (chemical transformation).
The heat engine: during the combustion of fuel in the cylinder, chemical energy is converted into thermal energy (Q). This heat raises the pressure of the gases, which, as they expand, exert a force on the piston. Thus, the gases transform part of the received thermal energy into mechanical work (W), enabling the piston's movement.
The heat pump: the refrigerant allows the heat pump to recover the heat (Q) present in the outside air, even at low temperatures. Colder than the outside air (for example at -10°C), it absorbs this thermal energy by evaporating, which increases its internal energy (ΔU). The compressor, by consuming electrical energy (W), then compresses the gaseous fluid, further increasing its internal energy (ΔU = Q + W). This operation raises its temperature, allowing the amplified heat to be returned inside the house.
The human body: the chemical energy from food is converted to sustain our vital functions. Part of this energy maintains our body temperature, in the form of heat (Q); another part enables movement and muscle work (W), while the surplus is stored as reserves. The reserves (glycogen, fats) are part of the body's internal energy (ΔU). When you eat, you increase ΔU. When you spend this energy (muscle work + heat), ΔU decreases.
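The sign convention is the only subtlety in ΔU = Q + W: both terms are counted positive when the system receives them. A tiny sketch with illustrative numbers for a gas that receives heat and delivers work to a piston:

```python
def delta_U(Q, W):
    """First law of thermodynamics: ΔU = Q + W,
    with Q and W counted positive when received by the system."""
    return Q + W

# Illustrative numbers: a gas receives 500 J of heat and delivers
# 200 J of work to a piston, so W = -200 J from the gas's viewpoint.
dU = delta_U(500.0, -200.0)   # internal energy rises by 300 J
```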

1847 — Law of Conservation of Energy: Energy Never Disappears

Julius Robert von Mayer (1814-1878) and Hermann von Helmholtz (1821-1894) independently formulated in 1847 the universal principle of conservation of energy. In its simplest form, the total energy is reduced to the sum of kinetic energy and potential energy, and this sum remains constant: \[ \Large E_{\text{total}} = E_c + E_p = \text{constant} \] \(\displaystyle E_c = \text{kinetic energy (J)}\), \(E_p = \text{potential energy (J)}\).

What the Law Says

Kinetic energy and potential energy transform into each other without ever losing a single joule along the way. What one gains, the other loses. Their sum, however, does not vary. The swinging pendulum: at the top of its trajectory, the pendulum stops for an instant: its kinetic energy is zero, but its potential energy is maximal. At the bottom, its speed is maximal: the potential energy has become kinetic.
The swing: when you are at the highest point, you are charged with potential energy. As you descend, it transforms into speed, thus into kinetic energy. That's why you rise on the other side: the kinetic becomes potential again.
The apple falling from the tree: motionless on its branch, it only possesses potential energy. As it falls, this is gradually converted into kinetic energy. Just before touching the ground, all the initial potential energy has become kinetic.
The skier on a jump: at the top, their energy is almost entirely potential. As they descend the slope, they gain speed: the potential energy transforms into kinetic. At the moment of the jump, it is this kinetic energy that carries them through the air.

1853 — Gravitational Potential Energy: Every Elevated Mass Holds Energy in Waiting

In 1853, William Rankine (1820-1872) introduced the term potential energy to designate this stored energy, in opposition to the actual energy (kinetic) of a moving body. Gravitational potential energy is the energy that a body possesses due to its position in a gravitational field. The higher an object is, the more speed it can acquire by falling, as if the height were a reservoir of energy in waiting, as shown by this equation: \[ \Large E_p = m\,g\,h \] \(E_p = \text{potential energy (J)},\; m = \text{mass (kg)}\), \(g \approx 9.81\ \text{N·kg}^{-1} = \text{intensity of gravity}\), \(h = \text{height (m)}\)

What the Equation Says

This equation tells us that every elevated object carries within it a dormant, patient, and inexorable energy. The heavier the mass, the greater the height, the more important the stored energy. It is the energy of dangerous immobility: ready to leap, patient but powerful. The hydroelectric dam: the water accumulated at height in the reservoir lake possesses enormous potential energy. To measure the power of this waiting energy, imagine the sudden disappearance of the dam wall: the released water would devastate everything in its path.
The natural waterfall: a waterfall is not just a beautiful spectacle. The water that falls from tens of meters releases the accumulated potential energy, eroding the rock at the base and creating powerful whirlpools.
The clock weight: in a Comtoise clock, the weights are wound up. As they slowly descend, they release their potential energy to maintain the movement of the pendulum and turn the hands.
Bungee jumping: as you climb onto the bridge, you accumulate potential energy. When you jump, it transforms into speed (kinetic energy). The elastic, as it stretches, in turn converts this energy into potential energy, before sending you back up (kinetic energy).
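The dam example can be made quantitative with \(E_p = mgh\). A rough sketch, where the reservoir volume and head height are invented round numbers for illustration:

```python
# E_p = m*g*h for a hydroelectric reservoir. The volume and height below
# are illustrative assumptions, not data for any real dam.

g = 9.81           # m/s^2
rho_water = 1000   # kg/m^3

volume_m3 = 1_000_000   # one million cubic metres of stored water
height_m = 100          # average height of the water above the turbines
mass_kg = rho_water * volume_m3

E_p = mass_kg * g * height_m
print(f"stored potential energy ~ {E_p:.2e} J")
print(f"i.e. ~ {E_p / 3.6e6:.0f} kWh")   # 1 kWh = 3.6e6 J
```

Even this modest reservoir stores hundreds of thousands of kilowatt-hours: the "dangerous immobility" of the text, counted in joules.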

1865 — Maxwell's Equations: Electricity, Magnetism, and Light Are One and the Same

James Clerk Maxwell (1831-1879) published in 1865 his memoir A Dynamical Theory of the Electromagnetic Field, where he unified electricity and magnetism. He relied on the work of Michael Faraday (1791-1867) on fields and lines of force, and that of André-Marie Ampère (1775-1836). Maxwell then formulated 20 equations with 20 unknowns using complex notations and a mechanical model of the ether. It was only later, around 1884, that Oliver Heaviside (1850-1925) and Josiah Willard Gibbs (1839-1903) rewrote them in the compact and elegant vector form we know today. The most spectacular consequence remains: the speed of electromagnetic waves calculated by Maxwell coincides with that of light. Light is therefore just a visible electromagnetic wave: \[ \Large \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} \quad\text{;}\quad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \] \[ \Large \nabla \cdot \mathbf{B} = 0 \quad\text{;}\quad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \] \(\nabla \cdot = \text{divergence (outgoing flux)}\), \(\nabla \times = \text{curl (vortex)}\), \(\mathbf{E} = \text{electric field}\), \(\mathbf{B} = \text{magnetic field}\), \(\rho = \text{charge density}\), \(\mathbf{J} = \text{current density}\), \(\varepsilon_0, \mu_0 = \text{fundamental constants}\)

What the Equations Say

These four equations are fundamental rules to which the electromagnetic field always and everywhere obeys. Their symmetry reveals the deep intimacy between electricity and magnetism. They say that an electric field can arise from charges or a varying magnetic field, and that a magnetic field can arise from currents or a varying electric field. The electromagnet: an electric current (\( \mathbf{J} \)) flowing through a wire creates a magnetic field (\( \mathbf{B} \)) that can lift masses of scrap iron. Electricity becomes magnetism.
The alternator: by rotating, a magnet creates a varying magnetic field (\( \frac{\partial \mathbf{B}}{\partial t} \)), which produces an electric field (\( \mathbf{E} \)) and thus a current. When you pedal, a small magnet rotates inside a coil of copper wire. The magnet rotates → the coil "sees" a magnetic field that changes direction and intensity at every moment → it is this change in flux that generates the current that lights the lamp. All our major sources of electricity, whether hydraulic, nuclear, or wind, rely on the same principle: turning an alternator.
Light: a self-propagating transverse electromagnetic wave. The electric and magnetic fields oscillate at right angles to each other and to the direction of propagation, and the wave travels on indefinitely unless absorbed by intervening matter. In other words, each field (electric and magnetic) regenerates the other, carrying the whole composite structure forward at the speed of light.
Radio waves: an antenna emits waves because an oscillating current (\( \mathbf{J} \) variable) creates a varying magnetic field, which creates a varying electric field, and so on. The wave travels to your receiver at the speed of light.
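Maxwell's most spectacular consequence can be recomputed in two lines: the speed of electromagnetic waves follows from the two constants \(\varepsilon_0\) and \(\mu_0\) alone, with no reference to light at all:

```python
import math

# The speed of electromagnetic waves predicted by Maxwell's equations:
# c = 1 / sqrt(mu0 * eps0), built from purely electrical measurements.

eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
mu0 = 4 * math.pi * 1e-7  # H/m, vacuum permeability (classical value)

c = 1 / math.sqrt(mu0 * eps0)
print(f"c = {c:.0f} m/s")   # ~2.998e8 m/s: the measured speed of light
```

That this number, obtained from laboratory experiments on charges and currents, coincides with the speed of light is exactly what convinced Maxwell that light is an electromagnetic wave.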

1865 — Conservation of Electric Charge: The Total Charge of the Universe Is Eternal

Suspected by Benjamin Franklin (1706-1790) as early as 1747, who observed that electricity is not created but transferred, the conservation of charge was formalized a century later as a direct consequence of the equations of James Clerk Maxwell (1831-1879) in 1865. The experiments of Michael Faraday (1791-1867) on electrolysis in 1834 had already confirmed that charge is quantized and indestructible. The law is simply stated: in an isolated system, positive and negative charges can neutralize each other, but never does a charge emerge from nothing without an opposite charge appearing elsewhere to balance the scale: \[ \Large \frac{\partial \rho}{\partial t} + \nabla \cdot \vec{J} = 0 \] \(\rho = \text{charge density (C/m}^3\text{)},\) \( \vec{J} = \text{current density (A/m}^2\text{)},\) \( t = \text{time (s)}\)

What the Equation Says

Electric charge is like a balance always in equilibrium: every time a weight (positive charge) is added to one side, an identical weight (negative charge) must be added to the other. The balance may oscillate, but its overall equilibrium is never broken. Electrification by friction: rub a plastic ruler on a sweater. Electrons (negative charges) move from the sweater to the ruler. The ruler becomes negatively charged, the sweater positively. The total charge remains zero: what one gains, the other loses.
The electric battery: inside a battery, chemical reactions separate charges. The + and - terminals accumulate opposite charges, but the battery remains globally neutral. When you connect a circuit, these charges move, but the battery neither creates nor destroys electricity: it merely circulates it.
Lightning: a storm separates enormous amounts of charge between the bottom of the cloud (negative) and the ground (positive). The lightning bolt abruptly restores the balance. The total charge before and after the lightning is the same.
The creation of particle-antiparticle pairs: in particle physics, an electron (negative charge) and a positron (positive charge) can be created from a photon. The total charge was zero before, it remains zero after.
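The continuity equation can be checked numerically. Below is a minimal one-dimensional sketch (grid size, speed, and initial charge distribution are arbitrary choices): charge is transported from cell to cell, yet the total never changes, because every outflow from one cell is an inflow to its neighbor:

```python
# A minimal 1-D, periodic discretisation of d(rho)/dt + dJ/dx = 0.
# Charge drifts to the right at speed v, but the total charge is conserved:
# what one cell loses, the next one gains.

n, dx, dt, v = 50, 1.0, 0.1, 0.2
rho = [0.0] * n
rho[10] = 1.0   # a lump of charge in one cell

for _ in range(100):   # advance the continuity equation in time
    J = [v * r for r in rho]                             # convective current J = v * rho
    div_J = [(J[i] - J[i - 1]) / dx for i in range(n)]   # periodic upwind difference
    rho = [r - dt * d for r, d in zip(rho, div_J)]

print(f"total charge after transport: {sum(rho):.6f}")   # still 1.000000
```

The periodic difference telescopes to zero when summed over the grid, which is the discrete version of the theorem: local flows redistribute charge, but can never create or destroy it.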

1850-1865 — Second Law of Thermodynamics: The Entropy of the Universe Only Increases

Rudolf Clausius (1822-1888) formulated in 1850 the second law and introduced in 1865 the concept of entropy. He summarized in a famous sentence the essence of the first two laws of thermodynamics: "The energy of the Universe is constant" (first law), "the entropy of the Universe tends towards a maximum" (second law).
In an isolated system, there is no heat exchange, so entropy can only increase or remain constant (\(\Delta S \geq 0\)). But in general, for any system exchanging heat, the change in entropy can never be less than the heat received divided by the temperature at which the exchange takes place: \[ \Large dS \geq \frac{\delta Q}{T} \] \(S = \text{entropy (J/K)},\) \(\delta Q = \text{heat exchanged with the outside (J)},\) \(T = \text{absolute temperature of the source providing this heat (K)}\)

What the Equation Says

This principle is the only one in physics that distinguishes the past from the future. Heat flows spontaneously from hot to cold, never the reverse. A perfectly ordered house of cards has low entropy; once collapsed, its entropy increases. The second law states that, in the Universe, we cannot turn back time to reorder what has been scattered. The ice cube melting in a glass: the warm water and the ice cube form an unbalanced system. The ice cube melts, the temperature equalizes. Entropy increases. You will never see a glass of lukewarm water spontaneously produce an ice cube.
The cup that breaks: it falls, shatters into a thousand pieces. Entropy increases abruptly. The pieces will never reassemble themselves to reform the intact cup.
The coffee that cools: it gives off its heat to the ambient air until it reaches room temperature. The total entropy (coffee + air) increases. The coffee will not reheat itself by drawing heat from the air.
Our aging: our body degrades, our cells lose their ability to regenerate. The apparent order that keeps us alive is just a local illusion: it is maintained by constantly drawing order from our environment (food, oxygen) and rejecting disorder (heat, waste). When this fragile balance collapses, the entropy of our body inevitably joins that of the Universe, which has never ceased to increase.
A protostar: as it collapses under its own weight, it heats up. So is there a transfer of "cold" to "hot"? No, because it is not a spontaneous thermal exchange, but a gravitational collapse that releases energy. The total entropy (star + emitted radiation) increases nonetheless. The second law never applies to an isolated subsystem, but to the entire Universe. Locally, order can increase (a star, a living being), but it is always at the cost of even greater disorder elsewhere.
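The melting ice cube lends itself to a direct entropy budget: the ice gains entropy at 273 K, the room loses the same heat at a higher temperature, and the total change is positive, as \(dS \geq \delta Q / T\) demands. The mass and temperatures below are illustrative assumptions:

```python
# Entropy budget of an ice cube melting in a warm room.
# The second law requires the TOTAL entropy change to be positive.

m_ice = 0.010       # kg, one small ice cube (illustrative)
L_fusion = 334_000  # J/kg, latent heat of fusion of water
T_melt = 273.15     # K, melting point
T_room = 293.15     # K, room temperature (20 degrees C)

Q = m_ice * L_fusion       # heat flowing from the room into the ice
dS_ice = Q / T_melt        # entropy gained by the cold ice
dS_room = -Q / T_room      # entropy lost by the warmer room
dS_total = dS_ice + dS_room

print(f"dS_ice   = {dS_ice:+.2f} J/K")
print(f"dS_room  = {dS_room:+.2f} J/K")
print(f"dS_total = {dS_total:+.2f} J/K  (> 0, as the second law demands)")
```

The asymmetry comes entirely from the temperatures: the same heat \(Q\) "weighs" more entropy at 273 K than at 293 K, so the exchange is irreversible in one direction only.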

1879-1884 — Stefan-Boltzmann Law: Every Hot Body Radiates, and the Hotter It Is, the More Intensely It Radiates

Josef Stefan (1835-1893) experimentally established in 1879 that the power radiated by a hot body is proportional to the fourth power of its absolute temperature. His student, Ludwig Boltzmann (1844-1906), theoretically demonstrated this law in 1884 using the principles of thermodynamics and Maxwell's theory of electromagnetic radiation. This fundamental law relates the temperature of a body to the energy it emits as radiation: \[ \Large P = \sigma \, T^4 \] \(\displaystyle P = \text{power radiated per unit area (W/m}^2\text{)}\), \(\displaystyle \sigma \approx 5.67 \times 10^{-8}\ \text{W·m}^{-2}\text{·K}^{-4} = \text{Stefan-Boltzmann constant}\), \(\displaystyle T = \text{absolute temperature (K)}\)

What the Equation Says

Any body whose temperature is above absolute zero (0 Kelvin or −273.15 °C) emits radiation. The hotter it is, the more it radiates, and this increase is not linear: if you double the temperature, the radiated power is multiplied by sixteen. The filament of a bulb: heated to about 2500°C (or ~2800 K), it emits white light and intense heat. If its temperature were halved (1400 K), the radiated power would drop by a factor of 16: the bulb would be barely dark red.
The Sun: its surface is at ~5500°C (5778 K). Each square meter of its surface radiates a colossal power of 63 million watts. After traveling 150 million kilometers through space, only ~1360 W/m² reaches the top of our atmosphere. On the ground, under the best conditions (Sun at zenith, cloudless sky), the maximum insolation is ~1000 W/m². It is this energy, despite the distance, that lights and warms our planet.
An iron: at 200°C (473 K), it radiates in the infrared, invisible to the naked eye. You feel the heat without seeing the light. If it were heated to 800°C (1073 K), it would become cherry red.
The human body: at 37°C (310 K), we emit infrared radiation. Thermal cameras capture it to "see" in the dark or detect fever.
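The examples above follow directly from \(P = \sigma T^4\), and the fourth power is what makes the law so dramatic:

```python
# P = sigma * T**4: radiated power per square metre for the bodies
# discussed above (Sun's surface, bulb filament, human skin).

sigma = 5.67e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def radiated_power(T_kelvin):
    """Black-body power radiated per square metre at temperature T."""
    return sigma * T_kelvin ** 4

print(f"Sun (5778 K):       {radiated_power(5778):.2e} W/m^2")  # ~6.3e7
print(f"filament (2800 K):  {radiated_power(2800):.2e} W/m^2")
print(f"human body (310 K): {radiated_power(310):.0f} W/m^2")

# Doubling the temperature multiplies the power by 2**4 = 16:
print(f"P(2800 K) / P(1400 K) = {radiated_power(2800) / radiated_power(1400):.1f}")
```

Real objects radiate slightly less than a perfect black body (they have an emissivity below 1), but the \(T^4\) scaling is the same.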

1877 — Boltzmann Entropy: Disorder Is the Most Probable State

Ludwig Boltzmann (1844-1906) proposed in 1877 a revolutionary interpretation of entropy. At a time when the very existence of atoms was fiercely debated, Boltzmann bet that matter is composed of invisible particles. He postulated that the entropy of a system measures the number of different ways to arrange its microscopic constituents without changing its macroscopic appearance. The more possible configurations there are, the greater the entropy. The equilibrium state is then nothing other than the most probable state, the one that corresponds to the greatest disorder: \[ \Large S = k \ln W \] \(S = \text{entropy (J/K)},\) \(k \approx 1.38 \times 10^{-23}\ \text{J/K} = \text{Boltzmann constant},\) \(W = \text{number of microstates corresponding to a given macrostate (dimensionless)}\) \(\ln = \text{natural logarithm (base } e \approx 2.718\text{)}\)

What the Equation Says

The equation links the visible world to the invisible world of atoms. Entropy is just a counter: it counts all the microscopic configurations (positions and speeds of particles) that give the same macroscopic appearance (same temperature, same pressure, same volume). The larger this number, the higher the entropy. Disorder is simply the state that has the most possible invisible versions. The deck of cards: take a new deck, perfectly ordered by suit and value. It is a very particular state (\(W = 1\) for this precise order). Shuffle the cards. The resulting disordered deck corresponds to a gigantic number of possible configurations (\(W \approx 10^{67}\)). Entropy has increased tremendously.
Coins: toss 100 coins. Getting 50 heads and 50 tails is very likely because there are countless combinations that lead to it. Getting 100 heads is only possible in one way. Disorder (balanced mix) is the most probable state.
The disappearance of a perfume: open a bottle of perfume in a room. The odor molecules, initially concentrated (\(W\) low), irreversibly disperse (\(W\) huge). They will never return to the bottle: disorder is too probable.
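The deck-of-cards example can be computed exactly with \(S = k \ln W\): one ordered arrangement versus \(52!\) shuffled ones:

```python
import math

# S = k * ln(W) for the deck of cards: W = 1 for the new, ordered deck,
# W = 52! for "any shuffled order".

k = 1.380649e-23  # J/K, Boltzmann constant

W_new = 1
W_shuffled = math.factorial(52)

S_new = k * math.log(W_new)         # ln(1) = 0: zero entropy
S_shuffled = k * math.log(W_shuffled)

print(f"W_shuffled ~ {W_shuffled:.2e}")      # ~8.07e67 arrangements
print(f"S_new      = {S_new} J/K")           # 0.0
print(f"S_shuffled = {S_shuffled:.2e} J/K")
```

In everyday units the entropy of a shuffled deck is minuscule, because \(k\) is so small; for a gas, \(W\) counts configurations of \(10^{23}\) molecules and the same formula yields the familiar thermodynamic entropies.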

1900 — Planck's Law: Energy Does Not Flow Continuously but in Discrete Packets Called Quanta

Max Planck (1858-1947) proposed in 1900 a revolutionary hypothesis to solve the enigma of the black body, a theoretical object that absorbs all the light it receives. The physicists of the time had proposed formulas that worked either for low frequencies or for high frequencies, but none were universal. Planck, seeking an explanation, assumed that the energy of the oscillators emitting light could only take discrete values, multiples of an elementary quantum: \[ \Large E = h \nu \] \(E = \text{energy of the quantum (J)},\) \(h \approx 6.626 \times 10^{-34}\ \text{J·s} = \text{Planck constant},\) \(\nu = \text{frequency of the wave (Hz)}\)

What the Equation Says

Energy does not flow continuously like water. It comes in quanta, like sugar sold in pieces that cannot be divided. But not all pieces are the same size: those of blue light (high frequency) are larger and more energetic than those of red light (low frequency). The photoelectric effect: under the effect of light, a metal can release electrons. Paradoxically, red light, no matter how intense, produces no effect, while violet light, even faint, is enough to tear them away.
The colors of neon lights: in a neon tube, excited atoms return to their ground state by emitting photons. Each photon is a quantum of light, whose energy is exactly the difference between two energy levels of the atom. Each gas (neon, argon, mercury) has a colored fingerprint. Each color corresponds to very specific energy quanta.
Lasers: laser radiation is produced by synchronized quantum jumps between atoms. All emitted photons have exactly the same energy (same color) and travel in phase. This perfect coherence, impossible with a classical source, directly results from the quantization of energy.
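The photoelectric paradox above comes down to the size of the individual quanta, \(E = h\nu = hc/\lambda\). A quick sketch for the two ends of the visible spectrum (the ~2.3 eV work function quoted in the comment is an illustrative value):

```python
# E = h * nu = h * c / lambda: the energy of one light quantum,
# compared for red and violet light.

h = 6.626e-34   # J s, Planck constant
c = 3.0e8       # m/s, speed of light
eV = 1.602e-19  # J, one electron-volt

def photon_energy_eV(wavelength_m):
    """Energy of a single photon of the given wavelength, in eV."""
    return h * c / wavelength_m / eV

print(f"red (700 nm):    {photon_energy_eV(700e-9):.2f} eV")
print(f"violet (400 nm): {photon_energy_eV(400e-9):.2f} eV")

# A metal whose work function is ~2.3 eV (illustrative value) releases
# electrons under violet light, but never under red light: each red
# photon individually lacks the energy, no matter how many arrive.
```

Intensity only changes the *number* of quanta; frequency changes their *size*, and only a big enough quantum can eject an electron.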

1902 — Law of Radioactive Decay: Atoms Are Not Eternal

Henri Becquerel (1852-1908) discovered radioactivity in 1896 by observing that uranium spontaneously emits invisible radiation. Pierre (1859-1906) and Marie Curie (1867-1934) isolated polonium and radium, demonstrating that certain elements naturally transform into others. Ernest Rutherford (1871-1937) and Frederick Soddy (1877-1956) established between 1900 and 1902 the fundamental law of radioactive decay. The number of nuclei that decay per unit time is proportional to the number of nuclei still present: \[ \Large N(t) = N_0 \, e^{-\lambda t} \quad\text{;}\quad t_{1/2} = \frac{\ln 2}{\lambda} \] \( N(t) = \text{number of atoms at time } t\), \( N_0 = \text{initial number of atoms}\), \( \lambda = \text{decay constant}\), \( t_{1/2} = \text{half-life}\)

What the Equations Say

Each unstable nucleus has a constant probability of decaying at any moment, but the exact moment is unpredictable. The law is only valid on average, over a large number of nuclei. The half-life is the time required for half of the nuclei to have decayed, regardless of the initial quantity. Carbon-14 dating: living organisms absorb carbon-14 (radioactive) during their lifetime. At their death, this intake ceases and the carbon-14 decays with a half-life of 5730 years. By measuring the remaining proportion, ancient samples can be dated up to 50,000 years.
Radon in homes: this radioactive gas from radium present in the soil seeps into homes. Its half-life of 3.8 days is short enough that it does not drift far from the ground that produces it, but long enough for it to accumulate in poorly ventilated basements, where it can be inhaled before decaying.
Nuclear medicine: a radioactive tracer (such as technetium-99m, half-life ~6 hours) is injected into the patient. Its decay emits radiation detected by a camera to visualize an organ. The half-life is chosen to be short enough to limit exposure.
Nuclear power plants: radioactive waste contains nuclei with very long half-lives (thousands or millions of years). Their danger decreases over time according to the same exponential law, but on time scales that defy imagination.
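Carbon-14 dating is just the decay law run backwards: measure the fraction \(N(t)/N_0\) remaining, invert \(N(t) = N_0 e^{-\lambda t}\), and the age follows:

```python
import math

# Carbon-14 dating: invert N(t) = N0 * exp(-lambda * t) to recover the
# age of a sample from the fraction of C-14 it still contains.

half_life_years = 5730.0
decay_const = math.log(2) / half_life_years   # lambda = ln 2 / t_half

def age_from_fraction(fraction_remaining):
    """Age of a sample, in years, given N(t)/N0."""
    return -math.log(fraction_remaining) / decay_const

print(f"50% left -> {age_from_fraction(0.50):.0f} years")   # one half-life: 5730
print(f"25% left -> {age_from_fraction(0.25):.0f} years")   # two half-lives: 11460
print(f"1% left  -> {age_from_fraction(0.01):.0f} years")   # ~38,000 years
```

Beyond roughly 50,000 years so little carbon-14 remains that the measurement drowns in background noise, which is why the method stops there.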

1904 — Lorentz Transformation: Time and Space Dilate According to the Observer's Speed

Hendrik Lorentz (1853-1928) established in 1904 the equations that allow transitioning from one reference frame to another when approaching the speed of light. He sought to explain why the experiments of Michelson and Morley (1887) had not detected the famous "ether" supposed to carry light. Henri Poincaré (1854-1912) gave these equations the name "Lorentz transformations" and showed that they form a coherent mathematical group. The transformation relates the space and time coordinates between two reference frames in relative motion: \[ \Large t' = \gamma \left(t - \frac{v x}{c^2}\right)\quad\text{;} \quad x' = \gamma (x - v t)\quad\text{with} \quad \gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}} \] \(t, x = \text{time and position in the fixed frame},\) \(t', x' = \text{time and position in the moving frame},\) \(v = \text{relative velocity (m/s)},\; c = \text{speed of light (m/s)},\) \(\gamma = \text{Lorentz factor (dimensionless)}\)

What the Equations Say

When an observer watches an object move very fast relative to them, they measure that the object's time flows more slowly and that its lengths contract in the direction of motion. These effects, imperceptible at our scale, become enormous near the speed of light. The tap that closes: imagine a tap closing gradually; the closer it gets to fully shut, the more slowly the last trickle flows, without ever quite stopping. Likewise, the closer a speed gets to \(c\), the more slowly the moving clock appears to tick: \(\gamma\) measures this slowdown.
A train traveling at high speed: if an observer on the platform simultaneously measures the two ends of a moving train, they obtain a length shorter than that of the stationary train. At our usual speeds, the effect is imperceptible, but at 90% of the speed of light, the train would appear contracted to less than half its rest length (\(\gamma \approx 2.3\)).
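The Lorentz factor \(\gamma = 1/\sqrt{1 - v^2/c^2}\) makes the contrast concrete (the 200 m rest length of the train is an illustrative value):

```python
import math

# gamma = 1 / sqrt(1 - v^2/c^2): negligible at everyday speeds,
# dramatic near the speed of light.

c = 299_792_458.0  # m/s, speed of light

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

train_length = 200.0  # m, rest length of the train (illustrative value)

for frac in (1e-7, 0.5, 0.9, 0.99):   # 1e-7 c is ~30 m/s, a real train
    g = gamma(frac * c)
    print(f"v = {frac:g} c: gamma = {g:.4f}, "
          f"measured length = {train_length / g:.2f} m")
```

At a real train's speed, \(\gamma\) differs from 1 only in the fifteenth decimal place, which is why relativity stayed hidden for so long.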

1905 — Mass-Energy Equivalence: Every Mass at Rest Is an Energy Reservoir

Albert Einstein (1879-1955) published in September 1905 a short three-page article, "Does the Inertia of a Body Depend Upon Its Energy Content?", which revolutionized our conception of matter. He established that mass and energy are two faces of the same reality. What appears to us as solid and heavy matter is in fact only "crystallized" energy, frozen in a stable form. Conversely, all energy possesses inertia, an equivalent mass: \[ \Large E = mc^2 \] \(E = \text{energy (J)},\; m = \text{mass (kg)},\; c \approx 3 \times 10^8\ \text{m/s} = \text{speed of light in vacuum}\).

What the Equation Says

A tiny bit of matter, even completely still, contains a gigantic amount of energy. One gram of matter at rest (a drop of water, a grain of sand, a bread crumb), if entirely converted into energy, could power a city of 100,000 inhabitants for a day. The mass of our body: if we added up the masses of the protons, neutrons, and electrons that make us up, we would find much less than our weight on a scale. Most of the mass does not come from the particles themselves: more than 99% of a proton's mass comes from the energy that agitates its quarks inside. We are pure confined energy.
The Sun and the stars: at the heart of the Sun, 4 million tons of matter disappear every second, converted into pure energy. Without this colossal reserve, our star would have gone out long ago: it could only have shone for a few million years, instead of the 5 billion already elapsed.
The atomic bomb: in a bomb like the one in Hiroshima, less than one gram of uranium was actually transformed into energy. Yet, this tiny amount of matter released a power equivalent to 15,000 tons of TNT. Matter holds unsuspected energy.
Nuclear power plants: the fission of a uranium nucleus releases energy because the mass of the fission products is slightly less than that of the initial nucleus. This mass difference, multiplied by \(c^2\), becomes the heat that drives the turbines. One kilogram of enriched uranium produces as much energy as 1,500,000 kg of coal or 1,000,000 kg of oil.
Antimatter: when a matter particle meets its antiparticle, they annihilate into pure energy, exactly following \(E=mc^2\). This is the perfect conversion, where all mass becomes radiation. This is how PET medical scanners (positron emission tomography) work.
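Two of the figures above can be checked directly with \(E = mc^2\): the rest energy of one gram, and the mass actually converted in a 15-kiloton explosion:

```python
# E = m * c**2: rest energy of one gram of matter, and the mass converted
# in a 15-kiloton-of-TNT explosion (the Hiroshima figure quoted above).

c = 3.0e8          # m/s, speed of light
TNT_ton = 4.184e9  # J released by one ton of TNT

E_one_gram = 0.001 * c ** 2
print(f"1 g of matter <-> {E_one_gram:.1e} J")   # 9e13 J

E_hiroshima = 15_000 * TNT_ton                   # 15 kilotons of TNT
m_converted = E_hiroshima / c ** 2
print(f"15 kt of TNT <-> {m_converted * 1000:.2f} g of mass")
```

The second result lands at about 0.7 grams, confirming the text: less than one gram of matter, fully cashed in as energy, leveled a city.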

1913 — Bohr Model: Electrons Only Occupy Discrete Orbits Around the Nucleus

Niels Bohr (1885-1962) published in 1913 a model of the atom that revolutionized physics. He relied on the nucleus of Rutherford (1911) and the quanta of Planck (1900). Classically, a rotating electron should radiate and collapse into the nucleus in a flash. Bohr postulated, on the contrary, stable and non-radiating orbits. The electron only changes orbit by a sudden jump, emitting or absorbing a photon of precise energy. This idea finally explains the spectral lines: \[ \Large E_n = -\frac{13.6 \text{ eV}}{n^2}, \quad n = 1, 2, 3, \ldots \] \(E_n = \text{energy of orbit } n \text{ (eV)},\) \(n = \text{principal quantum number (integer ≥ 1)},\) \( 13.6 \text{ eV} = \text{ionization energy of hydrogen}\)

What the Equation Says

The atom is like a very special building, whose floors are not evenly spaced: the higher you go, the closer they get. An electron occupies one floor or another; it is never in the stairs that connect them. To change floors, it must absorb or emit a quantum of light whose energy exactly corresponds to the difference between two levels. Below the ground floor (\(n=1\)), there is nothing: it is the fundamental state, the most stable. The spectral lines of hydrogen: heat hydrogen, it emits light which, when decomposed by a prism, reveals not a continuous rainbow but a series of well-separated colored lines: one red, one blue-green, one blue, and one violet. Each line corresponds to an electron jump between two Bohr orbits. Conversely, if cold hydrogen (at room temperature) is illuminated with white light, it absorbs these same colors, leaving black lines in the spectrum. The same applies to all gases: each has its own unique spectral signature. This is how astronomers, by analyzing the light from stars, identify the black or colored lines and determine the composition of their atmospheres.
Sodium vapor lamps: the yellow-orange lighting of street lamps comes from sodium atoms. Their electrons jump between two very close energy levels, emitting almost monochromatic light (two intense yellow lines). This is the signature of sodium.
Fireworks: the colors of the rockets come from excited atoms: strontium gives red, barium green, sodium yellow, copper blue. Each excited atom, returning to its normal state, emits photons with the colors of its quantum jumps.
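The visible hydrogen lines described above drop straight out of \(E_n = -13.6/n^2\): the photon emitted in a jump carries exactly the energy difference between the two levels:

```python
# E_n = -13.6 / n**2 eV: energies of Bohr orbits in hydrogen, and the
# wavelengths of the photons emitted in jumps down to n = 2 (Balmer series).

def bohr_level_eV(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV."""
    return -13.6 / n ** 2

def transition_wavelength_nm(n_high, n_low):
    """Wavelength of the photon emitted in the jump n_high -> n_low."""
    dE = bohr_level_eV(n_high) - bohr_level_eV(n_low)   # energy released, eV
    return 1239.84 / dE                                  # lambda(nm) = h*c / E

print(f"n=3 -> n=2: {transition_wavelength_nm(3, 2):.0f} nm (red H-alpha)")
print(f"n=4 -> n=2: {transition_wavelength_nm(4, 2):.0f} nm (blue-green H-beta)")
```

These computed values, about 656 nm and 486 nm, are precisely the red and blue-green lines that a prism reveals in the light of glowing hydrogen.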

1913 — Bragg's Law: Seeing the Arrangement of Atoms with X-Rays

William Lawrence Bragg (1890-1971) established in 1913 the fundamental condition for the diffraction of X-rays by crystals. He understood that the regularly spaced planes of atoms in a crystal can act as a diffraction grating for X-rays, whose wavelength is comparable to interatomic distances: \[ \Large n\lambda = 2d\sin\theta \] \(n = \text{diffraction order (integer)},\) \(\lambda = \text{wavelength of X-rays (m)},\) \(d = \text{distance between two atomic planes (m)},\) \(\theta = \text{angle between the incident ray and the atomic plane}\)

What the Equation Says

X-rays pass through the crystal like shards of light in a room full of mirrors. Some reflections return exactly together: they overlap, become more intense, and light up a bright spot. Others return out of phase: their glows blur or disappear. The iridescence of a CD: turn over a CD, you see rainbow colors. The micro-grooves of the disc, regularly spaced, diffract light like atomic planes diffract X-rays. Bragg's law explains why a certain color appears at a certain angle.
The photograph of DNA: under X-rays, it reveals the characteristic cross of the double helix. In 1952, Rosalind Franklin (1920-1958) captured this image, which allowed Crick and Watson to decipher the structure of life.
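Solving \(n\lambda = 2d\sin\theta\) for \(\theta\) tells a crystallographer where the bright spots will appear. A minimal sketch (the wavelength is that of the common copper K-alpha X-ray source; the 0.2 nm plane spacing is an illustrative value):

```python
import math

# Bragg's law: n * lambda = 2 * d * sin(theta). Solving for theta gives
# the angle at which a family of atomic planes produces a bright spot.

def bragg_angle_deg(wavelength_m, d_m, order=1):
    """n-th order Bragg angle, in degrees."""
    return math.degrees(math.asin(order * wavelength_m / (2 * d_m)))

lam = 0.154e-9  # m, Cu K-alpha X-ray wavelength (common lab source)
d = 0.20e-9     # m, spacing between atomic planes (illustrative value)

print(f"first-order reflection at theta  = {bragg_angle_deg(lam, d):.1f} deg")
print(f"second-order reflection at theta = {bragg_angle_deg(lam, d, order=2):.1f} deg")
```

Measuring these angles and running the law backwards yields \(d\), the distance between atomic planes: this is how X-ray crystallography "sees" atoms it can never image directly.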

1918 — Noether's Theorem: Every Symmetry of Nature Hides a Conservation Law

Emmy Noether (1882-1935) published in 1918 a theorem that reveals the hidden unity behind conservation laws. Physicists already knew the conservation of energy, momentum, or electric charge, but did not understand why these quantities remained invariable. Noether demonstrated that behind each conservation law lies a symmetry of nature. Every continuous transformation that does not alter the laws of physics (whether acting on time, space, or particles) corresponds to a quantity that remains immutable. In Lagrangian form: if \(\mathcal{L}\) does not depend on a coordinate \(q\) (a symmetry), the conjugate momentum \(\partial \mathcal{L}/\partial \dot{q}\) is conserved: \[ \Large \frac{\partial \mathcal{L}}{\partial q} = 0 \;\Rightarrow\; \frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}}\right) = 0 \] \(\mathcal{L} = \text{Lagrangian of the system (kinetic energy - potential energy)},\) \(q = \text{generalized position},\) \(\dot{q} = \text{generalized velocity}\)

What the Equation Says

Our physical laws are the same everywhere and at all times. If the laws of physics changed from one day to the next or from one place to another, science would not be possible. Energy is conserved because the laws of physics are the same yesterday and tomorrow. Momentum is conserved because they are the same here and there. Angular momentum is conserved because there is no privileged direction in space. Invariance by translation in time: a planet revolves around the Sun without ever stopping. If the laws of gravity changed over time, its orbit would deviate. The fact that it conserves its energy over billions of years proves that the laws are immutable.
Invariance by translation in space: a satellite in the vacuum of space, far from any influence, conserves its speed because space is the same everywhere.
Invariance by rotation: when a skater pulls in their arms, they spin faster. Their "rotational momentum" (angular momentum) remains constant. By bringing their mass closer to the axis, they decrease their resistance to turning, and their speed automatically increases to compensate.
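The skater example can be put in numbers via the conserved angular momentum \(L = I\omega\) (the moments of inertia below are invented for illustration):

```python
# Rotational invariance -> conservation of angular momentum L = I * omega.
# The skater pulls in their arms: I drops, so omega must rise to keep L fixed.
# Moments of inertia and spin rate are illustrative values.

I_arms_out = 4.0     # kg m^2, arms extended
I_arms_in = 2.0      # kg m^2, arms pulled in
omega_initial = 2.0  # rad/s, initial spin

L = I_arms_out * omega_initial   # conserved angular momentum
omega_final = L / I_arms_in      # same L, smaller I -> faster spin

print(f"L = {L} kg m^2/s (unchanged)")
print(f"spin: {omega_initial} -> {omega_final} rad/s")  # halving I doubles omega
```

No torque acts during the move, so \(L\) cannot change; the speed-up is forced on the skater by the symmetry of space itself.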

1915 — Einstein's Field Equations: Gravity Is Not a Force but the Curvature of the Fabric of the Universe

Albert Einstein (1879-1955) presented in November 1915 his theory of general relativity, a new conception of gravitation that revolutionized our vision of space and time. His field equations describe how the presence of matter and energy curves the surrounding spacetime. It is no longer a force that attracts bodies, but the geometry itself that guides them in their trajectories: \[ \Large G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} \] \(G_{\mu\nu} = \text{Einstein tensor (curvature of spacetime)},\) \(g_{\mu\nu} = \text{metric (distance in spacetime)},\) \(\Lambda = \text{cosmological constant},\) \(T_{\mu\nu} = \text{energy-momentum tensor (content in matter/energy)},\) \(G \approx 6.67 \times 10^{-11}\ \text{m}^3\text{kg}^{-1}\text{s}^{-2} = \text{universal gravitational constant},\) \(c = 299,792,458\ \text{m/s} \approx 3 \times 10^8\ \text{m/s} = \text{speed of light in vacuum}\)

What the Equation Says

Mass tells spacetime how to curve, and curved spacetime tells mass how to move. The planets simply follow the natural lines of this curved landscape: they perpetually fall around the Sun without ever reaching it. The deflection of light by the Sun: light, although massless, follows the curvature of spacetime. During the 1919 eclipse, Arthur Eddington (1882-1944) measured that the light from stars passing near the Sun is deflected exactly as predicted by general relativity.
The advance of Mercury's perihelion: Mercury's orbit slowly rotates on itself (advances by 43 arcseconds per century). General relativity explains it perfectly: it is the curvature of spacetime due to the Sun that slightly deforms the planet's trajectory. Newtonian mechanics could not explain this.
Black holes: when a massive star collapses, it curves spacetime to such an extent that nothing, not even light, can escape. The object becomes a black hole, confirmed by observations of gravitational waves and images of the event horizon.
Gravitational waves: when two colossal masses (such as black holes) revolve around each other, they create ripples in the fabric of spacetime. These undulations travel through the Universe at the speed of light, like the ripples on the surface of a pond after a stone is thrown in.

1924 — Friedmann Equations: The Universe Is Not Static, It Has a History

Alexander Friedmann (1888-1925) demonstrated in 1922 that general relativity does not require a motionless Universe: space can expand or contract. In 1924, he generalized his solutions to an infinite Universe with negative curvature, thus becoming the first to speak of an "expanding Universe". His equations describe how the scale factor \(a(t)\) (the "size" of the Universe) evolves depending on its content of matter and energy: \[ \Large H^2 \equiv \left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G \rho}{3} - \frac{k}{a^2} \] \(a(t) = \text{scale factor (dimensionless)},\) \(H = \text{expansion rate (s⁻¹)},\) \(\rho = \text{mass density (kg/m³)},\) \(G = \text{gravitational constant},\) \(k = \text{spatial curvature parameter}\)

What the Equation Says

The Universe cannot be static. The equation tells us at what speed it expands, and how this speed depends on what it contains (matter, radiation) and its shape (curvature). Depending on the density, three destinies are possible. Closed Universe (k > 0): if the density is sufficient, gravity will eventually win. The expansion slows, stops, then reverses. The Universe collapses on itself in a Big Crunch.
Flat Universe (k = 0): the density is exactly at the critical value. The expansion slows without ever stopping, asymptotically tending towards zero. This is the perfect balance between the initial momentum and gravity.
Open Universe (k < 0): the density is too low to stop the expansion. The Universe expands eternally, at a speed that tends towards a non-zero constant. The galaxies move away indefinitely, space becomes colder and emptier.
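The dividing line between these three destinies is the critical density \( \rho_c = 3H^2/(8\pi G) \), obtained by setting \(k = 0\) in the equation. A short Python sketch (illustrative, with rounded constants and \(H_0 \approx 70\) km/s/Mpc) shows how empty this balance point really is:

```python
import math

# Critical density separating the three fates: rho_c = 3 H^2 / (8 pi G).
G = 6.674e-11                  # gravitational constant (m^3 kg^-1 s^-2)
Mpc = 3.0857e22                # one megaparsec in meters
H0 = 70e3 / Mpc                # 70 km/s/Mpc converted to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3
protons_per_m3 = rho_crit / 1.673e-27      # expressed in proton masses

print(f"rho_c ~ {rho_crit:.1e} kg/m^3, i.e. ~{protons_per_m3:.1f} protons per m^3")
```

About \(9 \times 10^{-27}\) kg/m³, the equivalent of only five or six hydrogen atoms per cubic meter: the whole fate of the Universe hinges on a density smaller than any laboratory vacuum.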

1924 — De Broglie Relation: Matter Is Also a Wave, the Wave Is Also Matter

Louis de Broglie (1892-1987) proposed in 1924 a bold idea that was experimentally verified three years later. Since light, which was thought to be a wave, can behave like a particle (photon), why shouldn't matter, which was thought to be a particle, also behave like a wave? He postulated that every material particle is associated with a wave, whose wavelength is inversely proportional to its momentum: \[ \Large \lambda = \frac{h}{p} \] \(\lambda = \text{associated wavelength (m)},\) \(h \approx 6.626 \times 10^{-34}\ \text{J·s} = \text{Planck constant},\) \(p = \text{momentum of the particle (kg·m/s)}\)

What the Equation Says

Everything that has momentum also has a wavelength. The more massive or faster a particle is, the smaller its wavelength. For everyday objects, it is so tiny that it is imperceptible. But for electrons or atoms, it becomes measurable: matter then reveals its wave nature. The iridescence of butterfly wings: the magnificent changing colors of certain butterfly wings (like the Morpho) do not come from pigments but from microscopic structures shaped like a grid. When light strikes them, it interferes as on a CD. This phenomenon is purely wave-like. Now, if you replace light with a beam of electrons, you observe exactly the same type of iridescence on a screen: the electrons bounce off the crystal lattice and interfere with each other, proving their wave nature.
The electron microscope: the wavelength of an accelerated electron can be thousands of times smaller than that of visible light. By using electrons instead of photons, we can observe much finer details, down to the atomic scale. This is the principle of the electron microscope.
The matter wave of an atom: entire atoms, cooled near absolute zero, can interfere like waves. Today, experiments are conducted where rubidium atoms pass through two slits and produce interference fringes, proving that the wave-particle duality applies to all matter.
The matter waves of a tennis ball: a 50 g tennis ball thrown at 100 km/h has a de Broglie wavelength of about \(10^{-34}\) m, some \(10^{19}\) times smaller than a proton. No instrument can detect such a tiny undulation. The wave nature of matter only appears at the microscopic scale.
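The contrast between the electron and the tennis ball can be computed directly from \( \lambda = h/p \). A minimal Python sketch (rounded constants, non-relativistic momentum \(p = mv\)):

```python
h = 6.626e-34   # Planck constant (J·s)

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h / p, with p = m v (non-relativistic momentum)."""
    return h / (mass_kg * speed_m_s)

# Electron at 1e6 m/s: wavelength comparable to atomic spacings.
lam_electron = de_broglie_wavelength(9.109e-31, 1e6)

# 50 g tennis ball at 100 km/h (~27.8 m/s): an absurdly tiny wavelength.
lam_ball = de_broglie_wavelength(0.050, 27.8)

print(f"electron: {lam_electron:.2e} m, tennis ball: {lam_ball:.2e} m")
```

The electron's wavelength (a fraction of a nanometer) matches the spacing of atoms in a crystal, which is why electron diffraction works; the ball's wavelength is hopelessly beyond any measurement.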

1926 — Schrödinger Equation: How a Probability Wave Evolves Over Time

Erwin Schrödinger (1887-1961) published in 1926 the fundamental equation of quantum mechanics. He relied on the idea of Louis de Broglie (1892-1987) that matter has a wave nature, and sought an equation that describes how these waves evolve. Unlike classical waves (sound, water waves), the Schrödinger wave is not a material wave but a probability wave: its value at each point indicates the probability of finding the particle there. The equation tells how this wave propagates, deforms, and interferes with itself over time: \[ \Large i\hbar \frac{\partial \Psi}{\partial t} = \hat{H} \Psi \] \(i = \text{imaginary unit},\) \(\hbar = \frac{h}{2\pi} \approx 1.055 \times 10^{-34}\ \text{J·s} = \text{reduced Planck constant},\) \(\Psi = \text{wave function (probability)},\) \(\hat{H} = \text{Hamiltonian operator (total energy of the system)}\)

What the Equation Says

Just as Newton's laws predict where a planet will be tomorrow, the Schrödinger equation predicts how the probability cloud surrounding a particle evolves. It does not give a precise position; it establishes a map of the places where the particle is likely to be found. This map spreads, undulates, and wrinkles like a sheet being shaken. And as long as we do not look, the particle is everywhere on the map. It is the act of observing that forces it to choose a place. Throw a stone into a pond: a single wave propagates in concentric circles. If this wave encounters a barrier with two holes, it passes through both openings. On the other side, the two new waves overlap, creating areas where the water is agitated and others where it remains calm. The Schrödinger equation describes the same phenomenon for a particle: its probability wave can pass through two obstacles at once and interfere with itself.
Sprinkle fine sand on a metal plate: make the plate vibrate with a bow. The sand gathers on the lines where the plate does not move (the nodes), forming geometric patterns (circles, squares, stars) depending on the frequency. These patterns are the image of atomic orbitals. The electrons in an atom form equally precise patterns, but in three dimensions.
Look at a CD under light: rainbow iridescence appears. The light reflects off the micro-grooves and interferes with itself, some colors canceling out, others reinforcing. The Schrödinger equation predicts that a beam of electrons produces exactly the same patterns when passing through a crystal. Matter undulates like light.
A puff of smoke in a room: it is impossible to say where each molecule will go. Yet, the smoke patch spreads according to a precise law, like an ink stain in water. The Schrödinger equation describes this spread, but for a probability wave. The quantum particle is everywhere at once in the patch, like the smoke.
The tunnel effect: throw a ball against a wall, it bounces back. In the quantum world, a particle can sometimes pass through the wall without damaging it. Its probability wave does not stop abruptly at the obstacle; it seeps in and gradually weakens, like a sound passing through a partition. If the wall is thin enough, a tiny part of the wave emerges on the other side. This residual wave is the probability that the particle has passed through.
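The spreading of the probability cloud can be made quantitative in the simplest case. For a free particle, the Schrödinger equation has an exact Gaussian solution whose width grows over time; a short Python sketch (illustrative values for an electron):

```python
import math

hbar = 1.055e-34   # reduced Planck constant (J·s)
m_e = 9.109e-31    # electron mass (kg)

def packet_width(sigma0, t, m):
    """Width of a free Gaussian wave packet after time t:
    sigma(t) = sigma0 * sqrt(1 + (hbar t / (2 m sigma0^2))^2),
    the exact solution of the free Schrodinger equation."""
    return sigma0 * math.sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)

sigma0 = 1e-10                                # electron localized to ~1 angstrom
sigma_fs = packet_width(sigma0, 1e-15, m_e)   # width after one femtosecond

print(f"width grows from {sigma0:.1e} m to {sigma_fs:.1e} m in 1 fs")
```

In a single femtosecond the electron's probability cloud is already several times wider: the "smoke patch" of the analogy above spreads astonishingly fast at atomic scales.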

1927 — Heisenberg's Uncertainty Principle: Quantum Fuzziness Is a Property of Reality

Werner Heisenberg (1901-1976) stated in 1927 a fundamental principle that imposes an absolute limit on what we can know about the quantum world. Unlike classical physics, we cannot measure a quantity with infinite precision. Heisenberg showed that certain pairs of quantities (such as position and momentum) are linked by a fundamental fuzziness. The more precisely we know one, the less we can know the other. This is not a flaw in our instruments, but an intrinsic property of reality: nature itself is fuzzy at this scale. If \(\Delta x\) is small, then \(\Delta p\) is large: \[ \Large \Delta x \cdot \Delta p \geq \frac{\hbar}{2} \] \(\Delta x = \text{uncertainty in position (m)},\) \(\Delta p = \text{uncertainty in momentum (kg·m/s)},\) \(\hbar = \frac{h}{2\pi} \approx 1.055 \times 10^{-34}\ \text{J·s}\)

What the Equation Says

In the world of the infinitely small, there is an insurmountable limit. We cannot know everything about a particle: it is not a flaw in our measurements, it is how the world is made. The particle does not have a precise position and speed; it is only a cloud of possibilities, and it is by observing it that we force this cloud to condense into a precise reality. Photograph a bird with a very short exposure time: you will clearly see its feathers (precise position), but you will not be able to know how fast it was flying (unknown speed). Lengthen the exposure time: the bird becomes a blurred streak (uncertain position), but this streak reveals its speed. You cannot have both at the same time.
Electron microscopes: to see a tiny object, it must be illuminated with a wave of short wavelength, shorter than the object itself. This requires fast electrons, hence a large momentum. But the more precisely we know the momentum of these electrons, the less we can know their position. The uncertainty principle sets the ultimate limit of what we can see: there is a fundamental fuzziness that prevents knowing both the position and speed of what we observe.
A tightrope walker holds a long pole to stay stable: the longer their pole (very stable position), the more time it takes to move it (slow and uncertain speed). To change position quickly (fast speed), they must shorten their pole, but then they wobble more (unstable position). We cannot have both a perfectly stable position and great agility.
The electron in the atom: think of someone trying to hold a stick in vertical balance on their hand. To keep it stable, they must constantly move their hand, never too slowly, never too quickly, always in perpetual fuzziness. The electron is condemned to perpetual fuzziness; too precise, it would fall into the nucleus; too fast, it would escape. The uncertainty principle keeps it in a cloud, neither too close nor too far, thus stabilizing all the matter of reality.
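The stability of the electron cloud follows directly from the inequality. A minimal Python sketch (rounded constants) computes the smallest momentum uncertainty allowed for an electron confined to an atom-sized region:

```python
hbar = 1.055e-34   # reduced Planck constant (J·s)
m_e = 9.109e-31    # electron mass (kg)

def min_momentum_uncertainty(delta_x):
    """Smallest Delta p allowed by Delta x * Delta p >= hbar / 2."""
    return hbar / (2 * delta_x)

# Electron confined to an atom (~1e-10 m): its speed cannot be pinned
# down to better than a few hundred km/s -- hence the stable electron cloud.
dp = min_momentum_uncertainty(1e-10)
dv = dp / m_e

print(f"Delta p >= {dp:.1e} kg*m/s, i.e. Delta v >= {dv:.0f} m/s")
```

Squeezing the electron into the nucleus (\(\Delta x \approx 10^{-15}\) m) would multiply this minimum speed a hundred-thousand-fold, which is energetically forbidden: that is why atoms do not collapse.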

1927 — Vacuum Energy: The Vacuum Is Not Nothingness, It Bubbles

The uncertainty principle of Werner Heisenberg (1901-1976) in 1927 forbids a system from being perfectly still: a perfectly fixed position would require a completely indeterminate momentum, so perfect rest is impossible. Consequently, even the lowest energy state (the vacuum) retains residual activity, inevitable energy fluctuations. This energy is sufficient to make virtual particle pairs emerge from the vacuum, which immediately annihilate: \[ \Large \Delta E \cdot \Delta t \geq \frac{\hbar}{2} \] \(\Delta E = \text{uncertainty in energy (J)},\) \(\Delta t = \text{uncertainty in time (s)},\) \(\hbar = \frac{h}{2\pi} \approx 1.055 \times 10^{-34}\ \text{J·s} = \text{reduced Planck constant}\)

What the Equation Says

The vacuum is not empty. It teems with ghost particles that borrow energy from the future to exist for an instant, then return it. The shorter the time interval, the greater the energy fluctuation can be. This is how matter-antimatter pairs are born, and all the invisible dances that populate nothingness. The Casimir effect: two perfectly parallel mirrors placed in a vacuum attract each other weakly. Why? Between the plates, the space is too narrow to accommodate all the waves of the vacuum; only the shortest survive. Outside, all the waves dance freely. The richer external vacuum therefore pushes the plates against each other.
The always agitated sea: even in calm weather, the sea is never perfectly flat. Infinitesimal wavelets, tiny ripples, ceaseless fluctuations run across its surface. It is the energy of the vacuum: a permanent agitation, even when everything seems still.
Dust in a sunbeam: in a dark room, we see nothing. But when a sunbeam passes through the air, myriads of dancing dust particles appear, revealing an agitation previously invisible. The vacuum is this dark room, and the virtual particles are this dust that only very intense radiation can reveal.
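The energy-time relation also puts a number on how fleeting these ghost particles are. A short Python sketch (rounded constants, taking \(\Delta E = 2m_ec^2\) for a virtual electron-positron pair):

```python
c = 299_792_458.0   # speed of light (m/s)
hbar = 1.055e-34    # reduced Planck constant (J·s)
m_e = 9.109e-31     # electron mass (kg)

# A virtual electron-positron pair "borrows" Delta E = 2 m c^2.
# The uncertainty relation caps how long it may exist before repaying:
delta_E = 2 * m_e * c**2
delta_t = hbar / (2 * delta_E)   # the maximum borrowing time

print(f"pair energy {delta_E:.2e} J, lifetime at most ~{delta_t:.1e} s")
```

About \(3 \times 10^{-22}\) seconds: in that time light travels less than the width of an atomic nucleus, which is why these pairs remain invisible to any direct observation.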

1928 — Dirac Equation: The Unification That Revealed Antimatter

Paul Dirac (1902-1984) published in 1928 an equation that marries quantum mechanics and special relativity. The Schrödinger equation, valid for slow electrons, fails near the speed of light. Dirac constructed an equation that treats time and space equally. Its solution is groundbreaking: the wave function becomes a four-component spinor that contains, without seeking them, states of negative energy. These states, far from being an error, reveal the existence of antimatter, discovered in 1932 by Carl Anderson (1905-1991): \[ \Large i\hbar \frac{\partial \psi}{\partial t} = \left( c \boldsymbol{\alpha} \cdot \mathbf{p} + \beta m c^2 \right) \psi \] \(\psi = \text{four-component spinor},\) \(\boldsymbol{\alpha},\) \(\beta = \text{Dirac matrices (4×4)},\) \(\mathbf{p} = \text{momentum operator},\) \(m = \text{mass of the electron},\) \(c = \text{speed of light},\) \(\hbar = \text{reduced Planck constant}\)

What the Equation Says

The equation perfectly describes the electron in the atom and reveals its spin, this internal rotation property, without needing to invent it. But it hides much more. Just as the equation \(x^2 = 4\) has two answers (+2 and -2), the Dirac equation has two families of solutions. For every electron of positive energy, there is a twin of negative energy. Far from being a mere artifact, these solutions announce the existence of another world: that of antimatter. Stand in front of a mirror: you see your twin, identical in every way, but whose right hand is your left hand. The Dirac equation predicts for each matter particle a double in antimatter, symmetric but inverted. The electron and the positron are like you and your reflection: same properties, opposite charges.
A wave on the surface of water: it has a crest (positive energy), but it cannot exist without a trough (negative energy) that precedes or follows it. The Dirac equation shows that matter (crest) and antimatter (trough) are inseparable. When crest and trough meet, they cancel each other out: the surface becomes flat again, and the wave's energy dissipates into pure energy.
A photographic film: the image we see (the positive) is only half the story. Its negative also exists, latent, inverted, ready to reveal its double if exposed to light. The Dirac equation works like this negative: it shows that for each matter particle (positive image) there corresponds an antiparticle (its negative) that slumbers in the vacuum. Give enough energy, and this negative becomes embodied in an actual antimatter particle.

1927-1929 — Hubble-Lemaître Law: The Farther Away a Galaxy Is, the Faster It Recedes

Georges Lemaître (1894-1966) published in 1927 an article in which he deduced from the equations of general relativity that the Universe must be expanding, and that the speed at which galaxies move away is proportional to their distance. In 1929, the American astronomer Edwin Hubble (1889-1953) confirmed this law through observation. The writing of the equation is a modern convention, collectively attributed to these two founding fathers: \[ \Large v = H_0 \times d \] \(v = \text{recession speed of the galaxy (km/s)},\) \(d = \text{distance of the galaxy (Mpc)},\) \(H_0 \approx 70\ \text{km/s/Mpc} = \text{Hubble-Lemaître constant (current expansion rate)}\)

What the Equation Says

Imagine points distributed on a balloon being inflated: each point moves away from the others, and the more the balloon is inflated, the faster the points seem to move away. It is not the points that are moving but the surface of the balloon that is swelling. The equation tells us that space is expanding, carrying galaxies like points drawn on an inflating balloon. Put raisins in cake batter: the batter rises, pushing the raisins apart. Each raisin sees its neighbors moving away. The farther apart two raisins are in the batter, the faster they seem to move away. Yet, they are not moving in the batter: it is the batter itself that is expanding.
Ants on a rubber band: stretch the rubber band, the ants move away from each other without walking. An ant looking at its neighbor will see it moving away all the faster the farther apart they were initially. This is exactly what Hubble measures with galaxies.
A rubber strip with marks every centimeter: stretch it at a constant rate, for example 1% per second. Mark #10 and mark #11, 1 cm apart initially, move away at 0.01 cm/s. Mark #1 and mark #100, 99 cm apart, move away at 0.99 cm/s. This 1% per second is our Hubble constant: it sets the expansion rate.
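The law itself is simple enough to compute by hand, and its inverse \(1/H_0\) gives a natural age scale for the expansion. A minimal Python sketch (rounded constants):

```python
H0 = 70.0          # Hubble-Lemaitre constant (km/s/Mpc)
Mpc_m = 3.0857e22  # one megaparsec in meters
year_s = 3.156e7   # one year in seconds

def recession_speed(distance_mpc):
    """v = H0 * d, in km/s."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at 7000 km/s.
v = recession_speed(100)

# 1/H0 gives a natural timescale for the expansion (~14 billion years).
hubble_time_yr = (Mpc_m / (H0 * 1e3)) / year_s

print(f"v = {v:.0f} km/s, Hubble time ~ {hubble_time_yr:.1e} yr")
```

The "Hubble time" \(1/H_0 \approx 14\) billion years is remarkably close to the measured age of the Universe, a first hint that running the expansion backwards leads to a beginning.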

1926 — Klein-Gordon Equation: Spinless Relativistic Particles

Oskar Klein (1894-1977) and Walter Gordon (1893-1939) published in 1926 the relativistic version of the Schrödinger equation for spin-zero particles. Schrödinger himself had derived it first, but abandoned it because it did not give the correct spectrum of the hydrogen atom; the spin of the electron, then unknown, was missing. The Klein-Gordon equation simply follows from the relativistic relation \(E^2 = p^2c^2 + m^2c^4\) to which the rules of quantum mechanics are applied: \[ \Large \left( \Box + \frac{m^2 c^2}{\hbar^2} \right) \psi = 0 \quad \text{with} \quad \Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 \] \(\Box = \text{d'Alembertian operator},\) \(\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} = \text{Laplacian (sum of the second spatial derivatives)},\) \(m = \text{mass of the particle (kg)},\) \(c = \text{speed of light (m/s)},\) \(\psi = \text{field (or wave function)},\) \(\hbar = \frac{h}{2\pi} \approx 1.055 \times 10^{-34}\ \text{J·s} = \text{reduced Planck constant}, \) \(\frac{\partial^2}{\partial t^2} = \text{second partial derivative with respect to time}\)

What the Equation Says

While the Dirac equation describes matter (electrons, protons, neutrons), the Klein-Gordon equation describes the messengers of forces: bosons, which transmit interactions between particles. It applies to particles with integer spin, called bosons, such as pions (which ensure the cohesion of the atomic nucleus) and the famous Higgs boson, discovered at CERN in 2012. A swing oscillates regularly: the equation says that its shadow on the ground swings in the opposite direction, as if an invisible twin swing accompanied its movement. Nature works this way: for every ordinary particle (the real swing) there corresponds an antiparticle (its shadow) that is identical but inverted in time or charge.
A soap bubble: it is its surface tension that keeps it round. The smaller the bubble, the greater its internal pressure and the more it resists deformation. The Klein-Gordon equation follows an inverse logic: the more massive the particle, the harder its field is to curve, like a thick and rigid ice sheet, whereas a light field would be fluid and undulating like liquid water.
A stone falling into water: the pressure wave propagates at a finite speed, about that of sound in water. If the water were incompressible like a block of concrete, this wave would go faster; any point in the water would instantly feel the impact. In the Klein-Gordon equation, mass plays this role: without mass, the field's waves travel at the speed of light; with mass, they slow down. The more massive the particle, the "heavier" its field is to set in motion, the more slowly its waves propagate.
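This slowing of massive waves is encoded in the dispersion relation \(\omega^2 = c^2k^2 + (mc^2/\hbar)^2\) that follows from the equation. A short Python sketch (an illustrative wavenumber and an approximate pion mass, chosen here for the example) compares the group velocity of a massless and a massive field:

```python
import math

c = 299_792_458.0   # speed of light (m/s)
hbar = 1.055e-34    # reduced Planck constant (J·s)

def omega(k, m):
    """Klein-Gordon dispersion relation: omega^2 = c^2 k^2 + (m c^2 / hbar)^2."""
    return math.sqrt((c * k)**2 + (m * c**2 / hbar)**2)

def group_velocity(k, m):
    """d(omega)/dk = c^2 k / omega: the speed at which wave energy travels."""
    return c**2 * k / omega(k, m)

k = 1e10                 # an illustrative wavenumber (1/m)
m_pi = 2.4e-28           # approximate pion mass (kg)

v_massless = group_velocity(k, 0.0)    # massless field: waves move at c
v_massive = group_velocity(k, m_pi)    # massive field: waves crawl

print(f"massless: {v_massless:.3e} m/s, massive: {v_massive:.3e} m/s")
```

For a massless field the energy travels at exactly \(c\); for the massive pion field at this wavenumber, it crawls at a few kilometers per second, just as the analogy suggests.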

1926 — Lotka-Volterra Equations: Eternal Oscillations Between Prey and Predators

Alfred Lotka (1880-1949) published in 1925 a model describing oscillations in chemical reactions. Independently, Vito Volterra (1860-1940) established in 1926 the same equations to explain a surprising observation: during the First World War, as fishing decreased in the Adriatic, the proportion of predatory fish increased. Volterra showed that the interaction between prey and predators naturally produces cycles, without any external influence. These two coupled equations describe the eternal dance of populations that balance without ever stabilizing: \[ \Large \frac{dx}{dt} = \alpha x - \beta xy, \quad \frac{dy}{dt} = \delta xy - \gamma y \] \( x = \text{prey population},\) \( y = \text{predator population},\) \( t = \text{time},\) \( \alpha = \text{prey growth rate},\) \( \beta = \text{predation rate},\) \( \delta = \text{prey-predator conversion rate},\) \( \gamma = \text{predator mortality rate}\)

What the Equations Say

The Lotka-Volterra equations tell the endless story of natural selection in action. When prey abounds, predators thrive and multiply. Too numerous, they deplete their prey, which decline. Starving, predators decrease in turn, allowing prey to recover. And the cycle begins again, perpetually. The lynx and hares of Canada: the fur records of the Hudson's Bay Company, over nearly a century, show regular cycles of about ten years. The peaks of lynx always follow the peaks of hares with a characteristic delay.
The fish of the Adriatic: sharks and other predators were more numerous in Italian catches just after the war. Less fishing meant more prey, hence more predators.
Aphids and ladybugs: in a garden, the explosion of aphids in spring attracts ladybugs. These multiply, devour the aphids, then disappear for lack of food, allowing a new colony of aphids. Every gardener unknowingly observes the Lotka-Volterra equations.
Epidemics and immunized populations: healthy people play the role of prey, infectious sick people that of predators. The epidemic dies out when enough people are immunized, just as predators die when prey becomes scarce.
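The cycles described above emerge directly from integrating the two coupled equations. A minimal Python sketch (illustrative rates, not fitted to any real population, integrated with a standard 4th-order Runge-Kutta step):

```python
# Predator-prey cycles: dx/dt = a x - b x y, dy/dt = d x y - g y.
a, b, d, g = 1.0, 0.1, 0.075, 1.5   # illustrative rates, not fitted data

def deriv(x, y):
    return a * x - b * x * y, d * x * y - g * y

def rk4_step(x, y, dt):
    """One 4th-order Runge-Kutta step for the coupled system."""
    k1x, k1y = deriv(x, y)
    k2x, k2y = deriv(x + dt/2 * k1x, y + dt/2 * k1y)
    k3x, k3y = deriv(x + dt/2 * k2x, y + dt/2 * k2y)
    k4x, k4y = deriv(x + dt * k3x, y + dt * k3y)
    x += dt / 6 * (k1x + 2*k2x + 2*k3x + k4x)
    y += dt / 6 * (k1y + 2*k2y + 2*k3y + k4y)
    return x, y

x, y, dt = 10.0, 5.0, 0.01          # initial prey, predators, time step
history = [(x, y)]
for _ in range(5000):               # 50 time units: several full cycles
    x, y = rk4_step(x, y, dt)
    history.append((x, y))

prey = [p for p, _ in history]
print(f"prey oscillates between {min(prey):.1f} and {max(prey):.1f}")
```

The populations rise and fall around the equilibrium point \((\gamma/\delta,\ \alpha/\beta)\) without ever settling on it: the closed loop in the prey-predator plane is the mathematical signature of the lynx-hare cycles.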

1932 — Von Neumann Equation: Quantum Mechanics at the Border of the Classical World

John von Neumann (1903-1957) distinguished in 1932 two types of evolution in quantum mechanics. Where the Schrödinger equation describes a pure and isolated quantum state (pure wave function), the von Neumann equation describes a statistical ensemble of states, accounting for the uncertainty about the system's actual state. It applies where the quantum system meets the outside world, at the border where the infinitely small tips into our classical reality: \[ \Large i\hbar \frac{\partial \hat{\rho}}{\partial t} = [\hat{H}, \hat{\rho}] \] \(\hat{\rho} = \text{density operator (statistical state of the system)},\) \(\hat{H} = \text{Hamiltonian (total energy)},\) \([\hat{H}, \hat{\rho}] = \hat{H}\hat{\rho} - \hat{\rho}\hat{H} = \text{commutator},\) \(\hbar = \text{reduced Planck constant}\)

What the Equation Says

The von Neumann equation is the tool that allows following a quantum system when it is no longer alone, when it touches the outside world. It describes how the strange properties of the quantum world (superposition, entanglement) gradually fade to give way to the classical reality we know. A deck of cards: perfectly ordered (pure state), it represents a quantum system of which we know everything. Shuffle the cards: you lose the exact order, but you know there is a 1/52 probability for each card in each position (\(52! \approx 8.07 \times 10^{67}\) possible orderings). This is the density operator. The equation describes how this mixture evolves if you continue to shuffle.
In a silent room: each word stands out clearly; it is a pure state. In a noisy crowd, voices blend and form only a hubbub: it is a statistical mixture. The von Neumann equation describes how these quantum voices blur upon contact with the environment, until they become indistinguishable.
A drop of ink fallen into a glass of water: at first, it forms a well-localized spot (pure state). Then it diffuses, spreads, dilutes until it obtains a uniform color (mixture). The equation describes this diffusion of quantum information into the environment; it is decoherence.
Chinese shadows on a wall: a single lamp projects a clear shadow (pure state). Several lamps lit create multiple, overlapping, blurred shadows (mixture). The equation tells how a quantum system, bombarded by various interactions, sees its clear and coherent state gradually blur, until it becomes a simple statistical cloud.
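For an isolated system, the von Neumann equation predicts purely unitary evolution: the trace (total probability) and the purity \( \mathrm{Tr}(\hat{\rho}^2) \) are both conserved; decoherence only appears once a coupling to the environment is added. A toy two-level Python sketch (natural units, an assumed diagonal Hamiltonian chosen for illustration) verifies these conservation laws:

```python
import cmath

hbar = 1.0   # natural units for this toy model

def matmul(A, B):
    """Product of two 2x2 complex matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose of a 2x2 matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

# Two-level system with energies E0 = 0, E1 = 1. For a time-independent H,
# the von Neumann equation is solved by rho(t) = U rho U+, with
# U = diag(exp(-i E0 t / hbar), exp(-i E1 t / hbar)).
def evolve(rho, t, E0=0.0, E1=1.0):
    U = [[cmath.exp(-1j * E0 * t / hbar), 0],
         [0, cmath.exp(-1j * E1 * t / hbar)]]
    return matmul(matmul(U, rho), dagger(U))

rho0 = [[0.5, 0.5], [0.5, 0.5]]       # pure superposition state
rho_t = evolve(rho0, t=1.0)

trace = (rho_t[0][0] + rho_t[1][1]).real                           # total probability
purity = sum((matmul(rho_t, rho_t)[i][i]).real for i in range(2))  # Tr(rho^2)

print(f"trace = {trace:.3f}, purity = {purity:.3f}")
```

Both quantities stay exactly 1: an isolated pure state never blurs on its own. The ink drop only diffuses, and the purity only drops below 1, when extra environment terms are added to the equation.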

1935 — Nuclear Binding Energy: The Force That Unites Atomic Nuclei

Hideki Yukawa (1907-1981) proposed in 1935 a simple and powerful idea: if the protons in the nucleus should repel each other because of their electric charge, there must be a glue that holds them together with the neutrons. He then proposed a new particle, the meson, which creates an extremely intense adhesive force. This force, called the strong interaction, is one of the four fundamental forces of nature. It manifests itself by a mass defect: the mass of a nucleus is always less than the sum of the masses of its constituents, the difference being converted into energy according to Einstein's famous formula: \[ \Large E_b = \left(Z m_p + N m_n - M\right) c^2 \] \( E_b = \text{binding energy of the nucleus (J)},\) \( Z = \text{number of protons},\) \( N = \text{number of neutrons},\) \( m_p = \text{mass of the proton},\) \( m_n = \text{mass of the neutron},\) \( M = \text{mass of the nucleus},\) \( c = \text{speed of light}\)

What the Equation Says

The equation tells us that the nucleus possesses a hidden energy reserve, and that this energy comes precisely from the mass that disappeared when the nucleons stuck together. A wall of cemented bricks: a wall weighs a little less than the sum of the bricks + cement + water separately. It is as if the wall had stored some of the energy of its bricks to solidify. As long as it retains this energy, it remains stuck. To dismantle it, you must return what it has absorbed.
Compress a spring: it stores potential energy. Release it, and it returns this energy. In a nucleus, the nucleons are compressed by the strong interaction. If you separate them, you must provide energy to overcome this attraction. The binding energy is the energy of the nuclear spring.
The uranium nucleus is like a hyper-compressed spring: its 235 nucleons (protons and neutrons) are held together by a colossal force. Yet, the nucleus weighs about 0.8% less than the sum of its 235 constituents weighed separately. This mass defect has transformed into binding energy, the cement that prevents the nucleus from exploding under the repulsion of the protons. When a neutron strikes this nucleus, it destabilizes it, the spring relaxes, and this binding energy is suddenly released as heat: this is nuclear fission.
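The mass defect can be computed for any nucleus from the formula. A minimal Python sketch for helium-4 (standard tabulated masses, rounded) recovers its well-known binding energy of about 28.3 MeV:

```python
# Binding energy of helium-4: E_b = (Z m_p + N m_n - M) c^2.
# Masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
u_to_MeV = 931.494
m_p = 1.007276      # proton mass (u)
m_n = 1.008665      # neutron mass (u)
M_he4 = 4.001506    # helium-4 nucleus mass (u)

Z, N = 2, 2
mass_defect = Z * m_p + N * m_n - M_he4        # in u
E_b = mass_defect * u_to_MeV                   # in MeV

print(f"mass defect {mass_defect:.5f} u -> binding energy {E_b:.1f} MeV")
```

That 28.3 MeV is why fusing hydrogen into helium powers the Sun: roughly 0.7% of the fuel's mass vanishes into pure energy.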

1948 — Shannon Entropy: Putting a Number on the Amount of Information

Claude Shannon (1916-2001) published in 1948 a foundational article that gave birth to the theory of information. Until then, the notion of information was vague and subjective. Shannon proposed a precise mathematical definition: the information contained in a message is related to its degree of surprise or uncertainty. The more unpredictable an event is, the more information it provides. He borrowed from physics the term entropy to name this measure, thus formalizing what would become the universal language of digital communications: \[ \Large H = -\sum_{i} p_i \log_2 p_i \] \(H = \text{Shannon entropy (in bits)},\) \(p_i = \text{probability of appearance of symbol } i,\) \(\sum_{i} \text{denotes the sum over all possible symbols}\)

What the Equation Says

Shannon entropy is a surprise counter. A predictable message (like a rigged coin that always lands on heads) has zero entropy: it teaches nothing. An unpredictable message has high entropy: each symbol provides a lot of information. The \(\log_2\) measures this information in bits, the universal currency of the digital world. This limit is fundamental: a message cannot be compressed below its entropy without losing information. The colors of the sky: a blue and uniform sky, almost entirely predictable, carries low entropy because it provides little new information, while a chaotic sky with swirling clouds, whose rain and wind cannot be anticipated, has high entropy: the more the sky surprises, the more it informs.
Forests: a pine forest from a monoculture, where each tree repeats the previous one, has low entropy because the scene offers little variety and almost no surprise, while a primary forest in autumn, teeming with colors, shapes, and different densities, manifests high entropy: the diversity of possibilities is such that each glance reveals a new configuration, like a visual message rich in information.
The sea: an oily sea, whose future state is almost certain, corresponds to low entropy, while a rough sea, where the shape of the next wave remains unpredictable, translates to high entropy: the more the surface surprises, the more information it delivers.
Secure passwords: a password "123456" has very low entropy: it is predictable. A password like "G7k#9pL$2" has high entropy because each character is unpredictable. Shannon entropy exactly measures the number of security bits in your password.
ZIP files: when you compress a text file into a ZIP, the computer analyzes the frequency of the letters. The very frequent "e" is encoded with fewer bits than the rare "z". The theoretical minimum size of the compressed file cannot go below Shannon's entropy. This is the absolute limit, regardless of the software.
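The formula is easy to evaluate directly. A short Python sketch (illustrative probabilities; the password estimate assumes characters drawn uniformly at random from a 70-symbol alphabet):

```python
import math

def shannon_entropy(probabilities):
    """H = -sum p_i log2 p_i, in bits (terms with p = 0 contribute nothing)."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

fair_coin = shannon_entropy([0.5, 0.5])        # maximum surprise: 1 bit
rigged_coin = shannon_entropy([1.0, 0.0])      # no surprise: 0 bits
biased_coin = shannon_entropy([0.9, 0.1])      # in between: ~0.47 bits

# A 9-character password over a 70-symbol alphabet, chosen uniformly:
password_bits = 9 * math.log2(70)              # ~55 bits of entropy

print(f"fair={fair_coin}, rigged={rigged_coin}, biased={biased_coin:.2f}, "
      f"password={password_bits:.1f} bits")
```

A biased coin that lands heads 90% of the time carries less than half a bit per toss, which is exactly why a compressor can shrink a predictable file: it only has to store the surprises.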

1963 — Chaos Theory: A Deterministic System Can Be Unpredictable

Edward Lorenz (1917-2008) discovered in 1963 a phenomenon that revolutionized our conception of prediction. By simulating a simplified model of atmospheric convection, he realized that a tiny variation in initial conditions quickly produced completely different results. He called this the butterfly effect: can the flap of a butterfly's wings in Brazil trigger a tornado in Texas? The system is nevertheless deterministic (its equations are perfectly known), but its evolution is unpredictable in the long term due to this extreme sensitivity. This is the birth of chaos theory, which today invades all fields, from weather to biology and economics: \[ \Large \frac{dx}{dt} = \sigma (y - x),\quad \frac{dy}{dt} = x (\rho - z) - y,\quad \frac{dz}{dt} = xy - \beta z \] \(\displaystyle x = \text{convection rate},\) \(y = \text{horizontal temperature gradient},\) \(z = \text{vertical temperature gradient},\) \(\sigma, \rho, \beta = \text{system parameters (dimensionless numbers)}\)

What the Equations Say

Chaos is not disorder. It is a hidden order, a deterministic dance in three dimensions, so sensitive that the slightest breath changes the choreography. We know the laws, but we cannot predict the future because we will never know the initial state with infinite precision. Uncertainty grows exponentially with time. The game of billiards: strike a billiard ball against another. A tiny difference in the shooting angle can send it on radically different trajectories after a few bounces. Yet, the laws of physics are perfectly known. This is chaos: perfect laws, but an unpredictable future.
Snowflakes: no two snowflakes are identical. Each tells the chaotic story of its fall through the atmosphere, with its encounters with dust, temperature variations, and humidity. Yet, the laws of crystallization are deterministic. Atmospheric chaos makes them unique.
Traffic jams: a bottleneck suddenly forms on a fluid highway, for no apparent reason. One driver brakes a little too hard, the next a little more, and the wave of deceleration amplifies until it blocks traffic. A deterministic phenomenon, but unpredictable on a large scale.
The orbits of the planets: is the solar system stable? We know today that the gravitational interaction between planets can generate chaos over very long periods. The orbit of Pluto, for example, is chaotic over scales of millions of years. It is impossible to predict its exact position in 100 million years.
Heartbeats: before a heart attack, the heart rate becomes abnormally regular. A healthy heart has a slightly chaotic beat, capable of adapting. The loss of this chaos is a sign of danger. Chaos can be synonymous with health.

1964 — Higgs Mechanism: The Field That Gives Elementary Particles Their Mass

François Englert, Robert Brout, and Peter Higgs published in 1964 three articles that solved a puzzle: why do particles like the W and Z bosons have mass when the symmetry of the theory forbids it? Their idea: space is filled with an invisible field, the Higgs field. By passing through it, particles acquire mass, a bit like a body moving through a fluid feels inertia. This mechanism predicts a particle, the Higgs boson, discovered at CERN in 2012: \[ \Large m = \frac{g v}{\sqrt{2}} \] \(m = \text{mass of the particle (kg)},\) \(g = \text{coupling constant to the Higgs field (dimensionless)},\) \(v \approx 246\ \text{GeV} = \text{average value of the Higgs field in the vacuum}\)

What the Equation Says

The Higgs field is like an invisible molasses that fills all space. Elementary particles passing through this molasses feel a resistance, an inertia: this is what we call mass. The more strongly a particle interacts with the field, the heavier it is. Some, like the photon, do not interact at all and remain massless. The Higgs boson is a small wave in the cosmic field, like the ripples that run across a lake when you throw a pebble into it. A dense crowd in a corridor: in an empty corridor you walk fast, without effort (a massless particle). If the corridor is filled with a dense crowd, you move slowly, as if you had become heavier. The crowd is the Higgs field. The more you interact with it, the more your progress is slowed, the greater your "mass".
A skier in fresh snow: a skier on a groomed slope glides fast (massless). In fresh, deep snow (the Higgs field), they sink, slow down, must push to move forward: they acquire inertia, mass. The deeper the snow, the stronger the interaction, the greater the mass.
A spider's web: imagine an invisible web stretched throughout space. This is the Higgs field. A small fly (the electron) gets slightly caught in it and barely slows down. A large bumblebee (the top quark) gets completely entangled and remains almost immobile. The Higgs boson is the vibration that travels through the web when it is struck.
The photon and light: the photon glides through the Higgs field as if it did not exist, never getting caught. It remains eternally massless, traveling at the speed of light.
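Reading the equation backwards, \(m = gv/\sqrt{2}\) lets us deduce how strongly each particle couples to the Higgs field from its measured mass. A minimal sketch (the fermion masses used are approximate experimental values, quoted in GeV):

```python
# Inverting m = g v / sqrt(2): the heavier the fermion, the stronger
# its coupling g to the Higgs field.
import math

V = 246.0  # GeV, average value of the Higgs field in the vacuum

def coupling(mass_gev):
    """Coupling g deduced from m = g v / sqrt(2)."""
    return math.sqrt(2) * mass_gev / V

for name, m in [("electron", 0.000511), ("muon", 0.1057),
                ("bottom quark", 4.18), ("top quark", 172.7)]:
    print(f"{name:13s} m = {m:10.6g} GeV  ->  g = {coupling(m):.2e}")
```

The top quark's coupling comes out close to 1 while the electron's is a few millionths: the "small fly versus large bumblebee" hierarchy of the spider's-web analogy, made quantitative.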

1976 — Logistic Equation: Simplicity Can Give Birth to Chaos

Robert May (1936-2020) published in 1976 a resounding article in which he showed that an apparently innocuous equation, used in population dynamics, can produce behaviors of unsuspected complexity. The logistic equation describes the evolution of a population with a limited resource. Depending on the value of a single parameter \(r\), it can converge to a fixed point, oscillate between several values, or become totally chaotic. Yet, it contains neither noise nor randomness. This apparent simplicity can hide an abyss of complexity: \[ \Large x_{n+1} = r \, x_n (1 - x_n) \] \(x_n = \text{population in year } n \text{ (between 0 and 1)},\) \(r = \text{growth rate (dimensionless parameter)},\) \(n = \text{year (integer)}\)

What the Equation Says

The logistic equation is a summary of life: to be born, to grow, but also to encounter the limits of the world. Depending on the value of \(r\), the fate of the population changes completely. When \(r\) exceeds a certain value (about 3.57), order tips into chaos. The cycles double indefinitely until they become unpredictable, as if nature itself were hesitating. A traffic light regulates the flow of cars: with low traffic, everything is fluid (fixed point). If traffic increases, regular traffic jams appear (cycle). If the density becomes critical, traffic becomes totally unpredictable, with bottlenecks appearing without apparent reason.
The spread of an epidemic: a virus spreads in a population. If its contagion rate is low, the epidemic dies out. If it is medium, it returns in regular waves. If it is high, the waves become unpredictable, with sudden peaks impossible to anticipate.
The stock market: a financial market follows simple rules (buying, selling). Depending on the degree of investor confidence or speculation, it can be calm, follow predictable cycles, or sink into chaos. Crashes come without warning.
Fireflies synchronize their flashes: if there are few, they all flash together (fixed point). If their density increases, they can split into two groups that alternate (cycle 2). With even more, their flashes become totally unpredictable (chaos). Yet, each firefly follows a simple rule: imitate its neighbors.
Locusts: naturally solitary, they do not flee their kind; they simply avoid them by instinctive behavior. But beyond a certain number, physical contacts become inevitable. These repeated tactile stimulations, especially on the hind legs, cause a release of serotonin in their nervous system, initiating the shift to gregarious behavior. The transition then accelerates on its own: individuals already transformed attract new ones, the density increases, and the migratory process is triggered irreversibly as long as the overpopulation persists.
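The three regimes can be seen directly by iterating the map. The sketch below is an illustration with arbitrarily chosen values of \(r\): it discards a long transient, then counts how many distinct values the population keeps visiting:

```python
# The one-line rule x -> r x (1 - x) settles, oscillates, or turns
# chaotic depending only on the growth rate r.

def attractor(r, x0=0.2, transient=1000, sample=64):
    x = x0
    for _ in range(transient):      # discard the initial transient
        x = r * x * (1 - x)
    values = set()
    for _ in range(sample):         # record the long-term behavior
        x = r * x * (1 - x)
        values.add(round(x, 6))     # rounding groups a true cycle
    return sorted(values)

print(len(attractor(2.8)))   # 1 value: fixed point
print(len(attractor(3.2)))   # 2 values: period-2 oscillation
print(len(attractor(3.9)))   # many values: chaos
```

For \(r = 2.8\) the population settles on a single value, for \(r = 3.2\) it alternates between two, and for \(r = 3.9\) it never repeats: fixed point, cycle, chaos, all from the same one-line rule.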

1974 — Hawking Temperature: Black Holes Are Not Completely Black

In 1974, Stephen Hawking (1942-2018) formulated a troubling prediction. Black holes were thought to be eternal and absolutely black: nothing, not even light, could escape from them. By combining quantum mechanics and general relativity, Hawking showed that black holes nevertheless emit a weak thermal radiation and eventually slowly evaporate. This phenomenon, called Hawking radiation, builds an unprecedented bridge between gravity and quantum physics, and gives black holes a temperature that is higher the smaller their mass: \[ \Large T = \frac{\hbar c^3}{8\pi G k_B M} \] \(T = \text{Hawking temperature (K)},\) \(\hbar = \text{reduced Planck constant (J·s)},\) \(c = \text{speed of light (m/s)},\) \(G = \text{gravitational constant (m³·kg⁻¹·s⁻²)},\) \(k_B = \text{Boltzmann constant (J/K)},\) \(M = \text{mass of the black hole (kg)}\)

What the Equation Says

The smaller a black hole is, the hotter it is. A stellar black hole is icy, while a microscopic black hole would be scorching. By emitting this radiation, the black hole loses mass, thus shrinks, thus becomes hotter, thus radiates faster, a runaway process that leads it to an explosive end. The kayaker and the current: a kayaker desperately paddles to go up a river whose current accelerates towards a waterfall. Close to the fall, the current becomes too strong: they are irresistibly pulled towards the abyss, unable to escape. This is the event horizon of the black hole. Yet, right at this boundary, a faint noise is sometimes heard: pairs of bubbles are spontaneously born; one falls, the other rises. These are Hawking particles.
The waterfall and its bubbles: at the foot of a powerful waterfall, the fallen water creates a tumult. Most bubbles are drawn towards the bottom, but some, lighter ones, rise to the surface and escape. The event horizon of the black hole is like the line where the water tips: past it, all is lost; before it, a few particles (the bubbles) can still flee. The continuous rumble of the waterfall is Hawking radiation.
The sound barrier: an airplane breaks the sound barrier, creating a wave front behind which no sound wave can go back up the supersonic flow. This is an acoustic horizon, twin to that of a black hole. And just as the black hole emits radiation, this sound front emits phonons (sound particles) through a sonic Hawking effect, now observed in the laboratory.
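Plugging numbers into the formula shows how extreme this inverse relation is. A minimal sketch using CODATA constants (the solar mass value is approximate):

```python
# T = hbar c^3 / (8 pi G k_B M): the heavier the black hole, the colder.
import math

HBAR = 1.054571817e-34   # reduced Planck constant (J s)
C    = 2.99792458e8      # speed of light (m/s)
G    = 6.67430e-11       # gravitational constant (m^3 kg^-1 s^-2)
KB   = 1.380649e-23      # Boltzmann constant (J/K)
M_SUN = 1.989e30         # solar mass (kg), approximate

def hawking_temperature(mass_kg):
    return HBAR * C**3 / (8 * math.pi * G * KB * mass_kg)

print(hawking_temperature(M_SUN))   # a solar-mass black hole: ~6e-8 K
print(hawking_temperature(1e12))    # a mountain-mass black hole: ~1e11 K
```

A solar-mass black hole sits around \(6 \times 10^{-8}\) K, far colder than the 2.7 K cosmic background, so today it absorbs more than it radiates; a 10¹² kg black hole would glow at roughly 10¹¹ K, deep in the runaway regime described above.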

1972 — Black Hole Entropy: Volume Is Just an Illusion

Jacob Bekenstein (1947-2015) proposed in 1972 a bold idea: black holes must have entropy. Bekenstein wondered how many different histories could lead to the same black hole; two apparently identical black holes can hide radically different pasts. The answer is an astronomical number, and his formula states that this number depends on the area of the horizon, not the internal volume. In 1974, Stephen Hawking (1942-2018) refined this idea, fixing the proportionality constant: the entropy is exactly one quarter of the horizon area measured in units of the Planck length squared. \[ \Large S = \frac{k_B A}{4 \ell_P^2} \quad \text{with} \quad \ell_P = \sqrt{\frac{\hbar G}{c^3}} \] \(S = \text{entropy of the black hole (J/K)},\) \(k_B = \text{Boltzmann constant (J/K)},\) \(A = \text{area of the horizon (m²)} \text{ — the surface, not the volume},\) \(\ell_P \approx 1.6 \times 10^{-35}\ \text{m} = \text{Planck length}\)

What the Equation Says

The number of histories that lead to the same black hole is colossal. They are inscribed on the surface of its horizon and not inside its volume. This is the holographic principle: our three-dimensional Universe could be just an image projected from a two-dimensional surface. The black hole is its miniature version; everything that falls into it is inscribed on its sphere like on a cosmic hard drive. Volume is just an illusion; the horizon keeps track of everything. The Library of Babel: imagine a library containing all possible books. If you threw them one by one into a black hole, their matter (paper, ink, binding) would disappear forever behind the horizon. But the information they contain (each letter, each word, each story) would not be lost. It would be inscribed on the surface of the horizon, encoded in its geometry. Matter is swallowed, the meaning of the story remains engraved.
A soap bubble: its surface is iridescent, it reflects all colors. Inside, there is nothing but a little air without history. The horizon of the black hole is like this bubble: all its richness is on the surface; the inside is just an apparent void.
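The "astronomical number" can be made concrete. The sketch below (constants are CODATA values; the solar mass is approximate) computes the horizon area of a solar-mass black hole and counts it in Planck-length-squared cells:

```python
# S = k_B A / (4 l_P^2): entropy scales with the horizon AREA, not the volume.
import math

HBAR = 1.054571817e-34   # reduced Planck constant (J s)
C    = 2.99792458e8      # speed of light (m/s)
G    = 6.67430e-11       # gravitational constant (m^3 kg^-1 s^-2)
KB   = 1.380649e-23      # Boltzmann constant (J/K)
M_SUN = 1.989e30         # solar mass (kg), approximate

L_P2 = HBAR * G / C**3               # Planck length squared (m^2)

def horizon_area(mass_kg):
    r_s = 2 * G * mass_kg / C**2     # Schwarzschild radius (m)
    return 4 * math.pi * r_s**2

def bh_entropy(mass_kg):
    return KB * horizon_area(mass_kg) / (4 * L_P2)

A = horizon_area(M_SUN)
print(A)                  # ~1.1e8 m^2: a modest surface
print(A / (4 * L_P2))     # ~1e77 Planck cells: the "astronomical number"
print(bh_entropy(M_SUN))  # ~1.5e54 J/K
```

The horizon of a solar-mass black hole covers only about 10⁸ m², yet it holds about 10⁷⁷ Planck cells, an entropy near \(1.5 \times 10^{54}\) J/K, vastly more than the entropy of the star that collapsed to form it: everything is counted on the surface.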

1961-1973 — Lagrangian of the Standard Model: All Particle Physics in One Formula

The Standard Model is the culmination of half a century of work that unifies three fundamental forces (electromagnetism, weak force, strong force) and describes all known matter: twelve particles (quarks and leptons), four messengers (photon, W, Z, gluons), and the Higgs boson. The Lagrangian of the Standard Model condenses all the particles and forces of the microscopic world: \[ \Large \mathcal{L}_{SM} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + i\bar{\psi}\not{D}\psi + \bar{\psi}_i y_{ij}\psi_j\phi + |D_\mu\phi|^2 - V(\phi) \] \(\mathcal{L}_{SM} = \text{Lagrangian of the Standard Model},\) \(F_{\mu\nu} = \text{tensor describing the force fields},\) \(\psi = \text{matter fields (quarks, leptons)},\) \(\not{D} = \text{covariant derivative (coupling matter-forces)},\) \(y_{ij} = \text{Yukawa couplings (masses of particles)},\) \(\phi = \text{Higgs field},\) \(V(\phi) = \text{Higgs potential (symmetry breaking)}\)

What the Equation Says

The Lagrangian of the Standard Model is the user manual for the infinitely small. Four chapters: the forces that traverse space, the matter that couples to them, the Higgs field that gives particles their mass, and how nature makes the messengers of the weak force massive without weighing down light. An orchestral score: the Lagrangian is like the score of a symphonic orchestra. Each instrument (particle) has its part: the strings (quarks) play a melody, the brass (leptons) another, the percussion (bosons) mark the rhythm of the forces. The conductor (the Higgs field) gives the pitch, and the whole produces the music of the Universe. One score, hundreds of musicians, a cosmic symphony.
DNA: in a few molecules, it contains the entire construction plan of a living being. The Lagrangian of the Standard Model is the cosmic DNA: in a few lines, it encodes the fabrication of all matter. The quarks are the nucleotides, the forces are the enzymes that link them, the Higgs field is the cellular machinery that expresses the code.
A box of Lego: bricks of all shapes (the particles), connectors (the bosons), and plans to build models (the forces). Yet, a single instruction manual suffices to build everything, from the castle to the rocket. This is the Lagrangian: the unique assembly manual of the Universe.
