The Cartesian Coordinate System
Every digital image, 3D model, and pixel on your screen exists within a mathematical space known as the Cartesian Coordinate System. Developed by René Descartes, it bridges the gap between algebra and geometry, allowing us to define any point in space using numerical coordinates (X, Y, and Z).
Without this foundational grid, computer graphics as we know them would be impossible. It provides the absolute reference frame necessary to map abstract shapes into a format the machine can render.
Move the sliders to translate the point across the coordinate plane, mapping numerical values to spatial positions.
The Geometry of Sight
At its core, computer graphics is an exercise in linear algebra. Every object is defined by a collection of vertices—points in 3D space. To render these on a 2D screen, we must project them. This involves multiplying them by a series of matrices: Model, View, and Projection.
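The Model–View–Projection chain described above can be sketched in plain Python. This is a minimal illustration, not any library's API: the helper names, the 60° field of view, and the camera sitting 5 units back are all assumptions chosen for the example.

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def perspective(fov_y, aspect, near, far):
    """OpenGL-style projection: maps the view frustum to clip space."""
    f = 1.0 / math.tan(fov_y / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

# Model orients the object, View positions the camera,
# Projection flattens the view frustum onto the screen plane.
model = rotate_y(math.radians(30))
view = translate(0, 0, -5)                    # camera 5 units back (assumed)
proj = perspective(math.radians(60), 16 / 9, 0.1, 100)
mvp = mat_mul(proj, mat_mul(view, model))

clip = mat_vec(mvp, [1.0, 1.0, 1.0, 1.0])     # a cube corner, homogeneous
ndc = [c / clip[3] for c in clip[:3]]         # perspective divide -> [-1, 1]
```

After the perspective divide, any vertex inside the frustum lands in normalized device coordinates between -1 and 1, ready for mapping to pixels.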
Drag the sliders to see how the mathematical transformation affects the cube's vertices.
The Rasterization Engine
Once our vertices are projected onto the screen, they form triangles. But a screen isn't made of triangles—it's made of a grid of pixels. Rasterization is the process of determining which pixels are covered by a triangle and interpolating values (like color or depth) across its surface.
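One common way to decide pixel coverage is the edge-function test: a point is inside a triangle if it lies on the same side of all three edges. The sketch below samples every pixel at its centre; the function names and the 10×10 grid are illustrative choices, and it assumes consistently wound (counter-clockwise) vertices.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: >= 0 if P is on the inner side of edge A->B."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Return the pixels whose centres are covered by the triangle."""
    (ax, ay), (bx, by), (cx, cy) = tri
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5        # sample at the pixel centre
            w0 = edge(bx, by, cx, cy, px, py)
            w1 = edge(cx, cy, ax, ay, px, py)
            w2 = edge(ax, ay, bx, by, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.append((x, y))
    return covered

pixels = rasterize([(1, 1), (8, 1), (1, 8)], 10, 10)
```

The same three edge values, normalized, are the barycentric weights used to interpolate color and depth across the triangle's surface.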
Watch how the continuous mathematical triangle is "sampled" into discrete pixel points.
The Painter's Algorithm
In a 3D scene, many objects may overlap. To ensure the correct objects appear "in front," we must solve the visibility problem. One of the simplest methods is the Painter's Algorithm.
Just as a painter paints the background first and then layers the foreground objects on top, this algorithm sorts polygons by their distance from the viewer (depth) and renders them in back-to-front order. While efficient for simple scenes, it can fail when polygons intersect or form cycles.
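The core of the Painter's Algorithm is a single sort. In this toy sketch (the scene objects and their per-polygon depth values are invented for illustration), a larger depth means farther from the viewer, so we draw in descending order:

```python
# Each polygon carries a representative depth (larger = farther away).
polygons = [
    {"name": "tree",     "depth": 2.0},
    {"name": "mountain", "depth": 9.0},
    {"name": "house",    "depth": 5.0},
]

# Sort back-to-front, then "paint": later draws overwrite earlier ones,
# so nearer polygons naturally cover farther ones.
draw_order = sorted(polygons, key=lambda p: p["depth"], reverse=True)
print([p["name"] for p in draw_order])   # ['mountain', 'house', 'tree']
```

A single depth per polygon is exactly where the algorithm breaks down: intersecting or cyclically overlapping polygons have no valid total ordering.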
Drag the triangles to reposition them. Toggle sorting to see how the draw order (Painter's Algorithm) resolves visibility.
Z-Buffering: The Modern Standard

The Painter's Algorithm fails when objects intersect or form cycles. To solve this, modern graphics hardware uses a Z-Buffer (or Depth Buffer). Instead of sorting whole polygons, we store the "depth" of the closest object at every single pixel.
When drawing a new pixel, the hardware compares its depth (Z-value) against the value already stored in the Z-buffer. If the new pixel is closer, it's drawn and the Z-buffer is updated; otherwise, it's discarded. This per-pixel test handles any degree of complexity without the need for expensive sorting.
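The per-pixel depth test can be sketched in a few lines. This is a minimal model, not real hardware: the 4×4 buffers and the convention that a smaller z means closer are assumptions for the example.

```python
W, H = 4, 4
INF = float("inf")
framebuffer = [["bg"] * W for _ in range(H)]
zbuffer = [[INF] * W for _ in range(H)]   # smaller z = closer (assumed)

def plot(x, y, z, color):
    """Draw a fragment only if it is closer than what is already stored."""
    if z < zbuffer[y][x]:
        zbuffer[y][x] = z
        framebuffer[y][x] = color
    # otherwise the fragment is discarded

plot(1, 1, 5.0, "red")    # red fragment at depth 5: buffer was empty, kept
plot(1, 1, 2.0, "blue")   # blue is closer -> overwrites red
plot(1, 1, 9.0, "green")  # green is farther -> discarded
```

Because the test runs independently per pixel, draw order no longer matters; that is why intersecting polygons, which defeat the Painter's Algorithm, pose no problem here.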
Observe how the Z-buffer (right) stores the depth of each pixel (brighter = closer). Toggle testing to see how it resolves intersections that the Painter's Algorithm cannot.
Anti-Aliasing: Smoothing the Edges
Because pixels are discrete squares on a grid, rendering continuous mathematical lines or curves results in "jaggies"—stair-step edges known as aliasing.
Anti-aliasing (AA) solves this by calculating the fractional coverage of each pixel by the underlying geometry. Instead of turning a pixel fully on or fully off (as Bresenham's line algorithm does), it blends the pixel's color with the background in proportion to how much the geometry overlaps it (as Wu's line algorithm does), tricking our eyes into perceiving a perfectly smooth edge.
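The coverage-based blend at the heart of anti-aliasing is just a weighted average. A minimal sketch, using black geometry on a white background as an assumed example:

```python
def blend(fg, bg, coverage):
    """Mix foreground into background by fractional coverage (0.0 to 1.0)."""
    return tuple(round(f * coverage + b * (1 - coverage))
                 for f, b in zip(fg, bg))

black, white = (0, 0, 0), (255, 255, 255)

# Aliased (Bresenham-style): the pixel is all-or-nothing.
full = blend(black, white, 1.0)     # (0, 0, 0)
# Anti-aliased (Wu-style): 30% covered -> a light grey softens the edge.
partial = blend(black, white, 0.3)
```

A row of pixels whose coverage ramps from 0.0 to 1.0 produces the smooth gradient edge that hides the stair-steps.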
Toggle algorithms to see how the stark stair-stepping (aliasing) becomes a visually smooth gradient edge under anti-aliasing.
Fragment Shaders: Painting with Math
Modern graphics hardware allows us to run small programs, called shaders, for every single pixel. Fragment shaders calculate the final color. By coding complex lighting equations, textures, and effects directly in GLSL, we can achieve stunning visual fidelity in real-time.
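To show the idea without GPU hardware, here is a plasma-style "fragment shader" written as an ordinary Python function, called once per pixel the way the GPU invokes a real GLSL shader in parallel. The particular sine frequencies are arbitrary choices for the effect:

```python
import math

def plasma_fragment(x, y, t):
    """Per-pixel 'fragment shader': a sum of sines mapped into [0, 1]."""
    v = (math.sin(x * 0.10 + t)
         + math.sin(y * 0.10 + t)
         + math.sin((x + y) * 0.07 + t)
         + math.sin(math.hypot(x, y) * 0.05))
    return (v + 4) / 8   # v lies in [-4, 4]; remap to a [0, 1] intensity

# Run the "shader" for every pixel of a 64x64 frame at time t = 0.
frame = [[plasma_fragment(x, y, t=0.0) for x in range(64)] for y in range(64)]
```

Animating `t` each frame is what makes the plasma flow; on the GPU the same per-pixel logic would live in GLSL and run for millions of pixels at once.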
A procedural plasma effect generated entirely by mathematical logic for every pixel.
The Future: Ray Tracing
While rasterization is fast, it struggles with realistic shadows, reflections, and global illumination. Ray Tracing simulates the physical path of light rays. By shooting rays into a scene and calculating their intersections with objects, we can accurately model realistic phenomena. When a ray hits a surface, it can reflect, refract, or scatter, recursively generating new rays to sample indirect lighting.
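The fundamental operation is the ray–object intersection test. Below is a sketch of the classic ray–sphere test via the quadratic formula, assuming a normalized ray direction (so the quadratic's leading coefficient is 1); the function name and scene values are illustrative.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance t along the ray to the nearest hit, or None on a miss.
    `direction` must be a unit vector."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - center[0], oy - center[1], oz - center[2]
    b = 2 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4 * c              # discriminant (a = 1 for unit dir)
    if disc < 0:
        return None                   # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2    # nearer of the two roots
    return t if t >= 0 else None      # hits behind the origin don't count

# A ray from the origin straight down -Z hits a unit sphere at z = -5.
t = ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
print(t)   # 4.0 -- the ray travels 4 units to the sphere's front surface
```

A full tracer repeats this test against every object, keeps the smallest `t`, and then spawns the reflection, refraction, or shadow rays described above from that hit point.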
Observe how light rays emit from a source, interact with obstacles, and bounce to calculate accurate lighting paths.
Virtual Reality: Hacking Perception
Traditional computer graphics map a 3D world to a flat 2D rectangle. Virtual Reality (VR) hacks the human brain's binocular vision by rendering two slightly different perspectives of the same scene—one for each eye.
When combined with ultra-low latency head tracking and distortion lenses, the brain fuses the two 2D images into a single, immersive 3D space. The mathematics of the projection matrices must precisely mirror the physical optics of the headset to maintain the illusion of presence.
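The stereo effect comes from offsetting the camera by half the IPD per eye and projecting the scene twice. A minimal pinhole-camera sketch (both eyes looking down -Z; the 64 mm IPD is a typical adult value, and the helper names are illustrative):

```python
def project(point, eye, focal=1.0):
    """Pinhole projection of a 3D point as seen from `eye` (looking -Z)."""
    px, py, pz = point
    ex, ey, ez = eye
    depth = -(pz - ez)                    # distance in front of the eye
    return (focal * (px - ex) / depth, focal * (py - ey) / depth)

IPD = 0.064                               # ~64 mm, a typical adult IPD
left_eye = (-IPD / 2, 0.0, 0.0)
right_eye = (IPD / 2, 0.0, 0.0)

point = (0.0, 0.0, -2.0)                  # a point 2 m in front of the viewer
xl, _ = project(point, left_eye)
xr, _ = project(point, right_eye)
disparity = xl - xr                       # horizontal shift between the eyes
```

The disparity shrinks as the point moves farther away and grows as it comes closer; that depth-dependent horizontal shift is exactly the cue the brain fuses into stereoscopic depth.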
Adjust the IPD (distance between the cameras) to watch how the left and right perspectives shift horizontally to form stereoscopic depth. Cross your eyes to fuse the two cubes manually!
Augmented Reality: Anchoring the Digital to the Physical
Unlike Virtual Reality, which replaces your environment entirely, Augmented Reality (AR) superimposes digital geometry onto the real world. This requires extreme precision in camera pose estimation: the machine must constantly track the physical space to align virtual objects flawlessly.
By detecting feature points in a camera feed or tracking physical markers, AR systems compute a projection matrix relative to the real-world camera lens. When this math is perfectly synchronized, digital objects behave as if they physically exist in your room.
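Anchoring can be illustrated with a drastically simplified 2D stand-in for full 6-DoF pose estimation: once the marker's screen position and apparent size are detected, virtual geometry defined in marker-local coordinates is mapped through that detected "pose". Everything here (function name, marker position, scale) is an assumed example:

```python
def anchor_to_marker(model_points, marker_pos, marker_scale):
    """Map virtual points (marker-local coords) into screen space using
    the detected marker's position and apparent size. A real AR system
    would use a full projection matrix from camera pose estimation."""
    mx, my = marker_pos
    return [(mx + x * marker_scale, my + y * marker_scale)
            for x, y in model_points]

cube_base = [(-1, -1), (1, -1), (1, 1), (-1, 1)]   # footprint, marker units

# Marker detected at pixel (320, 240), appearing 50 px per marker unit:
screen = anchor_to_marker(cube_base, (320, 240), 50)
```

When the marker moves in the camera feed, re-running this mapping every frame is what keeps the virtual object "glued" to the physical surface.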
Drag the AR marker (the green square) on the canvas to see how the 3D projection anchors itself to the physical coordinate system in real-time.