Earthquake (2013): rebuilding a planetarium moment

· Matt Baker

Look, I'm calling this my first blog post of 2025. It's also my first in decades, which tells you everything you need to know about my relationship with writing. I've wanted to maintain a blog for longer than some JavaScript frameworks have existed, and I don't have a grand thesis for the year. The plan is simple: write more docs. Ship thoughts. Leave myself breadcrumbs. Build a small, consistent flow. We'll see how long that lasts.

This post is about a web experiment I built back in 2013 called Earthquake—a WebGL/Three.js thing inspired by a moment at the California Academy of Sciences' Earthquake exhibit that I couldn't shake.

I know, I know: "Senior dev writes about his old side project." But here's the thing: if you've ever seen something so good that you physically could not stop yourself from going home and building your own version of it, then you know the feeling. This was one of those times for me. And that feeling doesn't care how old your code is.

The spark: an inverted globe and a flick of an iPad

At the Academy, the exhibit was called Earthquake: Life on a Dynamic Planet. After a presentation on the 1906 San Francisco earthquake—which, sidebar, remains genuinely terrifying no matter how many times you hear about it—the presenter pulled out an iPad and started flicking around an "inverted globe" view showing large earthquakes from the past 24 hours.

It was mesmerizing. The kind of visualization that makes your brain feel the scale of the planet. Not "intellectually understand." Feel. There's a difference, and it's hard to pull off.

The vibe I tried to replicate (reference clip)

The planetarium view looks something like the visuals in this clip:

Earthquake (2013): building a tiny version of the feeling

In 2013, WebGL still felt new. I'd been messing with Three.js enough to be dangerous—which is, let's be honest, the sweet spot for side projects. You know just enough to build something cool and not enough to talk yourself out of it with "architecture concerns."

So I built an experiment: a textured Earth sphere, a pile of quake markers, and controls that let you rotate the world and click into individual events.

You can run it here: /experiments/earthquake/.

Fair warning: the code is from 2013. JSONP, jQuery 1.10, global variables—I'm not proud, but I'm not sorry either.

How it works (a quick code tour)

All of the experiment code lives under src/experiments/earthquake/ and is passed through as a static asset in Eleventy.

Below are the parts I still find genuinely interesting, twelve years later.

Camera movement: orbit-ish, but smoothed

Instead of directly orbiting the camera, I parent the camera to an empty object (camGroup) that lives at the origin, then rotate the group. That keeps the camera on a stable arc around the globe, and it makes rotation math simpler. This is still a solid pattern, by the way—it's not some 2013 hack.

The other key detail is the "feel": mouse/touch input updates target rotations, and the render loop eases toward those targets. That smoothing is what gives it a slightly "planetarium drift" vibe instead of snappy, clinical controls.

  • Code: src/experiments/earthquake/js/main.js (camGroup, targetRotationX/Y, and the easing inside render()).

Here's the core of it:

// camera is a child of camGroup, so rotating camGroup orbits the camera around the globe.
// Each frame nudges the rotation a fixed fraction of the remaining distance toward its
// target, which is what produces the smooth "planetarium drift".
camGroup.rotation.x += (-targetRotationX - camGroup.rotation.x) * 0.05;
camGroup.rotation.z += (targetRotationY - camGroup.rotation.z) * 0.05;
camGroup.rotation.y += (targetCameraRotationY - camGroup.rotation.y) * 0.15;
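Stripped of Three.js, that smoothing is just exponential easing toward a target. Here's a minimal sketch (`ease` is an illustrative name, not from the original source):

```javascript
// Exponential smoothing: each step moves `current` a fixed fraction of the
// remaining distance toward `target`, so motion starts fast and settles softly.
function ease(current, target, factor) {
  return current + (target - current) * factor;
}

// Simulate the render loop converging on a target rotation over 120 frames.
var rotation = 0;
var target = 1.0; // radians
for (var frame = 0; frame < 120; frame++) {
  rotation = ease(rotation, target, 0.05);
}
// rotation never overshoots; it asymptotically approaches the target.
```

The nice property is that the motion is frame-by-frame stateless: you never track velocity, just "how far from the target am I right now?"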

Keeping selection accurate while the camera is inside a rotating parent

Clicking a marker uses raycasting. But the camera isn't sitting directly on the scene root—it's inside a rotated parent. So the code updates world matrices each frame (scene.updateMatrixWorld()), then extracts the camera's world position from camera.matrixWorld.

One small but nice design choice: the mouse handler just records "the user clicked at x/y", and the real pick test happens during the render loop when the transforms are guaranteed to be fresh. Separation of input and simulation—one of those things that seems obvious in retrospect but took me a few attempts to get right.

  • Code: src/experiments/earthquake/js/main.js (cameraWorldPosition, checkSelectInfo()).

The line that makes this robust is:

// refresh every object's world matrix so the pick test sees up-to-date transforms
scene.updateMatrixWorld();
// camera.position is local to camGroup; matrixWorld gives its true world-space position
cameraWorldPosition.setFromMatrixPosition(camera.matrixWorld);
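The input/render split itself can be sketched without any Three.js at all. This is a pattern illustration, not the original code; `pendingClick`, `renderFrame`, and `pickAt` are names I'm inventing here:

```javascript
// Input handlers only record intent; no picking happens here.
var pendingClick = null;
function onMouseDown(x, y) {
  pendingClick = { x: x, y: y };
}

// The render loop consumes the recorded click *after* transforms are updated,
// so the pick test always runs against a consistent scene state.
var picked = [];
function renderFrame(updateTransforms, pickAt) {
  updateTransforms(); // e.g. scene.updateMatrixWorld()
  if (pendingClick) {
    picked.push(pickAt(pendingClick.x, pendingClick.y));
    pendingClick = null; // each click is handled exactly once
  }
}
```

Because the handler does nothing but record coordinates, it also stays cheap: no raycasts on rapid-fire input events, just one per rendered frame at most.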

Lat/lng/depth → 3D position (and the texture seam problem)

Each earthquake comes in as latitude/longitude + depth. Placement is classic spherical conversion:

  • Convert degrees to angles (phi/theta).
  • Compute XYZ using sin/cos.
  • Reduce radius by depth so deeper quakes sit "inside" the crust.

The extra wrinkle—and this is the kind of thing that eats an afternoon before you figure it out—is that the world texture has a seam. To keep the data aligned with the texture, the conversion subtracts a texture_edge_longitude so "longitude 180" in data lines up with the texture edge used in the sphere's UVs.

  • Code: src/experiments/earthquake/js/three.geospatial.js (addGeoSymbol() and texture_edge_longitude).

The conversion looks like:

var phi = (90 - lat) * Math.PI / 180;                               // polar angle from the north pole
var theta = (180 - (lng - texture_edge_longitude)) * Math.PI / 180; // azimuth, shifted to the texture seam

var x = (radius - depth) * Math.sin(phi) * Math.cos(theta);
var y = (radius - depth) * Math.cos(phi);
var z = (radius - depth) * Math.sin(phi) * Math.sin(theta);
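Wrapped as a function, the math is easy to sanity-check: the north pole should land on the +y axis regardless of longitude. (`latLngDepthToXyz` is my name for it; the original inlines this inside `addGeoSymbol()`.)

```javascript
// Convert latitude/longitude (degrees) plus depth into XYZ on a sphere,
// shifting longitude so the data lines up with the texture's UV seam.
function latLngDepthToXyz(lat, lng, depth, radius, textureEdgeLongitude) {
  var phi = (90 - lat) * Math.PI / 180;
  var theta = (180 - (lng - textureEdgeLongitude)) * Math.PI / 180;
  var r = radius - depth; // deeper quakes sit "inside" the crust
  return {
    x: r * Math.sin(phi) * Math.cos(theta),
    y: r * Math.cos(phi),
    z: r * Math.sin(phi) * Math.sin(theta)
  };
}

// North pole: phi = 0, so x and z vanish and y equals the full radius.
var pole = latLngDepthToXyz(90, 0, 0, 100, 0);
```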

Marker sizing: a magnitude LUT + glow sprite

Markers are a small sphere plus a much larger "spark" sprite behind it. The size is driven by a lookup table of radii (magnitudeRadii[]) using the rounded magnitude as an index.

Two details that make it feel readable instead of like a data dump:

  • The sprite scale is much larger than the core sphere, so you get that "soft halo" cluster effect.

  • Opacity is proportional to magnitude (magnitude / 10) so big events visually pop.

  • Code: src/experiments/earthquake/js/main.js (magnitudeRadii, createMarker()).

The mapping is intentionally simple:

var r = magnitudeRadii[Math.round(magnitude)];
sprite.scale.set(10 * r, 10 * r, 10 * r);
// SphereGeometry only describes the shape; it has to be wrapped in a Mesh
var geometry = new THREE.SphereGeometry(r, 4, 4);
var mesh = new THREE.Mesh(geometry, material);
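Without Three.js, the sizing logic reduces to a table lookup plus a linear opacity ramp. A sketch of the shape of that mapping (the radii values and the `markerParams` name are made up for illustration; only the structure matches the original):

```javascript
// Radius lookup by rounded magnitude (indices 0-10). These values are
// illustrative, not the ones in magnitudeRadii[] in the original source.
var magnitudeRadii = [0.2, 0.3, 0.5, 0.8, 1.2, 1.8, 2.6, 3.6, 5.0, 7.0, 9.0];

function markerParams(magnitude) {
  var r = magnitudeRadii[Math.round(magnitude)];
  return {
    coreRadius: r,           // small solid sphere
    spriteScale: 10 * r,     // much larger glow sprite behind it
    opacity: magnitude / 10  // big events visually pop
  };
}
```

A lookup table over rounded magnitudes is a deliberately coarse choice: quake magnitudes are logarithmic, and hand-tuned buckets read better on screen than any single formula I tried.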

Data: USGS GeoJSONP → normalized objects

Quake data comes from the USGS feed and loads via JSONP. Yes, JSONP—a very 2013 move. The USGS actually supported it at the time, and CORS headers on government APIs were... aspirational. The loader normalizes each feature into a small object:

{ lat, lng, depth, magnitude, title }

Then the main script loops and creates a marker for each quake.

  • Code: src/experiments/earthquake/js/geojson.js.
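The normalization step can be sketched like this. `normalizeFeature` is my name for it, and the field layout assumes the USGS GeoJSON shape, where coordinates come as [lng, lat, depth] and `mag`/`title` live in `properties`:

```javascript
// Turn one USGS GeoJSON feature into the small object the renderer expects.
// GeoJSON stores coordinates longitude-first: [longitude, latitude, depth].
function normalizeFeature(feature) {
  var coords = feature.geometry.coordinates;
  return {
    lat: coords[1],
    lng: coords[0],
    depth: coords[2],
    magnitude: feature.properties.mag,
    title: feature.properties.title
  };
}

// A minimal feature in that shape, for illustration:
var quake = normalizeFeature({
  geometry: { coordinates: [-122.4, 37.8, 8.0] },
  properties: { mag: 4.2, title: 'M 4.2 - San Francisco Bay area' }
});
```

The lng-before-lat ordering is a classic GeoJSON gotcha: it's x-then-y, the opposite of how humans say coordinates out loud.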

If I rebuilt it today

If I were doing this now, I'd modernize the Three.js setup (it's been through about fifteen breaking API changes since 2013), clean up the input model (two-finger rotate/zoom on mobile is rough), and rethink the data pipeline (fetch + async/await, maybe a serverless proxy for the USGS feed).

But honestly? The code works. It's been working for twelve years. Some of my "properly architected" projects didn't last twelve months. There's a lesson in there somewhere about shipping versus perfecting, but I'm too tired to make it profound.

The interesting part to me isn't "perfect code"—it's the loop:

experience → inspiration → tiny artifact.

That loop is what I'm trying to restart in 2025. The code quality can come later. The habit comes first.