Architecture Overview

This is a fairly terse rundown of the structure for anyone curious.

Discover Zoos has two parts: the App, which stores and displays all the zoo info, and the Pipeline, which processes and renders map data. The App uses Rust, PostgreSQL, HTMX, and Web Components. The Pipeline uses Python, Planetiler, and OpenStreetMap data. During development, everything is orchestrated with Make. For hosting, the data is exported into blobs, and there is no PostgreSQL. The production deployment of the App runs on Fastly Compute, with the larger static files (the map tiles) hosted on Fastly Object Storage.

The App’s Data

I manage zoo data via SQL migrations, using sqlx with PostgreSQL. There are two data backends: one that uses PostgreSQL, and one that uses serialized blobs in the filesystem.

  • PostgreSQL: SQL queries retrieve structured chunks of data (the zoo listing, then large subsets of each zoo’s data). Used in development.
  • Blobs: the same SQL, run ahead of time, with the results serialized via postcard. Used in production.
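The two backends can share one interface, something like this sketch (the trait and type names here are invented for illustration, and the blob backend is stubbed with an in-memory map rather than real postcard bytes):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Zoo {
    slug: String,
    name: String,
}

// One trait, two implementations: PostgreSQL in development,
// prebaked blobs in production.
trait ZooStore {
    fn list_zoos(&self) -> Vec<Zoo>;
    fn zoo(&self, slug: &str) -> Option<Zoo>;
}

// Production-style backend: data deserialized once into memory
// (stubbed here with a HashMap standing in for a decoded blob).
struct BlobStore {
    zoos: HashMap<String, Zoo>,
}

impl ZooStore for BlobStore {
    fn list_zoos(&self) -> Vec<Zoo> {
        let mut zoos: Vec<Zoo> = self.zoos.values().cloned().collect();
        zoos.sort_by(|a, b| a.slug.cmp(&b.slug));
        zoos
    }

    fn zoo(&self, slug: &str) -> Option<Zoo> {
        self.zoos.get(slug).cloned()
    }
}
```

The rest of the App only talks to the trait, so swapping PostgreSQL for blobs is a deployment detail rather than a code change.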

Using PostgreSQL + sqlx gives me typechecked, structured data and queries with excellent semantics. By putting all the zoo data in migrations, the data evolves alongside the code. Even if I eventually move off using migrations for the zoo data, the transition will be smooth, since the SQL structure will remain.

Using serialized blobs for production makes my hosting on Fastly Compute very clean. I considered SQLite, but decided the blob approach was a stronger fit, and it also lets me start moving some blobs into Fastly Object Storage if blob size becomes an issue, while keeping others (such as the one for the initial zoo listings) inside the Fastly Compute bundle.
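That split could be decided by size, along these lines (the threshold and names are made up for illustration; the real rule isn't described in the post):

```rust
// Sketch: decide where each prebaked blob should live, based on size.
// The 1 MiB cutoff is a hypothetical value, not from the real app.

#[derive(Debug, PartialEq)]
enum BlobHome {
    ComputeBundle, // shipped inside the Fastly Compute package
    ObjectStorage, // fetched from Fastly Object Storage at runtime
}

fn place_blob(size_bytes: u64) -> BlobHome {
    const BUNDLE_LIMIT: u64 = 1024 * 1024; // hypothetical 1 MiB cutoff
    if size_bytes <= BUNDLE_LIMIT {
        BlobHome::ComputeBundle
    } else {
        BlobHome::ObjectStorage
    }
}
```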

The App’s Frontend

I have used and like React, but ultimately I decided I could deliver a better experience by sticking to HTML as much as possible. I like HTML, and browsers like HTML. Using htmx + web components, I can provide a dynamic, modern web app experience while still being a well-behaved hypertext citizen.

This also lets me mostly avoid having a web API. If I have a new page, I have new HTML, rendered from my data. I can share chunks of HTML by sharing the functions that generate it. If I want to display the same data differently, each place can call a function to get the piece of data it wants, then render that into whatever HTML it needs. This is essentially everything a web API (perhaps pushing JSON around) would get me, but with far less hassle.
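Sharing HTML via functions looks roughly like this sketch (the function names and markup are invented; the real app's templates aren't shown in the post):

```rust
// A shared fragment: one function owns the markup for a zoo card.
fn zoo_card(name: &str, city: &str) -> String {
    format!(
        "<article class=\"zoo-card\"><h2>{}</h2><p>{}</p></article>",
        name, city
    )
}

// The full page embeds the card in a complete document.
fn zoo_page(name: &str, city: &str) -> String {
    format!("<main>{}</main>", zoo_card(name, city))
}

// An htmx partial returns the same card on its own,
// e.g. as the response to an hx-get swap.
fn zoo_fragment(name: &str, city: &str) -> String {
    zoo_card(name, city)
}
```

Both callers stay in sync automatically, because there is exactly one place the card's markup is defined.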

The Pipeline

For rendering maps, I start with OpenStreetMap data, cut it down to the area around each zoo, and store a set of PMTiles, which are perfect for hosting on Fastly Object Storage + Fastly CDN and displaying with MapLibre. Some OpenStreetMap data I use as-is, so I slice that out into a layer of its own. Other pieces, which represent the actual features of the zoo itself, I extract into GeoJSON; that GeoJSON is managed separately and never automatically updated from the original OSM data. Over time the GeoJSON files will become fully self-managed map data for each zoo, but for now most of it is still the original OSM data. The final stage of the pipeline combines the OSM “background” data and the managed GeoJSON into a single set of tiles.
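The combining stage can be pictured like this (the real pipeline is Python + Planetiler; this is just a sketch of the idea with invented types, using strings as stand-ins for real geometry):

```rust
#[derive(Debug, Clone, PartialEq)]
struct Feature {
    layer: String,    // e.g. "background" (sliced OSM) or "zoo" (managed GeoJSON)
    geometry: String, // stand-in for real geometry data
}

// The OSM background and the managed GeoJSON stay distinct layers,
// but they go into a single set of tiles together.
fn combine_layers(background: Vec<Feature>, managed: Vec<Feature>) -> Vec<Feature> {
    let mut all = background;
    all.extend(managed);
    all
}
```

Keeping them as separate layers inside one tile set means the map style can treat the zoo's own features differently from the surrounding background, while the client still fetches a single tile source.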