Automated terrain classification of Saturn's largest moon using deep learning on Cassini RADAR imagery
This is an interactive, pixel-level terrain classification map of Titan — Saturn's largest moon and the only body in the solar system besides Earth with stable surface liquids. The map covers the entire surface imaged by the Cassini spacecraft's Synthetic Aperture Radar (SAR) instrument at 351 metres per pixel.
Unlike previous maps of Titan's surface, which were drawn by hand by planetary scientists, this map was generated entirely by a neural network. The model learned to recognise terrain types from their radar backscatter signatures and can classify every pixel independently — producing a far more granular map than manual polygon-based approaches.
Toggle between "Our Model" and "Lopes 2020" to see the difference. The Lopes map shows smooth, hand-drawn polygon boundaries. Our model follows the actual pixel-level texture in the SAR data — you can see how terrain types interleave at a much finer scale than any human could practically draw.
Between 2004 and 2017, NASA's Cassini spacecraft orbited Saturn and made over 100 close flybys of Titan. On each pass, its RADAR instrument bounced microwave pulses (Ku-band, 13.78 GHz) off Titan's surface and measured the backscattered energy. This technique — called Synthetic Aperture Radar — can penetrate Titan's thick, hazy nitrogen-methane atmosphere, which completely hides the surface at visible wavelengths and leaves only a few narrow near-infrared windows.
Each flyby imaged a narrow swath of terrain. Over 13 years, these swaths were stitched together by the USGS Astrogeology Science Center into a global mosaic at 128 pixels per degree (~351 metres per pixel at the equator). This HiSAR Global Mosaic covers roughly 24% of Titan's surface — the black regions on the map are areas Cassini never imaged at SAR resolution.
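The quoted resolution follows directly from Titan's radius (about 2,575 km); a quick sanity check:

```python
import math

# Titan's mean radius in metres (~2,575 km)
TITAN_RADIUS_M = 2_575_000

# Equatorial circumference divided into 360 degrees of longitude
metres_per_degree = 2 * math.pi * TITAN_RADIUS_M / 360

# At the mosaic's 128 pixels per degree:
metres_per_pixel = metres_per_degree / 128
print(round(metres_per_pixel))  # → 351
```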
The brightness of each pixel represents the radar backscatter coefficient (σ₀) — how strongly the surface reflects radar energy back to the spacecraft. Different terrains have distinctive signatures: smooth lake surfaces reflect almost nothing (appearing dark), sand dunes create moderate returns with characteristic patterns, and rough mountainous terrain scatters strongly (appearing bright).
In 2020, planetary scientist Rosaly Lopes and colleagues published the first global geomorphological map of Titan in Nature Astronomy (Lopes et al. 2020). Through years of expert analysis, they manually classified Titan's surface into six terrain types:
- Plains (65% of mapped surface) — low-relief, radar-dark expanses dominating the mid-latitudes. Likely organic sediment deposits.
- Dunes (12%) — linear features concentrated in equatorial regions, analogous to Earth's longitudinal sand dunes but composed of organic particles.
- Hummocky/mountainous terrain (12%) — radar-bright, rough topography, possibly the oldest exposed terrain on Titan.
- Lakes and seas (4%) — liquid methane and ethane bodies clustered near the north pole, appearing radar-dark due to specular reflection.
- Labyrinth terrain (4%) — dissected, canyon-like networks possibly carved by methane rainfall.
- Craters (<1%) — remarkably few impact craters, suggesting a young, geologically active surface.
The published shapefiles from this map are available on Mendeley Data. We rasterised these polygons to match the SAR mosaic's pixel grid and used them as training labels for the neural network.
The full workflow from raw Cassini data to the map you see above:
Downloaded the USGS HiSAR Global Mosaic (1 GB GeoTIFF, 46,080 x 23,040 pixels) and the Lopes 2020 geomorphological shapefiles from Mendeley Data.
Divided the mosaic into 10,711 tiles of 256x256 pixels. Each tile's SAR values were normalised using 2nd/98th percentile clipping. The Lopes shapefiles were rasterised (reprojected from Titan geographic CRS to the mosaic's equirectangular grid) to create matching label tiles.
Tiles were grouped into 10-degree spatial blocks and randomly assigned to train (70%), validation (15%), and test (15%) splits. This geographic blocking prevents spatial autocorrelation from leaking between splits — adjacent tiles tend to look similar, so a naive random split would overestimate performance.
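One deterministic way to implement this blocking (an illustrative scheme, not necessarily the project's exact code) is to hash each tile's 10-degree block ID into a split assignment, so every tile in a block lands in the same split:

```python
import numpy as np

def assign_split(lon_deg: float, lat_deg: float, seed: int = 42) -> str:
    """Assign a tile to train/val/test by its 10x10-degree spatial block.

    All tiles in the same block share a split, so neighbouring (highly
    correlated) tiles never straddle the train/test boundary.
    Fractions follow the 70/15/15 split described in the text.
    """
    block = (int(lon_deg // 10), int(lat_deg // 10))
    # Deterministic per-block RNG seeded from the block ID
    rng = np.random.default_rng(abs(hash((block, seed))) % (2**32))
    r = rng.random()
    return "train" if r < 0.70 else "val" if r < 0.85 else "test"

# Two tiles in the same 10-degree block always get the same split:
print(assign_split(12.3, 45.6), assign_split(17.9, 41.1))
```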
U-Net with an EfficientNet-B4 encoder pretrained on ImageNet. The single-channel SAR input is replicated to 3 channels to utilise the pretrained weights. The decoder outputs 6-class probability maps at full input resolution. Trained with Dice loss, AdamW optimiser (lr=5e-4), cosine annealing schedule, and augmentations (flip, rotate, Gaussian noise).
150 epochs on an RTX 3090 via RunPod (~5 hours, ~$2). The model converged to 0.455 mean Intersection-over-Union (mIoU) on the held-out test set. For context, perfect agreement with the labels would be 1.0, and a model predicting "plains" everywhere would score about 0.18.
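mIoU averages the per-class Intersection-over-Union; the project's exact evaluation code is not shown, so this is a minimal sketch of the metric on a toy example:

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, n_classes: int = 6) -> float:
    """Mean IoU over the classes present in prediction or labels."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy example: two classes, prediction partially correct
target = np.array([0] * 50 + [1] * 50)
pred = np.array([0] * 75 + [1] * 25)
# class 0: 50/75 ≈ 0.667; class 1: 25/50 = 0.5 → mean ≈ 0.583
print(round(mean_iou(pred, target), 3))  # → 0.583
```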
The full mosaic was classified using a sliding window with 50% overlap and cosine-weighted blending of softmax probabilities. This eliminates tile boundary artifacts — each pixel's classification is informed by multiple overlapping spatial contexts, and the model's most confident predictions (from tile centres) are weighted highest.
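A minimal sketch of the blending step, assuming a Hann (raised-cosine) weight map and per-tile softmax probabilities keyed by their pixel offsets (function and variable names here are hypothetical):

```python
import numpy as np

def cosine_window(size: int) -> np.ndarray:
    """2-D raised-cosine weight map, peaking at the tile centre."""
    w = np.hanning(size)
    return np.outer(w, w)

def blend_predictions(shape, tiles, n_classes=6, tile=256):
    """Blend overlapping softmax tiles, then take the per-pixel argmax.

    `tiles` maps (row, col) offsets to (n_classes, tile, tile) softmax
    arrays from 50%-overlapping windows. Each tile is weighted by the
    cosine window, so centre pixels (where the model sees the most
    context) dominate and tile-boundary seams are suppressed.
    """
    acc = np.zeros((n_classes, *shape))
    weight = np.zeros(shape)
    win = cosine_window(tile)
    for (r, c), probs in tiles.items():
        acc[:, r:r + tile, c:c + tile] += probs * win
        weight[r:r + tile, c:c + tile] += win
    return np.argmax(acc / np.maximum(weight, 1e-8), axis=0)

# One tile whose softmax is certain of class 3 everywhere:
probs = np.zeros((6, 256, 256))
probs[3] = 1.0
result = blend_predictions((256, 256), {(0, 0): probs})
print(result.shape, result[128, 128])
```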
Titan is the primary target for NASA's upcoming Dragonfly mission (launching 2028), which will send a rotorcraft lander to explore Titan's surface. Understanding the global distribution of terrain types is critical for mission planning — identifying safe landing zones, predicting surface conditions, and selecting scientifically interesting targets.
Manual geological mapping is slow, subjective, and doesn't scale. Different experts can disagree on boundaries. An automated model produces consistent, reproducible classifications and can be instantly re-run when new data arrives. While Dragonfly carries cameras and spectrometers rather than SAR, the terrain classification framework demonstrated here — training on expert labels and generalising at pixel level — could be adapted to whatever imagery the mission returns.
The pixel-level resolution of this map also reveals structure that polygon-based maps obscure. Terrain types on Titan don't have clean boundaries — they intergrade, with patches of one type embedded within another. The model captures this complexity in a way that hand-drawn polygons cannot.
The model is only as good as its labels. It was trained on the Lopes 2020 map, which is itself an interpretation — expert, peer-reviewed, and the best available, but still a human judgement call in ambiguous areas. The 0.455 mIoU reflects both model error and label noise.
The "craters" class is essentially unlearnable at 0.3% of the training data. The model cannot reliably identify impact craters. This is a fundamental data limitation, not an architecture problem.
Coverage is incomplete. Cassini only imaged ~24% of Titan's surface at SAR resolution. The black regions are unknown. Some flyby swaths have different imaging geometries and incidence angles, which can affect radar backscatter independent of terrain type.
ImageNet pretraining contributes almost nothing. A randomly initialised encoder achieved 0.348 mIoU vs 0.350 with ImageNet weights — the domain gap between photographs of Earth objects and radar imagery of an alien moon is too large for transfer learning to help meaningfully.