Lunar Crater Detection: Computer Vision in Space

In this article, we will develop a crater detection algorithm (CDA) for an autonomous crater-based optical navigation system for spacecraft in orbit of the Moon. The aim of such a system is to estimate spacecraft position and attitude from crater rims observed on the surface of a cratered celestial body.
The use of optical navigation systems in spacecraft is well-established [1]. Star trackers, a type of optical navigation instrument, have been deployed outside the Earth's atmosphere from as early as 1959 [2]. Star trackers determine spacecraft attitude by capturing images of the surrounding star field, identifying known stars within those images and comparing their positions against a catalogue of celestial objects – from this the precise attitude of the spacecraft can be determined.
Ground-based systems currently used in guidance, navigation and control (GNC) of spacecraft in lunar orbit are becoming overtaxed. Demand for these ground-based systems will continue to grow as the number of spacecraft in orbit of the Moon increases. With an increased focus on a return to the Moon by manned and unmanned spacecraft this decade, methods for autonomous determination of spacecraft state are increasingly important for sustaining a long-term presence in orbit of, and on the surface of, the Moon.
Applying invariant theory, Christian et al. [3] present a mathematically rigorous treatment for the general crater identification problem. They show that by calculating projective invariants from triads of crater rims and matching these against a crater catalogue it is possible to directly estimate spacecraft position and attitude.
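To make the idea concrete, below is a minimal sketch (in Python) of one common way to compute projective invariants for a pair of coplanar conics such as two crater rims. It is an illustration only, not the exact formulation used in [3]: each ellipse is written as a 3×3 conic matrix, normalised to unit determinant, and the traces of A⁻¹B and B⁻¹A are then unchanged by a projective transformation of the image.
import numpy as np

def conic_matrix(xc, yc, a, b, angle_deg):
    # 3x3 conic matrix of an ellipse with centre (xc, yc), semi-axes a, b and rotation angle
    theta = np.radians(angle_deg)
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    Q = R.T @ np.diag([1.0 / a**2, 1.0 / b**2]) @ R
    c = np.array([xc, yc])
    C = np.zeros((3, 3))
    C[:2, :2] = Q
    C[:2, 2] = -Q @ c
    C[2, :2] = -Q @ c
    C[2, 2] = c @ Q @ c - 1.0
    return C / np.cbrt(np.linalg.det(C))  # remove the arbitrary scale

def pair_invariants(A, B):
    # Two quantities preserved under a common projective transformation of both conics
    return np.trace(np.linalg.inv(A) @ B), np.trace(np.linalg.inv(B) @ A)
For a triad of crater rims, invariants computed from its three pairs give a descriptor that can be matched against a pre-computed crater catalogue.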
In this article, we focus on the upstream computer vision task of extracting crater rims from images of the lunar surface. Using THREE.js, we develop an application for generating synthetic images of the lunar surface and present a new segmentation-based CDA. Finally, we discuss future steps to improve the algorithm.
Code for the THREE.js application used for generating synthetic lunar surface images and for the CDA developed in this article is available from their respective GitHub repositories [4, 5].
Crater detection algorithm
There exist many CDAs developed to automate the process of identifying and analyzing lunar surface features. Two well-established and openly available models are DeepMoon [6] and PyCDA (PyCrater Detection Algorithm) [7].
DeepMoon is a TensorFlow-based pipeline for training convolutional neural networks (CNNs) to determine the position and radii of craters in images. PyCDA also uses CNNs for crater detection. Both output pixel coordinates x, y and crater radii r.
Both DeepMoon and PyCDA rely on CNNs to perform crater detection. Training a CNN is computationally expensive and requires labelled training data to be generated first. Additionally, these CDAs only output crater positions and radii; craters observed at low incidence angles are projected as ellipses to the observer and are not sufficiently described by a position and radius alone.
The CDA we develop in this article is distinct from DeepMoon and PyCDA in that it does not rely on CNNs for crater detection. Instead, it is based on image segmentation. Our new CDA is structured as follows:
- Input an image of the lunar surface
- Using a segmentation model, segment the image
- Fit an ellipse to each mask in the segmented image
- Determine if each mask is a crater based on the ratio of ellipse area to segment area
- Output candidate craters to a table with column headers x, y, majorAxis, minorAxis and angle
Generating synthetic images in lunar orbit

Acquiring a dataset of lunar surface images solely from real images of the Moon would be expensive and impractical due to limited access, variable lighting conditions and poor knowledge of observer position and attitude. Hence, developing a simulated environment that can be used to generate synthetic images of the lunar surface is an important first step on the path to developing a CDA.
The generation of synthetic images offers a cost-effective solution for building a database of lunar images at different viewing angles and under various lighting conditions.
By generating images in a simulated environment we benefit from precise knowledge of the camera position and attitude. An autonomous crater-based optical navigation system ultimately aims to estimate these values, so accurate knowledge of them is crucial for assessing the accuracy of the algorithms we develop.
Through the use of synthetic images of the lunar surface, it is also possible to simulate entire spacecraft flight scenarios, providing a holistic testing environment for autonomous crater-based navigation systems.
Many 3D software tools can be used to generate synthetic images of the lunar surface. Here we will build a 3D application using THREE.js that can run in a browser. We will call it threejs_synthetic_moon. With THREE.js it is possible to programmatically construct a 3D scene and add custom controls for navigating it.
To build the application, we first need to source the textures required for rendering the Moon. We need a colour map for colouring the lunar surface, a displacement map for displacing the vertices of the sphere that the textures are projected onto and a normal map for modifying surface illumination (i.e. to simulate shadows).

NASA provides colour and displacement maps in its CGI Moon Kit [8] at 27360×13680 and 23040×11520 resolution respectively. I could not find a high-resolution normal map online, so I decided to go down a rabbit hole of calculating one myself from the displacement map. The interested reader can find the script used for this in the threejs_synthetic_moon repository [4].
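As a rough illustration of the idea (a minimal sketch, not the script in the repository; the filenames, the 8-bit assumption and the strength value are all assumptions), a tangent-space normal map can be derived from a greyscale height map along these lines:
import numpy as np
from PIL import Image

# Load the displacement (height) map and normalise to [0, 1], assuming an 8-bit image
height = np.asarray(Image.open('scaled_moon_displacement.png').convert('F')) / 255.0
strength = 10.0  # exaggerates surface relief; tune by eye

# Gradients of the height field approximate the surface slope in y and x
dy, dx = np.gradient(height)
normals = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# Map components from [-1, 1] into RGB [0, 255] and save
rgb = ((normals + 1.0) * 0.5 * 255).astype(np.uint8)
Image.fromarray(rgb).save('moon_normal_map.png')
Note that the full-resolution NASA maps are very large, so in practice the computation would need to be tiled rather than run on the whole image at once.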
For controls, we need an easy way to navigate around the Moon and modify lighting conditions so that we can generate realistic lunar surface images. dat.gui [9] provides a lightweight graphical user interface that can be used for changing variables in JavaScript applications. Using this, we add lightControls (position and intensity), cameraControls (position and rotation) and meshControls (rotation, displacement and normal scale). With these controls implemented it is possible to quickly navigate around the Moon and take all the photos you want – a kind of lunar safari for impact craters.
The application logic is all contained in the main.js file:
import * as THREE from 'three';
import * as dat from 'dat.gui';
// Helper function to convert spherical coordinates to cartesian coordinates
function sphericalToCartesian(longitude, latitude, radius) {
  const phi = (90 - latitude) * (Math.PI / 180);
  const theta = (360 - longitude) * (Math.PI / 180);
  const x = radius * Math.sin(phi) * Math.cos(theta);
  const y = radius * Math.cos(phi);
  const z = radius * Math.sin(phi) * Math.sin(theta);
  return { x, y, z };
}
// Setup scene, renderer, camera and light
const scene = new THREE.Scene();
const renderer = new THREE.WebGLRenderer({ antialias: true });
// encoding and tone mapping are renderer properties rather than constructor options
renderer.outputEncoding = THREE.sRGBEncoding;
renderer.toneMapping = THREE.ACESFilmicToneMapping;
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 10000);
camera.position.z = 3000;
camera.setFocalLength(35);
let cameraControlsObject = {
  longitude: 0,
  latitude: 0,
  radius: 3000,
  lookAtCenter: true,
  rotationX: 0,
  rotationY: 90,
  rotationZ: 0,
};
const light = new THREE.DirectionalLight(0xffffff, 5)
light.position.set(2000, 0, 2000);
light.castShadow = true;
light.shadow.mapSize.width = 2048;
scene.add(light);
let lightControlsObject = { longitude: 0, latitude: 0, intensity: 5};
// Setup moon mesh
const geometry = new THREE.SphereGeometry(1728.28, 4000, 2000);
const textureLoader = new THREE.TextureLoader();
const texture = textureLoader.load('/moon_texture.png');
const displacement = textureLoader.load('/scaled_moon_displacement.png');
const normal = textureLoader.load('/moon_normal_map.png');
texture.minFilter = THREE.LinearFilter;
displacement.minFilter = THREE.LinearFilter;
normal.minFilter = THREE.LinearFilter;
const scale = 19.87;
const material = new THREE.MeshLambertMaterial({
  color: 0xffffff,
  map: texture,
  displacementMap: displacement,
  displacementScale: scale,
  normalMap: normal,
  normalScale: new THREE.Vector2(1.0, 1.0),
  reflectivity: 0
  // shininess is a MeshPhongMaterial property and is not used by MeshLambertMaterial
});
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
const meshControlsObject = { rotationX: 0, rotationY: 0, rotationZ: 0, displacementScale: scale, normalScale: 1.0 };
// Setup GUI controls
const gui = new dat.GUI();
// Light controls
const lightControls = gui.addFolder('Light Controls');
const lightLongitude = lightControls.add(lightControlsObject, 'longitude', -180, 180).name('Longitude');
const lightLatitude = lightControls.add(lightControlsObject, 'latitude', -90, 90).name('Latitude');
const lightIntensity = lightControls.add(lightControlsObject, 'intensity', 0, 10).name('Intensity');
lightControls.open();
// Camera controls
const cameraControls = gui.addFolder('Camera Controls');
const cameraLongitude = cameraControls.add(cameraControlsObject, 'longitude', -180, 180).name('Longitude');
const cameraLatitude = cameraControls.add(cameraControlsObject, 'latitude', -90, 90).name('Latitude');
const cameraRadius = cameraControls.add(cameraControlsObject, 'radius', 0, 10000).name('Radius');
const cameraLookAtCenter = cameraControls.add(cameraControlsObject, 'lookAtCenter').name('Look At Center');
const cameraRotationX = cameraControls.add(cameraControlsObject, 'rotationX', -180, 180).name('X Rotation');
const cameraRotationY = cameraControls.add(cameraControlsObject, 'rotationY', -180, 180).name('Y Rotation');
const cameraRotationZ = cameraControls.add(cameraControlsObject, 'rotationZ', -180, 180).name('Z Rotation');
cameraControls.open();
// Mesh controls
const meshControls = gui.addFolder('Mesh Controls');
const meshRotationX = meshControls.add(meshControlsObject, 'rotationX', -180, 180).name('X Rotation');
const meshRotationY = meshControls.add(meshControlsObject, 'rotationY', -180, 180).name('Y Rotation');
const meshRotationZ = meshControls.add(meshControlsObject, 'rotationZ', -180, 180).name('Z Rotation');
const displacementScale = meshControls.add(meshControlsObject, 'displacementScale', 0, 2000).name('Displacement Scale');
const normalScale = meshControls.add(meshControlsObject, 'normalScale', 0, 10).name('Normal Scale');
meshControls.open();
// Function to update light controls from GUI sliders
function updateLightControlsFromGUI() {
  const longitude = lightLongitude.getValue();
  const latitude = lightLatitude.getValue();
  const radius = 1;
  light.intensity = lightIntensity.getValue();
  const { x, y, z } = sphericalToCartesian(longitude, latitude, radius);
  light.position.x = x;
  light.position.y = y;
  light.position.z = z;
}
// Function to enable/disable camera rotation controls based on the value of lookAtCenter
function toggleCameraControls(enabled) {
  cameraRotationX.__li.style.pointerEvents = enabled ? 'auto' : 'none';
  cameraRotationY.__li.style.pointerEvents = enabled ? 'auto' : 'none';
  cameraRotationZ.__li.style.pointerEvents = enabled ? 'auto' : 'none';
}
// Function to update camera controls from GUI sliders
function updateCameraPositionControlsFromGUI() {
  const longitude = cameraLongitude.getValue();
  const latitude = cameraLatitude.getValue();
  const radius = cameraRadius.getValue();
  const { x, y, z } = sphericalToCartesian(longitude, latitude, radius);
  camera.position.x = x;
  camera.position.y = y;
  camera.position.z = z;
  const lookAtCenter = cameraLookAtCenter.getValue();
  if (lookAtCenter) {
    camera.lookAt(0, 0, 0);
  }
  toggleCameraControls(!lookAtCenter);
  camera.updateProjectionMatrix();
}
// Function to update camera rotation controls from GUI sliders
function updateCameraRotationControlsFromGUI() {
  const rotationX = cameraRotationX.getValue();
  const rotationY = cameraRotationY.getValue();
  const rotationZ = cameraRotationZ.getValue();
  camera.rotation.x = rotationX * Math.PI / 180;
  camera.rotation.y = rotationY * Math.PI / 180;
  camera.rotation.z = rotationZ * Math.PI / 180;
  camera.updateProjectionMatrix();
}
// Function to update material controls from GUI sliders
function updateMeshControlsFromGUI() {
  const rotationX = meshRotationX.getValue();
  const rotationY = meshRotationY.getValue();
  const rotationZ = meshRotationZ.getValue();
  const normalScaleValue = normalScale.getValue();
  mesh.rotation.x = rotationX * Math.PI / 180;
  mesh.rotation.y = rotationY * Math.PI / 180;
  mesh.rotation.z = rotationZ * Math.PI / 180;
  material.displacementScale = displacementScale.getValue();
  material.normalScale = new THREE.Vector2(normalScaleValue, normalScaleValue);
}
// Event listeners for GUI control changes
lightLongitude.onChange(updateLightControlsFromGUI);
lightLatitude.onChange(updateLightControlsFromGUI);
lightIntensity.onChange(updateLightControlsFromGUI);
cameraLongitude.onChange(updateCameraPositionControlsFromGUI);
cameraLatitude.onChange(updateCameraPositionControlsFromGUI);
cameraRadius.onChange(updateCameraPositionControlsFromGUI);
cameraLookAtCenter.onChange(updateCameraPositionControlsFromGUI);
cameraRotationX.onChange(updateCameraRotationControlsFromGUI);
cameraRotationY.onChange(updateCameraRotationControlsFromGUI);
cameraRotationZ.onChange(updateCameraRotationControlsFromGUI);
meshRotationX.onChange(updateMeshControlsFromGUI);
meshRotationY.onChange(updateMeshControlsFromGUI);
meshRotationZ.onChange(updateMeshControlsFromGUI);
displacementScale.onChange(updateMeshControlsFromGUI);
normalScale.onChange(updateMeshControlsFromGUI);
function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
To run the application locally, first download and generate the textures – see the notes in the GitHub repository [4]. Then, from the threejs_synthetic_moon directory, run npx vite and open localhost:5173 in a browser.
Image segmentation
Image segmentation is the process of dividing a digital image into multiple image segments, or masks. The aim is to derive meaning from an image by identifying the objects within it, and it is a widely used computer vision technique.
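Concretely, each mask can be represented as a boolean array with the same dimensions as the image, where True marks the pixels belonging to one segment. A tiny illustration with hypothetical values:
import numpy as np

# A 4x4 "image" with one segment mask covering a 2x2 object
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

area = mask.sum()                  # number of pixels in the segment (4)
ys, xs = np.nonzero(mask)
centroid = (xs.mean(), ys.mean())  # (x, y) centre of the segment (1.5, 1.5)
This per-pixel representation is what the crater filtering stage operates on later, for example when computing mask areas.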

The Segment Anything Model (SAM) was released by Meta in April 2023 [10]. Trained on a dataset of 1 billion masks and 11 million images [11], SAM is one of the most advanced image segmentation models available today. One of its most exciting features is zero-shot generalisation, which allows the model to identify unfamiliar objects in images without requiring additional training.
Rather than training a model specifically to detect craters in images of the Moon (a computationally expensive task), we instead exploit SAM's zero-shot generalisation to help identify craters.
Finding craters in lunar images
Before we can begin detecting craters in lunar images we first need to segment them. To illustrate the stages of the CDA it is useful to visualise the image data at each stage. For this, we will use the following 1500×1500 pixel image generated using threejs_synthetic_moon:

We will use the SAM model to segment the image. It is convenient to wrap everything we need for segmenting the image into a single class, which we will call SamSegmentor:
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from src.constants import DEVICE
class SamSegmentor:
    def __init__(self, checkpoint_path='src/sam_vit_h_4b8939.pth', model_type='vit_h'):
        self.checkpoint_path = checkpoint_path
        self.model_type = model_type
        self.model = sam_model_registry[self.model_type](checkpoint=self.checkpoint_path).to(device=DEVICE)
        self.mask_generator = SamAutomaticMaskGenerator(self.model)
With SamSegmentor implemented, the first part of the CDA can be written in just half a dozen lines of code. We will use the cv2 package for reading and writing image files and the supervision package for annotating the segmented image:
import argparse
import cv2
import supervision as sv
from src.segmentors import SamSegmentor
def main(input_path):
    # Initialize the segmentor
    segmentor = SamSegmentor()
    # Load the input image
    image = cv2.imread(input_path)
    # Generate masks using the segmentor
    results = segmentor.mask_generator.generate(image)
    detections = sv.Detections.from_sam(sam_result=results)
    annotated_image = sv.MaskAnnotator().annotate(scene=image.copy(), detections=detections)
    # Output the annotated image
    cv2.imwrite(f"{input_path.split('.')[0]}_segmented.png", annotated_image)
Segmenting the example image results in 163 masks. Among these, there is a mix of different types of masks. Many are well-defined craters, some are overlapping craters, some are clusters of craters in a single mask and some are other types of surface features.

Next, we need a way to determine whether a mask is a crater. We want to reject any mask that is not a crater, as well as any mask that contains a cluster of craters, since we cannot reliably fit an ellipse to the crater rims in that case.
To do this, we will fit an ellipse to each mask and reject any mask whose area is greater or smaller, by some threshold, than the area of the fitted ellipse. This way we can quickly construct a list of crater candidates. We implement this in the EllipseFilter class, which takes upper and lower area-ratio thresholds and fits the ellipses via a filter_ellipses method:
import numpy as np
import cv2
import pandas as pd
class EllipseFilter:
    def __init__(self, lower_area_threshold_ratio=0.95, upper_area_threshold_ratio=1.05):
        self.lower_area_threshold_ratio = lower_area_threshold_ratio
        self.upper_area_threshold_ratio = upper_area_threshold_ratio
        self.ellipse_data = []

    def filter_ellipses(self, detections):
        for mask in detections.mask:
            mask_uint8 = mask.astype(np.uint8) * 255
            contours, _ = cv2.findContours(mask_uint8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            # cv2.fitEllipse needs a contour with at least 5 points
            if contours and len(contours[0]) >= 5:
                ellipse = cv2.fitEllipse(contours[0])
                mask_area = np.sum(mask)
                ellipse_area = np.pi * ellipse[1][0] * ellipse[1][1] / 4.0
                area_ratio = mask_area / ellipse_area
                if self.lower_area_threshold_ratio <= area_ratio <= self.upper_area_threshold_ratio:
                    x, y = ellipse[0]
                    major_axis = ellipse[1][0]
                    minor_axis = ellipse[1][1]
                    angle = ellipse[2]
                    self.ellipse_data.append({'x': x, 'y': y, 'majorAxis': major_axis, 'minorAxis': minor_axis, 'angle': angle})
        return pd.DataFrame(self.ellipse_data)
In the main function, we save the list of crater candidates and an image with the overlaid ellipses to file. We can then visualise the final output of the CDA:

The CDA successfully fits 65 ellipses to the 163 masks in the segmented example input image.
Here is the complete code for the CDA:
import argparse
import cv2
import supervision as sv
from src.segmentors import SamSegmentor
from src.filters import EllipseFilter
from src.utils import plot_ellipses_on_image
def main(input_path):
    # Initialize the segmentor and ellipse filter
    segmentor = SamSegmentor()
    ellipseFilter = EllipseFilter()
    # Load the input image
    image = cv2.imread(input_path)
    # Generate masks using the segmentor
    results = segmentor.mask_generator.generate(image)
    detections = sv.Detections.from_sam(sam_result=results)
    annotated_image = sv.MaskAnnotator().annotate(scene=image.copy(), detections=detections)
    # Filter the ellipses
    ellipse_df = ellipseFilter.filter_ellipses(detections)
    # Print the number of masks and ellipses
    print(f"{len(detections.mask)} masks")
    print(f"{len(ellipse_df)} ellipses")
    # Save ellipse_df to a CSV file
    ellipse_df.to_csv(f"{input_path.split('.')[0]}_ellipses.csv")
    # Output the annotated image and image with overlaid ellipses
    cv2.imwrite(f"{input_path.split('.')[0]}_segmented.png", annotated_image)
    plot_ellipses_on_image(image, ellipse_df, f"{input_path.split('.')[0]}_ellipses.png")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('input_path', type=str, help='Path to input image')
    args = parser.parse_args()
    main(args.input_path)
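The plot_ellipses_on_image helper imported from src.utils is not listed above. A minimal sketch of what such a helper could look like, assuming the column names produced by EllipseFilter:
import cv2

def plot_ellipses_on_image(image, ellipse_df, output_path, color=(0, 255, 0), thickness=2):
    # Draw each fitted ellipse on a copy of the image and write it to file
    annotated = image.copy()
    for _, row in ellipse_df.iterrows():
        # cv2.fitEllipse returns full axis lengths and an angle in degrees,
        # which is what cv2.ellipse's rotated-rectangle form expects
        box = ((row['x'], row['y']), (row['majorAxis'], row['minorAxis']), row['angle'])
        cv2.ellipse(annotated, box, color, thickness)
    cv2.imwrite(output_path, annotated)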
Examples
It is not possible to determine the accuracy of the CDA from a single example. The lighting, camera pose and camera position are all factors that impact the accuracy of the CDA. In this section, we present several different examples and assess CDA accuracy using the number of masks, the number of fitted ellipses and the number of good crater candidates.
We define a "good crater candidate" as an ellipse that, on inspection, closely follows the rim of a crater. This definition is subjective, so a conservative approach has been followed when deciding whether an ellipse is a good crater candidate or not.
Each example below shows the original image, the image with the SAM masks overlaid and the image with the fitted ellipses overlaid. So that each image can be recreated, the caption for each example contains the settings used in threejs_synthetic_moon: the light longitude [deg] and latitude [deg], and the camera longitude [deg], latitude [deg], radius from the Moon's centre [km], x rotation [deg], y rotation [deg] and z rotation [deg]. All other settings take their default values.





Under different lighting, camera poses and camera locations, the number of masks detected in the example images ranges between 25 and 182. To quantify the accuracy of the CDA we calculate the percentage of ellipses that are also good crater candidates. For the examples presented, this ranges from 77% (example 3) to 85.5% (example 1).
Summary
In this article, we have developed a CDA for accurately identifying crater rims in lunar surface images. The ultimate goal is to use these detected crater rims as input to an autonomous crater-based navigation system for estimating spacecraft position and attitude in orbit of a cratered celestial body like the Moon.
We also built threejs_synthetic_moon, a web-based 3D application for generating synthetic images of the lunar surface that can be used as input to the CDA.
The CDA utilises a segmentation model to segment lunar images, fits ellipses to the resulting masks, and assesses the suitability of each mask as a crater candidate based on the ratio of mask to ellipse area.
Qualitatively, we observe that the zero-shot, segmentation-based approach does a good job of extracting masks that align with craters in the example images. Masks do sometimes capture clusters of craters or other types of surface feature, but using the fitted-ellipse approach we reject the majority of these "bad" masks.
Initial testing of the algorithm demonstrates promising results, with 77% to 85.5% of fitted ellipses identified as good crater candidates across the four example images presented in this article. Further testing should be carried out to better quantify the accuracy and suitability of the CDA.
There remain several areas in threejs_synthetic_moon and the CDA itself where improvements can be made.
In threejs_synthetic_moon, the textures are scaled down by the browser from 23040×11520 (displacement and normal maps) and 27360×13680 (colour map) to 16384×8192. This appears to be due to a browser/hardware limit on the maximum texture size, and it limits the fidelity of the images we can generate with the application. A solution to this problem is to implement a level-of-detail model so that, as we zoom in, we load local high-detail textures instead of wrapping the entire surface in a single, very large texture.
The CDA itself can be improved in a number of ways:
- The speed of the algorithm can be improved. The segmentation stage is by far the most computationally expensive, but several alternative segmentation models exist that are faster than SAM. Speed is particularly important if the CDA is to be used in a real autonomous crater-based optical navigation system onboard a spacecraft.
- While the segmentation model does a good job of extracting masks that align with craters in the example images, it also frequently misses others. This can be addressed by fine-tuning the segmentation model on masks of craters in lunar images.
- It is desirable to increase the percentage of ellipses identified as good crater candidates. One way of addressing this would be to introduce additional filters for qualifying crater candidates; we could, for example, use a small CNN to classify the image patch around each candidate as crater or not crater (a sketch of such a filter follows this list).
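As an illustration of that last point, here is a sketch of a PyTorch patch classifier; the architecture, patch size and decision threshold are all assumptions, and the model would of course need to be trained on labelled crater patches before it could be used as a filter:
import torch
import torch.nn as nn

class CraterPatchClassifier(nn.Module):
    # Binary classifier: does a fixed-size greyscale patch around a candidate contain a crater?
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: crater vs. not crater
        )

    def forward(self, x):  # x: (batch, 1, 64, 64) patches
        return self.classifier(self.features(x))

# Usage sketch: score a batch of hypothetical candidate patches and keep confident craters
model = CraterPatchClassifier()
patches = torch.rand(8, 1, 64, 64)
probs = torch.sigmoid(model(patches)).squeeze(1)
keep = probs > 0.5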
When I set out to write this article I had the idea of using a segmentation model to drive a CDA. I hadn't planned to also build a full THREE.js application on top of implementing the CDA. I learned a lot along the way by doing so and, hopefully, you can say the same now that you've made it to the end of the article.
Enjoyed reading this article?
Follow and subscribe for more content like this – share it with your network – explore computer vision tasks like the one presented here in some of your own projects.
All images unless otherwise noted are by the author.
References
[1] Owen, W.M. (2011). Methods of optical navigation. JPL Open Repository
[2] Kollsman Instrument Corp. (1970). Spacecraft star trackers. NASA SP-8026
[3] Christian, J. A., Derksen, H., & Watkins, R. (2020). Lunar Crater Identification in Digital Images. arXiv, 2009.01228
[4] GitHub (2024), threejs_synthetic_moon
[5] GitHub (2024), imseg_cda
[6] GitHub (2018), DeepMoon
[7] GitHub (2019), PyCDA
[8] NASA (2024), CGI Moon Kit
[9] GitHub (2022), dat.gui
[10] Meta (2023), Segment Anything Model
[11] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y., Dollár, P., & Girshick, R. (2023). Segment Anything. arXiv, 2304.02643