
Annotation Pipeline

Overview

After running SharkTrack on your videos, you’ll do two things: review the output and then compute MaxN.

  1. Review (clean + label) — Delete false detections and label true detections with a species ID.
  2. Compute MaxN — Generate species-specific MaxN metrics from your reviewed output.

Understand the Output

TL;DR: SharkTrack groups consecutive detections of the same animal into a track (with an ID). For each track, it saves one screenshot (.jpg) that you can review quickly. Your labels (via filenames) are then used to compute MaxN.

Output folder structure

Locate your output directory (default: ./output). It contains:

File / Folder                         Description
internal_results/output.csv           Raw detections (per frame/time)
internal_results/overview.csv         Summary per video (number of tracks found)
internal_results/*/                   One subfolder per video
internal_results/*/<track_id>.jpg     One screenshot per track (this is what you review)
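The layout above can also be walked programmatically when you want a quick inventory of what there is to review. A minimal sketch, using only the documented folder structure (the `list_tracks` helper is hypothetical, not part of SharkTrack):

```python
from pathlib import Path

def list_tracks(output_dir="./output"):
    """Collect the per-track screenshots awaiting review, grouped by video,
    following the documented layout: internal_results/<video>/<track_id>.jpg"""
    results = Path(output_dir) / "internal_results"
    tracks = {}
    for video_dir in sorted(p for p in results.iterdir() if p.is_dir()):
        tracks[video_dir.name] = sorted(f.name for f in video_dir.glob("*.jpg"))
    return tracks
```

This only reads directory names, so it is safe to run before, during, or after your review pass.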

How to read a detection screenshot

In each video folder you will see one screenshot per track. The screenshot shows a bounding box around the shark/ray you should label.

Detection example

The screenshot also shows the video file and timestamp, so you can refer back to the original footage when you’re unsure.


Step 1: Review (Clean + Label)

  1. For each video, open its output subfolder. It contains detection files named {track_id}.jpg.
  2. Scroll through the images and focus on the animal inside the bounding box.
  3. If it’s not a shark/ray, delete the file.
  4. If it is a shark/ray, rename the file from {track_id}.jpg to {track_id}-{species}.jpg.

Example: 5.jpg → 5-great_hammerhead.jpg
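If you are labelling many tracks, the rename convention can be applied from a script instead of the file manager. A minimal sketch, assuming only the {track_id}.jpg → {track_id}-{species}.jpg convention above (the `label_track` helper is hypothetical, not part of SharkTrack):

```python
from pathlib import Path

def label_track(video_dir, track_id, species):
    """Rename {track_id}.jpg to {track_id}-{species}.jpg, as the review
    step requires. Returns the new path, or None if the original file
    is missing (already labelled or deleted as a false detection)."""
    src = Path(video_dir) / f"{track_id}.jpg"
    if not src.exists():
        return None
    dst = src.with_name(f"{track_id}-{species}.jpg")
    src.rename(dst)
    return dst
```

For example, `label_track("output/internal_results/vid1", 5, "great_hammerhead")` performs the rename shown above.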

FAQ

What is a track?

The same elasmobranch appears in multiple consecutive frames. A track is SharkTrack’s “this is the same animal I saw before” group, with an ID.

Why are there multiple animals in the screenshot, but only one bounding box?

SharkTrack saves one screenshot per track, so you will always see only one bounding box. Only label the animal in the bounding box.

Why does this matter?

You only need to label one screenshot per track instead of thousands of frames. The script applies your label to all detections in that track when computing MaxN.
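To make the idea concrete: MaxN for a species is the maximum number of individuals of that species visible in any single frame. A simplified sketch of the computation, using plain (frame, species) pairs as a stand-in for SharkTrack's per-frame output with your track labels applied (the data shape and `max_n` function are illustrative assumptions, not the actual `compute_maxn.py` internals):

```python
from collections import Counter, defaultdict

def max_n(detections):
    """Compute MaxN per species: the maximum number of individuals of
    each species visible in any single frame.
    detections: iterable of (frame, species) pairs."""
    per_frame = defaultdict(Counter)  # frame -> species counts in that frame
    for frame, species in detections:
        per_frame[frame][species] += 1
    result = Counter()
    for counts in per_frame.values():
        for species, n in counts.items():
            result[species] = max(result[species], n)
    return dict(result)
```

So two great hammerheads in the same frame give a MaxN of 2, while the same individual reappearing across many frames still counts as 1.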

One frame isn’t enough to determine species

Each detection image shows the video name/path and timestamp, so you can go back to the original video to confirm the species.

I see the same elasmobranch in multiple detections

The model may split the same shark into two or more consecutive tracks. Classify all of them — this won’t affect MaxN accuracy, it just requires classifying more images.

Pro Tips

  • Do a first pass to remove wrong detections, then assign species labels in a second pass.
  • Unsure about species/validity? Check the text at the bottom of the detection image for the video name and timestamp, then review the original video.
  • Windows shortcuts: F2 to rename, Ctrl+D to delete.
  • macOS shortcuts: Gallery view, Cmd+Delete to remove, Enter to rename.

Collaborating

Want multiple users to annotate? Upload the entire output folder to Google Drive, Dropbox, or OneDrive and perform the cleaning steps there!


Step 2: Compute MaxN

Once you have reviewed all track screenshots, it’s time to generate MaxN.

  1. Open Terminal in the sharktrack folder (the same location used when running the model in the User Guide).
  2. Activate the virtual environment:

Anaconda:

    conda activate sharktrack

Windows:

    venv\Scripts\activate.bat

macOS / Linux:

    source venv/bin/activate

  3. Run the MaxN computation:

    python utils/compute_maxn.py

You will be asked for:

  • the path to your output folder (the one you just reviewed)
  • the path to the folder containing the original videos (optional, used to compute visualisations)

A new folder called analysed will appear, containing the maxn.csv file as well as subfolders with MaxN visualisations for each video.

Example MaxN Visualisation


Need Help?

  • Issues: Submit on GitHub
  • Questions: Email us
  • Contributions: Pull requests, issues, or suggestions welcome — email

Now Jump to the Code!