
Building Custom Images

Scion agents run inside container images that bundle an LLM harness (Claude, Gemini, etc.) with the Scion toolchain. By default, Scion uses pre-built images from the upstream registry. This guide shows how to build your own images and configure Scion to use them.

Building your own images is useful for:

  • Self-hosted registries: Push images to a registry you control (GHCR, Artifact Registry, ECR, etc.).
  • Pinned versions: Tag and version images to match your deployment lifecycle.
  • Custom modifications: Add tools, certificates, or configurations to the base images.

Scion images are built in layers:

```
core-base                  System dependencies (Go, Node, Python, Git)
└── scion-base             Scion CLI, sciontool binary, scion user, entrypoint
    ├── scion-claude       Claude Code harness
    ├── scion-gemini       Gemini CLI harness
    ├── scion-opencode     OpenCode harness
    ├── scion-codex        Codex harness
    └── scion-hub          Scion hub server
```

The core-base layer changes infrequently, but needs to be built at least once as it is a prerequisite for all other layers. Most rebuilds only need scion-base, the harness layers, and the hub layer (the common build target).

For security and compatibility across runtimes (especially Kubernetes), Scion agents are required to run as a non-root user.

  • User: The base images create a scion user.
  • UID: The user must have UID 1000.
  • Permissions: Ensure your custom images do not require root privileges at runtime and that any added files or directories are accessible by the scion user. The home directory structure (/home/scion) and environment variables (HOME, USER, LOGNAME) are automatically injected by the runtime.
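For example, a custom layer on top of scion-base might escalate to root only for installation steps and then return to the scion user. This is an illustrative sketch, not a shipped Dockerfile: the scion-base:latest tag, the my-ca.crt file, the jq package, and the Debian-style apt-get/update-ca-certificates commands are all assumptions about your base image.

```dockerfile
# Hypothetical extension of the scion-base image (assumes a Debian-based base).
FROM scion-base:latest

# Escalate only for installation steps.
USER root
COPY my-ca.crt /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates \
    && apt-get update \
    && apt-get install -y --no-install-recommends jq \
    && rm -rf /var/lib/apt/lists/*

# Return to the non-root scion user (UID 1000) so the image satisfies
# Scion's runtime requirements.
USER scion
```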

A single orchestrator script — image-build/scripts/build-images.sh — owns the build DAG (which images depend on which, in what order, with which tags). The execution backend is selected with --builder. Three backends ship today:

| Builder | Backend | Multi-arch | Push behavior |
|---|---|---|---|
| local-docker (default) | docker buildx | Yes (auto-promotes to --push) | Honors --push; --load otherwise |
| local-podman | podman build | Single-arch by default; multi-arch errors out (manual QEMU setup required) | Honors --push; built images live in the local store automatically |
| cloud-build | gcloud builds submit against a static cloudbuild-*.yaml | Always linux/amd64 + linux/arm64 (server-side) | Always pushes |

The orchestrator computes tags, threads BASE_IMAGE between layers, and dispatches to the selected builder. Switching backends is purely a --builder flag change — target names and other flags are uniform.

Build all images locally. Once core-base has been built, rebuilds can often use the default common build target.

```sh
# Build all layers locally without pushing — bare tags
# (scion-claude:latest, etc.) land in your local docker engine.
image-build/scripts/build-images.sh --target all

# Or build and push to your registry
image-build/scripts/build-images.sh --registry ghcr.io/myorg --push --target all

# When pushing, configure Scion to use them
scion config set image_registry ghcr.io/myorg
```

--registry is optional for local-only builds. Omit it and the orchestrator tags images with bare names that stay in your local image store. Supply it (without --push) if you’d rather have fully-qualified tags locally — nothing leaves the machine until you docker push separately. --registry becomes required as soon as you pass --push or --builder cloud-build.

```sh
# Single-arch local build, no registry needed
image-build/scripts/build-images.sh --builder local-podman --target all

# Or push to a registry
image-build/scripts/build-images.sh \
  --builder local-podman \
  --registry quay.io/myorg \
  --push
```

Multi-arch Podman builds require manual QEMU binfmt setup. Until that is in place, passing --platform linux/amd64,linux/arm64 to local-podman exits with an actionable error.

If your project is hosted on GitHub:

  1. Fork the repo (or use it as a template).
  2. Go to Actions > Build Scion Images > Run workflow.
  3. Enter ghcr.io/<your-username> as the registry.
  4. Wait for the build to complete.
  5. Configure Scion:
     ```sh
     scion config set image_registry ghcr.io/<your-username>
     ```

The workflow shells out to build-images.sh --builder local-docker after docker/setup-buildx-action, so it shares all the orchestration logic with local builds. It is also available as a reusable workflow via workflow_call for downstream repos.

For GCP-based workflows:

```sh
# One-time setup: enable APIs, create Artifact Registry repo, grant permissions
image-build/scripts/setup-cloud-build.sh --project my-gcp-project

# Submit a build
image-build/scripts/build-images.sh \
  --builder cloud-build \
  --registry us-central1-docker.pkg.dev/my-gcp-project/scion
```

Then point Scion at the registry:

```sh
scion config set image_registry us-central1-docker.pkg.dev/my-gcp-project/scion
```

The image_registry setting tells Scion to pull images from your registry instead of the upstream default. It rewrites the registry prefix of all standard harness images (those named scion-<harness>) while preserving the image name and tag.

When image_registry is set, Scion transforms the default image reference:

| Default Image | image_registry | Resolved Image |
|---|---|---|
| us-central1-docker.pkg.dev/.../scion-claude:latest | ghcr.io/myorg | ghcr.io/myorg/scion-claude:latest |
| us-central1-docker.pkg.dev/.../scion-gemini:latest | ghcr.io/myorg | ghcr.io/myorg/scion-gemini:latest |

Globally (applies to all projects):

```sh
scion config set image_registry ghcr.io/myorg
```

Or edit ~/.scion/settings.yaml directly:

```yaml
schema_version: "1"
image_registry: "ghcr.io/myorg"
```

Per-profile (different registries for different environments):

```yaml
profiles:
  local:
    runtime: docker
    image_registry: "ghcr.io/myorg"
  staging:
    runtime: kubernetes
    image_registry: "us-central1-docker.pkg.dev/myproject/staging"
```

Profile-level image_registry takes precedence over the top-level setting.

The image_registry setting is the lowest-priority way to configure images. Explicit overrides always win:

  1. CLI --image flag (highest priority)
  2. Template scion-agent.yaml image field
  3. Profile harness_overrides image field
  4. image_registry rewrite (lowest priority)

If any higher-priority override specifies a full image path, image_registry does not apply to that agent.
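The precedence chain can be modeled as a first-match lookup: the first explicit override wins, and the image_registry rewrite applies only when no override names a full image path. This Python sketch is illustrative; the function and parameter names are invented, and the real resolution logic lives inside Scion.

```python
def resolve_image(cli_image, template_image, profile_override,
                  default_image, image_registry):
    """Return the image an agent would use, checking overrides from
    highest to lowest priority."""
    for explicit in (cli_image, template_image, profile_override):
        if explicit:
            return explicit  # explicit overrides always win
    if image_registry:
        # Lowest priority: rewrite the default image's registry prefix.
        name_and_tag = default_image.rsplit("/", 1)[-1]
        return f"{image_registry}/{name_and_tag}"
    return default_image

# --image beats everything else:
print(resolve_image("ghcr.io/me/custom:dev", None, None,
                    "upstream/scion-claude:latest",
                    "ghcr.io/myorg"))  # ghcr.io/me/custom:dev
```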

The image-build/scripts/build-images.sh orchestrator supports the following options:

| Flag | Description | Default |
|---|---|---|
| --registry <path> | Target registry path (e.g., ghcr.io/myorg). Required when --push is set or with --builder cloud-build. When omitted for a local-only build, images are tagged with bare names (e.g., scion-claude:latest) and stay in the local store. | (none) |
| --builder <name> | Backend: local-docker, local-podman, or cloud-build. | local-docker |
| --target <target> | Build target (see below). | common |
| --tag <tag> | Mutable image tag. The :<short-sha> tag is always added when in a git repo. | latest |
| --platform <plat> | Target platform(s). Use all for linux/amd64,linux/arm64. Ignored by cloud-build. | builder's native arch |
| --push | Push images after building. Auto-enabled for multi-arch local builds. Ignored by cloud-build (always pushes). | build only |
| --dry-run | Print the resolved steps and the exact builder commands without executing. | off |

Targets resolve to an ordered list of step IDs (one step per image):

| Target | What It Builds | Notes |
|---|---|---|
| core-base | core-base | Foundation tools layer. |
| scion-base | scion-base | Adds sciontool. Reuses existing core-base:<tag>. |
| harnesses | scion-claude, scion-gemini, scion-opencode, scion-codex | Reuses existing scion-base:<tag>. |
| hub | scion-hub | Hub server image. Reuses existing scion-base:<tag>. |
| common (default) | scion-base + harnesses + hub | Skips core-base. Most common rebuild. |
| all | Full DAG | Rebuilds everything from core-base. |
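The target resolution is essentially a static table plus list concatenation. A sketch of the mapping implied above; the dictionary layout is illustrative, not the script's actual data structure.

```python
# Ordered step IDs, one per image, as implied by the targets table.
HARNESSES = ["scion-claude", "scion-gemini", "scion-opencode", "scion-codex"]

TARGETS = {
    "core-base":  ["core-base"],
    "scion-base": ["scion-base"],
    "harnesses":  HARNESSES,
    "hub":        ["scion-hub"],
    # 'common' skips core-base; 'all' prepends it to rebuild the full DAG.
    "common":     ["scion-base"] + HARNESSES + ["scion-hub"],
}
TARGETS["all"] = ["core-base"] + TARGETS["common"]

print(TARGETS["common"])
# ['scion-base', 'scion-claude', 'scion-gemini', 'scion-opencode', 'scion-codex', 'scion-hub']
```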

Every image is tagged with both :<tag> (controlled by --tag, defaults to latest) and :<short-sha> (computed once from git rev-parse --short HEAD). When no SHA is available (e.g. running outside a git working tree), only the mutable tag is emitted.

When two steps in the same run depend on each other, the orchestrator threads BASE_IMAGE=...:<short-sha> so chained builds are immune to concurrent overwrites of :latest. Standalone targets (e.g. --target harnesses on its own) reference the parent image as :<tag>.
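The tag computation described above can be sketched as follows. Illustrative only: the function name is invented, but the behavior mirrors the text (the mutable tag is always emitted, the short SHA only inside a git working tree).

```python
import subprocess

def compute_tags(mutable_tag: str = "latest") -> list[str]:
    """Return the tags applied to every image: the mutable tag plus
    :<short-sha> when a git working tree is available."""
    tags = [mutable_tag]
    try:
        sha = subprocess.run(
            ["git", "rev-parse", "--short", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        tags.append(sha)
    except (subprocess.CalledProcessError, FileNotFoundError):
        pass  # outside a git working tree: only the mutable tag is emitted
    return tags
```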

The orchestrator and builders assume the caller is already authenticated to the target registry (via docker login, podman login, gcloud auth configure-docker, etc.) and to any required cloud APIs. No login steps are performed inside the script.

```sh
# Full rebuild for all platforms, pushed to GHCR
image-build/scripts/build-images.sh \
  --registry ghcr.io/myorg \
  --target all \
  --platform all \
  --push

# Build only harness images with a specific tag
image-build/scripts/build-images.sh \
  --registry ghcr.io/myorg \
  --target harnesses \
  --tag v1.2.0 \
  --push

# Local build for testing (no push, current architecture only, bare tags)
image-build/scripts/build-images.sh --target all

# Preview what would run, without executing anything
image-build/scripts/build-images.sh \
  --registry ghcr.io/myorg \
  --target all \
  --platform all \
  --dry-run

# Submit the same target DAG to Cloud Build
image-build/scripts/build-images.sh \
  --builder cloud-build \
  --registry us-central1-docker.pkg.dev/myproject/scion \
  --target all
```
# Submit the same target DAG to Cloud Build
image-build/scripts/build-images.sh \
--builder cloud-build \
--registry us-central1-docker.pkg.dev/myproject/scion \
--target all

The workflow at .github/workflows/build-images.yml can be used in two ways:

Run it from the GitHub Actions UI with inputs for registry, target, tag, and platform.

Call it from your own workflows in downstream repos:

```yaml
jobs:
  build-images:
    uses: google/scion/.github/workflows/build-images.yml@main
    with:
      registry: ghcr.io/myorg
      target: common
      tag: latest
      platform: all
```

The workflow is a runner, not a builder — it shells out to build-images.sh --builder local-docker and shares the same Dockerfiles and orchestration as a local build.

The cloud-build builder maps each --target to a static YAML file in image-build/:

| Target | Config file |
|---|---|
| all | cloudbuild.yaml |
| common | cloudbuild-common.yaml |
| core-base | cloudbuild-core-base.yaml |
| scion-base | cloudbuild-scion-base.yaml |
| harnesses | cloudbuild-harnesses.yaml |
| hub | cloudbuild-hub.yaml |

These YAMLs reference $_TAG, $_SHORT_SHA, $_COMMIT_SHA, and $_REGISTRY substitutions, all forwarded by the orchestrator. _TAG defaults to latest in each YAML’s substitutions: block, preserving the prior behavior when --tag is omitted.
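A hypothetical excerpt showing the shape of such a config. The build step here is invented for illustration; only the substitution names ($_TAG, $_SHORT_SHA, $_COMMIT_SHA, $_REGISTRY) and the latest default come from this document.

```yaml
# Illustrative cloudbuild-*.yaml fragment; the step shown is hypothetical.
substitutions:
  _TAG: "latest"   # default when --tag is omitted
steps:
  - name: "gcr.io/cloud-builders/docker"
    args:
      - "build"
      - "--tag=${_REGISTRY}/scion-base:${_TAG}"
      - "--tag=${_REGISTRY}/scion-base:${_SHORT_SHA}"
      - "."
```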

Run the one-time setup script to configure your GCP project:

```sh
image-build/scripts/setup-cloud-build.sh --project my-gcp-project
```

This script:

  • Enables the Cloud Build and Artifact Registry APIs.
  • Creates an Artifact Registry repository named scion.
  • Grants Cloud Build the necessary IAM permissions.