
Devcontainer Fragments

If you are using devcontainers outside of VSCode, use @devcontainer, my simple CLI wrapper around npx @devcontainers/cli that provides better ergonomics.

Some of the features this gives you:

  • You don’t need to manually pass --workspace-folder.
  • It auto-installs dotfiles.
  • It supports custom shells (fish).
  • It properly passes the TERM environment variable, for correct colors and other terminal behaviors.
  • It has rudimentary “stop” and “down” subcommands (a basic feature request since 2023).

If you don’t set a memory limit on your containers, they can take down the whole system.

devcontainer.json
"runArgs": [ "--memory=16G" ]

When creating a devcontainer, you will often need to run a script to finish setting up the repository. There are two main ways you can go about this.

  1. Use a lifecycle script. This is the easiest way.
  2. Use a custom Dockerfile. This is more complex.

This is the easiest way to run a few commands after loading a prebuilt devcontainer image. It’s my recommended method when you are creating a devcontainer that won’t be prebuilt. In this scenario, onCreateCommand can take either an inline command or the path to a script in the workspace directory. For example, if you have .devcontainer/devcontainer.json and .devcontainer/setup.sh:

.devcontainer/devcontainer.json
"onCreateCommand": {
  "local": ".devcontainer/setup.sh"
}
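For illustration, a hypothetical .devcontainer/setup.sh might look like the following. The env-file step is a placeholder assumption; substitute your project's real setup commands.

.devcontainer/setup.sh
```shell
#!/bin/sh
# Hypothetical setup script referenced by "onCreateCommand".
set -eu

# Example step: seed a local env file from a template, if the project ships one.
if [ -f .env.example ] && [ ! -f .env ]; then
  cp .env.example .env
fi

echo "devcontainer setup complete"
```

Remember to mark the script executable (chmod +x .devcontainer/setup.sh) so the lifecycle hook can run it.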

A quick note about lifecycle scripts: each lifecycle event runs all of its scripts in parallel. You can take advantage of this by splitting up your script; for example, you can run yarn install and bundle install as separate scripts:

.devcontainer/devcontainer.json
"onCreateCommand": {
  "yarn": "yarn install",
  "bundle": "bundle install"
}

I don’t recommend bothering with a custom Dockerfile unless you are planning to prebuild your devcontainer to share with a team. Here’s a basic configuration for a custom Dockerfile, given .devcontainer/devcontainer.json and .devcontainer/Dockerfile.

.devcontainer/devcontainer.json
// Remove the "image" key and replace it with this
"build": {
  "context": "..",
  "dockerfile": "Dockerfile"
}
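A matching .devcontainer/Dockerfile could then look like the sketch below. The base image and the extra package are placeholder assumptions; bake in whatever your project actually needs.

.devcontainer/Dockerfile
```dockerfile
# Hypothetical Dockerfile; pick whatever base image fits your stack.
FROM mcr.microsoft.com/devcontainers/base:ubuntu

# Example: bake an extra OS package into the image so it survives rebuilds.
RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*
```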

If you don’t want to use a devcontainer image as your base, that’s fine, but you’ll probably want to use the “common-utils” feature instead:

devcontainer.json
"features": {
  "ghcr.io/devcontainers/features/common-utils:2": {
    "installZsh": "true",
    "username": "vscode",
    "userUid": "1000",
    "userGid": "1000",
    "upgradePackages": "true"
  },
  "ghcr.io/devcontainers/features/git:1": {
    "version": "latest",
    "ppa": "false"
  }
},
"remoteUser": "vscode",

When you are using a devcontainer that requires other containers (e.g. Postgres, Redis), you have three options.

  1. Use docker-in-docker. Simplest, but provides no security against malicious code.
  2. Use Docker Compose. Secure, but does not allow building/running new images in the container.
  3. Use Sysbox. Allows full Docker usage while still being secure, but has a more complicated setup.

The simplest solution is just to use one of the publicly available docker-in-docker features.

Advantages:

  • Easy, configured entirely from devcontainer.json with no additional dependencies.

Disadvantages:

  • Runs as a privileged container, so it provides no protection against malicious code.

There are two variants of this method:

  • docker-in-docker: all containers, networks, and images will be scoped to the devcontainer, so you can re-use container and network names in different devcontainers without conflicts.

    devcontainer.json
    "features": {
      "ghcr.io/devcontainers/features/docker-in-docker:2": {}
    }
  • docker-outside-of-docker: just gives you the same docker instance as on the host system, so docker ps will show all containers running on the entire system.

    devcontainer.json
    "features": {
      "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {}
    }

In general, if you’re going to use one of these methods, use docker-outside-of-docker if you’re not worried about conflicts with container and network names. It lets you manage all your Docker images in one place and re-use downloaded images across container rebuilds. If you are worried about conflicts, then prefer docker-in-docker.

If you are creating a devcontainer for a specific project which requires external services, the recommended way to set them up is using Docker Compose.

Advantages:

  • Devcontainer CLI / VSCode will take care of starting the containers for you.
  • Container images are stored on the host, so you don’t need to re-download them when rebuilding the devcontainer.
  • No privileged containers, so it’s safe to --dangerously-skip-permissions [1] and run untrusted code.

Disadvantages:

  • Docker is not available in the devcontainer, so you have to manage the other services through the host system.

devcontainer.json
// Remove the "image" key and replace it with these
"dockerComposeFile": "docker-compose.yml",
"service": "app"

The referenced docker-compose.yml is just a normal Docker Compose file, so you can set any properties that you like; the devcontainer.json snippet above shows only the required keys.
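For reference, a minimal docker-compose.yml to pair with that devcontainer.json might look like the sketch below. The service names, images, and mount path are placeholder assumptions; the app service name must match the "service" key in devcontainer.json.

docker-compose.yml
```yaml
# Hypothetical compose file; adjust services and images to your project.
services:
  app:
    # Any image works here; this is the generic devcontainers base image.
    image: mcr.microsoft.com/devcontainers/base:ubuntu
    volumes:
      # Mount the repository into the container (path is an assumption).
      - ..:/workspaces/app:cached
    # Keep the container alive so the devcontainer tooling can attach to it.
    command: sleep infinity
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
```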

Sysbox is an alternative container runtime (the part of Docker that actually runs containers). It makes your containers behave more like virtual machines, improving isolation and enabling them to run more complex workloads (for example, Docker).

Advantages:

  • Full access to Docker inside the container.
  • Full isolation from the host system.

Disadvantage:

  • Requires installing additional software on the host machine and configuring devcontainer.json.

To install Sysbox, you can read the fine manual, but the following script works on Ubuntu 24.04 and Sysbox 0.6.7.

Terminal window
# Download and validate the appropriate version from GitHub
wget https://downloads.nestybox.com/sysbox/releases/v0.6.7/sysbox-ce_0.6.7-0.linux_amd64.deb
echo 'b7ac389e5a19592cadf16e0ca30e40919516128f6e1b7f99e1cb4ff64554172e  sysbox-ce_0.6.7-0.linux_amd64.deb' | sha256sum -c
# The easiest way to install is to delete all existing containers and
# recreate them later
docker rm $(docker ps -a -q) -f
# Install the package (jq is a dependency of the installer)
sudo apt-get install jq
sudo apt-get install ./sysbox-ce_0.6.7-0.linux_amd64.deb
# Verify that it's running
sudo systemctl status sysbox -n20

Then to set up the container: Sysbox supports a variety of configurations, but the simplest (just Docker, no systemd) could look like this:

devcontainer.json
"image": "docker.io/nestybox/ubuntu-noble-docker",
"features": {
  "ghcr.io/devcontainers/features/common-utils:2": {
    "installZsh": "false",
    "upgradePackages": "false"
  }
},
"runArgs": [ "--runtime=sysbox-runc" ],
"remoteUser": "ubuntu",
"postStartCommand": {
  "dockerd": "sudo -s sh -c 'dockerd -G ubuntu > /var/log/dockerd.log 2>&1 &'"
},

This configuration uses Nestybox’s Ubuntu 24.04 image with Docker preinstalled, runs the container using Sysbox, and starts the Docker daemon when the container starts.

  1. With --dangerously-skip-permissions, and with coding agents in general, any secret stored inside the container can still be exfiltrated using prompt injections, but the rest of your computer is safe.