
Hull Documentation

Complete reference for the hull container runtime. CLI commands, manifest specification, security model, networking, and integration guides.

Overview

Hull is a daemonless Linux container runtime written in Zig. It compiles to a single ~3 MB static-musl binary with zero runtime dependencies. No dockerd, no containerd, no shim process. Each hull run call forks the workload directly and exits.

Hull enforces 7 independent security layers: user namespaces, PID namespaces, network namespaces, mount namespaces with pivot_root, cgroups v2, Landlock LSM, and seccomp-bpf. Failure of one layer does not disable the others. A workload that escapes seccomp still hits Landlock. A workload that bypasses Landlock still sees an isolated PID tree and an empty /proc.

Hull ships 6 curated seccomp profiles (default, webapp, node, dotnet, beam, java), 3 network modes (none, host, bridge), and supports rootless execution via --rootless. Manifests are plain JSON with 3 required fields.


Installation

Hull is a single binary. Download it and run.

Install
curl -fsSL https://hull.getmentat.run/install.sh | sh

# or manually:
curl -fsSL https://hull.getmentat.run/releases/hull-x86_64 -o /usr/local/bin/hull
chmod +x /usr/local/bin/hull
hull version

CLI Reference

Eight commands. No configuration files, no YAML, no TOML. Everything is driven by JSON manifests passed to hull run.

hull run [--rootless] <manifest>
    Start a container from a JSON manifest. Forks the workload and exits. With --rootless, maps the host uid into a user namespace.
hull ps
    List running containers with PID, uptime (seconds), and argv.
hull stop <name>
    Graceful shutdown. Sends SIGTERM, waits for exit.
hull kill <name>
    Immediate kill. Sends SIGKILL, no grace period.
hull exec <name> <cmd...>
    Run a command inside a running container's namespaces (nsenter).
hull logs <name>
    Print captured stdout and stderr from the container.
hull inspect <name>
    Show container status, cgroup limits and usage, namespace inodes, mount points.
hull version
    Print hull version string and build info.

Exit Codes

0    Success
1    Usage error (bad arguments, missing manifest)
2    Runtime error (namespace setup, cgroup write, network)
3    Manifest error (invalid JSON, missing required fields)
127  Exec failed (binary not found in rootfs or permission denied)

On seccomp violation (SIGSYS), hull reads dmesg and prints the blocked syscall number so you can identify which syscall needs to be added to the profile.
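A caller can branch on these codes directly. A minimal sketch of the mapping in Python (the classify_exit helper is illustrative, not part of hull):

```python
# Documented hull exit codes mapped to human-readable categories.
# classify_exit is an illustrative helper, not part of hull itself.
EXIT_CODES = {
    0: "success",
    1: "usage error",
    2: "runtime error",
    3: "manifest error",
    127: "exec failed",
}

def classify_exit(code: int) -> str:
    """Return the documented meaning of a hull exit code."""
    return EXIT_CODES.get(code, f"unknown exit code {code}")
```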


Manifest Specification

A hull manifest is a single JSON file. Three fields are required; everything else has sensible defaults.

Required Fields

name (string)
    Container name. 1-64 characters, alphanumeric plus hyphens.
rootfs (string)
    Absolute path to rootfs directory or .tar.gz archive.
argv (string[])
    Command and arguments. First element is the executable path.

Optional Fields

env (string[], default [])
    Environment variables in KEY=VALUE format.
profile (string, default "default")
    Seccomp profile: default, webapp, node, dotnet, beam, java.
network (string, default "none")
    Network mode: none, host, bridge.
hostname (string, defaults to name)
    Hostname inside the container. Defaults to the container name.
cwd (string, default "/")
    Working directory for the process inside the container.
limits.memory_mb (number, default 256)
    Memory limit in megabytes. Kernel-enforced via cgroups v2.
limits.cpu (number, default 1.0)
    CPU limit as a fraction of one core (e.g. 0.5, 2.0).
limits.pids (number, default 128)
    Maximum number of processes. Prevents fork bombs.
mounts (object[], default [])
    Bind mounts. Each object has host, container, and readonly fields.
mounts[].host (string)
    Absolute path on the host to bind mount.
mounts[].container (string)
    Absolute path inside the container for the mount target.
mounts[].readonly (boolean, default false)
    Whether the mount is read-only.
bridge.name (string, default "hull0")
    Name of the bridge device on the host.
bridge.subnet (string, default "10.88.0.0/24")
    Subnet for IP allocation.
bridge.ip (string, auto-allocated)
    Static IP for the container. Auto-allocated if omitted.
bridge.mtu (number, default 1500)
    MTU for the veth pair.
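The required/optional split above can be encoded in a small builder. A sketch, assuming only the fields documented here (make_manifest is a hypothetical helper, not part of hull):

```python
import copy
import re

REQUIRED = ("name", "rootfs", "argv")
DEFAULTS = {
    "env": [],
    "profile": "default",
    "network": "none",
    "cwd": "/",
    "limits": {"memory_mb": 256, "cpu": 1.0, "pids": 128},
}

def make_manifest(name, rootfs, argv, **optional):
    """Build a hull manifest dict, applying the documented defaults.

    Illustrative only: mirrors the field table above, not hull's parser."""
    if not re.fullmatch(r"[A-Za-z0-9-]{1,64}", name):
        raise ValueError("name must be 1-64 alphanumeric/hyphen characters")
    manifest = {"name": name, "rootfs": rootfs, "argv": list(argv)}
    # hostname defaults to the container name.
    manifest["hostname"] = optional.pop("hostname", name)
    for key, default in DEFAULTS.items():
        manifest[key] = optional.pop(key, copy.deepcopy(default))
    manifest.update(optional)  # mounts, bridge, ...
    return manifest
```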

Manifest Examples

Web Server (Node profile, bridge network)

A typical web application with bridge networking for internet access and the node seccomp profile for a libuv-based runtime.

webapp.json
{
  "name": "webapp",
  "rootfs": "/var/lib/hull/rootfs/webapp",
  "argv": ["/app/server", "--port", "3000"],
  "env": [
    "PORT=3000",
    "NODE_ENV=production",
    "HOST=app.example.com"
  ],
  "profile": "node",
  "network": "bridge",
  "hostname": "webapp",
  "cwd": "/app",
  "limits": {
    "memory_mb": 512,
    "cpu": 2.0,
    "pids": 256
  }
}

API Server (default profile, host network)

A statically compiled API server that binds directly to host ports. Host networking for zero overhead. Default seccomp profile for a typical Rust/Go/Zig binary.

api-server.json
{
  "name": "api-server",
  "rootfs": "/var/lib/hull/rootfs/api-server",
  "argv": ["/usr/local/bin/myapp", "--bind", "0.0.0.0:8080"],
  "env": [
    "RUST_LOG=info",
    "DATABASE_URL=postgres://localhost:5432/mydb"
  ],
  "profile": "default",
  "network": "host",
  "hostname": "api-server",
  "limits": {
    "memory_mb": 1024,
    "cpu": 4.0,
    "pids": 64
  }
}

Background Worker (default profile, no network)

A pure-compute worker with no network access. Loopback only. Reads work from a bind-mounted directory and writes results back.

worker.json
{
  "name": "worker",
  "rootfs": "/var/lib/hull/rootfs/worker",
  "argv": ["/app/worker", "--queue", "/data/jobs"],
  "env": [
    "WORKER_THREADS=4",
    "LOG_LEVEL=warn"
  ],
  "profile": "default",
  "network": "none",
  "hostname": "worker",
  "cwd": "/app",
  "limits": {
    "memory_mb": 2048,
    "cpu": 2.0,
    "pids": 32
  },
  "mounts": [
    {
      "host": "/srv/data/jobs",
      "container": "/data/jobs",
      "readonly": false
    },
    {
      "host": "/srv/data/config",
      "container": "/etc/worker",
      "readonly": true
    }
  ]
}

Seccomp Profiles

Hull ships 6 curated seccomp-bpf profiles. Each is an allowlist: syscalls not on the list trigger KILL_PROCESS (not EPERM like Docker). The violation is logged and hull reads dmesg to report the blocked syscall number.

default (122 syscalls)
    Targets: Rust musl, Zig, Go, shell scripts.
    I/O + net + process management + shell pipelines. The baseline for single-binary servers. Excludes io_uring on purpose (CVE history).
webapp (default + 3)
    Targets: Node.js 22+, Next.js SSR, modern userspace runtimes.
    Default plus the io_uring trio (io_uring_setup/enter/register). Opt-in because of recent io_uring CVEs (CVE-2023-21400, CVE-2024-0582, CVE-2024-1085).
node (32 syscalls)
    Targets: Node.js, Deno, Bun (libuv).
    Tight profile: epoll, eventfd, signalfd, timerfd -- the libuv core. No legacy syscalls, no file creation beyond openat.
dotnet (36 syscalls)
    Targets: .NET 8/9 (CoreCLR, NativeAOT).
    select/pselect6 + signalfd4 + memfd_create (JIT code staging) + tgkill (pthread signals). Minimal.
beam (177 syscalls)
    Targets: Elixir, Erlang, Phoenix (BEAM VM).
    Default + 55 extras: timerfd, signalfd, inotify, memfd_create, epoll_create, legacy file ops (mkdir/unlink/chmod/chown).
java (permissive)
    Targets: OpenJDK 8/11/17/21, Mirth Connect, install4j-packaged JVM apps.
    Curated full table -- JVM workloads exercise a wide surface (signal-coordinated GC, JNI, JIT, hsperfdata IPC). Per-syscall iteration is O(N) failures, so this profile permits the table; seccomp still blocks anything outside it (kernel module ops, ptrace, kexec).

Blocked in All Profiles

The following dangerous syscalls are blocked in every profile, regardless of workload type:

ptrace, process_vm_readv, process_vm_writev, bpf, add_key, keyctl, request_key, userfaultfd, kexec_load, kexec_file_load, init_module, finit_module, delete_module

Both x86_64 and aarch64 architectures are supported.
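On a KILL_PROCESS violation, hull parses the kernel's seccomp audit line from dmesg to recover the blocked syscall number. A parsing sketch, assuming the standard kernel audit format (syscall=NN); the helper name is ours:

```python
import re

def blocked_syscall(dmesg_line: str):
    """Extract the blocked syscall number from a seccomp audit line, e.g.
    '... comm="myapp" sig=31 arch=c000003e syscall=41 ...'.

    Returns None if the line is not a seccomp audit record."""
    m = re.search(r"\bsyscall=(\d+)", dmesg_line)
    return int(m.group(1)) if m else None
```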


Security Layers

Hull enforces 7 independent security layers. Failure of one layer does not disable the others.

1. User namespace (CLONE_NEWUSER)
The container process thinks it is running as uid 0, but the host sees an unprivileged uid. Enabled via --rootless. The parent process writes uid_map and gid_map via the fork-pipe dance before the child calls execve.
2. PID namespace (CLONE_NEWPID)
Isolated PID tree. The workload runs as PID 1 inside its namespace. It cannot see or signal any host process. Bridge mode uses a double-fork to preserve PID isolation across the network setup.
3. Network namespace (CLONE_NEWNET)
Three modes: none (loopback only, no route out), host (shared network stack with the host), bridge (veth pair connected to hull0 bridge with nftables masquerade for internet access).
4. Mount namespace (CLONE_NEWNS) + pivot_root
pivot_root (not chroot) into a dedicated rootfs. The host filesystem is completely invisible to the container. /proc is remounted to show only the PID namespace view.
5. cgroups v2
Kernel-enforced hard limits on CPU (cpu.max), memory (memory.max), and PIDs (pids.max). The container cannot fork-bomb the host or exhaust host RAM. Hull creates a dedicated cgroup slice per container.
6. Landlock LSM
Filesystem allowlist: rootfs gets read+execute, /tmp gets read+write, everything else is denied. Even uid 0 inside the container cannot bypass Landlock. Gracefully skipped on kernels older than 5.13.
7. seccomp-bpf
Syscall allowlist per workload profile. Violation triggers KILL_PROCESS (not Docker's EPERM). The BPF filter is installed immediately before execve. On kill, hull reads dmesg and reports the blocked syscall number.
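Layer 5's cgroup setup amounts to writing three control files under the container's slice. A sketch of translating manifest limits into cgroup v2 values (write_limits is an illustrative helper; hull's internals may differ):

```python
import os

def write_limits(cgroup_dir, memory_mb=256, cpu=1.0, pids=128):
    """Translate manifest limits into cgroup v2 control files.

    cpu.max takes 'quota period' in microseconds; cpu=1.0 means one
    full core (quota == period). Illustrative sketch, not hull's code."""
    period = 100_000  # 100 ms scheduling period
    quota = int(cpu * period)
    values = {
        "memory.max": str(memory_mb * 1024 * 1024),
        "cpu.max": f"{quota} {period}",
        "pids.max": str(pids),
    }
    for fname, value in values.items():
        with open(os.path.join(cgroup_dir, fname), "w") as f:
            f.write(value)
    return values
```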

Bridge Networking

When "network": "bridge" is set, hull creates a full veth-based network stack for the container. Here is the step-by-step process:

1. Create hull0 bridge
If the hull0 bridge device does not already exist on the host, hull creates it and assigns 10.88.0.1/24 as the gateway address. The bridge is brought UP.
2. Allocate IP via lease files
Hull scans lease files in the state directory to find the next available IP in the 10.88.0.0/24 subnet. A lease file is written atomically for the new container.
3. Create veth pair
A veth pair is created: one end (vethXXXX) stays in the host namespace and is attached to hull0; the other end is moved into the container's network namespace and renamed to eth0.
4. Configure container interface
Inside the container namespace, eth0 is assigned the allocated IP (e.g. 10.88.0.2/24), brought UP, and a default route via 10.88.0.1 is added.
5. nftables masquerade
Hull adds an nftables masquerade rule for the 10.88.0.0/24 subnet on the host's default outbound interface. This enables containers to reach the internet via the host's network connection.
6. iptables FORWARD
An iptables FORWARD rule is added to allow traffic between hull0 and the host's outbound interface. IP forwarding is enabled via /proc/sys/net/ipv4/ip_forward.
Verified output from inside a bridge container
$ nsenter -t <pid> -n ip -br addr
lo               UNKNOWN        127.0.0.1/8
eth0@if36548     UP             10.88.0.2/24

$ nsenter -t <pid> -n ip route
default via 10.88.0.1 dev eth0
10.88.0.0/24 dev eth0 proto kernel scope link src 10.88.0.2

$ nsenter -t <pid> -n ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=0.523 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=0.389 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=0.256 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss
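The lease-file allocation in step 2 can be sketched with Python's ipaddress module (next_free_ip is an illustrative helper, assuming one <ip>.lease file per allocated address; hull's real allocator may differ):

```python
import ipaddress
import os

def next_free_ip(lease_dir, subnet="10.88.0.0/24"):
    """Return the first host address in the subnet without a .lease file.

    Skips the first host (10.88.0.1), which is reserved for the
    bridge gateway. Illustrative sketch of hull's lease scan."""
    net = ipaddress.ip_network(subnet)
    leased = {f[:-len(".lease")] for f in os.listdir(lease_dir)
              if f.endswith(".lease")}
    gateway = net.network_address + 1
    for host in net.hosts():
        if host == gateway:
            continue  # reserved for the bridge
        if str(host) not in leased:
            return str(host)
    raise RuntimeError("subnet exhausted")
```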

Rootless Mode

With hull run --rootless, hull runs the entire container without real root privileges. This is the recommended mode for untrusted workloads.

Fork-Pipe Dance

Linux requires a process outside the new user namespace to write uid_map and gid_map. Hull uses a fork-pipe pattern to solve this:

Parent (unprivileged, on the host) vs. child (in fresh namespaces):

1. Parent: fork() with CLONE_NEWUSER + CLONE_NEWPID + CLONE_NEWNS + CLONE_NEWNET.
   Child: spawned, blocked on a pipe read.
2. Parent: writes /proc/<child>/uid_map and /proc/<child>/gid_map, then writes "ok" to the pipe.
   Child: the pipe read returns "ok".
3. Child: proceeds with mount setup, pivot_root into the rootfs, applies seccomp + Landlock, then calls execve(argv[0]).
4. Parent: exits (daemonless).
   Child: workload running.
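The handshake itself can be sketched with a plain pipe. This sketch omits the actual namespace flags (which require clone3 rather than plain fork) and the uid_map writes, keeping only the synchronization:

```python
import os

def fork_pipe_dance():
    """Sketch of the parent/child handshake only. The real version passes
    CLONE_NEWUSER etc. via clone3; plain fork() shown for the pipe logic."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: block until the parent has written the uid/gid maps.
        os.close(w)
        msg = os.read(r, 2)          # blocks here until parent writes
        os.close(r)
        assert msg == b"ok"
        # ...real hull: pivot_root, seccomp, Landlock, execve(argv[0])...
        os._exit(0)
    # Parent: real hull writes /proc/<pid>/uid_map and gid_map here,
    # then releases the child and exits (daemonless).
    os.close(r)
    os.write(w, b"ok")
    os.close(w)
    _, status = os.waitpid(pid, 0)   # waited here only so the sketch is testable
    return os.waitstatus_to_exitcode(status)
```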

NEWUSER Mapping

The uid_map maps host uid (e.g. 1000) to container uid 0. Inside the container, the process believes it is root and can mount filesystems, create devices, and set capabilities -- but the kernel knows the real uid is unprivileged and enforces this at every boundary.

Defense in Depth

Even in rootless mode, the other 6 layers remain active: PID namespace, network namespace, mount namespace with pivot_root, cgroups v2, Landlock, and seccomp-bpf. The user namespace is an additional layer on top, not a replacement.


State & Logs

Hull stores all state as plain JSON files on disk. No database, no socket, no daemon. The state directory is resolved in order:

1. HULL_STATE_DIR env var (e.g. /custom/path/hull/state)
2. $HOME/.hull/state (e.g. /home/user/.hull/state)
3. /var/lib/hull/state (when running as root)
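The resolution order can be sketched as a simple fallback chain (resolve_state_dir is an illustrative name, not hull's actual function):

```python
import os

def resolve_state_dir(environ=None):
    """Resolve hull's state directory in documented priority order:
    HULL_STATE_DIR, then $HOME/.hull/state, then /var/lib/hull/state."""
    env = os.environ if environ is None else environ
    if env.get("HULL_STATE_DIR"):
        return env["HULL_STATE_DIR"]
    if env.get("HOME"):
        return os.path.join(env["HOME"], ".hull", "state")
    return "/var/lib/hull/state"  # root fallback
```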

Directory Layout

State directory structure
$STATE_DIR/
  containers/
    myapp/
      state.json        # PID, status, start time, manifest snapshot
      stdout.log        # captured stdout
      stderr.log        # captured stderr
  leases/
    10.88.0.2.lease     # bridge IP lease (container name + timestamp)
    10.88.0.3.lease
  hull.pid              # optional: PID file for the hull process itself

The hull logs command reads directly from stdout.log and stderr.log. The hull inspect command reads state.json and queries the live cgroup filesystem for current resource usage.


Mentat Integration

Hull was designed as a workload driver for Mentat, a self-hosted compute platform. Mentat uses hull as one of its container execution backends alongside Firecracker microVMs and direct exec.

How It Works

Mentat's scheduler generates hull manifests dynamically based on service definitions. The integration is straightforward because hull is daemonless:

Manifest generation
Mentat translates service configs into hull JSON manifests, setting rootfs path, resource limits, network mode, and seccomp profile.
Spawn
Mentat calls hull run <manifest.json>. Hull forks the workload and exits. Mentat records the returned PID.
Health monitoring
Mentat periodically calls hull inspect <name> to check container status and resource usage.
Lifecycle
Mentat uses hull stop or hull kill for graceful/forceful shutdown during deployments and scaling events.
Logs
Mentat reads hull logs <name> output for centralized log aggregation.

Because hull is a static binary with no daemon, Mentat does not need to manage a long-running container service. There is no socket to connect to, no API to authenticate against, and no state to synchronize. The filesystem is the API.
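The manifest-generation step above can be sketched as a plain dict translation. The service-config shape here is hypothetical (Mentat's real schema is not documented in this section); only the output side follows hull's manifest spec:

```python
def service_to_manifest(service):
    """Translate a hypothetical Mentat service config into a hull manifest.

    Input keys (name, rootfs, command, env, ...) are assumptions for
    illustration; the output keys match hull's documented manifest fields."""
    return {
        "name": service["name"],
        "rootfs": service["rootfs"],
        "argv": list(service["command"]),
        "env": [f"{k}={v}" for k, v in service.get("env", {}).items()],
        "profile": service.get("seccomp_profile", "default"),
        "network": service.get("network", "bridge"),
        "limits": {
            "memory_mb": service.get("memory_mb", 256),
            "cpu": service.get("cpu", 1.0),
            "pids": service.get("pids", 128),
        },
    }
```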


Known Limitations

Hull is purpose-built and intentionally limited in scope. These are known constraints:

Linux only
Hull uses Linux-specific syscalls (clone3, pivot_root, seccomp, landlock). It does not run on macOS, Windows, or BSDs. Cross-compile from macOS for deployment.
No OCI compatibility
Hull does not implement the OCI runtime spec. It uses its own JSON manifest format. OCI bundles cannot be used directly (convert with Docker export).
No image layers
Hull uses flat rootfs directories or tarballs. There is no layer deduplication, no content-addressable storage, no image graph.
No overlay filesystem
Each container gets its own rootfs copy. There is no copy-on-write. Disk usage scales linearly with number of containers using the same base image.
No port forwarding
Bridge mode provides outbound internet access but hull does not configure inbound port forwarding (DNAT). Use host network mode or configure iptables manually.
No container-to-container DNS
Containers on the bridge network can reach each other by IP but there is no built-in DNS resolution for container names.
Single-host only
Hull manages containers on a single machine. There is no multi-host orchestration, no overlay networking across nodes, no built-in service discovery.
cgroups v2 required
Hull requires cgroups v2 (unified hierarchy). Systems running cgroups v1 or hybrid mode are not supported.
Landlock requires kernel 5.13+
Landlock LSM is gracefully skipped on older kernels, but the filesystem isolation layer will be missing.
No GPU passthrough
There is no support for passing GPU devices into containers. Compute-only workloads.