CLI Flags & Environment Variables

CLI Flags

| Flag | Short | Description |
|------|-------|-------------|
| `--port <PORT>` | `-p` | Listen port |
| `--data-dir <PATH>` | `-d` | Data directory |
| `--config <PATH>` | `-c` | Config file path |
| `--model <PATH>` | `-m` | Path to a GGUF model file |
| `--gpu-layers <N>` | | Layers to offload to GPU |
| `--bootstrap <ADDR>` | | Bootstrap peer address (repeatable) |
| `--shards <RANGE>` | | Shard range for split inference (e.g., `"0-4"`) |
| `--verbose` | `-v` | Increase log verbosity (`-v`, `-vv`, `-vvv`) |

Subcommands

| Command | Description |
|---------|-------------|
| `run` | Start the daemon (default if no subcommand is given) |
| `status` | Query running daemon status |
| `chat` | Interactive CLI chat with streaming |
| `bench` | Benchmark inference (tokens/sec, TTFT) |
| `peers` | List connected peers |
| `pool` | Device pool management (link your machines) |
| `test-split` | Test split inference locally (diagnostic) |
| `version` | Print version |

chat Options

| Flag | Default | Description |
|------|---------|-------------|
| `--model-name <NAME>` | auto-detect | Model to chat with |
| `--system <TEXT>` | none | System prompt |
| `--max-tokens <N>` | 2048 | Max tokens per response |
| `--temperature <F>` | 0.7 | Sampling temperature |

bench Options

| Flag | Default | Description |
|------|---------|-------------|
| `--model-name <NAME>` | auto-detect | Model to benchmark |
| `--prompt <TEXT>` | "Write a short essay..." | Benchmark prompt |
| `--max-tokens <N>` | 128 | Tokens to generate |
| `--iterations <N>` | 1 | Number of benchmark runs |

pool Subcommands

Link your personal devices so credits are combined on one main machine.

| Command | Description |
|---------|-------------|
| `pool create --name "My Devices"` | Create a device group (this machine becomes the main device) |
| `pool invite-code` | Generate an 8-character invite code to share |
| `pool join <CODE>` | Link this device using a code from your main machine |
| `pool status` | Show linked devices, credits, and online status |
| `pool leave` | Unlink this device from the group |

Example flow:

```shell
# Main device:
swarmllm pool create --name "My Devices"
swarmllm pool invite-code   # → A3F7K2M9

# On each other device:
swarmllm pool join A3F7K2M9
```

Note: This links YOUR own devices. It's different from connecting to the SwarmLLM network (which uses swarm:// peer addresses).

Environment Variables

Every config option can be set via an environment variable with the `SWARMLLM_` prefix: the config path is uppercased and dots become underscores.

| Config Path | Environment Variable |
|-------------|----------------------|
| `node.listen_port` | `SWARMLLM_NODE_LISTEN_PORT` |
| `node.data_dir` | `SWARMLLM_NODE_DATA_DIR` |
| `logging.level` | `SWARMLLM_LOGGING_LEVEL` |
| `inference.model_path` | `SWARMLLM_INFERENCE_MODEL_PATH` |
| `inference.gpu_layers` | `SWARMLLM_INFERENCE_GPU_LAYERS` |
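The mapping is mechanical, so any config path can be converted the same way. A minimal sketch of the rule (the `to_env_var` helper is illustrative, not part of the CLI):

```shell
# Illustrative helper: derive the environment variable name from a config path.
# Rule: prefix with SWARMLLM_, replace dots with underscores, uppercase.
to_env_var() {
  printf 'SWARMLLM_%s\n' "$(printf '%s' "$1" | tr '.' '_' | tr '[:lower:]' '[:upper:]')"
}

to_env_var node.listen_port     # → SWARMLLM_NODE_LISTEN_PORT
to_env_var inference.gpu_layers # → SWARMLLM_INFERENCE_GPU_LAYERS
```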

Example:

```shell
SWARMLLM_NODE_LISTEN_PORT=9000 SWARMLLM_LOGGING_LEVEL=debug ./swarmllm run
```

Provider API Keys via Environment

Cloud provider API keys use standard environment variable names:

| Provider | Environment Variable |
|----------|----------------------|
| OpenAI | `OPENAI_API_KEY` |
| Anthropic | `ANTHROPIC_API_KEY` |
| DeepSeek | `DEEPSEEK_API_KEY` |
| Mistral | `MISTRAL_API_KEY` |
| Groq | `GROQ_API_KEY` |
| NVIDIA NIM | `NVIDIA_NIM_API_KEY` |
| Cerebras | `CEREBRAS_API_KEY` |
| SambaNova | `SAMBANOVA_API_KEY` |
| Fireworks | `FIREWORKS_API_KEY` |
| Together | `TOGETHER_API_KEY` |
| DeepInfra | `DEEPINFRA_API_KEY` |
| Moonshot/Kimi | `MOONSHOT_API_KEY` |

These can also be placed in a `.env` file in your data directory:

```shell
# ~/.local/share/swarmllm/.env
OPENAI_API_KEY=sk-proj-...
DEEPSEEK_API_KEY=sk-...
NVIDIA_NIM_API_KEY=nvapi-...
```

The `.env` file is loaded at startup. It does not override existing environment variables or keys already configured via the dashboard/database. The dashboard settings UI shows "From .env" for keys loaded this way.
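The precedence rule above can be sketched as a small loader. This is an assumption of the described behavior, not SwarmLLM's actual implementation, and it assumes simple `KEY=VALUE` lines:

```shell
# Sketch: export each KEY=VALUE line from a .env file, but only when the
# variable is not already set -- the existing environment always wins.
load_env_file() {
  local key value
  while IFS='=' read -r key value; do
    case "$key" in ''|'#'*) continue ;; esac   # skip blanks and comments
    # only export if the variable is currently empty/unset
    if [ -z "$(eval "printf '%s' \"\${$key:-}\"")" ]; then
      export "$key=$value"
    fi
  done < "$1"
}
```

Keys already exported in the shell (or set by the daemon's own config) therefore keep their values even when the same key appears in the file.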