Daemon & Subsystems

The daemon spawns 12 Tokio tasks wired together with mpsc channels (plus the broadcast and watch channels described below):

                           ┌──────────────┐
                           │  daemon/     │
                           │  (bootstrap) │
                           └──────┬───────┘
                                  │ spawns tokio tasks
  ┌───────┬───────┬───────┬───────┼───────┬──────────┬──────────┬──────────┬──────────┬──────────┬─────┐
  ▼       ▼       ▼       ▼       ▼       ▼          ▼          ▼          ▼          ▼          ▼     ▼
Network  Infer   Credit  Health   API    Rebal-   Acquisi-   Message    Pool     AutoShrd   HfWat- Update
Manager  Router  Ledger  Monitor  Server ancer    tion Mgr   Dispatch   Manager  Manager   cher   Checker

Subsystem Responsibilities

| Subsystem | File | Role |
|-----------|------|------|
| NetworkManager | src/network/manager/ | libp2p swarm: Kademlia DHT + GossipSub + request/response |
| InferenceRouter | src/inference/router/ | Request queuing, pipeline assembly, execution coordination |
| MessageDispatcher | src/daemon/dispatch/mod.rs | Routes inbound network messages to the appropriate subsystems |
| CreditLedger | src/credit/ledger.rs | Credit balance tracking, transaction signing, gossip |
| HealthMonitor | src/health/monitor.rs | Periodic health pings, rebalancing triggers |
| ShardRebalancer | src/health/rebalancer.rs | Shard redistribution on node join/leave |
| AcquisitionManager | src/model/acquisition.rs | BLAKE3-verified model downloads from peers and HuggingFace |
| ApiServer | src/api/server.rs | Axum HTTP: OpenAI + Anthropic APIs + MCP server + admin dashboard + WebSocket |
| PoolManager | src/pool/manager/ | Device pool management, credit forwarding |
| AutoShardManager | src/model/auto_manage/ | VRAM-aware shard acquisition + smart pruning (manager, scoring, download, prune, scan, vram, wishlist). R111: refreshes the user-visible wishlist at the end of every tick. |
| HfWatcher (R112) | src/model/huggingface/watcher.rs | Background task polling HuggingFace's trending GGUF feed once per hour. Caches the snapshot on state.models.hf_trending_cache (consumed by the wishlist scorer) and auto-promotes models above 100k downloads + 24h age from Discovered to DemandVerified. NonCritical — HF outages don't escalate to a daemon crash. Opt-out via auto_manage.hf_watcher_enabled = false. |
| UpdateChecker | src/update.rs | Periodic GitHub release polling, SHA256-verified binary download, atomic apply. Skipped entirely when auto_update = "disabled" (default until binary signing C1 lands), so the supervisor doesn't log a misleading "exited unexpectedly" warning. |

Channel Layout

| From | To | Message Types |
|------|----|---------------|
| NetworkManager | MessageDispatcher | All inbound SwarmMessage variants |
| MessageDispatcher | InferenceRouter | InferenceRequest, LayerForward, LayerResult |
| InferenceRouter | NetworkManager | Outgoing P2P messages |
| HealthMonitor | ShardRebalancer | RebalanceEvent |
| ApiServer | InferenceRouter | RouterCommand (from HTTP) |
| ApiServer | AcquisitionManager | AcquisitionCommand |
| AutoShardManager | AcquisitionManager | AcquisitionCommand |
| CreditLedger | NetworkManager | CreditGossip, CreditTransaction |
| MessageDispatcher | (spawned task) | VisionEncodeRequest → handler → VisionEncodeResponse |

Broadcast Channels

| Channel | Type | Subscribers | Purpose |
|---------|------|-------------|---------|
| activity_tx | broadcast::Sender<ActivityEvent> (256) | WebSocket | Unified event bus — all subsystem events (shard ops, downloads, inference, pool, config changes). Events carry toast_level for frontend toast control. History replayed to new WS clients. |
| dashboard_tx | broadcast::Sender<DashboardSignal> (32) | WebSocket | Dashboard refresh signals — PeersChanged (peer connect/disconnect), ModelsChanged (shard download/load/prune), UpdateAvailable(UpdateInfo) (new version). |

Note: Former separate channels (prune_events_tx, models_changed_tx, lan_discovery_tx, system_notify_tx, peer_list_changed_tx, update_tx) were consolidated into these two in the event system unification.

Startup Sequence

  1. Parse CLI args (clap)
  2. Initialize tracing subscriber
  3. Load/create config (TOML + env + defaults + CLI overrides)
  4. Ensure data directory exists
  5. Load/generate Ed25519 identity
  6. Open redb database
  7. Build Daemon { config, identity, db }
  8. Initialize ModelExecutor (load GGUF if --model provided)
  9. Build Arc<SharedState> (includes ModelRegistry from DB)
  10. Scan local shards, register in registries
  11. Create mpsc channels
  12. Spawn all 12 tasks
  13. Open browser if configured
  14. tokio::select! on Ctrl+C or task exit
  15. Graceful shutdown: save peer cache, flush database

Graceful Shutdown

Shutdown is triggered by Ctrl+C (SIGINT/SIGTERM) or any task exiting:

  • A watch channel signals all subsystems
  • Peer cache is saved to redb
  • Database is flushed
  • Open connections are drained