Included Tools

GoodLife ships a curated AI research stack pre-configured and offline-capable on first boot. Everything below is included by default; you don’t need to pacman -S your way through a setup. All tooling is local-first; no telemetry, no cloud defaults.

| Tool | Notes |
| --- | --- |
| Ollama | Pre-configured. `qwen2.5:7b` and `llama3.2:3b` pre-pulled (~6.5 GB); usable offline on first boot |
| llama-cpp | Fallback inference engine for restricted-RAM nodes |
| vLLM (opt-in) | High-throughput batching server for multi-user setups |
| synos-cortex-q | v53 Quantumweave MPS (matrix-product-state) inference path |
```sh
ollama run qwen2.5:7b                # interactive chat, local
ollama list                          # see what's pulled
synos-alfred chat --backend ollama   # ALFRED on top of Ollama
```
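Besides the CLI, Ollama exposes a local REST API on port 11434, so you can script against the pre-pulled models with nothing but the standard library. A minimal sketch (the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are Ollama's documented request shape; the helper names are ours):

```python
import json
import urllib.request

def build_payload(prompt, model="qwen2.5:7b"):
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="qwen2.5:7b", host="http://localhost:11434"):
    """One-shot completion against the local Ollama server; no cloud involved."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swap `model` for `llama3.2:3b` on tighter-RAM nodes; both ship pre-pulled.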
| Tool | Use |
| --- | --- |
| JupyterLab | Default notebook IDE, pre-configured kernels |
| Quarto | Reproducible analysis docs; publish to PDF/HTML |
| VSCodium | Editor with notebook support |
| Polars | DataFrames; faster than pandas for large data |
| DuckDB | In-process OLAP database; SQL on Parquet/Arrow |
| pandas | Available, but not the default for new code |
| Framework | Notes |
| --- | --- |
| PyTorch | Default. CUDA, ROCm, and CPU builds available |
| TensorFlow | Available |
| JAX + Flax | Available |
| scikit-learn | Classical ML |

GPU support is automatic when a discrete GPU is present; CPU-only is the fallback path. The Salvage Yard mesh deployment scenario (4× i5-class laptops, no GPU) is a first-class supported config.
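In PyTorch code this "automatic with fallback" pattern is one device check at startup; ROCm builds also report through `torch.cuda`, so the same branch covers AMD GPUs. A minimal sketch:

```python
import torch

def pick_device() -> torch.device:
    """CUDA when a discrete GPU is present (ROCm builds also report via
    torch.cuda); otherwise the CPU-only path used on Salvage Yard nodes."""
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()
# model.to(device); batch.to(device)  # the rest of the code is device-agnostic
```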

| Tool | Notes |
| --- | --- |
| Transformers (HF) | Hugging Face model-hub client + local cache |
| sentence-transformers | Embedding models with offline cache |
| synos-nlp-pipeline | Syn_OS-native pipeline with hippocampus integration |
| synos-datalake | Curated local corpus for RAG |
| LanceDB | Embedded vector DB |
| Qdrant (containerised) | Standalone vector DB option |
| Tool | Use |
| --- | --- |
| whisper.cpp | Local speech-to-text; fast on CPU |
| Piper TTS | Local text-to-speech |
| PortAudio + SoX | Audio capture / processing |

ALFRED in GoodLife defaults to text I/O; voice is opt-in.

~/.config/alfred/research.toml

```toml
[mode]
default = "advisory"
allow_promotion = false

[backends]
default = "ollama"

[privacy]
job_hunt_mode = true
context_isolation = "tenant"
```

Cloud backends (Claude, OpenAI, Gemini, DeepSeek) are available — but off until you configure keys. Privacy-first job-hunt mode keeps each conversation in its own tenant scope so résumé context doesn’t bleed into general chat.

| Tool | Use |
| --- | --- |
| synos-hive-bootstrap | Bring up a mesh from an empty node (Stoneglass v55) |
| arcanum-controller (k8s) | Workload scheduler across the mesh |
| Tailscale + WireGuard | Mesh transport |
| Ollama hive mode | Sharded inference across mesh nodes |
| Plugin | Function |
| --- | --- |
| Cutscene | Typewriter dialogue / onboarding |
| Mindmap | Visualise concept graphs from notes / transcripts |
| RetroFilter | Aesthetic CRT/scan-line overlay (toggle) |
| Cyberspace | 3D file/process visualisation |
| SkillTree | Capability map (research-mode variant) |
| Rehoboam | System monitor overlay (CPU/RAM/GPU/inference stats) |
| Twin (v51) | Digital-twin substrate visualisation |

The FactionHQ plugin is disabled on GoodLife (no GRIMOIRE faction system on this profile).

Not included on GoodLife:

  • offensive security tooling (Metasploit, aircrack-ng, the C2 frameworks)
  • Distrobox/BlackArch on-demand expansion (Master only)
  • the GRIMOIRE lab catalogue
  • the Master-tier RaaS engine + finding ranker + digest renderer

If you want any of these, you’re in the wrong profile — see Three ISOs.