NodeOne
A locally hosted AI GUI powered by LLaMA-3 — built for privacy, agent routing, coding assistance, and memory-aware workflows.
⚡ Why NodeOne?
Cloud AIs are powerful — but they're not private, not always customizable, and not always accessible. NodeOne was built to run entirely on the ForgeCore machine using LLaMA-3, with a flexible GUI that routes between specialized agents, manages persistent tasks, and keeps memory across sessions.
The goal: an AI system that works with you over time — one that remembers context, routes tasks to the right agent, and operates without a subscription or internet dependency.
🧩 Key Features
GUI-First Design
Full graphical interface for local AI interaction — no terminal required. Chat-first with agent routing under the hood.
Agent Routing
Delegate tasks between specialized agents: coding assistant, strategy agent, and utility agent — each with its own context window.
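How routing like this can work is easiest to see in code. The sketch below is illustrative only — the agent names match the feature list above, but the keyword heuristic and all function names are assumptions, not NodeOne's actual implementation:

```python
# Hypothetical sketch of keyword-based agent routing.
# Agent names come from the feature list; everything else is illustrative.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    keywords: tuple
    context: list = field(default_factory=list)  # each agent keeps its own context window

AGENTS = [
    Agent("coding", ("code", "bug", "function", "debug")),
    Agent("strategy", ("plan", "roadmap", "decide")),
    Agent("utility", ()),  # fallback agent
]

def route(prompt: str) -> Agent:
    """Pick the agent whose keywords best match the prompt; fall back to utility."""
    text = prompt.lower()
    best = max(AGENTS, key=lambda a: sum(k in text for k in a.keywords))
    chosen = best if any(k in text for k in best.keywords) else AGENTS[-1]
    chosen.context.append(prompt)  # the prompt lands only in that agent's context
    return chosen
```

A real router could swap the keyword match for a classifier call to the local model without changing this interface.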
Two-Stage Memory
Short-term session memory plus long-term persistent memory stored in SQLite. Context survives across restarts.
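A minimal sketch of that two-stage split, assuming a simple one-table SQLite schema (the schema and class names are illustrative, not NodeOne's actual code):

```python
# Illustrative two-stage memory: short-term in RAM, long-term in SQLite.
import sqlite3

class Memory:
    def __init__(self, db_path="memory.db"):
        self.session = []  # short-term: lost when the process exits
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (agent TEXT, role TEXT, content TEXT)"
        )

    def remember(self, agent, role, content, persist=False):
        if persist:  # long-term: survives restarts
            self.db.execute(
                "INSERT INTO memory VALUES (?, ?, ?)", (agent, role, content)
            )
            self.db.commit()
        else:
            self.session.append((agent, role, content))

    def recall(self, agent):
        """Persistent memories first, then anything from the current session."""
        rows = self.db.execute(
            "SELECT role, content FROM memory WHERE agent = ?", (agent,)
        ).fetchall()
        return rows + [(r, c) for a, r, c in self.session if a == agent]
```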
Fully Local & Private
All inference runs on-device with LLaMA-3 via Ollama. No data leaves ForgeCore. No API keys, no subscriptions.
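Talking to the local model needs nothing beyond Ollama's HTTP API on its default port. A stdlib-only sketch (assumes Ollama is running locally with the `llama3` model pulled):

```python
# Minimal call to a local LLaMA-3 model via Ollama's /api/generate endpoint.
# Assumes Ollama's default address, http://localhost:11434.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3") -> dict:
    # stream=False returns one JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Because the endpoint is localhost-only, no prompt or response ever leaves the machine.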
Modular Architecture
Agent logic in Python, GUI in Next.js. Each component is independently extensible — add new agents or UI panels without touching the core.
Future-Ready
Designed with multi-modal expansion in mind — vision inputs, tool use, and richer agentic workflows are on the roadmap.
📸 Interface Screenshots


Early builds — active development continues
🛠️ Technical Stack
Backend
- LLaMA-3 via Ollama (local inference)
- Python agent logic & routing layer
- SQLite for persistent memory storage
- REST API between GUI and agents
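The REST boundary between the Next.js GUI and the Python agents can be sketched with the standard library alone — the `/api/chat` route and the request shape below are assumptions for illustration, not NodeOne's actual endpoints:

```python
# Hypothetical sketch of the GUI <-> agent REST boundary (stdlib only).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_chat(body: dict) -> dict:
    # In NodeOne this would route the prompt to an agent; here we just echo.
    return {
        "agent": body.get("agent", "utility"),
        "reply": f"echo: {body.get('prompt', '')}",
    }

class AgentAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/chat":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        payload = json.dumps(handle_chat(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve: HTTPServer(("127.0.0.1", 8000), AgentAPI).serve_forever()
```

The GUI would POST `{"agent": "...", "prompt": "..."}` and render the JSON reply.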
Frontend
- React / Next.js GUI
- TailwindCSS for styling
- Chat-first interface design
- Agent selector & task panel