Road map

Nova is built in public. This page is a simple, honest list of what’s coming next: features that make Nova feel more alive and more useful, and that make it easier for builders to run.

Priorities can shift as we test in the real world — but this is the direction.

Now (in progress)

New animations + micro-gestures

More emotion packs, idle motion, and “listening” behaviours so Nova doesn’t feel static between lines. Smoother transitions between gestures.

Better memory

Lightweight conversation memory so Nova can remember basics (your preferences, current task, what just happened) without becoming slow or messy.

Bigger / smarter AI models

Optional upgrades to larger local models for deeper reasoning and better conversation — while keeping a fast default for most users.

Soon

VLM (Vision-Language Model)

“Nova can see.” Describe what’s in view, direct attention toward objects, react to them, and improve face-tracking behaviours. Designed to work locally where possible.

Reading body language

Better reading of human cues: posture, attention, movement, and distance, so Nova can respond more naturally (e.g. lean in when you’re engaged, settle down when you’re busy).

Cleaner installs + setup

Reduce friction: clearer bundles, fewer “gotchas”, and smoother first-run setup for Windows.

Later (bigger milestones)

Cheaper manufacturing

Lower part count, easier prints, faster assembly, and more consistent hardware options — to make Nova more affordable and easier to build.

More sensors + awareness

Better “presence” loops: cleaner tracking, more stable motion, and improved response timing.

Desktop Nova → bigger Nova

The long-term path: take what works on Desktop Nova and scale it into larger builds.

Want to help shape this? Share feedback through YouTube / Instagram / TikTok, or build one and report what breaks.