Two steps — takes about 60 seconds.
You need to be a member of the Google Group before Google Play will let you opt in to the alpha. Click below and join with the same Google account you use on your Android device.
Alternatively, skip the Play Store entirely: download the latest .apk from GitHub Releases and install it manually. You'll need to grant the "Install unknown apps" permission (called "Install from unknown sources" on older Android versions) to your browser or file manager in Android settings.
VAGUS MCP transforms your Android device into a sensory endpoint for AI agents. It's not another chatbot wrapper — it's an embodiment layer that gives agents the ability to perceive the physical world through device sensors and act back through device outputs.
Senses flow up. An inference layer adds contextual meaning — is the user walking? sleeping? indoors? paying attention? Actions flow down — haptics, speech, notifications, SMS. Every capability is individually governed by you, with per-tool rate limits, time-of-day windows, and approval prompts.
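The per-capability governance described above (toggles, rate limits, time-of-day windows) can be sketched as a simple policy check. This is an illustrative model only; the class and field names are hypothetical, not VAGUS's actual API.

```python
import time
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ToolPolicy:
    """Illustrative per-capability policy: toggle, rate limit, time window."""
    enabled: bool = True
    max_calls_per_hour: int = 10
    allowed_hours: range = range(8, 22)          # local hours the tool may run
    _calls: list = field(default_factory=list)   # timestamps of recent calls

    def permits(self, now: Optional[float] = None) -> bool:
        """Return True if a call is allowed right now, recording it if so."""
        now = time.time() if now is None else now
        if not self.enabled:
            return False
        if time.localtime(now).tm_hour not in self.allowed_hours:
            return False
        # Drop calls older than an hour, then enforce the hourly cap.
        self._calls = [t for t in self._calls if now - t < 3600]
        if len(self._calls) >= self.max_calls_per_hour:
            return False
        self._calls.append(now)
        return True
```

In this sketch, an agent's `send_sms` or `speak` call would pass through `permits()` before reaching the device; a denial could instead surface an approval prompt to the user.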
Built on the Model Context Protocol (MCP). Any MCP-compatible agent — Claude, GPT, Openclaw, n8n, custom frameworks — connects to the physical world through VAGUS.
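MCP messages are JSON-RPC 2.0, so a tool invocation from any connected agent travels as a `tools/call` request. The tool name `haptic_pulse` and its arguments below are hypothetical examples, not VAGUS's published schema:

```python
import json

# Build an MCP tools/call request (JSON-RPC 2.0 envelope per the MCP spec).
# The tool name "haptic_pulse" and its arguments are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "haptic_pulse",
        "arguments": {"pattern": [100, 50, 100]},  # vibration timings in ms
    },
}
wire = json.dumps(request)  # serialized form sent over the MCP transport
```

Because the envelope is standard JSON-RPC, any MCP-compatible client library can issue requests like this without VAGUS-specific code on the agent side.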
SENSE: Accelerometer, gyroscope, GPS, orientation. Proprioceptive grounding for spatial awareness.
SENSE: Light, pressure, proximity, temperature, humidity. Ambient awareness of the physical world.
SENSE: Battery, connectivity, screen, notifications, clipboard. Awareness of its own substrate.
INFER: Physical activity classification — still, walking, running, cycling, in vehicle.
INFER: Inferred attention availability and sleep likelihood from screen, motion, light, and time.
INFER: Probability score for being indoors based on ambient sensor fusion and device signals.
ACT: Haptic pulse patterns and text-to-speech. The agent reaches back through touch and voice.
ACT: Post notifications, send SMS messages, open URLs. Agent actions in the real world.
GOVERN: Per-capability toggles, rate limits, time windows, approval prompts, full access logs.
Join the alpha and help shape the embodiment layer for AI agents.
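As a flavor of how an ambient-fusion inference like the indoor probability might work, here is a toy heuristic. The features, weights, and thresholds are invented for illustration; VAGUS's actual inference model is not described here.

```python
def indoor_probability(lux: float, gps_accuracy_m: float,
                       on_wifi: bool) -> float:
    """Toy sensor-fusion heuristic for an indoor/outdoor score in [0, 1].

    Illustrative only: the real inference layer's features and
    weights are assumptions, not the app's published model.
    """
    score = 0.0
    # Dim ambient light suggests indoors (direct sunlight is ~10,000+ lux).
    if lux < 500:
        score += 0.4
    # Poor GPS accuracy often means the sky is obstructed by a roof.
    if gps_accuracy_m > 25:
        score += 0.3
    # An active Wi-Fi association is a weak indoor hint.
    if on_wifi:
        score += 0.3
    return score
```

A production version would weight and calibrate these signals statistically rather than summing fixed scores, but the shape of the fusion is the same.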