# Live Dashboard
The robot is a browser tab, not an SSH session.
Camera · LiDAR · 43-DOF Joints · Voice · Teleop · Policy Runner — all live over WiFi.
## Quick Start

**Connection.** Enter `ws://YOUR_G1_IP:10100` in the bottom bar and click **Connect**. If you're on the same WiFi as the G1, it auto-discovers.
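Before opening the dashboard, you can verify that the bridge port is reachable from your machine. A minimal stdlib sketch (host and port taken from the connection string above); note this only checks TCP reachability, not the WebSocket handshake:

```python
import socket

def bridge_reachable(host: str, port: int = 10100, timeout: float = 2.0) -> bool:
    """Return True if the bridge's WebSocket port accepts TCP connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `bridge_reachable("192.168.1.151")` should return `True` when the G1 is up and on the same network.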
## What You Can Do
| Feature | Panel | How |
|---|---|---|
| See what G1 sees | Camera (top-left) | Live JPEG @15fps over WebSocket |
| See the world in 3D | LiDAR (top-right) | 2,048 points, height-colored, top-down |
| Monitor 43 joints | Joints (bottom-left) | Animated bars, [-π, π] range, @30Hz |
| Speak to it | Sidebar | Type text or hold mic for push-to-talk |
| Hear it respond | Sidebar | PersonaPlex TTS streams back |
| Run policies | Sidebar | Click: neon-v1 · groot-wbc · teleop-direct |
| Teleop with gamepad | Sidebar | Plug in controller, auto-detected @30Hz |
| Emergency stop | Sidebar | Red ⏹ button — stops all policies |
## Architecture
```
Browser (this page)                G1 (Jetson Orin @ 192.168.1.151)
┌──────────────────┐               ┌──────────────────────────────┐
│ Camera           │◄──────────────│ NeonBridge (bridge.py)       │
│ LiDAR            │   WebSocket   │  reads from StreamSession:   │
│ 🦴 Joints        │   JSON + b64  │   ├ CameraChannel @30Hz      │
│ 🗣️ Voice in/out  │◄──────────────│   ├ JointsChannel @500Hz     │
│ Policy ctrl      │──────────────►│   ├ LiDARChannel  @10Hz      │
│ 🎮 Gamepad       │     ws://     │   ├ AudioChannel  @16kHz     │
│ 📋 Logs          │    :10100     │   └ TextChannel   (events)   │
└──────────────────┘               └──────────────────────────────┘
```
The bridge reads from the exact same channels that feed training data. What you see in the dashboard is what the model sees during inference.
## Scripting the Bridge
You can control the G1 programmatically from any WebSocket client:
```python
import asyncio
import json

import websockets  # pip install websockets

async def control_robot():
    async with websockets.connect("ws://192.168.1.151:10100") as ws:
        # Subscribe to camera + joints
        await ws.send(json.dumps({
            "type": "subscribe",
            "channels": ["camera", "joints"]
        }))

        # Send a language instruction
        await ws.send(json.dumps({
            "type": "instruct",
            "text": "Pick up the red cup"
        }))

        # Start a policy
        await ws.send(json.dumps({
            "type": "policy",
            "action": "start",
            "name": "neon-v1"
        }))

        # Listen for data
        async for msg in ws:
            data = json.loads(msg)
            if data["type"] == "joints":
                print(f"Joints: {data['positions'][:5]}...")
            elif data["type"] == "camera":
                print(f"Frame: {data['width']}×{data['height']}")

asyncio.run(control_robot())
```
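Camera messages from a loop like the one above can be written straight to disk. A sketch assuming the `data` field carries base64-encoded JPEG bytes, as listed in the message reference below:

```python
import base64
import json

def save_frame(msg: str, path: str = "frame.jpg") -> bool:
    """Decode one bridge message; write the JPEG payload if it is a camera frame."""
    data = json.loads(msg)
    if data.get("type") != "camera":
        return False
    with open(path, "wb") as f:
        f.write(base64.b64decode(data["data"]))
    return True
```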
## Message Reference
### Browser → Bridge

| Type | Fields | Description |
|---|---|---|
| `subscribe` | `channels: string[]` | Subscribe to data streams |
| `instruct` | `text: string` | Language instruction → NeonVLA |
| `speak` | `text: string` | Robot speaks via PersonaPlex TTS |
| `teleop` | `axes: float[]` | Gamepad axes → joint velocity |
| `policy` | `action`, `name?` | `"start"` / `"stop"` / `"list"` |
| `audio` | `data: base64` | Browser mic audio → robot |
| `ping` | — | Latency measurement |
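As an example of these shapes, a `teleop` message can be built from raw gamepad axes. The clamp to [-1, 1] reflects a gamepad's nominal axis range; how the bridge scales axes to joint velocity is not specified here:

```python
import json

def teleop_message(axes):
    """Build a teleop message from gamepad axes (field names from the table above)."""
    clamped = [max(-1.0, min(1.0, float(a))) for a in axes]
    return json.dumps({"type": "teleop", "axes": clamped})
```

Sent at 30 Hz, this mirrors what the dashboard's gamepad panel emits.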
### Bridge → Browser

| Type | Key Fields | Rate |
|---|---|---|
| `camera` | `data` (base64 JPEG), `width`, `height` | 15 fps |
| `joints` | `positions[]`, `velocities[]`, `names[]` | 30 Hz |
| `lidar` | `points[][]` (x, y, z, intensity), `count` | 10 Hz |
| `status` | `clients`, `policy`, `policy_fps` | 1 Hz |
| `log` | `level`, `message` | event |
| `speech` | `text` | event |
| `welcome` | `robot`, `channels`, `active_policy` | on connect |
| `pong` | `ts` | on ping |
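The ping/pong pair lets any client measure round-trip latency. A sketch that works with any object exposing awaitable `send()`/`recv()` methods, such as a `websockets` connection:

```python
import json
import time

async def measure_latency_ms(ws) -> float:
    """Round-trip time for one ping/pong exchange, in milliseconds.

    `ws` is any connected client with awaitable send()/recv(),
    e.g. the result of websockets.connect(...).
    """
    t0 = time.monotonic()
    await ws.send(json.dumps({"type": "ping"}))
    while True:
        # Skip unrelated messages (camera, joints, ...) until the pong arrives.
        msg = json.loads(await ws.recv())
        if msg.get("type") == "pong":
            return (time.monotonic() - t0) * 1000.0
```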
## Running Without the Robot
The dashboard works without a live G1 — it just shows "Disconnected" in the status bar. This is useful for:
- UI development — iterate on the HTML/CSS locally
- Demo mode — show the interface in presentations
- Mock data — pipe synthetic data from a test server
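For the mock-data case, synthetic frames only need to match the bridge's message shapes. A sketch that generates fake `joints` messages (`joints_message` is a hypothetical helper; field names come from the message reference above):

```python
import json
import math

def joints_message(t: float, dof: int = 43) -> str:
    """One synthetic joints frame in the bridge's message shape."""
    return json.dumps({
        "type": "joints",
        # sin() stays inside [-1, 1], well within the dashboard's [-pi, pi] bars
        "positions": [math.sin(t + i) for i in range(dof)],
        "velocities": [0.1 * math.cos(t + i) for i in range(dof)],
        "names": [f"joint_{i}" for i in range(dof)],
    })
```

Serve these at 30 Hz from any WebSocket server (e.g. the `websockets` package) on port 10100, point the dashboard at it, and the joint bars should animate.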