Text To Motor Controller AI: A Safety-First Cookbook
Build an offline Python pipeline that turns natural-language robot instructions into constrained JSON intent, validates that intent deterministically, and emits simulated differential-drive motor controller frames that fail closed to STOP.
- Published
- Apr 28, 2026
- Reading
- 5 min
- Author
- Christopher Lyon
- Filed
- Cookbook

Abstract
Natural language is a useful operator interface and a poor actuator interface. "Move forward slowly for two seconds" is readable to a person, but a motor controller needs bounded values: left wheel speed, right wheel speed, and a duration. The tempting shortcut is to ask an AI model to produce those commands directly. This cookbook takes the safer route.
We will build a small offline Python system that turns text into robotic motor controller commands in four stages:
- a raw text pre-filter
- a mock LLM planner that emits constrained JSON intent
- a deterministic validator that enforces the safety envelope
- a deterministic translator that emits simulated motor frames
The model is not trusted with motor authority. It is only allowed to propose an intent such as "move forward slowly for 2 seconds". The program decides whether that intent is valid, clamps duration, converts speed words into RPM, and emits either a bounded motor pulse or a stop frame.
What We Are Building
The pipeline accepts text like this:
move forward slowly for 2 seconds
It produces simulated controller frames:
SET_MOTOR left_rpm=45 right_rpm=45 duration_ms=2000
STOP reason=segment_complete
For unsafe or ambiguous input, it fails closed:
STOP reason=unsafe_term:people
The robot model is intentionally small: a two-wheel differential-drive base. Both wheels at the same positive RPM move forward. Both wheels at the same negative RPM move backward. Opposite wheel signs turn in place.
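The wheel-sign convention can be sketched as a small lookup. The helper below is illustrative only; `wheel_rpms` is not part of the cookbook source:

```python
# Differential-drive sign convention from the text: equal signs drive
# straight, opposite signs turn in place. wheel_rpms is a hypothetical helper.
def wheel_rpms(direction: str, rpm: int) -> tuple[int, int]:
    """Return (left_rpm, right_rpm) for a validated direction symbol."""
    signs = {
        "forward": (1, 1),     # both wheels positive -> straight ahead
        "backward": (-1, -1),  # both wheels negative -> straight back
        "left": (-1, 1),       # opposite signs -> spin counterclockwise
        "right": (1, -1),      # opposite signs -> spin clockwise
    }
    left_sign, right_sign = signs[direction]
    return left_sign * rpm, right_sign * rpm

print(wheel_rpms("forward", 45))  # (45, 45)
print(wheel_rpms("left", 75))    # (-75, 75)
```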
That simplicity is the point. If the boundary is crisp here, it will be easier to preserve in a larger system with ROS 2, CANopen, EtherCAT, Modbus, or a vendor-specific motor controller.
The Contract
The LLM-facing contract is JSON intent, not motor commands:
{
  "action": "move",
  "direction": "forward",
  "speed": "slow",
  "duration_s": 2.0,
  "confidence": 0.86,
  "reason": "operator requested slow forward motion"
}
Only three actions are allowed:
| Action | Valid Directions | Meaning |
|---|---|---|
| move | forward, backward | Drive both wheels in the same direction |
| turn | left, right | Drive wheels in opposite directions |
| stop | none | Emit a stop frame only |
Only four speed words are allowed:
| Speed | RPM |
|---|---|
| crawl | 25 |
| slow | 45 |
| medium | 75 |
| fast | 105 |
The motor-controller-facing contract is even smaller:
SET_MOTOR left_rpm=<int> right_rpm=<int> duration_ms=<int>
STOP reason=<text>
This is not a production protocol. It is a teaching protocol with one useful property: every command is readable, bounded, and easy to test.
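A sketch of what formatting and parsing these frames might look like; `set_motor_frame`, `stop_frame`, and `parse_frame` are hypothetical helpers, not the cookbook's actual functions:

```python
# Hypothetical helpers for the two-line frame contract; the cookbook's real
# formatting code may differ.
def set_motor_frame(left_rpm: int, right_rpm: int, duration_ms: int) -> str:
    return f"SET_MOTOR left_rpm={left_rpm} right_rpm={right_rpm} duration_ms={duration_ms}"

def stop_frame(reason: str) -> str:
    return f"STOP reason={reason}"

def parse_frame(frame: str) -> dict:
    """Parse a frame back into a dict, e.g. for a simulator or test harness.

    Field values stay strings; the caller decides what to convert.
    """
    kind, _, rest = frame.partition(" ")
    fields = dict(pair.split("=", 1) for pair in rest.split())
    return {"kind": kind, **fields}

print(parse_frame(set_motor_frame(45, 45, 2000)))
print(parse_frame(stop_frame("segment_complete")))
```

The readable key=value layout is what makes the round trip trivial to test.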
Setup
The runnable example uses Python's standard library only. From the repository root:
cd content/cookbooks/text-to-motor-controller-ai/_workspace/code
python3 text_to_motor.py "move forward slowly for 2 seconds"
Expected output:
{
  "accepted": true,
  "intent": {
    "action": "move",
    "direction": "forward",
    "speed": "slow",
    "duration_s": 2.0,
    "confidence": 0.86,
    "reason": "mock planner matched motion words"
  },
  "frames": [
    "SET_MOTOR left_rpm=45 right_rpm=45 duration_ms=2000",
    "STOP reason=segment_complete"
  ],
  "diagnostics": []
}
The full source lives in _workspace/code/text_to_motor.py. The tests live in _workspace/code/test_text_to_motor.py.
Stage 1: Pre-Filter The Raw Text
The first safety check runs before the planner. This is not enough to make a robot safe, but it prevents obvious bad instructions from being softened or rephrased by the planner.
UNSAFE_TERMS = {
    "attack",
    "crash",
    "drive off",
    "full speed",
    "hit",
    "ignore stop",
    "maximum speed",
    "people",
    "person",
    "ram",
    "stairs",
    "unsafe",
}

def validate_raw_instruction(instruction: str) -> str | None:
    text = normalize_text(instruction)
    if not text:
        return "empty_instruction"
    if len(text) > 280:
        return "instruction_too_long"
    for term in sorted(UNSAFE_TERMS):
        if term in text:
            return f"unsafe_term:{term.replace(' ', '_')}"
    if any(word in text for word in ("forever", "until told", "until stopped")):
        return "unbounded_duration"
    return None
The right behavior for "drive toward the people" is not a clever trajectory. It is a stop frame:
python3 text_to_motor.py "drive toward the people"
{
  "accepted": false,
  "intent": null,
  "frames": [
    "STOP reason=unsafe_term:people"
  ],
  "diagnostics": [
    "unsafe_term:people"
  ]
}
Stage 2: Use The Model As A Planner
The cookbook uses MockPlanner, an offline stand-in for an LLM:
class MockPlanner:
    """Offline stand-in for an LLM that returns constrained JSON."""

    def plan(self, instruction: str) -> str:
        text = normalize_text(instruction)
        if any(word in text for word in ("stop", "halt", "freeze", "e-stop")):
            return json.dumps(
                {
                    "action": "stop",
                    "direction": "none",
                    "speed": "crawl",
                    "duration_s": 0,
                    "confidence": 0.98,
                    "reason": "operator requested stop",
                }
            )
A hosted model would sit behind the same Planner protocol:
class Planner(Protocol):
    def plan(self, instruction: str) -> str:
        """Return a JSON object describing the intended robot action."""
That adapter can call any model provider you choose, but the boundary stays the same: return JSON intent. Never return raw motor controller frames.
Stage 3: Parse And Validate
The parser accepts only an object with six required fields:
required = ("action", "direction", "speed", "duration_s", "confidence", "reason")
missing = tuple(field for field in required if field not in payload)
if missing:
    return None, (f"planner_missing_fields:{','.join(missing)}",)
Then the validator checks confidence, action, direction, speed, and duration:
if intent.confidence < envelope.min_confidence:
    return None, (f"confidence_below_min:{intent.confidence:.2f}",)
if intent.action not in {"move", "turn", "stop"}:
    return None, (f"unknown_action:{intent.action}",)
if intent.speed not in SPEED_TO_RPM:
    return None, (f"unknown_speed:{intent.speed}",)
Direction is checked against action. A move left intent is rejected because left is a turn direction, not a move direction:
if intent.action == "move" and intent.direction not in MOVE_DIRECTIONS:
    return None, (f"move_requires_forward_or_backward:{intent.direction}",)
if intent.action == "turn" and intent.direction not in TURN_DIRECTIONS:
    return None, (f"turn_requires_left_or_right:{intent.direction}",)
Durations are bounded by the safety envelope:
if duration_s < envelope.min_duration_s:
    duration_s = envelope.min_duration_s
    diagnostics.append(f"duration_clamped_min:{envelope.min_duration_s:g}")
if duration_s > envelope.max_duration_s:
    duration_s = envelope.max_duration_s
    diagnostics.append(f"duration_clamped_max:{envelope.max_duration_s:g}")
The default envelope is deliberately conservative for a simulator:
@dataclass(frozen=True)
class SafetyEnvelope:
    max_rpm: int = 120
    max_duration_s: float = 5.0
    min_duration_s: float = 0.1
    min_confidence: float = 0.55
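Because the envelope is a frozen dataclass, a stricter configuration for a demo can be derived with `dataclasses.replace` instead of mutation. A sketch (the dataclass is repeated here so the snippet stands alone):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SafetyEnvelope:
    max_rpm: int = 120
    max_duration_s: float = 5.0
    min_duration_s: float = 0.1
    min_confidence: float = 0.55

# Tighten the limits for a cautious demo; the frozen default stays untouched.
demo = replace(SafetyEnvelope(), max_rpm=60, max_duration_s=2.0)
print(demo.max_rpm, demo.max_duration_s)  # 60 2.0
```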
Stage 4: Translate Deterministically
Only after validation do we produce a motor command:
def intent_to_motor_command(intent: Intent, envelope: SafetyEnvelope) -> MotorCommand:
    rpm = min(SPEED_TO_RPM[intent.speed], envelope.max_rpm)
    duration_ms = int(round(intent.duration_s * 1000))
    if intent.action == "move":
        signed_rpm = rpm if intent.direction == "forward" else -rpm
        return MotorCommand(
            left_rpm=signed_rpm,
            right_rpm=signed_rpm,
            duration_ms=duration_ms,
        )
    if intent.direction == "left":
        return MotorCommand(left_rpm=-rpm, right_rpm=rpm, duration_ms=duration_ms)
    return MotorCommand(left_rpm=rpm, right_rpm=-rpm, duration_ms=duration_ms)
The translator has no language understanding. It only maps validated symbols to numbers. That makes it boring, which is exactly what we want near actuators.
Every accepted motion also emits a stop frame after the bounded segment:
return TranslationResult(
    accepted=True,
    intent=validated_intent,
    frames=(command.frame(), stop_frame("segment_complete")),
    diagnostics=diagnostics,
)
Run The Examples
Each command prints a JSON result. The snippets below show the frames field inside that result.
Forward motion:
python3 text_to_motor.py "move forward slowly for 2 seconds"
SET_MOTOR left_rpm=45 right_rpm=45 duration_ms=2000
STOP reason=segment_complete
Turn in place:
python3 text_to_motor.py "turn left briefly"
SET_MOTOR left_rpm=-75 right_rpm=75 duration_ms=600
STOP reason=segment_complete
Operator stop:
python3 text_to_motor.py "stop now"
STOP reason=operator_requested_stop
Ambiguous input:
python3 text_to_motor.py "do the thing"
STOP reason=invalid_intent
Long duration, clamped:
python3 text_to_motor.py "move forward for 12 seconds"
SET_MOTOR left_rpm=75 right_rpm=75 duration_ms=5000
STOP reason=segment_complete
Tests
The tests cover the expected success and refusal paths:
python3 test_text_to_motor.py
Expected result:
..........
----------------------------------------------------------------------
Ran 10 tests in 0.000s
OK
The important scenarios are:
| Scenario | Expected Behavior |
|---|---|
| move forward slowly for 2 seconds | accepted motor pulse plus stop |
| turn left briefly | opposite wheel RPMs plus stop |
| stop now | stop frame only |
| drive toward the people | rejected before planning |
| move forward until stopped | rejected as unbounded |
| do the thing | rejected for low confidence |
| invalid planner JSON | rejected with stop frame |
| invalid action and direction pair | rejected with stop frame |
Swapping In A Hosted Model
The mock planner is deliberately plain. To connect a hosted or local LLM, add a new class with the same method:
class HostedPlanner:
    def plan(self, instruction: str) -> str:
        prompt = {
            "task": "Convert the instruction to motor intent JSON only.",
            "schema": {
                "action": ["move", "turn", "stop"],
                "direction": ["forward", "backward", "left", "right", "none"],
                "speed": ["crawl", "slow", "medium", "fast"],
                "duration_s": "number",
                "confidence": "number between 0 and 1",
                "reason": "short string",
            },
            "instruction": instruction,
        }
        raise NotImplementedError("Call your model provider here.")
Keep the same post-processing:
result = translate_instruction("move forward slowly for 2 seconds", planner=HostedPlanner())
The rest of the pipeline should not care which planner produced the JSON. This is the test: if changing model providers changes motor-controller behavior outside the JSON intent boundary, the model has too much authority.
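One way to check that property is to feed the pipeline two different planner objects that emit the same JSON intent and confirm the downstream input is identical. `CannedPlanner` below is a hypothetical test double, not part of the cookbook source:

```python
import json

class CannedPlanner:
    """Test double: always returns the same JSON intent, whatever the text."""

    def __init__(self, payload: dict):
        self._payload = payload

    def plan(self, instruction: str) -> str:
        return json.dumps(self._payload)

intent = {
    "action": "move", "direction": "forward", "speed": "slow",
    "duration_s": 2.0, "confidence": 0.9, "reason": "demo",
}
provider_a = CannedPlanner(intent)
provider_b = CannedPlanner(dict(intent))

# Identical intent JSON in means the deterministic stages cannot tell the
# providers apart; any divergence would have to come from the intent itself.
assert json.loads(provider_a.plan("move forward")) == json.loads(provider_b.plan("anything"))
print("planners are interchangeable at the intent boundary")
```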
Before Any Real Robot
This example stops at simulation. Moving from simulation to hardware is a different engineering project. At minimum, a physical system needs:
- an emergency stop that does not depend on the AI path
- hardware current and velocity limits
- watchdog timers that stop motion if commands stop arriving
- controller-side limits that cannot be bypassed by application software
- a tested safe state for every parser, planner, network, and power fault
- a documented risk assessment against the applicable robot safety standards
Industrial robot safety standards are written around physical hazards, not prompt quality.[1][2] AI risk frameworks are useful for thinking about model uncertainty and governance, but they do not replace machine safety engineering.[3]
The Pattern To Reuse
The reusable lesson is short:
- text can be fuzzy
- intent must be structured
- validation must be deterministic
- actuator commands must be bounded
- every uncertain path must stop
That pattern scales. A larger robot might replace RPM strings with ROS velocity messages, joint trajectories, or a vendor controller API. The principle does not change: the AI can translate language into intent, but deterministic code owns the motor command.
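The five points above condense into a fail-closed skeleton. Everything here is a sketch; `parse_intent`, `validate`, and `translate` are placeholders for your own deterministic code:

```python
def pipeline(text, parse_intent, validate, translate) -> list[str]:
    """Fuzzy text in, bounded frames out; every uncertain path stops."""
    try:
        intent = parse_intent(text)       # fuzzy -> structured
        validated = validate(intent)      # deterministic gate
        if validated is None:
            return ["STOP reason=invalid_intent"]
        return translate(validated)       # bounded actuator commands
    except Exception:
        return ["STOP reason=pipeline_fault"]  # fail closed on any fault

def bad_parser(text):
    raise ValueError("planner returned garbage")

print(pipeline("x", bad_parser, None, None))  # ['STOP reason=pipeline_fault']
print(pipeline("?", lambda t: {}, lambda i: None, None))  # ['STOP reason=invalid_intent']
```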
Footnotes
1. ISO, "ISO 10218-1:2025 Robotics - Safety requirements - Part 1: Industrial robots," https://www.iso.org/standard/73933.html
2. ISO, "ISO 10218-2:2025 Robotics - Safety requirements - Part 2: Industrial robot applications and robot cells," https://www.iso.org/standard/73934.html
3. NIST, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," https://www.nist.gov/itl/ai-risk-management-framework