# Voice Recognition

`voice_recognition.py`

## Overview
`voice_recognition.py` is a text-only stub for automatic speech recognition (ASR): it converts integer "utterance IDs" into canned sentences so that downstream components (order-verifier, task-manager, etc.) can be developed and tested without a real speech recogniser.
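The core idea can be sketched in a few lines. The mapping below is purely illustrative (the real `voice_strings` dict is supplied by the caller and its contents are project-specific):

```python
# Hypothetical example mapping; the real voice_strings table is passed in
# by the caller and is not shown in this document.
VOICE_STRINGS = {0: "bring me the red cup", 1: "stop the robot"}

def recognise(utterance_id):
    """Return the canned sentence for a known utterance ID, else None."""
    return VOICE_STRINGS.get(utterance_id)
```

Unknown IDs fall through to `None`, which mirrors the node's "ignore unknown keys" behaviour described below.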
## Interfaces (strongly typed, stateless)

| Direction / Type | Topic | Semantics |
|---|---|---|
| Required (sub) | | Encoded mic "utterance" IDs (`std_msgs/Int32`) |
| Required (sub) | | Legacy mic channel (`std_msgs/Int32`) |
| Provided (pub) | | Recognised text (`std_msgs/String`) |
| Provided (pub) | | Legacy speech topic, latched (`std_msgs/String`) |
## Contract

**Pre-conditions**

- A `voice_strings: dict[int, str]` mapping is passed to the constructor.

**Post-conditions**

- On each valid key, the node publishes exactly one sentence on both topics.
- Unknown keys are ignored (no publish; the last sentence remains latched).

**Invariants**

- The node resets its internal key cache after each successful publish.
- Publisher queues absorb short bursts (`queue_size=10`, latched for the legacy topic).
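The post-conditions can be sketched as a plain function with the publishers injected as callables. The function name and signature are assumptions for illustration, not the node's actual API:

```python
def handle_key(utterance_id, voice_strings, publishers):
    """Sketch of the publish contract: emit the mapped sentence exactly once
    on every publisher for a known key; ignore unknown keys entirely."""
    sentence = voice_strings.get(utterance_id)
    if sentence is None:
        return False            # unknown key: no publish, latched value untouched
    for publish in publishers:
        publish(sentence)       # exactly one sentence per topic
    return True
```

Injecting the publishers keeps the contract testable without a ROS runtime: two plain lists can stand in for the two topics.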
## Implementation summary

1. Callbacks cache each incoming `Int32` message in `self.data`.
2. The main loop checks `self.data` at 10 Hz:
   a. If `self.data` matches a key in `voice_strings`, it publishes the mapped sentence on both topics and logs it.
   b. It then clears `self.data` to avoid replaying the same phrase.
3. The node holds no other state, which makes it easy to hot-reload and unit-test.
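Putting the callback and the main-loop step together, here is a ROS-free sketch of that core (class and method names are assumptions, and the rospy subscriber/publisher wiring is deliberately omitted):

```python
class VoiceRecognitionStub:
    """Cache-and-publish core of the node, with publishers injected so it
    can be exercised without a ROS runtime. Illustrative sketch only."""

    def __init__(self, voice_strings, publish_text, publish_legacy):
        self.voice_strings = voice_strings
        self.publish_text = publish_text      # recognised-text topic
        self.publish_legacy = publish_legacy  # latched legacy topic
        self.data = None                      # cache for the last utterance ID

    def on_utterance(self, msg_data):
        # Subscriber callback: just cache the Int32 payload.
        self.data = msg_data

    def tick(self):
        # One iteration of the 10 Hz main loop.
        if self.data in self.voice_strings:
            sentence = self.voice_strings[self.data]
            self.publish_text(sentence)
            self.publish_legacy(sentence)
            self.data = None  # reset the cache only after a successful publish
```

Because the cache is cleared after a publish, a second `tick()` without a new message publishes nothing, matching the "no replay" behaviour above.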