voice2json is a collection of command-line tools for offline speech/intent recognition on Linux. It is free, open source (MIT), and supports 18 human languages.
From the command-line:
$ voice2json -p en transcribe-wav \
< turn-on-the-light.wav | \
voice2json -p en recognize-intent | \
jq .
produces a JSON event like:
{
"text": "turn on the light",
"intent": {
"name": "LightState"
},
"slots": {
"state": "on"
}
}
when trained with this template:
[LightState]
states = (on | off)
turn (<states>){state} [the] light
Tools like Node-RED can be easily integrated with voice2json through MQTT.
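For example, here is a minimal sketch of bridging recognized intents to an MQTT broker that a Node-RED flow could subscribe to. The broker host, topic name (voice2json/intent), and use of mosquitto_pub are illustrative assumptions, not part of voice2json itself:

# Transcribe a recorded command, recognize the intent, and publish the
# resulting JSON line to an MQTT topic.
voice2json -p en transcribe-wav < turn-on-the-light.wav | \
    voice2json -p en recognize-intent | \
    while read -r intent_json; do
        # Broker host and topic are placeholders; adjust for your setup.
        mosquitto_pub -h localhost -t 'voice2json/intent' -m "${intent_json}"
    done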
voice2json is optimized for:
- Sets of voice commands that are described well by a grammar
- Commands with uncommon words or pronunciations
- Commands or intents that can vary at runtime
It can be used to:
- Add voice commands to existing applications or Unix-style workflows
- Provide basic voice assistant functionality completely offline on modest hardware
- Bootstrap more sophisticated speech/intent recognition systems
Supported speech-to-text systems include:
- CMU’s pocketsphinx
- Dan Povey’s Kaldi
- Mozilla’s DeepSpeech 0.9
- Kyoto University’s Julius
Getting Started
- Install voice2json
- Run voice2json -p <LANG> download-profile to download language-specific files
- Your profile settings will be in $HOME/.local/share/voice2json/<PROFILE>/profile.yml
- Edit sentences.ini in your profile and add your custom voice commands
- Train your profile (an end-to-end sketch follows below)
- Use the transcribe-wav and recognize-intent commands to do speech/intent recognition
- See the recipes for more possibilities
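Put together, a typical first session looks roughly like the sketch below. It assumes the standard voice2json train-profile command and a microphone recorded with arecord; the exact profile directory and the 3-second recording length are illustrative, not prescriptive:

# 1. Download the English profile (language-specific files)
voice2json -p en download-profile

# 2. Add your own voice commands to the profile's sentences.ini
nano "$HOME/.local/share/voice2json/<PROFILE>/sentences.ini"

# 3. Train the speech and intent recognizers from sentences.ini
voice2json -p en train-profile

# 4. Record a short command (16 kHz, 16-bit mono WAV, 3 seconds) and recognize it
arecord -r 16000 -f S16_LE -c 1 -t wav -d 3 command.wav
voice2json -p en transcribe-wav < command.wav | \
    voice2json -p en recognize-intent | \
    jq .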
Supported Languages
voice2json supports the following languages/locales. I don’t speak or write any language besides U.S. English very well, so please let me know if any profile is broken or could be improved! I’m mostly Chinese Room-ing it.
- Catalan (ca)
- Czech (cs)
- German (de)
- Greek (el)
- English (en)
- Spanish (es)
- French (fr)
- Hindi (hi)
- Italian (it)
- Korean (ko)
- Kazakh (kz)
- Dutch (nl): nl_kaldi-cgn (default), nl_kaldi-rhasspy, nl_pocketsphinx-cmu
- Polish (pl): pl_deepspeech-jaco (default), pl_julius-github
- Portuguese (pt)
- Russian (ru): ru_kaldi-rhasspy (default), ru_pocketsphinx-cmu
- Swedish (sv): sv_kaldi-montreal, sv_kaldi-rhasspy (default)
- Vietnamese (vi)
- Mandarin (zh)
Unique Features
voice2json is more than just a wrapper around pocketsphinx, Kaldi, DeepSpeech, and Julius!
- Training produces both a speech and intent recognizer. By describing your voice commands with voice2json’s templating language, you get more than just transcriptions for free.
- Re-training is fast enough to be done at runtime (usually < 5s), even up to millions of possible voice commands. This means you can change referenced slot values or add/remove intents on the fly (see the sketch after this list).
- All of the available commands are designed to work well in Unix pipelines, typically consuming/emitting plaintext or newline-delimited JSON. Audio input/output is file-based, so you can receive audio from any source.
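As a rough sketch of runtime re-training, assuming a profile whose sentences.ini references a slot list stored in slots/colors (the slot name, file layout, and $colors reference are illustrative assumptions, not taken from the text above):

# Add a new value to a slot file that sentences.ini references as $colors
echo "magenta" >> "$HOME/.local/share/voice2json/<PROFILE>/slots/colors"

# Re-train; this typically takes only a few seconds, so it can be
# triggered whenever slot values or intents change at runtime
voice2json -p en train-profile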
How it Works
voice2json needs a description of the voice commands you want to be recognized in a file named sentences.ini. This can be as simple as a listing of [Intents] and sentences:
[GarageDoor]
open the garage door
close the garage door
[LightState]
turn on the living room lamp
turn off the living room lamp
...
A small templating language is available to describe sets of valid voice commands, with [optional words], (alternative | choices), and <shared rules>. Portions of (commands can be){annotated} as containing slot values that you want in the recognized JSON.
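For illustration, here is a small sentences.ini sketch that combines all three constructs with tags, following the same pattern as the earlier LightState example; the intent, rule, and slot names are made up for this example:

[ChangeLightColor]
colors = (red | green | blue)
light_name = (living room lamp | kitchen light)
set [the] (<light_name>){name} [to] (<colors>){color}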
When trained, voice2json will transform audio data into JSON objects with the recognized intent and slots.
Assumptions
voice2json is designed to work under the following assumptions:
- Speech can be segmented into voice commands by a wake word + silence, or via a push-to-talk mechanism
- A voice command contains at most one intent
- Intents and slot values are equally likely
Why Not That
Why not just use Google, Dragon, or something else?
Cloud-based speech and intent recognition services, such as Google Assistant or Amazon’s Alexa, require a constant Internet connection to function. Additionally, they keep a copy of everything you say on their servers. Despite the high accuracy and deep integration with other services, this approach is too brittle and uncomfortable for me.
Dragon Naturally Speaking offers local installations and offline functionality. Great! Unfortunately, Dragon requires Microsoft Windows to function. It is possible to use Dragon in Wine on Linux or via a virtual machine, but this is difficult to set up and not officially supported by Nuance.
Until relatively recently, Snips offered an impressive amount of functionality offline and was easy to interoperate with. Unfortunately, they were purchased by Sonos and have since shut down their online services (required to change your Snips assistants). See Rhasspy if you are looking for a Snips replacement, and avoid investing time and effort in a platform you cannot control!
If you feel comfortable sending your voice commands through the Internet for someone else to process, or are not comfortable with Linux and the command line, I recommend taking a look at Mycroft.
No Magic, No Surprises
voice2json is not an A.I. or gee-whizzy machine learning system. It does not attempt to guess what you want to do, and keeps everything on your local machine. There is no online account sign-up needed, no privacy policy to review, and no advertisements. All generated artifacts are in standard data formats; typically just text.
Once you’ve installed voice2json and downloaded a profile, there is no longer a need for an Internet connection. At runtime, voice2json will only ever write to your profile directory or the system’s temporary directory (/tmp).
Contributing
Community contributions are welcomed! There are many different ways to contribute:
- Pull requests for bug fixes, new features, or corrections to the documentation
- Help with any of the supported language profiles, including:
- Testing to make sure the acoustic models and default pronunciation dictionaries are working
- Translations of the example voice commands
- Example WAV files of you speaking with text transcriptions for performance testing
- Contributing to Mozilla Common Voice
- Assist other voice2json community members
- Implement or critique one of my crazy ideas
Ideas
Here are some ideas I have for making voice2json better that I don’t have time to implement.
Yet Another Wake Word Library
Porcupine is the best free wake word library I’ve found to date, but it has two major limitations for me:
- It is not entirely open source, so I can’t build it for architectures that aren’t currently supported
- Custom wake words expire after 30 days, so I can’t include custom wake words in pre-built packages/images
Picovoice has been very generous to release porcupine for free, so I’m not suggesting they change anything. Instead, I’d love to see a free and open source wake word library that has these features:
- Free and completely open source
- Performance close to porcupine or snowboy
- Able to run on a Raspberry Pi alongside other software (no 100% CPU usage)
- Can add custom wake words without hours of training
Mycroft Precise comes close, but requires a lot of expertise and time to train custom wake words. Its performance is also unfortunately poorer than porcupine’s (in my limited experience).
I’ve wondered if Mycroft Precise’s approach (a GRU) could be extended to include Pocketsphinx’s keyword search mode as an input feature during training and at runtime. On its own, Pocketsphinx’s performance as a wake word detector is abysmal. But perhaps as one of several features in a neural network, it could help more than hurt.
Acoustic Models From Audiobooks
The paper LibriSpeech: An ASR Corpus Based on Public Domain Audio Books describes a method for taking free audio books from LibriVox and training acoustic models from them using Kaldi. For languages besides English, this may be a way of getting around the lack of free transcribed audio datasets! Although not ideal, it’s better than nothing.
For some languages, the audiobook approach may be especially useful with end-to-end machine learning approaches, like Mozilla’s DeepSpeech and Facebook’s wav2letter. Typical approaches to building acoustic models require the identification of a language’s phonemes and the construction of a large pronunciation dictionary. End-to-end approaches go directly from acoustic features to graphemes (letters), subsuming the phonetic dictionary step. More data is required, of course, but books tend to be quite long.
Android Support
voice2json uses pocketsphinx, Kaldi, and Julius for speech recognition. All of these libraries have at least a proof-of-concept Android build.
It seems feasible that voice2json could be ported to Android, providing decent offline mobile speech/intent recognition.
Browser-Based voice2json
Could emscripten be used to compile WebAssembly versions of voice2json’s dependencies? Combined with something like pyodide, it might be possible to run (most of) voice2json entirely in a modern web browser.