

voice2json is a collection of command-line tools for offline speech/intent recognition on Linux. It is free, open source (MIT), and supports 18 human languages.

From the command-line:

$ voice2json -p en transcribe-wav \
      < turn-on-the-light.wav | \
      voice2json -p en recognize-intent | \
      jq .

produces a JSON event like:

{
    "text": "turn on the light",
    "intent": {
        "name": "LightState"
    },
    "slots": {
        "state": "on"
    }
}

when trained with this template:

[LightState]
states = (on | off)
turn (<states>){state} [the] light

Tools like Node-RED can be easily integrated with voice2json through MQTT.
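
For example, a minimal sketch of such an integration (assuming the mosquitto_pub client from the Mosquitto project; the voice2json/intent topic name is just an illustration) publishes each recognized intent to an MQTT broker, one JSON message per line:

$ voice2json -p en transcribe-wav \
      < turn-on-the-light.wav | \
      voice2json -p en recognize-intent | \
      mosquitto_pub -t 'voice2json/intent' -l

A Node-RED flow subscribed to that topic can then parse the JSON and act on the intent and slots.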


voice2json is optimized for:

  • Sets of voice commands described ahead of time with a small template language
  • Completely offline operation on your local machine

It can be used to:

  • Add voice commands to existing applications and command-line workflows
  • Integrate with tools like Node-RED through MQTT

Supported speech to text systems include:

  • pocketsphinx
  • Kaldi
  • DeepSpeech
  • Julius


Getting Started

  1. Install voice2json
  2. Run voice2json -p <LANG> download-profile to download language-specific files
    • Your profile settings will be in $HOME/.local/share/voice2json/<PROFILE>/profile.yml
  3. Edit sentences.ini in your profile and add your custom voice commands
  4. Train your profile with voice2json -p <LANG> train-profile
  5. Use the transcribe-wav and recognize-intent commands to do speech/intent recognition (see the example below)
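
Putting these steps together, a first session might look like the following (using the English profile and the turn-on-the-light.wav example from above; substitute your profile's actual directory for <PROFILE>):

$ voice2json -p en download-profile
$ nano $HOME/.local/share/voice2json/<PROFILE>/sentences.ini
$ voice2json -p en train-profile
$ voice2json -p en transcribe-wav \
      < turn-on-the-light.wav | \
      voice2json -p en recognize-intent | \
      jq .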

Supported Languages

voice2json supports 18 languages/locales. I don’t speak or write any language besides U.S. English very well, so please let me know if any profile is broken or could be improved! I’m mostly Chinese Room-ing it.


Unique Features

voice2json is more than just a wrapper around pocketsphinx, Kaldi, DeepSpeech, and Julius!


How it Works

voice2json needs a description of the voice commands you want to be recognized in a file named sentences.ini. This can be as simple as a listing of [Intents] and sentences:

[GarageDoor]
open the garage door
close the garage door

[LightState]
turn on the living room lamp
turn off the living room lamp
...

A small templating language is available to describe sets of valid voice commands, with [optional words], (alternative | choices), and <shared rules>. Portions of (commands can be){annotated} as containing slot values that you want in the recognized JSON.
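
For example, this illustrative intent combines all of those constructs: a shared <colors> rule, alternatives, an optional word, and a {color} tag marking the slot value to capture:

[LightColor]
colors = (red | green | blue)
set [the] light to (<colors>){color}

Saying "set the light to red" would then produce a LightColor intent with a color slot of "red".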

When trained, voice2json will transform audio data into JSON objects with the recognized intent and slots.
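
You can also test intent recognition on plain text, without recording any audio. A minimal sketch, assuming recognize-intent's plain text input mode (--text-input):

$ echo 'turn on the light' | \
      voice2json -p en recognize-intent --text-input | \
      jq .

This produces the same kind of JSON event shown above.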

(Diagram: custom voice command training)

Assumptions

voice2json is designed to work under the following assumptions:

  • Your set of voice commands is known ahead of time and described in sentences.ini
  • You are running Linux and are comfortable with the command line
  • Recognition happens entirely offline, on your local machine


Why Not That

Why not just use Google, Dragon, or something else?

Cloud-based speech and intent recognition services, such as Google Assistant or Amazon’s Alexa, require a constant Internet connection to function. Additionally, they keep a copy of everything you say on their servers. Despite their high accuracy and deep integration with other services, this approach is too brittle and uncomfortable for me.

Dragon NaturallySpeaking offers local installation and offline functionality. Great! Unfortunately, Dragon requires Microsoft Windows to function. It is possible to run Dragon in Wine on Linux or inside a virtual machine, but this is difficult to set up and not officially supported by Nuance.

Until relatively recently, Snips offered an impressive amount of functionality offline and was easy to interoperate with. Unfortunately, they were purchased by Sonos and have since shut down their online services (required to change your Snips assistants). See Rhasspy if you are looking for a Snips replacement, and avoid investing time and effort in a platform you cannot control!

If you feel comfortable sending your voice commands through the Internet for someone else to process, or are not comfortable with Linux and the command line, I recommend taking a look at Mycroft.

No Magic, No Surprises

voice2json is not an A.I. or gee-whizzy machine learning system. It does not attempt to guess what you want to do, and keeps everything on your local machine. There is no online account sign-up needed, no privacy policy to review, and no advertisements. All generated artifacts are in standard data formats; typically just text.

Once you’ve installed voice2json and downloaded a profile, there is no longer a need for an Internet connection. At runtime, voice2json will only ever write to your profile directory or the system’s temporary directory (/tmp).



Contributing

Community contributions are welcome! There are many different ways to contribute:

  • Report bugs or suggest features
  • Test a language profile you speak and report anything broken or improvable
  • Improve the documentation


Ideas

Here are some ideas I have for making voice2json better that I don’t have time to implement.

Yet Another Wake Word Library

Porcupine is the best free wake word library I’ve found to date, but it has two major limitations for me:

  1. It is not entirely open source
    • I can’t build it for architectures that aren’t currently supported
  2. Custom wake words expire after 30 days
    • I can’t include custom wake words in pre-built packages/images

Picovoice has been very generous to release porcupine for free, so I’m not suggesting they change anything. Instead, I’d love to see a free and open source wake word library with these features:

  • Fully open source, so it can be built for architectures that aren’t currently supported
  • Custom wake words that don’t expire and can be included in pre-built packages/images

Mycroft Precise comes close, but requires a lot of expertise and time to train custom wake words. Its performance is also, in my limited experience, poorer than porcupine’s.

I’ve wondered if Mycroft Precise’s approach (a GRU) could be extended to include pocketsphinx’s keyword search mode as an input feature during training and at runtime. On its own, pocketsphinx’s performance as a wake word detector is abysmal. But perhaps as one of several features in a neural network, it could help more than hurt.

Acoustic Models From Audiobooks

The paper LibriSpeech: An ASR Corpus Based on Public Domain Audio Books describes a method for taking free audio books from LibriVox and training acoustic models from them using Kaldi. For languages besides English, this may be a way of getting around the lack of free transcribed audio datasets! Although not ideal, it’s better than nothing.

For some languages, the audiobook approach may be especially useful with end-to-end machine learning approaches, like Mozilla’s DeepSpeech and Facebook’s wav2letter. Typical approaches to building acoustic models require the identification of a language’s phonemes and the construction of a large pronunciation dictionary. End-to-end approaches go directly from acoustic features to graphemes (letters), subsuming the pronunciation dictionary step. More data is required, of course, but books tend to be quite long.

Android Support

voice2json uses pocketsphinx, Kaldi, and Julius for speech recognition. All of these libraries have at least a proof-of-concept Android build.

It seems feasible that voice2json could be ported to Android, providing decent offline mobile speech/intent recognition.

Browser-Based voice2json

Could emscripten be used to compile WebAssembly versions of voice2json’s dependencies? Combined with something like pyodide, it might be possible to run (most of) voice2json entirely in a modern web browser.

