Real-time Speech Recognition Python SDK

1 Summary

The Python SDK for voice interaction services. Supported services: one-sentence recognition and real-time speech recognition.

Please read this instruction document carefully before use.

1.1 SDK files description

| File/Directory | Description |
| --- | --- |
| speech_rec | SDK source files |
| demo | Example code |
| ├─ transcriber_demo.py | Real-time speech recognition example code |
| ├─ recognizer_demo.py | Short speech (one-sentence) recognition example code |
| ├─ demo.wav | Mandarin Chinese sample audio (WAV format) |
| ├─ demo.mp3 | Mandarin Chinese sample audio (MP3 format) |
| setup.py | Installation script |
| README-JA.md | Japanese operator's manual |
| README-EN.md | English operator's manual |

**Note**: The test audio files provided with the SDK produce the same recognition results. MP3 is the default audio format; if the incoming audio is in WAV or another format, it is converted to MP3 before recognition.

2 Operating environment

Python 3.4 or later, plus ffmpeg. It is recommended to create a dedicated Python runtime environment for the SDK; otherwise version conflicts may occur.
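For example, one way to create such an environment is with the standard venv module; the same shell can also be used to confirm that ffmpeg is available (the environment name here is arbitrary):

$ python3 -m venv speech-sdk-env
$ source speech-sdk-env/bin/activate
$ ffmpeg -version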

3 Installation method

1. Ensure that the Python package management tool setuptools is installed. If it is not, install it from the command line:

$ pip install setuptools

2.Unzip the SDK, go to the folder (where the setup.py file is located), and run the following command in the SDK directory:

$ python setup.py install

Note: The pip and python commands above refer to Python 3.

  • If the following information is displayed, the installation is successful: Finished processing dependencies for speech-python-rec-sdk==1.0.0.8
  • After installation, the build, dist, and speech_python_rec_sdk.egg-info directories are generated.

3. Set the parameters in the demo files:

recognizer_demo.py and transcriber_demo.py are the entry points for one-sentence recognition and real-time speech recognition, respectively. Edit the following values in each file:

# Enter the app_id obtained when you purchased the service on the platform
app_id = '#####'

# Enter the app_secret obtained when you purchased the service on the platform
app_secret = '#####'

# Path of the audio file to recognize; change this to your own audio file
audio_path = '####'

# Language code; for the supported values, see the platform Documentation Center > Speech Recognition > Development Guide
lang_type = 'ja-JP'

4. Run recognizer_demo.py or transcriber_demo.py to recognize the speech. If the token is invalid or expired, delete the local SpeechRecognizer_token.txt or SpeechTranscriber_token.txt file and try again. If the token is still reported as expired, contact the technical staff.

After setting parameters such as app_id in the corresponding Python files, run the commands below in the demo directory:

$ python recognizer_demo.py
$ python transcriber_demo.py

After a successful run, a SpeechRecognizer_token.txt or SpeechTranscriber_token.txt file is generated in the directory from which the demo was run.

Note: If "timestamp timeout" or "timestamp is greater than the current time" is displayed, the local clock is out of sync with the server. Based on the time difference reported in the error message, edit the _token.py file in the SDK source and adjust the line timestamp = int(t), for example to timestamp = int(t) + 1 (or 2, 3, 4, ...) or timestamp = int(t) - 1 (or 2, 3, 4, ...).
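For example, if the error message indicates that the local clock is roughly 2 seconds behind the server, the adjusted line would look like this (the offset value is illustrative):

timestamp = int(t) + 2  # local clock ~2 s behind the server; use int(t) - n if it is ahead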

  • After _token.py is modified, the change takes effect only after the SDK is rebuilt and reinstalled. The specific steps are as follows:

Delete the build, dist, and speech_python_rec_sdk.egg-info directories generated in the SDK directory.

Uninstall and reinstall the SDK, then repeat steps 2-4.
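The corresponding commands, as given earlier in this guide:

$ pip uninstall speech-python-rec-sdk
$ python setup.py install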

4 Parameter description

4.1 Use of real-time speech recognition Demo

speech_rec/demo/transcriber_demo.py is a real-time speech recognition demo that you can run directly.

4.1.2 Key interface description

Real-time speech recognition is performed with the SpeechTranscriber class, and authorization is completed with the token. The call sequence is as follows (a condensed sketch appears after the list):

  1. Acquire the token by calling the get_token() method of the SpeechClient class.
  2. Create a SpeechTranscriber instance.
  3. Create a Callback instance.
  4. Call the set_token() method of the SpeechTranscriber instance and set the request parameters.
  5. Connect to the server by calling the start() method of the SpeechTranscriber instance.
  6. Call the send() method of the SpeechTranscriber instance to send audio.
  7. Call the stop() method of the SpeechTranscriber instance to stop the transmission.
  8. Disconnect from the server by calling the close() method of the SpeechTranscriber instance.
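A minimal sketch of this sequence, condensed from the full demo in section 4.1.4 (app_id, app_secret, audio_path, and the token read from the cached token file are assumed to be defined, and MyCallback is a SpeechTranscriberCallback subclass like the one below; error handling and token refresh are omitted):

import time
import speech_rec

client = speech_rec.SpeechClient()
# Step 1: acquire a token (get_token caches it in the given file).
client.get_token(app_id, app_secret, "SpeechTranscriber_token.txt")
# Steps 2-3: create the callback and the transcriber instance.
callback = MyCallback()
transcriber = client.create_transcriber(callback)
# Step 4: set the authorization and request parameters.
transcriber.set_app_id(app_id)
transcriber.set_token(token)
# Step 5: connect to the server.
if transcriber.start() >= 0:
    with open(audio_path, 'rb') as f:
        chunk = f.read(7680)
        while chunk:
            # Step 6: send audio in chunks, roughly in real time.
            if transcriber.send(chunk) < 0:
                break
            time.sleep(0.24)
            chunk = f.read(7680)
    # Step 7: stop the transmission.
    transcriber.stop()
# Step 8: disconnect from the server.
transcriber.close()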

4.1.3 Parameter description

| Parameter | Type | Required | Description | Default |
| --- | --- | --- | --- | --- |
| lang_type | String | Yes | Language option | Required |
| format | String | No | Audio encoding format | pcm |
| sample_rate | Integer | No | Audio sampling rate. When sample_rate=8000, the field parameter is required and must be set to call-center | 16000 |
| enable_intermediate_result | Boolean | No | Whether to return intermediate recognition results | true |
| enable_punctuation_prediction | Boolean | No | Whether to add punctuation in post-processing | true |
| enable_inverse_text_normalization | Boolean | No | Whether to perform ITN in post-processing | true |
| max_sentence_silence | Integer | No | Sentence-break detection threshold; silence longer than this is treated as a sentence break. Valid range: 200-1200 ms | 800 (sample_rate=16000); 250 (sample_rate=8000) |
| enable_words | Boolean | No | Whether to return word information | false |
| enable_intermediate_words | Boolean | No | Whether to return word information in intermediate results | false |
| enable_modal_particle_filter | Boolean | No | Whether to enable modal particle filtering | true |
| hotwords_list | List<String> | No | One-off hotword list, effective only for the current connection. If both hotwords_list and hotwords_id are present, hotwords_list is used. Up to 100 entries per request | None |
| hotwords_id | String | No | Hotword ID | None |
| hotwords_weight | Float | No | Hotword weight, range [0.1, 1.0] | 0.4 |
| correction_words_id | String | No | Forced-correction vocabulary ID. Multiple IDs may be given, separated by a vertical bar (\|); all indicates that all IDs are used | None |
| forbidden_words_id | String | No | Forbidden-word vocabulary ID. Multiple IDs may be given, separated by a vertical bar (\|); all indicates that all IDs are used | None |
| field | String | No | Field. general: supports a sample_rate of 16000 Hz; call-center: supports a sample_rate of 8000 Hz | None |
| audio_url | String | No | Format of the returned audio URL (stored on the platform for only 30 days): mp3, pcm, or wav (returns a URL for the audio in that format) | None |
| connect_timeout | Integer | No | Connection timeout in seconds, range: 5-60 | 10 |
| gain | Integer | No | Amplitude gain factor, range [1, 20]. 1 means no amplification, 2 doubles the original amplitude, and so on | 1 (sample_rate=16000); 2 (sample_rate=8000) |
| user_id | String | No | Custom user information, returned unchanged in response messages; maximum 36 characters | None |
| enable_lang_label | Boolean | No | Return a language code in recognition results when the language switches; effective only for Japanese-English and Chinese-English mixed languages. Note: enabling this feature may delay responses when switching languages | false |
| paragraph_condition | Integer | No | Once the set character count is reached, start a new paragraph number at the next sentence within the same speaker_id; range [100, 2000]; values outside the range disable the feature | 0 |
| enable_save_log | Boolean | No | Whether to log audio data and recognition results to help us improve product and service quality | true |

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| app_id | String | Application ID | Required |
| token | String | Token obtained via Auth | Required |
| lang_type | String | Recognition language | Required |
| format | String | Audio encoding format | mp3 |
| sample_rate | Integer | Audio sampling rate | 16000 |
| enable_intermediate_result | Boolean | Whether to return intermediate recognition results | false |
| enable_punctuation_prediction | Boolean | Whether to add punctuation in post-processing | false |
| enable_inverse_text_normalization | Boolean | Whether to perform ITN in post-processing | false |
| max_sentence_silence | Integer | Sentence-break detection threshold; silence longer than this is treated as a sentence break. Valid range: 200-2000 ms | 450 |
| enable_words | Boolean | Whether to return word information | false |
| enable_modal_particle_filter | Boolean | Whether to enable modal particle filtering | false |
| hotwords_id | String | Hotword ID | None |
| hotwords_weight | Float | Hotword weight, range [0.1, 1.0] | 0.4 |
| correction_words_id | String | Forced-correction vocabulary ID. Multiple IDs may be given, separated by a vertical bar (\|); all indicates that all IDs are used | None |
| forbidden_words_id | String | Forbidden (sensitive) word ID. Multiple IDs may be given, separated by a vertical bar (\|); all indicates that all IDs are used | None |
| speaker_id | String | Speaker number, up to 36 characters; the excess is truncated. If speaker_id is not passed in the SpeakerStart event, speaker_id in the results is empty. The SpeakerStart event forces a sentence break, so send it only once, just before switching speakers | None |
| enable_save_log | Boolean | Whether to log audio data and recognition results to help us improve product and service quality | true |
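As a sketch of how these parameters are passed: the demo below updates the transcriber's payload before calling start(). The values here are illustrative, and transcriber is assumed to have been created as in section 4.1.2:

# Illustrative values only; see the tables above for ranges and defaults.
payload = {
    "lang_type": "ja-JP",
    "format": "mp3",
    "sample_rate": 16000,
    "field": "general",
    "enable_intermediate_result": True,
    "max_sentence_silence": 800,
}
transcriber._payload.update(**payload)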

4.1.4 Real-time speech recognition example code

For the full code, see the speech_python_rec/demo/transcriber_demo.py file in the SDK.

# -*- coding: utf-8 -*-
import ast
import json
import os.path
import time
import threading
import traceback

import speech_rec
from speech_rec.callbacks import SpeechTranscriberCallback
from speech_rec.parameters import DefaultParameters, Parameters

token = None
expire_time = 7  # Token expiration time, in days

# [left-channel results, right-channel results, transcription-finished flag]
info_list = [[], [], False]


class MyCallback(SpeechTranscriberCallback):
    """
    The parameters of the constructor are not required. You can add them as needed
    The name parameter in the example can be used as the audio file name to be recognized for distinguishing in multithreading
    """

    def __init__(self, name='default'):
        self._name = name

    def started(self, message):
        self.print_message(message)

    def result_changed(self, message):
        self.print_message(message)

    def sentence_begin(self, message):
        self.print_message(message)

    def sentence_end(self, message):
        global info_list
        # The demo tags each channel via user_id ("left" / "right" for stereo input).
        channel = message['header']['user_id']
        begin_time = message['payload']['begin_time']
        end_time = message['payload']['time']
        result = message['payload']['result']
        if channel == "left" or channel == "right":
            if channel == "left":
                if result:
                    info_list[0].append([channel, begin_time, end_time, result])
            elif channel == "right":
                if result:
                    info_list[1].append([channel, begin_time, end_time, result])
            self.print_info()
        else:
            print(message)

    def completed(self, message):
        try:
            print(message)
        except Exception as ee:
            print(ee)
            traceback.print_exc()
        global info_list
        info_list[2] = True

    def print_info(self):
        """Print buffered left/right-channel sentences in time order.

        Each channel's results are held back until the other channel has
        caught up, so the merged output stays ordered.
        """
        left_list = info_list[0]
        right_list = info_list[1]
        if_end = info_list[2]

        def format_string(data_list, list_name):
            channel, begin_time, end_time, result = data_list[0]
            if list_name == "left_list":
                info_list[0].pop(0)
            else:
                info_list[1].pop(0)

            return f"channel:{channel}\tbegin_time:{begin_time}\tend_time:{end_time}\tresult:{result}"

        # While both buffers hold results, print whichever head sentence
        # starts first (ties broken by the earlier end time).
        while left_list and right_list:
            _, left_begin_time, left_end_time, _ = left_list[0]
            _, right_begin_time, right_end_time, _ = right_list[0]
            if (left_begin_time, left_end_time) <= (right_begin_time, right_end_time):
                print(format_string(left_list, "left_list"))
            else:
                print(format_string(right_list, "right_list"))
        # Once transcription has finished, flush whatever remains.
        if if_end:
            while left_list:
                print(format_string(left_list, "left_list"))
            while right_list:
                print(format_string(right_list, "right_list"))

    def print_message(self, message):
        channel = message['header']['user_id']
        if channel == "left" or channel == "right":
            pass
        else:
            print(message)

    def task_failed(self, message):
        print(message)

    def warning_info(self, message):
        print(message)

    def channel_closed(self):
        print('MyCallback.OnTranslationChannelClosed')


def solution(client, app_id, app_secret, audio_path, lang_type, kwargs):
    """
    Transcribe speech,single thread
    :param client: SpeechClient
    :param app_id: Your app_id
    :param app_secret: Your app_secret
    :param audio_path: Audio path
    :param lang_type: Language type
    :param kwargs: Optional settings (audio_format, field, sample_rate, user_id)
    """
    each_audio_format = kwargs.get("audio_format", DefaultParameters.MP3)
    field_ = kwargs.get("field", DefaultParameters.FIELD)
    user_id = kwargs.get("user_id", "default")
    sample_rate_ = kwargs.get("sample_rate", 16000)
    assert os.path.exists(audio_path), "Audio file path error, please check your audio path."
    if judging_expire_time(app_id, app_secret, expire_time):
        callback = MyCallback(audio_path)
        transcriber = client.create_transcriber(callback)
        transcriber.set_app_id(app_id)
        transcriber.set_token(token)
        # fixme You can customize the configuration according to the official website documentation
        payload = {
            "lang_type": lang_type,
            "format": each_audio_format,
            "field": field_,
            "sample_rate": sample_rate,
            "user_id": user_id
        }
        transcriber._payload.update(**payload)
        try:
            ret = transcriber.start()
            if ret < 0:
                return ret
            with open(audio_path, 'rb') as f:
                audio = f.read(7680)
                cnt = 0
                while audio:
                    ret = transcriber.send(audio)
                    # fixme: If you need to mandatory clause or set speaker id by yourself, please use the codes below

                    # Default, customizable and changeable
                    # if cnt % 768000 == 0:
                    #     # Mandatory clause setting
                    #     transcriber.set_mandatory_clause(True)
                    #     transcriber._header = transcriber.get_mandatory_clause()
                    #     transcriber.send(json.dumps({Parameters.HEADER: transcriber._header}), False)
                    #     # Set speaker ID
                    #     transcriber.set_speaker_id(speaker_id)
                    #     speaker_id_info = transcriber.get_speaker_id()
                    #     transcriber.send(json.dumps(speaker_id_info), False)
                    #     print("Mandatory and Set speaker:",transcriber._payload)

                    if ret < 0:
                        break
                    cnt += 7680
                    time.sleep(0.24)
                    audio = f.read(7680)
            transcriber.stop()
        except Exception as e:
            print(e)
        finally:
            transcriber.close()
    else:
        print("token expired")


def judging_expire_time(app_id, app_secret, extime):
    """Load the cached token, refreshing it when it is close to expiring."""
    global token
    token_file = "SpeechTranscriber_token.txt"
    new_time = time.time()
    if not os.path.exists(token_file):
        client.get_token(app_id, app_secret, token_file)

    with open(token_file, "r", encoding="utf-8") as fr:
        # The token cache is a dict literal; literal_eval is a safer drop-in for eval.
        token_info = ast.literal_eval(fr.read())
    old_time = token_info['time']
    token = token_info['token']
    flag = True
    # Refresh the token one day before it expires.
    if new_time - old_time > 60 * 60 * 24 * (extime - 1):
        flag, _ = client.get_token(app_id, app_secret, token_file)
        if not flag:
            # Retry a few times if the refresh failed.
            for _ in range(7):
                flag, _ = client.get_token(app_id, app_secret, token_file)
                if flag is not None:
                    flag = True
                    break
    return flag


def channels_split_solution(audio_path, right_path, left_path, **kwargs):
    client = kwargs.get('client')
    appid = kwargs.get('app_id')
    appsecret = kwargs.get('app_secret')
    langtype = kwargs.get('lang_type')
    remove_audio = kwargs.get('rm_audio', True)
    # Split the stereo audio into one mono file per channel.
    client.auto_split_audio(audio_path, right_path, left_path)
    thread_list = []
    # Transcribe each channel in its own thread; user_id tags the results so
    # the callback can tell the channels apart.
    right_kwargs = kwargs.copy()
    right_kwargs["user_id"] = "right"
    thread_r = threading.Thread(target=solution, args=(client, appid, appsecret, right_path, langtype, right_kwargs))
    thread_list.append(thread_r)
    left_kwargs = kwargs.copy()
    left_kwargs["user_id"] = "left"
    thread_l = threading.Thread(target=solution, args=(client, appid, appsecret, left_path, langtype, left_kwargs))
    thread_list.append(thread_l)
    for thread in thread_list:
        thread.start()
    for thread in thread_list:
        thread.join()
    if remove_audio:
        try:
            os.remove(right_path)
            os.remove(left_path)
        except Exception as ee:
            print(ee)
            traceback.print_exc()


if __name__ == "__main__":
    client = speech_rec.SpeechClient()
    # Set the log level: DEBUG, INFO, WARNING, ERROR
    client.set_log_level('INFO')
    # Type your app_id and app_secret
    app_id = ""  # your app id
    app_secret = ""  # your app secret
    audio_path = ""  # audio path
    lang_type = ""  # lang type
    field = ""  # field
    sample_rate = 16000  # sample rate [int] 16000 or 8000
    audio_format = ""  # audio format
    assert app_id and app_secret and audio_path and lang_type and field and sample_rate and audio_format, "Please check args"
    channel = client.get_audio_info(audio_path)['channel']
    # fixme This is just a simple example, please modify it according to your needs.
    if channel == 1:
        kwargs = {
            "field": field,
            "sample_rate": sample_rate,
            "audio_format": audio_format,
            "user_id": "",
        }
        solution(client, app_id, app_secret, audio_path, lang_type, kwargs)
    elif channel == 2:
        # Dual channel 8K audio solution
        channels_split_solution(audio_path=audio_path,
                                left_path=f"left.{audio_format}",
                                right_path=f"right.{audio_format}",
                                client=client,
                                app_id=app_id,
                                app_secret=app_secret,
                                lang_type=lang_type,
                                field=field,
                                sample_rate=sample_rate,
                                audio_format=audio_format,
                                )

5 SDK Download

Python SDK