Python SDK
Real-time Speech Recognition Python SDK
1 Summary
The Python SDK for voice interaction services. Supported services: one-sentence recognition and real-time speech recognition.
Please read this document carefully before use.
1.1 SDK files description
| File/Directory | Description |
|---|---|
| speech_rec | SDK related files |
| demo | Example code |
| ├─ transcriber_demo.py | Real-time speech recognition example code |
| ├─ recognizer_demo.py | One-sentence recognition example code |
| ├─ demo.wav | Chinese Mandarin Sample Audio (WAV Format) |
| ├─ demo.mp3 | Chinese Mandarin Sample Audio (MP3 Format) |
| setup.py | Installation script |
| README-JA.md | Japanese manual |
| README-EN.md | English manual |
**Note**: The recognition results for the test audio files provided with the SDK are identical. The default audio format is MP3; if the input audio is WAV or another format, it is converted to MP3 before recognition.
2 Operating environment
Python 3.4 or later, and ffmpeg. Creating a separate Python runtime environment is recommended; otherwise version conflicts may occur.
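A minimal environment setup sketch for the requirement above, using the standard venv module (the environment name sdk-env is arbitrary, and ffmpeg must be installed separately through your system's package manager):

```shell
# Create and activate an isolated environment for the SDK (the name sdk-env is arbitrary)
python3 -m venv sdk-env
. sdk-env/bin/activate

# Confirm the interpreter is available; install ffmpeg via your package manager if missing
python --version
command -v ffmpeg || echo "ffmpeg not found: install it before running the demos"
```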
3 Installation method
1. Ensure that the Python package management tool setuptools is installed; if it is not, install it by running the following on the command line:
$ pip install setuptools
2. Unzip the SDK, go to the folder containing setup.py, and run the following command in the SDK directory:
$ python setup.py install
Note:
- The pip and python commands above refer to Python 3.
- If the following message is displayed, the installation succeeded: Finished processing dependencies for speech-python-rec-sdk==1.0.0.8
- After installation, the build, dist, and speech_python_rec_sdk.egg-info directories are generated.
3. Set the parameters in the demo files.
recognizer_demo.py and transcriber_demo.py are the entry scripts for one-sentence recognition and real-time speech recognition, respectively.
app_id = '#####'      # the appID obtained when you purchased the service on the platform
app_secret = '#####'  # the appSecret obtained when you purchased the service on the platform
audio_path = '####'   # path of the audio file to recognize; change this to your own audio file
lang_type = 'ja-JP'   # language; for the format, see platform Documentation Center - Speech Recognition - Development Guide
4. Run recognizer_demo.py or transcriber_demo.py to recognize the speech. If the token is invalid or expired, delete the local SpeechRecognizer_token.txt or SpeechTranscriber_token.txt file and try again. If the token is still reported as expired, contact technical support.
Set parameters such as app_id in the corresponding Python file, then run the commands in the demo directory:
$ python recognizer_demo.py
$ python transcriber_demo.py
# After a successful run, SpeechRecognizer_token.txt or SpeechTranscriber_token.txt is generated in the directory from which the demo was run.
Note:
- If "timestamp timeout" or "timestamp is greater than the current time" is displayed, the local time differs from the server time. Based on the time difference shown in the error message, edit the _token.py file in the speech-python-rec package and adjust the line timestamp = int(t) accordingly, for example timestamp = int(t) + 1 (or 2, 3, 4, and so on) or timestamp = int(t) - 1 (or 2, 3, 4, and so on).
- After _token.py is modified, the change takes effect only after the SDK is rebuilt. The steps are as follows:
  1. Delete the build, dist, and speech_python_rec_sdk.egg-info directories generated in the SDK directory.
  2. Uninstall the SDK with $ pip uninstall speech-python-rec-sdk, then reinstall it by repeating installation steps 2 to 4.
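The timestamp adjustment described in the note above can be expressed as follows. This is only an illustrative sketch mirroring the documented timestamp = int(t) line, not the SDK's actual _token.py; the offset is whatever clock difference the error message reports:

```python
import time

def adjusted_timestamp(offset_seconds=0):
    # Mirrors the `timestamp = int(t)` line in _token.py, shifted by the
    # local-vs-server clock difference (positive or negative whole seconds).
    t = time.time()
    return int(t) + offset_seconds
```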
4 Parameter description
4.1 Use of real-time speech recognition Demo
speech_rec/demo/transcriber_demo.py is a real-time speech recognition demo; it can be run directly.
4.1.2 Key interface description
The real-time speech recognition SDK is used mainly through the Transcriber class, with authorization handled by the Token class. The call sequence is:
- Acquire the token by calling the get_token() method of the SpeechClient class.
- Create a SpeechTranscriber instance.
- Create a Callback instance.
- Call the set_token() and other setter methods of the SpeechTranscriber instance to set parameters.
- Connect to the server by calling the start() method of the SpeechTranscriber instance.
- Call the send() method of the SpeechTranscriber instance to send audio.
- Call the stop() method of the SpeechTranscriber instance to stop the transmission.
- Disconnect from the server by calling the close() method of the SpeechTranscriber instance.
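The call sequence above can be sketched as a small driver function. This is a hedged illustration, not SDK code: it only assumes an object exposing the start/send/stop/close methods named in the list, so it is not tied to the real SpeechTranscriber class. The chunk size of 7680 bytes follows the demo code later in this document.

```python
def run_transcription(transcriber, audio_bytes, chunk_size=7680):
    """Drive a transcriber-like object through the documented call sequence:
    start() -> send() per chunk -> stop() -> close()."""
    if transcriber.start() < 0:          # connect to the server
        transcriber.close()
        return False
    try:
        for i in range(0, len(audio_bytes), chunk_size):
            if transcriber.send(audio_bytes[i:i + chunk_size]) < 0:
                return False             # abort on a send error
        transcriber.stop()               # stop the transmission
        return True
    finally:
        transcriber.close()              # always disconnect
```

In the real SDK the audio would be streamed from a file with pacing between chunks, as the full demo below shows.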
4.1.3 Parameter description
| Parameter | Type | Required | Description | Default Value |
|---|---|---|---|---|
| lang_type | String | Yes | Language option | Required |
| format | String | No | Audio encoding format | pcm |
| sample_rate | Integer | No | Audio sampling rate. When sample_rate=8000, the field parameter is required and must be set to field='call-center' | 16000 |
| enable_intermediate_result | Boolean | No | Whether to return intermediate recognition results | true |
| enable_punctuation_prediction | Boolean | No | Whether to add punctuation in post-processing | true |
| enable_inverse_text_normalization | Boolean | No | Whether to perform ITN in post-processing | true |
| max_sentence_silence | Integer | No | Sentence-break detection threshold. Silence longer than this threshold is treated as a sentence break. Valid range: 200 to 1200. Unit: milliseconds | sample_rate=16000: 800 sample_rate=8000: 250 |
| enable_words | Boolean | No | Whether to return word information | false |
| enable_intermediate_words | Boolean | No | Whether to return intermediate result word information | false |
| enable_modal_particle_filter | Boolean | No | Whether to enable modal particle filtering | true |
| hotwords_list | List<String> | No | One-time hotwords list, effective only for the current connection. If both hotwords_list and hotwords_id parameters exist, hotwords_list will be used. Up to 100 entries can be provided at a time. | None |
| hotwords_id | String | No | Hotwords ID | None |
| hotwords_weight | Float | No | Hotwords weight, the range of values [0.1, 1.0] | 0.4 |
| correction_words_id | String | No | Forced correction vocabulary ID. Supports multiple IDs separated by a vertical bar (\|); all indicates that all IDs are used | None |
| forbidden_words_id | String | No | Forbidden words ID. Supports multiple IDs separated by a vertical bar (\|); all indicates that all IDs are used | None |
| field | String | No | Field general: supports the sample_rate of 16000Hz call-center: supports the sample_rate of 8000Hz | None |
| audio_url | String | No | Format of the returned audio URL (the audio is stored on the platform for 30 days only). mp3: returns a URL for the audio in MP3 format; pcm: returns a URL for the audio in PCM format; wav: returns a URL for the audio in WAV format | None |
| connect_timeout | Integer | No | Connection timeout (seconds), range: 5-60 | 10 |
| gain | Integer | No | Amplitude gain factor, range [1, 20]. 1 means no amplification, 2 means the original amplitude is doubled, and so on | sample_rate=16000: 1 sample_rate=8000: 2 |
| user_id | String | No | Custom user information, which will be returned unchanged in the response message, with a maximum length of 36 characters | None |
| enable_lang_label | Boolean | No | Return language code in recognition results when switching languages, only effective for Japanese-English and Chinese-English mixed languages. Note: Enabling this feature may cause a response delay when switching languages | false |
| paragraph_condition | Integer | No | Return a new paragraph number in the next sentence within the same speaker_id when the set character count is reached, range [100, 2000], values outside the range indicate that this feature is not enabled | 0 |
| enable_save_log | Boolean | No | Whether to provide audio data and recognition result logs to help us improve product and service quality | true |
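To make the ranges in the table above concrete, here is a small hypothetical helper (not part of the SDK) that checks several of the documented constraints before a payload is sent:

```python
def validate_transcriber_payload(payload):
    """Check a few of the documented parameter constraints.
    Returns a list of problem descriptions (empty if none found).
    Hypothetical helper -- not part of the SDK."""
    problems = []
    sr = payload.get("sample_rate", 16000)
    if sr not in (8000, 16000):
        problems.append("sample_rate must be 8000 or 16000")
    if sr == 8000 and payload.get("field") != "call-center":
        problems.append("sample_rate=8000 requires field='call-center'")
    silence = payload.get("max_sentence_silence")
    if silence is not None and not 200 <= silence <= 1200:
        problems.append("max_sentence_silence must be in [200, 1200] ms")
    weight = payload.get("hotwords_weight")
    if weight is not None and not 0.1 <= weight <= 1.0:
        problems.append("hotwords_weight must be in [0.1, 1.0]")
    hotwords = payload.get("hotwords_list")
    if hotwords is not None and len(hotwords) > 100:
        problems.append("hotwords_list allows at most 100 entries")
    gain = payload.get("gain")
    if gain is not None and not 1 <= gain <= 20:
        problems.append("gain must be in [1, 20]")
    return problems
```

A check like this can catch configuration mistakes locally instead of waiting for a server-side error after the connection is opened.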
| Parameter | Type | Description | Default Value |
|---|---|---|---|
| app_id | String | Application id | required |
| token | String | Token obtained through authorization (Auth) | required |
| lang_type | String | Recognition language | required |
| format | String | Audio coding format | mp3 |
| sample_rate | Integer | Audio sampling rate | 16000 |
| enable_intermediate_result | Boolean | Whether to return intermediate recognition results | false |
| enable_punctuation_prediction | Boolean | Whether to add punctuation in post-processing | false |
| enable_inverse_text_normalization | Boolean | Whether to execute ITN in post-processing | false |
| max_sentence_silence | Integer | Sentence-break detection threshold. Silence longer than this threshold is treated as a sentence break. Valid range: 200 to 2000 ms | 450 |
| enable_words | Boolean | Whether to enable return word information | false |
| enable_modal_particle_filter | Boolean | Whether to enable modal word filtering | false |
| hotwords_id | String | Hot word ID | none |
| hotwords_weight | Float | Hot word weight, value range [0.1, 1.0] | 0.4 |
| correction_words_id | String | Forced correction vocabulary ID. Supports multiple IDs separated by a vertical bar (\|); all indicates that all forced correction vocabulary IDs are used | none |
| forbidden_words_id | String | Forbidden words ID. Supports multiple IDs separated by a vertical bar (\|); all indicates that all forbidden word IDs are used | none |
| speaker_id | String | Speaker ID, up to 36 characters; excess characters are truncated and discarded. If the speaker_id parameter is not passed in the SpeakerStart event, speaker_id in the returned result is empty. The SpeakerStart event triggers a forced sentence break, so send it only once, just before switching speakers. | none |
| enable_save_log | Boolean | Whether to provide audio data and recognition result logs to help us improve product and service quality | true |
4.1.4 Real-time speech recognition example code
For the full code, see the speech_python_rec/demo/transcriber_demo.py file in the SDK.
# -*- coding: utf-8 -*-
import json
import os.path
import time
import threading
import traceback

import speech_rec
from speech_rec.callbacks import SpeechTranscriberCallback
from speech_rec.parameters import DefaultParameters, Parameters

token = None
expire_time = 7  # Token expiration time (days)
info_list = [[], [], False]


class MyCallback(SpeechTranscriberCallback):
    """
    The parameters of the constructor are optional; add them as needed.
    The name parameter in this example can carry the audio file name,
    which helps tell threads apart in multithreaded use.
    """

    def __init__(self, name='default'):
        self._name = name

    def started(self, message):
        self.print_message(message)

    def result_changed(self, message):
        self.print_message(message)

    def sentence_begin(self, message):
        self.print_message(message)

    def sentence_end(self, message):
        global info_list
        channel = message['header']['user_id']
        begin_time = message['payload']['begin_time']
        end_time = message['payload']['time']
        result = message['payload']['result']
        if channel == "left" or channel == "right":
            if channel == "left":
                if result:
                    info_list[0].append([channel, begin_time, end_time, result])
            elif channel == "right":
                if result:
                    info_list[1].append([channel, begin_time, end_time, result])
            self.print_info()
        else:
            print(message)

    def completed(self, message):
        global info_list
        try:
            print(message)
        except Exception as ee:
            print(ee)
            traceback.print_exc()
        info_list[2] = True

    def print_info(self):
        left_list = info_list[0]
        right_list = info_list[1]
        if_end = info_list[2]

        def format_string(data_list, list_name):
            channel, begin_time, end_time, result = data_list[0]
            if list_name == "left_list":
                info_list[0].pop(0)
            else:
                info_list[1].pop(0)
            return f"channel:{channel}\tbegin_time:{begin_time}\tend_time:{end_time}\tresult:{result}"

        if left_list and right_list:
            while True:
                if not left_list and not right_list:
                    break
                if left_list and right_list:
                    left_begin_time = left_list[0][1]
                    left_end_time = left_list[0][2]
                    right_begin_time = right_list[0][1]
                    right_end_time = right_list[0][2]
                    if left_begin_time == right_begin_time and left_end_time > right_end_time:
                        print(format_string(right_list, "right_list"))
                    elif left_begin_time == right_begin_time and left_end_time <= right_end_time:
                        print(format_string(left_list, "left_list"))
                    elif left_begin_time < right_begin_time:
                        print(format_string(left_list, "left_list"))
                    elif left_begin_time >= right_begin_time:
                        print(format_string(right_list, "right_list"))
                if left_list and not right_list:
                    if left_end_time > right_end_time:
                        break
                    else:
                        print(format_string(left_list, "left_list"))
                if not left_list and right_list:
                    if right_end_time > left_end_time:
                        print(format_string(right_list, "right_list"))
                    else:
                        break
        elif if_end:
            while left_list:
                print(format_string(left_list, "left_list"))
            while right_list:
                print(format_string(right_list, "right_list"))

    def print_message(self, message):
        channel = message['header']['user_id']
        if channel == "left" or channel == "right":
            pass
        else:
            print(message)

    def task_failed(self, message):
        print(message)

    def warning_info(self, message):
        print(message)

    def channel_closed(self):
        print('MyCallback.OnTranslationChannelClosed')


def solution(client, app_id, app_secret, audio_path, lang_type, kwargs):
    """
    Transcribe speech, single thread.
    :param client: SpeechClient
    :param app_id: Your app_id
    :param app_secret: Your app_secret
    :param audio_path: Audio path
    :param lang_type: Language type
    :param kwargs: Extra options (audio_format, field, user_id, sample_rate)
    """
    each_audio_format = kwargs.get("audio_format", DefaultParameters.MP3)
    field_ = kwargs.get("field", DefaultParameters.FIELD)
    user_id = kwargs.get("user_id", "default")
    sample_rate = kwargs.get("sample_rate", 16000)
    assert os.path.exists(audio_path), "Audio file path error, please check your audio path."
    if judging_expire_time(app_id, app_secret, expire_time):
        callback = MyCallback(audio_path)
        transcriber = client.create_transcriber(callback)
        transcriber.set_app_id(app_id)
        transcriber.set_token(token)
        # fixme: customize the configuration according to the official website documentation
        payload = {
            "lang_type": lang_type,
            "format": each_audio_format,
            "field": field_,
            "sample_rate": sample_rate,
            "user_id": user_id
        }
        transcriber._payload.update(**payload)
        try:
            ret = transcriber.start()
            if ret < 0:
                return ret
            with open(audio_path, 'rb') as f:
                audio = f.read(7680)
                cnt = 0
                while audio:
                    ret = transcriber.send(audio)
                    # fixme: to force a sentence break or set the speaker id yourself, use the code below
                    # Default, customizable and changeable
                    # if cnt % 768000 == 0:
                    #     # Mandatory clause setting
                    #     transcriber.set_mandatory_clause(True)
                    #     transcriber._header = transcriber.get_mandatory_clause()
                    #     transcriber.send(json.dumps({Parameters.HEADER: transcriber._header}), False)
                    #     # Set speaker ID
                    #     transcriber.set_speaker_id(speaker_id)
                    #     speaker_id_info = transcriber.get_speaker_id()
                    #     transcriber.send(json.dumps(speaker_id_info), False)
                    #     print("Mandatory and Set speaker:", transcriber._payload)
                    if ret < 0:
                        break
                    cnt += 7680
                    time.sleep(0.24)
                    audio = f.read(7680)
            transcriber.stop()
        except Exception as e:
            print(e)
        finally:
            transcriber.close()
    else:
        print("token expired")


def judging_expire_time(app_id, app_secret, extime):
    global token
    token_file = "SpeechTranscriber_token.txt"
    new_time = time.time()
    if not os.path.exists(token_file):
        client.get_token(app_id, app_secret, token_file)
    with open(token_file, "r", encoding="utf-8") as fr:
        token_info = eval(fr.read())
    old_time = token_info['time']
    token = token_info['token']
    flag = True
    if new_time - old_time > 60 * 60 * 24 * (extime - 1):
        flag, _ = client.get_token(app_id, app_secret, token_file)
        if flag:
            flag = True
        else:
            for i in range(7):
                flag, _ = client.get_token(app_id, app_secret, token_file)
                if flag is not None:
                    flag = True
                    break
    return flag


def channels_split_solution(audio_path, right_path, left_path, **kwargs):
    client = kwargs.get('client')
    appid = kwargs.get('app_id')
    appsecret = kwargs.get('app_secret')
    langtype = kwargs.get('lang_type')
    remove_audio = kwargs.get('rm_audio', True)
    client.auto_split_audio(audio_path, right_path, left_path)
    thread_list = []
    right_kwargs = kwargs.copy()
    right_kwargs["user_id"] = "right"
    thread_r = threading.Thread(target=solution, args=(client, appid, appsecret, right_path, langtype, right_kwargs))
    thread_list.append(thread_r)
    left_kwargs = kwargs.copy()
    left_kwargs["user_id"] = "left"
    thread_l = threading.Thread(target=solution, args=(client, appid, appsecret, left_path, langtype, left_kwargs))
    thread_list.append(thread_l)
    for thread in thread_list:
        thread.start()
    for thread in thread_list:
        thread.join()
    if remove_audio:
        try:
            os.remove(right_path)
            os.remove(left_path)
        except Exception as ee:
            print(ee)
            traceback.print_exc()


if __name__ == "__main__":
    client = speech_rec.SpeechClient()
    # Output log level: DEBUG, INFO, WARNING, ERROR
    client.set_log_level('INFO')
    # Type your app_id and app_secret
    app_id = ""          # your app id
    app_secret = ""      # your app secret
    audio_path = ""      # audio path
    lang_type = ""       # lang type
    field = ""           # field
    sample_rate = 16000  # sample rate [int], 16000 or 8000
    audio_format = ""    # audio format
    assert app_id and app_secret and audio_path and lang_type and field and sample_rate and audio_format, "Please check args"
    channel = client.get_audio_info(audio_path)['channel']
    # fixme: this is just a simple example; modify it according to your needs
    if channel == 1:
        kwargs = {
            "field": field,
            "sample_rate": sample_rate,
            "audio_format": audio_format,
            "user_id": "",
        }
        solution(client, app_id, app_secret, audio_path, lang_type, kwargs)
    elif channel == 2:
        # Dual-channel 8K audio solution
        channels_split_solution(audio_path=audio_path,
                                left_path=f"left.{audio_format}",
                                right_path=f"right.{audio_format}",
                                client=client,
                                app_id=app_id,
                                app_secret=app_secret,
                                lang_type=lang_type,
                                field=field,
                                sample_rate=sample_rate,
                                audio_format=audio_format,
                                )