WIP: Add shallow fusion to C API #149
base: master
Conversation
Hi, I'm using OnlineLMConfig via the Python API, but it seems to have no effect when I set the RNN LM ONNX path as `model`. May I ask what I should do to use shallow fusion from the Python API?
How can you tell it does not work?
By decoding a test set of 1k samples (about 10k characters), I got exactly the same result as without the LM.
Before this, I obtained better results using the Python LM decoding scripts from Icefall, with the same test set, ASR model, and LM model. So I expected better results here as well, even if only a few characters changed.
How many LM scales have you tried?
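The LM scale usually needs tuning per test set. A minimal sketch of such a sweep, where `decode_test_set` is a hypothetical callback (not part of sherpa-onnx) that decodes with a given scale and returns an error rate:

```python
def sweep_lm_scales(decode_test_set, scales=(0.1, 0.3, 0.5, 0.7, 1.0, 1.5)):
    """Decode with each LM scale and return the best one plus all scores.

    decode_test_set: callable taking a scale and returning a WER-like
    score (lower is better). Purely illustrative.
    """
    results = {s: decode_test_set(s) for s in scales}
    best = min(results, key=results.get)
    return best, results
```

For example, `sweep_lm_scales(lambda s: run_decode(scale=s))` would return the scale with the lowest error rate together with the full score table.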
Python code:

```python
lm_config = OnlineLMConfig(
    model=lm,
    scale=scale,
)
print(lm_config)
print("=" * 30)
recognizer_config = OnlineRecognizerConfig(
    feat_config=feat_config,
    model_config=model_config,
    lm_config=lm_config,
    endpoint_config=endpoint_config,
    enable_endpoint=enable_endpoint_detection,
    decoding_method=decoding_method,
    max_active_paths=max_active_paths,
    context_score=context_score,
)
print(recognizer_config)
```

and then it prints out this:

```
OnlineLMConfig(model="base/with-state-epoch-21-avg-2.onnx", scale=1.1)
==============================
OnlineRecognizerConfig(feat_config=FeatureExtractorConfig(sampling_rate=16000, feature_dim=80), model_config=OnlineTransducerModelConfig(encoder_filename="./asr_model_chunk320/encoder.onnx", decoder_filename="./asr_model_chunk320/decoder.onnx", joiner_filename="./asr_model_chunk320/joiner.onnx", tokens="./asr_model_chunk320/tokens.txt", num_threads=8, provider="cpu", model_type="", debug=False), lm_config=OnlineLMConfig(model="", scale=0.5), endpoint_config=EndpointConfig(rule1=EndpointRule(must_contain_nonsilence=False, min_trailing_silence=2.4, min_utterance_length=0), rule2=EndpointRule(must_contain_nonsilence=True, min_trailing_silence=1.2, min_utterance_length=0), rule3=EndpointRule(must_contain_nonsilence=False, min_trailing_silence=0, min_utterance_length=20)), enable_endpoint=False, max_active_paths=4, context_score=1.5, decoding_method="modified_beam_search")
```

I added this and rebuilt, but recognizer_config still prints lm_config=OnlineLMConfig(model="", scale=0.5).
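The symptom above (the standalone `lm_config` prints correctly, but inside `recognizer_config` it shows the defaults `model="", scale=0.5`) is what you see when a constructor silently drops a parameter instead of forwarding it. A minimal illustration of that bug pattern, using hypothetical stand-in classes rather than the actual sherpa-onnx bindings:

```python
from dataclasses import dataclass, field


@dataclass
class OnlineLMConfig:
    model: str = ""
    scale: float = 0.5


@dataclass
class OnlineRecognizerConfig:
    lm_config: OnlineLMConfig = field(default_factory=OnlineLMConfig)


def make_recognizer_config_buggy(lm_config):
    # Bug pattern: the argument is accepted but never forwarded,
    # so the recognizer silently keeps the default (empty) LM config.
    return OnlineRecognizerConfig()


def make_recognizer_config_fixed(lm_config):
    # Fix: forward the LM config explicitly.
    return OnlineRecognizerConfig(lm_config=lm_config)
```

With `lm = OnlineLMConfig(model="lm.onnx", scale=1.1)`, the buggy path prints `OnlineLMConfig(model='', scale=0.5)` while the fixed path preserves the values, which matches the behavior reported in this thread.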
by adding
Thank you for identifying the bug. Would you mind creating a PR to fix it?
Sure, I will create a PR after more testing is finished.
I'd like to use the online LM with the C API. What's the status on this?
Integrate changes from #147
TODOs