TRANSFORMER-TRANSDUCER:END-TO-END SPEECH RECOGNITION WITH SELF-ATTENTION

1. Paper Summary
(1) A VGGNet-style causal convolution, combined with positional information, downsamples the input to keep inference efficient. (2) Truncated self-attention makes the Transformer streamable and reduces computational complexity. The model achieves a 6.37% word error rate (WER) on LibriSpeech test-clean, and on …
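The truncated self-attention idea in (2) can be illustrated with a minimal NumPy sketch: each position attends only to a fixed window of `left` past frames and `right` future frames, so per-step cost no longer grows with the full sequence. This is an illustrative simplification, not the paper's implementation; the function names and the `-1e9` masking constant are my own choices.

```python
import numpy as np

def truncated_attention_mask(T, left, right):
    """mask[i, j] is True iff position i may attend to position j,
    i.e. i - left <= j <= i + right (hypothetical helper)."""
    i = np.arange(T)[:, None]
    j = np.arange(T)[None, :]
    return (j >= i - left) & (j <= i + right)

def truncated_self_attention(Q, K, V, left, right):
    """Single-head scaled dot-product attention restricted to a
    [i - left, i + right] window around each query position."""
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    # Positions outside the window get a large negative score,
    # so their softmax weight is effectively zero.
    scores = np.where(truncated_attention_mask(T, left, right), scores, -1e9)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V
```

Setting `right = 0` gives a fully causal (zero-lookahead) streaming layer; a small positive `right` trades a fixed amount of latency for accuracy.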