
Text-to-Speech with Tacotron2

Authors: Yao-Yuan Yang, Moto Hira

Overview

This tutorial shows how to build a text-to-speech pipeline using the pretrained Tacotron2 in torchaudio.

The text-to-speech pipeline goes as follows:

  1. Text preprocessing

    First, the input text is encoded into a list of symbols. In this tutorial, we will use English characters and phonemes as the symbols.

  2. Spectrogram generation

    From the encoded text, a spectrogram is generated. We use the Tacotron2 model for this.

  3. Time-domain conversion

    The last step is converting the spectrogram into a waveform. The process of generating speech from a spectrogram is also called a vocoder. In this tutorial, three different vocoders are used: WaveRNN, GriffinLim, and Nvidia's WaveGlow.

The following figure illustrates the whole process.

https://download.pytorch.org/torchaudio/tutorial-assets/tacotron2_tts_pipeline.png

All the related components are bundled in torchaudio.pipelines.Tacotron2TTSBundle, but this tutorial will also cover the process under the hood.
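For orientation, here is a minimal sketch that chains the three steps through the bundle API; every call in it is unpacked in the sections that follow.

import torch
import torchaudio

bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_CHAR_LJSPEECH
processor = bundle.get_text_processor()  # step 1: text -> symbol indices
tacotron2 = bundle.get_tacotron2()       # step 2: symbols -> spectrogram
vocoder = bundle.get_vocoder()           # step 3: spectrogram -> waveform

with torch.inference_mode():
    processed, lengths = processor("Hello world! Text to speech!")
    spec, spec_lengths, _ = tacotron2.infer(processed, lengths)
    waveforms, _ = vocoder(spec, spec_lengths)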

Preparation

First, we install the necessary dependencies. In addition to torchaudio, DeepPhonemizer is required to perform phoneme-based encoding.

%%bash
pip3 install deep_phonemizer
import torch
import torchaudio

torch.random.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"

print(torch.__version__)
print(torchaudio.__version__)
print(device)
2.8.0+cu126
2.8.0
cuda
import IPython
import matplotlib.pyplot as plt

Text Processing

Character-based encoding

In this section, we will go through how the character-based encoding works.

Since the pretrained Tacotron2 model expects a specific set of symbols, the same functionality is available in torchaudio. However, we will first implement the encoding manually to aid understanding.

First, we define the set of symbols "_-!'(),.:;? abcdefghijklmnopqrstuvwxyz". Then, each character of the input text is mapped to the index of the corresponding symbol in the table. Symbols that are not in the table are ignored.

symbols = "_-!'(),.:;? abcdefghijklmnopqrstuvwxyz"
look_up = {s: i for i, s in enumerate(symbols)}
symbols = set(symbols)


def text_to_sequence(text):
    text = text.lower()
    return [look_up[s] for s in text if s in symbols]


text = "Hello world! Text to speech!"
print(text_to_sequence(text))
[19, 16, 23, 23, 26, 11, 34, 26, 29, 23, 15, 2, 11, 31, 16, 35, 31, 11, 31, 26, 11, 30, 27, 16, 16, 14, 19, 2]

As mentioned above, the symbol table and indices must match what the pretrained Tacotron2 model expects. torchaudio provides the same transform along with the pretrained model. You can instantiate and use such a transform as follows.

processor = torchaudio.pipelines.TACOTRON2_WAVERNN_CHAR_LJSPEECH.get_text_processor()

text = "Hello world! Text to speech!"
processed, lengths = processor(text)

print(processed)
print(lengths)
tensor([[19, 16, 23, 23, 26, 11, 34, 26, 29, 23, 15,  2, 11, 31, 16, 35, 31, 11,
         31, 26, 11, 30, 27, 16, 16, 14, 19,  2]])
tensor([28], dtype=torch.int32)

Note: The output of our manual encoding matches that of torchaudio's text_processor (meaning we correctly re-implemented what the library does internally). The processor accepts either a single text or a list of texts as input. When a list of texts is provided, the returned lengths variable represents the valid length of each processed sequence in the output batch.
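For example, passing a list of texts returns a padded batch. Here is a small sketch (the batch contents and variable names are our own):

batch = ["Hello world!", "Text to speech!"]
# Shorter entries are padded to the longest sequence in the batch;
# batch_lengths reports the valid length of each row.
batch_processed, batch_lengths = processor(batch)
print(batch_processed.shape)
print(batch_lengths)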

The intermediate representation can be retrieved as follows.

print([processor.tokens[i] for i in processed[0, : lengths[0]]])
['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd', '!', ' ', 't', 'e', 'x', 't', ' ', 't', 'o', ' ', 's', 'p', 'e', 'e', 'c', 'h', '!']

Phoneme-based encoding

Phoneme-based encoding is similar to character-based encoding, but it uses a symbol table based on phonemes and a G2P (Grapheme-to-Phoneme) model.

The details of the G2P model are out of the scope of this tutorial; we will just look at what the conversion looks like.

Similar to the case of character-based encoding, the encoding process is expected to match what the pretrained Tacotron2 model was trained on. torchaudio has an interface for creating this process.

The following code illustrates how to make and use the process. Behind the scenes, a G2P model is created with the DeepPhonemizer package, and the pretrained weights published by the author of DeepPhonemizer are fetched.

bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_PHONE_LJSPEECH

processor = bundle.get_text_processor()

text = "Hello world! Text to speech!"
with torch.inference_mode():
    processed, lengths = processor(text)

print(processed)
print(lengths)
tensor([[54, 20, 65, 69, 11, 92, 44, 65, 38,  2, 11, 81, 40, 64, 79, 81, 11, 81,
         20, 11, 79, 77, 59, 37,  2]])
tensor([25], dtype=torch.int32)

Notice that the encoded values are different from the example of character-based encoding.

The intermediate representation looks like the following.

print([processor.tokens[i] for i in processed[0, : lengths[0]]])
['HH', 'AH', 'L', 'OW', ' ', 'W', 'ER', 'L', 'D', '!', ' ', 'T', 'EH', 'K', 'S', 'T', ' ', 'T', 'AH', ' ', 'S', 'P', 'IY', 'CH', '!']

Spectrogram Generation

Tacotron2 is the model we use to generate a spectrogram from the encoded text. For details of the model, please refer to the paper (arXiv:1712.05884).

It is easy to instantiate a Tacotron2 model with pretrained weights; however, note that the input to the Tacotron2 model needs to be processed by the matching text processor.

torchaudio.pipelines.Tacotron2TTSBundle bundles the matching models and processors together, making it easy to create the pipeline.

For the available bundles and their usage, please refer to Tacotron2TTSBundle.

bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_PHONE_LJSPEECH
processor = bundle.get_text_processor()
tacotron2 = bundle.get_tacotron2().to(device)

text = "Hello world! Text to speech!"

with torch.inference_mode():
    processed, lengths = processor(text)
    processed = processed.to(device)
    lengths = lengths.to(device)
    spec, _, _ = tacotron2.infer(processed, lengths)


_ = plt.imshow(spec[0].cpu().detach(), origin="lower", aspect="auto")

Note that the Tacotron2.infer method performs multinomial sampling, so the process of generating a spectrogram incurs randomness.

def plot():
    fig, ax = plt.subplots(3, 1)
    for i in range(3):
        with torch.inference_mode():
            spec, spec_lengths, _ = tacotron2.infer(processed, lengths)
        print(spec[0].shape)
        ax[i].imshow(spec[0].cpu().detach(), origin="lower", aspect="auto")


plot()
torch.Size([80, 190])
torch.Size([80, 184])
torch.Size([80, 185])
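Since the sampling draws from torch's global random number generator, re-seeding before each call should make the generation repeatable. A minimal sketch (the seed value is arbitrary):

torch.manual_seed(0)
with torch.inference_mode():
    spec_a, _, _ = tacotron2.infer(processed, lengths)

torch.manual_seed(0)
with torch.inference_mode():
    spec_b, _, _ = tacotron2.infer(processed, lengths)

# With the same seed, the sampled spectrograms should match
# (exact equality may depend on backend determinism).
print(torch.equal(spec_a, spec_b))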

Waveform Generation

Once the spectrogram is generated, the last process is to recover the waveform from the spectrogram using a vocoder.

torchaudio provides vocoders based on GriffinLim and WaveRNN.

WaveRNN Vocoder

Continuing from the previous section, we can instantiate the matching WaveRNN model from the same bundle.

bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_PHONE_LJSPEECH

processor = bundle.get_text_processor()
tacotron2 = bundle.get_tacotron2().to(device)
vocoder = bundle.get_vocoder().to(device)

text = "Hello world! Text to speech!"

with torch.inference_mode():
    processed, lengths = processor(text)
    processed = processed.to(device)
    lengths = lengths.to(device)
    spec, spec_lengths, _ = tacotron2.infer(processed, lengths)
    waveforms, lengths = vocoder(spec, spec_lengths)
def plot(waveforms, spec, sample_rate):
    waveforms = waveforms.cpu().detach()

    fig, [ax1, ax2] = plt.subplots(2, 1)
    ax1.plot(waveforms[0])
    ax1.set_xlim(0, waveforms.size(-1))
    ax1.grid(True)
    ax2.imshow(spec[0].cpu().detach(), origin="lower", aspect="auto")
    return IPython.display.Audio(waveforms[0:1], rate=sample_rate)


plot(waveforms, spec, vocoder.sample_rate)
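To keep the result rather than only play it back, the generated waveform can be written to disk with torchaudio.save; the output path below is our own choice.

# waveforms[0:1] keeps a (1, time) channel dimension, as expected by
# torchaudio.save; "output_wavernn.wav" is a hypothetical path.
torchaudio.save("output_wavernn.wav", waveforms[0:1].cpu(), sample_rate=vocoder.sample_rate)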


Griffin-Lim Vocoder

Using the Griffin-Lim vocoder is the same as with WaveRNN. You can instantiate the vocoder object with the get_vocoder() method and pass in the spectrogram.

bundle = torchaudio.pipelines.TACOTRON2_GRIFFINLIM_PHONE_LJSPEECH

processor = bundle.get_text_processor()
tacotron2 = bundle.get_tacotron2().to(device)
vocoder = bundle.get_vocoder().to(device)

with torch.inference_mode():
    processed, lengths = processor(text)
    processed = processed.to(device)
    lengths = lengths.to(device)
    spec, spec_lengths, _ = tacotron2.infer(processed, lengths)
waveforms, lengths = vocoder(spec, spec_lengths)
plot(waveforms, spec, vocoder.sample_rate)


WaveGlow Vocoder

WaveGlow is a vocoder published by Nvidia. The pretrained weights are published on Torch Hub, and the model can be instantiated using the torch.hub module.

# Workaround to load model mapped on GPU
# https://stackoverflow.com/a/61840832
waveglow = torch.hub.load(
    "NVIDIA/DeepLearningExamples:torchhub",
    "nvidia_waveglow",
    model_math="fp32",
    pretrained=False,
)
checkpoint = torch.hub.load_state_dict_from_url(
    "https://api.ngc.nvidia.com/v2/models/nvidia/waveglowpyt_fp32/versions/1/files/nvidia_waveglowpyt_fp32_20190306.pth",  # noqa: E501
    progress=False,
    map_location=device,
)
state_dict = {key.replace("module.", ""): value for key, value in checkpoint["state_dict"].items()}

waveglow.load_state_dict(state_dict)
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.to(device)
waveglow.eval()

with torch.no_grad():
    waveforms = waveglow.infer(spec)
plot(waveforms, spec, 22050)


Total running time of the script: (0 minutes 58.716 seconds)

Gallery generated by Sphinx-Gallery
