Transformer: A Step-by-Step Walkthrough of the Architecture with Complete Code

CSDN · 2024-08-21 12:31:02

It has been a while since my last post; updates will come more often from now on. Today I want to talk about the Transformer. It plays an important role in both NLP and CV, and can even be seen as a foundation model. This post uses detailed code to dissect the role and construction of every part of the Transformer's overall architecture (paper link). The code comments are extensive; running the code yourself alongside the comments works best. Omissions and inaccuracies are inevitable, and corrections are welcome.

Table of contents

1. Transformer overall architecture
2. Input part
  2.1 Word embedding layer
  2.2 Positional encoding
3. Encoder
  3.1 Masking
  3.2 Scaled dot-product attention
  3.3 Multi-head attention
  3.4 Feed-forward layer (MLP)
  3.5 Norm layer (LayerNorm)
  3.6 Residual connection
  3.7 Building an encoder layer
  3.8 Building the encoder
4. Decoder
  4.1 Decoder layer
  4.2 Decoder
5. Output
6. Building the complete model

1. Transformer overall architecture

The Transformer is a classic encoder-decoder structure: the encoder handles encoding and the decoder handles decoding. It is a seq2seq-style architecture, originally proposed for machine translation; later variants such as Swin Transformer and Vision Transformer also made it shine in CV. Any discussion of the Transformer has to start from its overall architecture diagram, shown below:

[Figure: Transformer overall architecture]

Overall, the Transformer architecture can be divided into four parts:

the input part

the encoder

the decoder

the output part

Each part is introduced in detail below.

2. Input part

The input part consists of two components:

the source-text embedding layer with its positional encoding

the target-text embedding layer with its positional encoding

[Figure: input part of the architecture]

2.1 Word embedding layer

First, the text embedding layer. Its job is to turn text into vector representations, in the hope of capturing the relationships between words in a high-dimensional space. For intuition about "high-dimensional": in two dimensions a vector can be written [x, y], a vector in the plane starting at the origin; in three dimensions it is [x, y, z], a vector in 3D space starting at the origin; by analogy, [x, y, z, ...] can be understood abstractly as a high-dimensional vector.

Word embedding is implemented mainly with nn.Embedding; let's understand it through code.

Import the required libraries:

import torch
import torch.nn as nn
import math
# Variable wraps tensors for autograd; since PyTorch 0.4.0 it has been merged into the Tensor class
from torch.autograd import Variable

Define the Embeddings class:

class Embeddings(nn.Module):
    def __init__(self, d_model, vocab):
        # d_model: dimensionality of the word embedding
        # vocab: vocabulary size
        super(Embeddings, self).__init__()
        # nn.Embedding maps integer indices (token ids) to fixed-size embedding vectors
        self.lut = nn.Embedding(vocab, d_model)
        self.d_model = d_model

    def forward(self, x):
        # scale the embeddings so their magnitude matches the positional encoding,
        # letting positional and lexical information blend well when added together
        return self.lut(x) * math.sqrt(self.d_model)

Test nn.Embedding:

embedding = nn.Embedding(10,6)

input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])

embedding(input)

Output:

tensor([[[ 2.0304, -0.0207, -1.5597, 0.0908, 0.8595, 0.3284],

[ 0.8606, -0.8066, -0.9573, 0.0747, 0.3627, 0.9916],

[ 1.0780, 0.2757, 1.0729, -0.7996, 0.4832, -1.5954],

[-1.6905, 0.9838, -0.4038, -2.0718, -0.2542, -0.1587]],

[[ 1.0780, 0.2757, 1.0729, -0.7996, 0.4832, -1.5954],

[-0.2118, -1.6557, -0.3727, 0.4616, 0.5757, -0.1505],

[ 0.8606, -0.8066, -0.9573, 0.0747, 0.3627, 0.9916],

[-0.3784, -0.3374, -0.1635, 1.4540, 0.0524, -0.9031]]],

grad_fn=<EmbeddingBackward0>)

Check the output shape:

d_model = 512

vocab = 1000

x = Variable(torch.LongTensor([[100,2,421,508],[491,998,1,221]]))

emb = Embeddings(d_model,vocab)

embout = emb(x)

print(embout.shape)

Output:

torch.Size([2, 4, 512])

2.2 Positional encoding

Next comes positional encoding. Because the Transformer computes in parallel, every vector interacts with all the others at once, so positional relationships are lost. We therefore add a positional encoder after the embedding layer, injecting into the word-embedding tensor the fact that the same word at different positions can carry different meanings, to make up for the missing position information.

PE(pos, 2i) = sin(pos / 10000^(2i/d_model))    PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))

Define the PositionalEncoding class:

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)  # dropout layer
        # initialize the position matrix, size max_len x d_model
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)  # max_len x 1
        div_term = torch.exp(torch.arange(0, d_model, 2) *
                             -math.log(10000.0) / d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        # register pe as a buffer: saved with the model but not a trainable parameter
        self.register_buffer('pe', pe)

    def forward(self, x):
        # trim the encoding to the input's sequence length, then add it
        x = x + Variable(self.pe[:, :x.size(1)],
                         requires_grad=False)  # no gradient needed for pe
        return self.dropout(x)

Test the positional encoding:

d_model =512

dropout =0.1

max_len = 60

pe = PositionalEncoding(d_model,dropout,max_len)

pe_result = pe(embout)

print("embout",embout)

print("result",pe_result)

Output:

embout tensor([[[ 35.7760, 38.4913, 22.6874, ..., 1.4905, -14.9831, -7.5838],

[-13.7559, -9.8327, -17.8275, ..., 4.6092, -3.4576, 21.2533],

[ -8.6055, 3.3068, 17.0219, ..., 8.3475, 4.7165, -50.7102],

[ 17.3369, -5.6921, -39.4074, ..., -23.1921, 2.2748, 12.8539]],

[[ 20.6978, -19.6404, -7.7615, ..., 24.5968, -24.0445, 30.6407],

[-13.5787, -9.1779, 8.0756, ..., -23.0874, -26.3233, -8.6856],

[ 20.2096, -8.8020, 20.4391, ..., -14.0461, 3.0267, -29.4921],

[-21.3104, -14.8570, 34.4444, ..., -13.4595, -35.0762, 3.7146]]],

grad_fn=<MulBackward0>)

result tensor([[[ 39.7512, 43.8792, 0.0000, ..., 2.7672, -16.6478, -0.0000],

[-14.3493, -10.3249, -18.8952, ..., 6.2324, -3.8416, 24.7259],

[ -8.5514, 3.2118, 19.9537, ..., 10.3861, 5.2408, -55.2336],

[ 19.4200, -7.4246, -43.5137, ..., -24.6579, 2.5279, 15.3932]],

[[ 22.9976, -20.7116, -8.6239, ..., 28.4408, -26.7161, 35.1563],

[-14.1525, -9.5973, 9.8861, ..., -0.0000, -0.0000, -8.5396],

[ 23.4654, -10.2424, 23.7506, ..., -14.4957, 3.3633, -31.6579],

[-23.5215, -17.6077, 38.5439, ..., -13.8439, -38.9732, 5.2385]]],

grad_fn=<MulBackward0>)

Visualize the positional encoding:

import matplotlib.pyplot as plt

import numpy as np

plt.figure(figsize=(15,5))

pe = PositionalEncoding(20,0)

y = pe(Variable(torch.zeros(1,100,20)))

plt.plot(np.arange(100),y[0,:,4:8].data.numpy())

plt.legend(["dim %d"%p for p in [4,5,6,7]])

# each curve shows how one embedding dimension varies with position

在这里插入图片描述

Each curve shows how one feature dimension of the embedding changes with position, which guarantees that the positional embedding of the same word varies with where it appears. Both sine and cosine take values between -1 and 1, which keeps the encoded values well bounded and helps gradients stay well behaved.
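The sinusoidal pattern above can be spot-checked in plain Python. This standalone sketch (toy sizes, no torch) recomputes the closed-form table PE(pos, 2i) = sin(pos / 10000^(2i/d)), PE(pos, 2i+1) = cos(...), and confirms every entry stays within [-1, 1]:

```python
import math

# Recompute the positional-encoding table with explicit loops (toy sizes).
# The div_term trick in the class above is a vectorized version of the same exponent.
d_model, max_len = 16, 50
pe = [[0.0] * d_model for _ in range(max_len)]
for pos in range(max_len):
    for i in range(0, d_model, 2):          # i is the even column index, i.e. 2 * (pair index)
        angle = pos / (10000 ** (i / d_model))
        pe[pos][i] = math.sin(angle)        # even dimensions get sine
        pe[pos][i + 1] = math.cos(angle)    # odd dimensions get cosine
```

Position 0 always encodes to alternating 0s and 1s (sin 0 and cos 0), and no entry ever leaves [-1, 1].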

3. Encoder

The encoder:

it is a stack of N encoder layers

each encoder layer consists of two sublayer-connection structures

the first sublayer contains a multi-head self-attention, a norm layer, and a residual connection

the second sublayer contains an MLP, a norm layer, and a residual connection

[Figure: encoder structure]

3.1 Masking

Let's look at masking first. It comes first because the multi-head attention used in the encoder and the decoder is nearly identical and takes mask as an optional argument, so the mask can be understood on its own. A mask is simply a tensor of 0s and 1s that hides some values in another tensor; whether the 0s or the 1s do the hiding is a convention you choose, and usually positions marked 0 are masked out. In the Transformer, the mask's most important job is to hide future information so the model cannot see outputs beyond the current time step; we will come back to this in the decoder section.
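Before the Transformer-specific code, here is a tiny numeric illustration (made-up scores) of the masking trick described above: replacing a masked position's score with -1e9 drives its softmax weight to 0.

```python
import math

# softmax over three attention scores; the third position is "masked out"
scores = [2.0, 1.0, -1e9]
exps = [math.exp(s) for s in scores]   # exp(-1e9) underflows to exactly 0.0
total = sum(exps)
weights = [e / total for e in exps]    # the masked position gets zero weight
```

The remaining weights still sum to 1, so the masked position simply contributes nothing to the weighted sum over values.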

Define the mask function:

def subsequent_mask(size):
    attn_shape = (1, size, size)
    # upper-triangular matrix of ones above the diagonal
    subsequent_mask = np.triu(np.ones(attn_shape), k=1).astype('uint8')
    # invert it into a lower-triangular mask
    return torch.from_numpy(1 - subsequent_mask)

Test:

ashape = (1,10,10)

a = np.ones(ashape)

print("a",a)

b = np.triu(a,k=1).astype('uint8')

print("b",b)

c = torch.from_numpy(1-b)

print("c",c)

Output:

a [[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]

[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]

[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]

[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]

[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]

[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]

[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]

[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]

[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]

[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]]

b [[[0 1 1 1 1 1 1 1 1 1]

[0 0 1 1 1 1 1 1 1 1]

[0 0 0 1 1 1 1 1 1 1]

[0 0 0 0 1 1 1 1 1 1]

[0 0 0 0 0 1 1 1 1 1]

[0 0 0 0 0 0 1 1 1 1]

[0 0 0 0 0 0 0 1 1 1]

[0 0 0 0 0 0 0 0 1 1]

[0 0 0 0 0 0 0 0 0 1]

[0 0 0 0 0 0 0 0 0 0]]]

c tensor([[[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],

[1, 1, 0, 0, 0, 0, 0, 0, 0, 0],

[1, 1, 1, 0, 0, 0, 0, 0, 0, 0],

[1, 1, 1, 1, 0, 0, 0, 0, 0, 0],

[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],

[1, 1, 1, 1, 1, 1, 0, 0, 0, 0],

[1, 1, 1, 1, 1, 1, 1, 0, 0, 0],

[1, 1, 1, 1, 1, 1, 1, 1, 0, 0],

[1, 1, 1, 1, 1, 1, 1, 1, 1, 0],

[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]], dtype=torch.uint8)

Visualize the mask:

plt.figure(figsize=(5,5))

plt.imshow(subsequent_mask(20)[0])

[Figure: subsequent mask visualization]

3.2 Scaled dot-product attention

Now for the scaled dot-product attention used in the Transformer. Transformer attention is so famous that mentioning "attention" immediately brings it to mind, and interviewers love to fire off Transformer questions (don't ask how I know). Understanding the architecture and preparing for those standard questions is therefore a good idea.

Attention in models is quite similar to attention in people: when you are doing homework, the teacher tells you to concentrate. Models can have attention too. A classic example is CBAM, which runs a spatial attention branch and a channel attention branch in parallel so that different channels and positions of a feature map receive different weights.

The Transformer uses self-attention. Concretely, it takes three inputs, Q (query), K (key), and V (value); the result expresses the query in terms of the keys and values. In the Transformer, attention is computed as:

Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V

Why divide by the square root of d_k? Suppose the entries of Q and K have mean 0 and variance 1; after the matrix product the mean is still 0 but the variance becomes d_k, so scaling by sqrt(d_k) restores unit variance. This produces a smoother softmax, avoiding the saturated regions where gradients vanish. The computation is illustrated below:

[Figure: scaled dot-product attention computation]
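The variance argument can be checked empirically. This pure-Python sketch (toy sizes, standard-normal entries) estimates the variance of the dot product q·k before and after dividing by sqrt(d_k):

```python
import random
import statistics

# If q and k have zero-mean, unit-variance entries, q.k has variance about d_k,
# and dividing by sqrt(d_k) brings the variance back to about 1.
random.seed(0)
d_k, trials = 64, 2000
dots = []
for _ in range(trials):
    q = [random.gauss(0, 1) for _ in range(d_k)]
    k = [random.gauss(0, 1) for _ in range(d_k)]
    dots.append(sum(a * b for a, b in zip(q, k)))

var_raw = statistics.pvariance(dots)                               # close to d_k
var_scaled = statistics.pvariance([d / d_k ** 0.5 for d in dots])  # close to 1
```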

Build the attention function:

import torch.nn.functional as F

def attention(query, key, value, mask=None, dropout=None):
    d_k = query.size(-1)
    # scores is the scaled dot-product similarity tensor
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        # positions where the mask is 0 are filled with -1e9, which softmax turns into 0
        scores = scores.masked_fill(mask == 0, -1e9)
    p_attn = F.softmax(scores, dim=-1)
    if dropout is not None:
        p_attn = dropout(p_attn)
    return torch.matmul(p_attn, value), p_attn

Test:

Without a mask:

query = key = value = pe_result

print("query:",query)

attn,p_attn = attention(query,key,value)

print("attn:",attn)

print("p_attn",p_attn)

query: tensor([[[ 39.7512, 43.8792, 0.0000, ..., 2.7672, -16.6478, -0.0000],

[-14.3493, -10.3249, -18.8952, ..., 6.2324, -3.8416, 24.7259],

[ -8.5514, 3.2118, 19.9537, ..., 10.3861, 5.2408, -55.2336],

[ 19.4200, -7.4246, -43.5137, ..., -24.6579, 2.5279, 15.3932]],

[[ 22.9976, -20.7116, -8.6239, ..., 28.4408, -26.7161, 35.1563],

[-14.1525, -9.5973, 9.8861, ..., -0.0000, -0.0000, -8.5396],

[ 23.4654, -10.2424, 23.7506, ..., -14.4957, 3.3633, -31.6579],

[-23.5215, -17.6077, 38.5439, ..., -13.8439, -38.9732, 5.2385]]],

grad_fn=<MulBackward0>)

attn: tensor([[[ 39.7512, 43.8792, 0.0000, ..., 2.7672, -16.6478, 0.0000],

[-14.3493, -10.3249, -18.8952, ..., 6.2324, -3.8416, 24.7259],

[ -8.5514, 3.2118, 19.9537, ..., 10.3861, 5.2408, -55.2336],

[ 19.4200, -7.4246, -43.5137, ..., -24.6579, 2.5279, 15.3932]],

[[ 22.9976, -20.7116, -8.6239, ..., 28.4408, -26.7161, 35.1563],

[-14.1525, -9.5973, 9.8861, ..., 0.0000, 0.0000, -8.5396],

[ 23.4654, -10.2424, 23.7506, ..., -14.4957, 3.3633, -31.6579],

[-23.5215, -17.6077, 38.5439, ..., -13.8439, -38.9732, 5.2385]]],

grad_fn=<UnsafeViewBackward0>)

p_attn tensor([[[1., 0., 0., 0.],

[0., 1., 0., 0.],

[0., 0., 1., 0.],

[0., 0., 0., 1.]],

[[1., 0., 0., 0.],

[0., 1., 0., 0.],

[0., 0., 1., 0.],

[0., 0., 0., 1.]]], grad_fn=<SoftmaxBackward0>)

With a mask:

query = key = value = pe_result

mask = Variable(torch.zeros(2,4,4))

attn,p_attn = attention(query,key,value,mask=mask)

print("query:",query)

print("attn:",attn)

print("p_attn:",p_attn)

query: tensor([[[ 39.7512, 43.8792, 0.0000, ..., 2.7672, -16.6478, -0.0000],

[-14.3493, -10.3249, -18.8952, ..., 6.2324, -3.8416, 24.7259],

[ -8.5514, 3.2118, 19.9537, ..., 10.3861, 5.2408, -55.2336],

[ 19.4200, -7.4246, -43.5137, ..., -24.6579, 2.5279, 15.3932]],

[[ 22.9976, -20.7116, -8.6239, ..., 28.4408, -26.7161, 35.1563],

[-14.1525, -9.5973, 9.8861, ..., -0.0000, -0.0000, -8.5396],

[ 23.4654, -10.2424, 23.7506, ..., -14.4957, 3.3633, -31.6579],

[-23.5215, -17.6077, 38.5439, ..., -13.8439, -38.9732, 5.2385]]],

grad_fn=<MulBackward0>)

attn: tensor([[[ 9.0676, 7.3354, -10.6138, ..., -1.3180, -3.1802, -3.7786],

[ 9.0676, 7.3354, -10.6138, ..., -1.3180, -3.1802, -3.7786],

[ 9.0676, 7.3354, -10.6138, ..., -1.3180, -3.1802, -3.7786],

[ 9.0676, 7.3354, -10.6138, ..., -1.3180, -3.1802, -3.7786]],

[[ 2.1973, -14.5397, 15.8892, ..., 0.0253, -15.5815, 0.0493],

[ 2.1973, -14.5397, 15.8892, ..., 0.0253, -15.5815, 0.0493],

[ 2.1973, -14.5397, 15.8892, ..., 0.0253, -15.5815, 0.0493],

[ 2.1973, -14.5397, 15.8892, ..., 0.0253, -15.5815, 0.0493]]],

grad_fn=<UnsafeViewBackward0>)

p_attn: tensor([[[0.2500, 0.2500, 0.2500, 0.2500],

[0.2500, 0.2500, 0.2500, 0.2500],

[0.2500, 0.2500, 0.2500, 0.2500],

[0.2500, 0.2500, 0.2500, 0.2500]],

[[0.2500, 0.2500, 0.2500, 0.2500],

[0.2500, 0.2500, 0.2500, 0.2500],

[0.2500, 0.2500, 0.2500, 0.2500],

[0.2500, 0.2500, 0.2500, 0.2500]]], grad_fn=<SoftmaxBackward0>)

3.3 Multi-head attention

The purpose of multiple heads can be loosely understood as mimicking the multiple output channels of a convolution: with H heads, the model essentially gets H chances to learn. Each head computes its own Q/K/V attention result and can thus pick up information from different positions; finally, the head outputs are concatenated to form the output. The structure is as follows:

[Figure: multi-head attention structure]

Build the multi-head attention layer:

import copy

# clone helper: multi-head attention needs several identical linear layers
def clones(module, N):
    # deep-copy module N times so each copy is independent; return an nn.ModuleList
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])

# multi-head attention class
class MultiHeadAttention(nn.Module):
    def __init__(self, head, embedding_dim, dropout=0.1):
        super(MultiHeadAttention, self).__init__()
        # each head gets an equal share of the features, i.e. embedding_dim / head of them
        assert embedding_dim % head == 0  # embedding_dim must be divisible by head
        # dimensionality of each head's subspace
        self.d_k = embedding_dim // head
        self.head = head
        # four linear layers mapping embedding_dim to embedding_dim:
        # one each for Q, K, V, plus one for the concatenated output
        self.linears = clones(nn.Linear(embedding_dim, embedding_dim), 4)
        self.attn = None
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, query, key, value, mask=None):
        if mask is not None:
            # add a dimension so the mask broadcasts across heads
            mask = mask.unsqueeze(0)
        # number of samples in the batch
        batch_size = query.size(0)
        # project, then split into heads: (batch, seq, embed) -> (batch, head, seq, d_k)
        query, key, value = [model(x).view(batch_size, -1, self.head, self.d_k).transpose(1, 2)
                             for model, x in zip(self.linears, (query, key, value))]
        x, self.attn = attention(query, key, value, mask=mask, dropout=self.dropout)
        # contiguous() makes the memory layout contiguous so view() can be applied,
        # restoring the input shape (batch, seq, head * d_k)
        x = x.transpose(1, 2).contiguous().view(batch_size, -1, self.head * self.d_k)
        # the last linear layer produces the final output
        return self.linears[-1](x)

The MultiHeadAttention class above implements the forward pass over the inputs query, key, and value. Before the attention computation, the inputs are passed through linear projections and reshapes.

In forward(), after these operations x is reshaped to (batch_size, -1, self.head * self.d_k) and fed through a linear layer to produce the final result. Why reshape like this?

Per-head attention: query, key, and value are first projected through linear layers and then split into self.head heads, each of dimension self.d_k, so attention can be computed for each head separately.

Reshaping after attention: the attention output x is a tensor of shape (batch_size, self.head, num_steps, self.d_k) and must be rearranged before the final linear layer. transpose and contiguous reshape x to (batch_size, num_steps, self.head * self.d_k), placing the head outputs side by side on the same dimension so the last linear layer can combine them.

Final linear projection: the reshaped x goes through the last linear layer to yield the final output.

These reshapes let every head compute attention independently and then merge all head outputs into the final result.
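As a quick sanity check of the reshapes described above, here is the shape bookkeeping written out with the sizes used in this post's tests (batch 2, sequence length 4, 8 heads of 64 features):

```python
# Shape bookkeeping for the head split/merge in MultiHeadAttention.forward
batch, seq_len, embedding_dim, head = 2, 4, 512, 8
d_k = embedding_dim // head

after_linear = (batch, seq_len, embedding_dim)  # output of one Q/K/V projection
after_view = (batch, seq_len, head, d_k)        # .view(batch, -1, head, d_k)
per_head = (batch, head, seq_len, d_k)          # .transpose(1, 2): one seq x d_k slice per head
after_attn = per_head                           # attention preserves the shape
merged = (batch, seq_len, head * d_k)           # transpose back + contiguous + view
```

The merged shape equals the projection's original shape, which is what lets the final linear layer accept it directly.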

Test:

head =8

embedding_dim = 512

dropout =0.2

query=value=key = pe_result

mask = Variable(torch.zeros(8,4,4))

mha = MultiHeadAttention(head,embedding_dim,dropout)

mha_result = mha(query,key,value,mask)

print(mha_result)

print(mha_result.size())

Output:

tensor([[[-3.2549, 2.8872, 8.2271, ..., 0.9531, -1.8069, -2.6490],

[-6.0513, 4.1177, 4.8186, ..., 1.3302, -6.8995, -2.5014],

[-4.2260, 1.2755, 3.4261, ..., 5.4288, -6.9640, -2.1595],

[-0.4197, 0.4246, 4.1276, ..., 6.7908, 0.4831, -0.3554]],

[[10.1127, 1.4201, 0.7476, ..., 1.2399, 1.2866, -1.8085],

[11.5379, 4.2588, 1.6288, ..., 2.9726, 1.5429, -1.4077],

[ 8.7062, 5.9161, 3.4573, ..., -2.7794, 0.3766, 3.6759],

[ 8.5761, 4.9488, 0.7260, ..., 0.5984, -0.1096, 2.3545]]],

grad_fn=<AddBackward0>)

torch.Size([2, 4, 512])

3.4 Feed-forward layer (MLP)

This layer is just a simple MLP, yet it works surprisingly well; its main role is to increase the model's capacity and fitting power. Much later work has proposed improvements to this MLP, but they invariably trade extra complexity for accuracy. In the architecture diagram it corresponds to the feed-forward block. Let's build it:

Build the feed-forward layer:

class PositionwiseFeedForward(nn.Module):
    def __init__(self, d_model, d_ff, dropout=0.1):
        super(PositionwiseFeedForward, self).__init__()
        # d_ff: output dimension of the first linear layer (64 in the test below)
        self.w1 = nn.Linear(d_model, d_ff)
        self.w2 = nn.Linear(d_ff, d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return self.w2(self.dropout(F.relu(self.w1(x))))

Test:

d_model = 512

d_ff = 64

dropout = 0.2

x = mha_result

ff = PositionwiseFeedForward(d_model,d_ff,dropout)

ff_result = ff(x)

print(ff_result)

Output:

tensor([[[ 0.8269, 0.5668, -1.5094, ..., 1.6119, -1.9829, -0.2612],

[ 1.1301, 0.3857, -1.2542, ..., -0.0569, -1.2507, 1.5327],

[ 0.9952, 0.0055, -0.4670, ..., 0.3402, -0.4764, 1.8049],

[ 1.0562, 0.6098, -1.5683, ..., -1.6547, -0.7434, 1.7635]],

[[-1.1278, 0.3586, -0.3451, ..., 0.0465, 1.0358, -2.0395],

[ 0.0379, 0.4792, -0.0911, ..., -0.8611, 0.5370, -0.7849],

[-0.2414, -0.1789, 0.8591, ..., -1.8893, 0.1560, -0.6375],

[-0.8982, -0.8801, 1.2748, ..., -1.6009, 0.4860, -1.4320]]],

grad_fn=<AddBackward0>)

The MLP uses the ReLU activation, defined as ReLU(x) = max(0, x) and plotted below:

[Figure: ReLU activation]
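A one-line sketch of the same definition applied elementwise (sample inputs are made up):

```python
# ReLU clamps negatives to 0 and passes positives through unchanged
relu = lambda v: max(0.0, v)
out = [relu(v) for v in [-2.0, -0.5, 0.0, 1.5]]
```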

3.5 Norm layer (LayerNorm)

The Transformer uses LayerNorm rather than the BatchNorm common in CV; the difference lies in which dimension is normalized (my next post explains this in detail). As the network gets deeper, values that have passed through many layers can grow too large or too small, which can destabilize training and slow convergence dramatically, so a normalization layer is inserted after a certain number of layers to keep feature values in a reasonable range. LayerNorm takes two instantiation parameters, features and eps: the word-embedding feature size and a small constant that keeps the denominator from being zero. Its input x is the previous layer's output, and its own output is the normalized feature representation. The code:
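Before the class itself, a minimal numeric sketch of what LayerNorm computes for a single feature vector (made-up values, taking a2 = 1, b2 = 0 and ignoring eps):

```python
import statistics

# Normalize one feature vector to zero mean and unit standard deviation
x = [2.0, 4.0, 6.0, 8.0]
mean = statistics.fmean(x)
std = statistics.stdev(x)          # sample std, matching torch's x.std() default
y = [(v - mean) / std for v in x]  # the normalized vector
```

The learnable a2 and b2 in the class below then rescale and shift this normalized vector per feature.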

Build the LayerNorm layer:

class LayerNorm(nn.Module):
    def __init__(self, features, eps=1e-6):
        # eps keeps the denominator from being 0
        super(LayerNorm, self).__init__()
        # nn.Parameter registers a2 and b2 as learnable model parameters
        self.a2 = nn.Parameter(torch.ones(features))
        self.b2 = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        # x is the output of the previous layer; normalize over the last dimension
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a2 * (x - mean) / (std + self.eps) + self.b2

Test:

features = d_model =512

eps = 1e-6

x = ff_result

ln = LayerNorm(features,eps)

ln_result= ln(x)

print(ln_result)

Output:

tensor([[[ 0.6447, 0.4475, -1.1267, ..., 1.2400, -1.4858, -0.1803],

[ 1.3305, 0.4301, -1.5534, ..., -0.1053, -1.5491, 1.8175],

[ 1.0351, 0.0203, -0.4641, ..., 0.3635, -0.4738, 1.8652],

[ 0.8872, 0.5078, -1.3434, ..., -1.4168, -0.6422, 1.4884]],

[[-1.0532, 0.3691, -0.3042, ..., 0.0705, 1.0171, -1.9256],

[ 0.0611, 0.5230, -0.0740, ..., -0.8801, 0.5836, -0.8003],

[-0.2461, -0.1781, 0.9501, ..., -2.0371, 0.1859, -0.6766],

[-0.7494, -0.7331, 1.2079, ..., -1.3823, 0.4974, -1.2302]]],

grad_fn=<AddBackward0>)

3.6 Residual connection

Because the Transformer contains many residual connections, building a reusable residual-connection layer up front is a good idea. Residual connections first appeared in ResNet, where they addressed the degradation problem: as depth increases, performance can actually get worse. Many later deep models adopted them, and in the Transformer both the encoder and the decoder use residual connections. The code:

Build the residual connection layer:

class SublayerConnection(nn.Module):
    def __init__(self, size, dropout=0.1):
        super(SublayerConnection, self).__init__()
        self.norm = LayerNorm(size)
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, x, sublayer):
        # apply LayerNorm after the residual connection
        return self.norm(x + self.dropout(sublayer(x)))

Test:

x = pe_result
size = 512
dropout = 0.2
mask = Variable(torch.zeros(8, 4, 4))
self_attn = MultiHeadAttention(head, d_model)
sublayer = lambda x: self_attn(x, x, x, mask)
sc = SublayerConnection(size, dropout)
sc_result = sc(x, sublayer)
print(sc_result)
print(sc_result.shape)

3.7 Building an encoder layer

With the six components above in hand, every sublayer an encoder layer needs has been covered; all that remains is to connect them according to the model structure. The encoder-layer diagram was shown at the beginning; the code below implements it:

Build the encoder layer:

class EncoderLayer(nn.Module):
    def __init__(self, size, self_attn, feed_forward, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = self_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 2)
        self.size = size

    def forward(self, x, mask):
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
        return self.sublayer[1](x, self.feed_forward)

Test:

size = 512

head =8

d_model =512

d_ff = 64

x = pe_result

dropout = 0.2

self_att = MultiHeadAttention(head,d_model)

ff = PositionwiseFeedForward(d_model,d_ff,dropout)

mask = Variable(torch.zeros(8,4,4))

el = EncoderLayer(size,self_att,ff,dropout)

el_result = el(x,mask)

print(el_result)

print(el_result.shape)

Output:

tensor([[[ 2.1907, 1.6837, -0.3434, ..., -0.4777, -0.4810, -0.9376],

[-0.2403, -0.0332, -0.9152, ..., 0.4202, -0.0186, 0.2153],

[-0.0618, 0.1980, 0.7177, ..., 0.4218, 0.5628, -3.4083],

[ 1.0136, -0.0655, -1.9666, ..., -1.0556, 0.7617, -0.2639]],

[[ 0.6778, -0.9679, -0.3791, ..., 1.6933, -0.9626, 0.9942],

[-0.2461, -0.6259, 0.2268, ..., 0.2968, -0.1554, -0.3626],

[ 0.9033, -0.6540, 0.5900, ..., -0.4296, 0.1917, -1.4874],

[-1.3893, -1.3273, 1.6339, ..., -0.4168, -1.6156, -0.0688]]],

grad_fn=<AddBackward0>)

torch.Size([2, 4, 512])

3.8 Building the encoder

Section 3.7 built a single encoder layer. The encoder contains N such layers, so stacking N encoder layers yields the full encoder. Straight to the code:

Build the encoder:

class Encoder(nn.Module):
    def __init__(self, layer, N):
        super(Encoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, mask):
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)

Test:

size = 512
head = 8
d_model = 512
d_ff = 64
dropout = 0.2
c = copy.deepcopy
attn = MultiHeadAttention(head, d_model)
ff = PositionwiseFeedForward(d_model, d_ff, dropout)
layer = EncoderLayer(size, c(attn), c(ff), dropout)
N = 8
mask = Variable(torch.zeros(8, 4, 4))
x = pe_result
en = Encoder(layer, N)
en_result = en(x, mask)

print(en_result)

print(en_result.shape)

Output:

tensor([[[-0.8780, -0.2222, 0.3757, ..., -0.3787, -1.1558, -2.1735],

[-1.0507, -1.1983, 0.1771, ..., 0.2805, -1.5720, -1.6486],

[-0.4407, -0.6691, 0.5844, ..., -0.3482, -1.6438, -1.3539],

[-1.0780, 0.0833, 0.3086, ..., -0.4316, -1.6046, -0.5929]],

[[-1.0885, -0.2705, -0.8391, ..., 0.1343, -1.2254, -1.3318],

[-1.4956, 0.2504, -0.2050, ..., 0.4529, -1.7921, -1.3334],

[-0.3777, -0.8807, 0.3839, ..., 0.1178, -1.8314, -1.4151],

[-1.3368, -0.5649, -1.0093, ..., 0.1698, -1.7357, -1.6741]]],

grad_fn=<AddBackward0>)

torch.Size([2, 4, 512])

4. Decoder

The decoder layer is nearly identical to the encoder layer, except that it adds an encoder-decoder attention sublayer; each decoder layer consists of three sublayers:

the first sublayer contains a masked multi-head self-attention, a norm layer, and a residual connection

the second sublayer contains an encoder-decoder multi-head attention, a norm layer, and a residual connection

the third sublayer contains an MLP, a norm layer, and a residual connection

[Figure: decoder structure]

4.1 Decoder layer

The role of the mask was mentioned earlier; here it matters again. The masked multi-head attention keeps the decoder from seeing information after the current time step. In a dialogue task, for instance, the model emits its output one word at a time, so the decoder can only use information up to the current step, hence the mask. Concretely, masked positions are set to a very large negative number, so after softmax those positions become 0, achieving the masking effect. All the sublayers are already built, so we just stack them following the decoder diagram:

Build the decoder layer:

class DecoderLayer(nn.Module):
    def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
        super(DecoderLayer, self).__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 3)

    def forward(self, x, memory, source_mask, target_mask):
        m = memory
        # masked self-attention over the target sequence
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, target_mask))
        # encoder-decoder attention: queries from the decoder, keys/values from memory
        x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, source_mask))
        return self.sublayer[2](x, self.feed_forward)

Test:

head = 8

size = 512

d_model = 512

d_ff = 64

dropout = 0.2

self_attn = src_attn = MultiHeadAttention(head, d_model, dropout)

ff = PositionwiseFeedForward(d_model, d_ff, dropout)

# x is the target-side word embedding; it has the same form as the source embedding, so pe_result is used as a stand-in

x = pe_result

# memory is the encoder output

memory = en_result

# in practice source_mask and target_mask differ; both are set to mask here for convenience

mask = Variable(torch.zeros(8, 4, 4))

source_mask = target_mask = mask

dl = DecoderLayer(size, self_attn, src_attn, ff, dropout)

dl_result = dl(x, memory, source_mask, target_mask)

print(dl_result)

print(dl_result.shape)

Output:

tensor([[[ 1.9085, 1.5095, -0.7919, ..., 0.3712, -0.4512, 0.8550],

[-0.5034, -0.4479, -1.7619, ..., 0.8656, 0.6345, 0.8180],

[-0.6091, -0.3193, -0.7074, ..., -0.0402, 0.7672, -1.3044],

[ 0.9583, -0.4078, -2.3120, ..., -0.5374, 0.8434, 1.0616]],

[[ 1.2345, -1.2972, -0.2494, ..., 0.6893, -0.3948, 1.5391],

[-0.1530, -0.5736, 0.6911, ..., -0.3912, 0.4623, -0.3656],

[ 0.8174, -0.8595, 0.5776, ..., -1.0014, 0.9449, -1.1462],

[-0.6898, -0.9118, 0.9026, ..., -0.6530, -0.7654, 0.2456]]],

grad_fn=<AddBackward0>)

torch.Size([2, 4, 512])

4.2 Decoder

Like the encoder, the decoder is a stack of N decoder layers, so by analogy with the encoder we can write:

Build the decoder:

class Decoder(nn.Module):
    def __init__(self, layer, N):
        super(Decoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, memory, source_mask, target_mask):
        for layer in self.layers:
            x = layer(x, memory, source_mask, target_mask)
        return self.norm(x)

Test:

# layer is a decoder layer; N is the number of decoder layers

size = 512

d_model = 512

head = 8

d_ff = 64

dropout = 0.2

c = copy.deepcopy

attn = MultiHeadAttention(head, d_model)

ff = PositionwiseFeedForward(d_model, d_ff, dropout)

layer = DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout)

N = 8

x = pe_result

memory = en_result

mask = Variable(torch.zeros(8, 4, 4))

source_mask = target_mask = mask

de = Decoder(layer, N)

de_result = de(x, memory, source_mask, target_mask)

print(de_result)

print(de_result.shape)

Output:


tensor([[[ 2.0136, 0.1628, -1.0203, ..., 0.1760, -1.1604, 1.3056],

[ 2.1784, 0.2714, -0.7200, ..., 0.1223, -0.8121, 0.9620],

[ 1.3615, -0.8476, -1.1441, ..., 0.2390, -0.6600, 1.1843],

[ 1.2602, -0.5010, -1.2188, ..., 0.0653, -1.0558, 0.8586]],

[[ 1.8728, -0.6219, -2.0477, ..., 1.7473, -0.2108, 0.9367],

[ 1.8149, -1.1993, -1.5834, ..., 1.3060, -0.5295, 1.0630],

[ 2.3128, -0.5949, -1.2872, ..., 1.6885, -0.5022, 0.4712],

[ 1.5520, -0.7780, -1.1812, ..., 1.4431, -0.9102, 1.2199]]],

grad_fn=<AddBackward0>)

torch.Size([2, 4, 512])

5. Output

The output part mainly consists of two components:

the linear layer

the softmax layer

[Figure: output part]

The linear layer projects to the vocabulary dimension and the softmax turns scores into probabilities; the two can be chained directly:

Build the output layer:

import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self, d_model, vocab_size):
        super(Generator, self).__init__()
        self.project = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        return F.log_softmax(self.project(x), dim=-1)

Test:

m = nn.Linear(20,30)

input = torch.randn(128,20)

output = m(input)

print(output.size())

d_model = 512

vocab_size = 1000

x = de_result

gen = Generator(d_model,vocab_size)

gen_result= gen(x)

print(gen_result)

print(gen_result.shape)

Output:

tensor([[[-8.2090, -7.4104, -7.4865, ..., -7.1592, -6.9332, -5.5710],

[-8.0720, -7.0982, -7.1594, ..., -7.2314, -6.9562, -5.6433],

[-8.0333, -7.5025, -7.2924, ..., -6.9412, -6.7873, -5.3485],

[-8.4384, -7.3633, -7.1333, ..., -7.1498, -6.8243, -5.4572]],

[[-7.9861, -6.6642, -7.3332, ..., -6.6988, -6.4912, -5.9313],

[-7.9885, -6.5261, -6.7994, ..., -6.8536, -6.7248, -5.6821],

[-8.1758, -6.5041, -6.6978, ..., -7.2892, -6.7432, -5.5145],

[-8.1846, -6.7156, -6.9616, ..., -6.5103, -6.8996, -5.6653]]],

grad_fn=<LogSoftmaxBackward0>)

torch.Size([2, 4, 1000])

6. Building the complete model

With the encoder and decoder built, what remains is to assemble the complete encoder-decoder structure. Take one more look at the architecture diagram: it should be much clearer now, since every sublayer has been implemented in code.

[Figure: Transformer overall architecture]

Build the model:

class EncoderDecoder(nn.Module):
    def __init__(self, encoder, decoder, source_embed, target_embed, generator):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = source_embed
        self.tgt_embed = target_embed
        self.generator = generator

    def forward(self, source, target, source_mask, target_mask):
        return self.decode(self.encode(source, source_mask), source_mask, target, target_mask)

    def encode(self, source, source_mask):
        return self.encoder(self.src_embed(source), source_mask)

    def decode(self, memory, source_mask, target, target_mask):
        return self.decoder(self.tgt_embed(target), memory, source_mask, target_mask)

vocab_size = 1000

d_model = 512

encoder = en

decoder = de

source_embed = nn.Embedding(vocab_size, d_model)

target_embed = nn.Embedding(vocab_size, d_model)

generator = gen

# assume source and target data are the same; in practice they differ

source = target = Variable(torch.LongTensor([[100, 2, 421, 508], [491, 998, 1, 221]]))

# assume src_mask and tgt_mask are the same; in practice they differ

source_mask = target_mask = Variable(torch.zeros(8, 4, 4))

ed = EncoderDecoder(encoder, decoder, source_embed, target_embed, generator)

ed_result = ed(source, target, source_mask, target_mask)

print(ed_result)

print(ed_result.shape)

def make_model(source_vocab, target_vocab, N=6, d_model=512, d_ff=2048, head=8, dropout=0.1):
    c = copy.deepcopy
    attn = MultiHeadAttention(head, d_model)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    position = PositionalEncoding(d_model, dropout)
    # the argument order must match EncoderDecoder.__init__:
    # encoder, decoder, source embedding, target embedding, generator
    model = EncoderDecoder(
        Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
        Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),
        nn.Sequential(Embeddings(d_model, source_vocab), c(position)),
        nn.Sequential(Embeddings(d_model, target_vocab), c(position)),
        Generator(d_model, target_vocab))
    for p in model.parameters():
        # initialize every parameter with more than one dimension
        # from a Xavier (Glorot) uniform distribution
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    return model

Test:

source_vocab = 11

target_vocab = 11

N= 6

res = make_model(source_vocab,target_vocab,N)

print(res)

Output:

EncoderDecoder(

(encoder): Encoder(

(layers): ModuleList(

(0): EncoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(1): EncoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(2): EncoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(3): EncoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(4): EncoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(5): EncoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

)

(norm): LayerNorm()

)

(decoder): Decoder(

(layers): ModuleList(

(0): DecoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(src_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(2): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(1): DecoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(src_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(2): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(2): DecoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(src_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(2): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(3): DecoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(src_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(2): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(4): DecoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(src_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(2): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

(5): DecoderLayer(

(self_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(src_attn): MultiHeadAttention(

(linears): ModuleList(

(0): Linear(in_features=512, out_features=512, bias=True)

(1): Linear(in_features=512, out_features=512, bias=True)

(2): Linear(in_features=512, out_features=512, bias=True)

(3): Linear(in_features=512, out_features=512, bias=True)

)

(dropout): Dropout(p=0.1, inplace=False)

)

(feed_forward): PositionwiseFeedForward(

(w1): Linear(in_features=512, out_features=2048, bias=True)

(w2): Linear(in_features=2048, out_features=512, bias=True)

(dropout): Dropout(p=0.1, inplace=False)

)

(sublayer): ModuleList(

(0): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(1): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

(2): SublayerConnection(

(norm): LayerNorm()

(dropout): Dropout(p=0.1, inplace=False)

)

)

)

)

(norm): LayerNorm()

)

(tgt_embed): Sequential(

(0): Embeddings(

(lut): Embedding(11, 512)

)

(1): PositionalEncoding(

(dropout): Dropout(p=0.1, inplace=False)

)

)

(generator): Generator(

(project): Linear(in_features=512, out_features=11, bias=True)

)

)
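The printed structure above pins down the model's hyper-parameters: d_model=512, d_ff=2048, N=6 encoder and decoder layers, a 4-way `Linear` ModuleList per attention block (Q, K, V and output projections), and a vocabulary of 11 for the `Generator`. As a quick sanity check, the parameter count of one `EncoderLayer` can be recomputed from these printed shapes (a plain-Python sketch; the totals assume the bias terms shown in the repr, and that each `LayerNorm` carries a scale and a shift vector of size d_model):

```python
# Parameter count of one EncoderLayer, read off the printed shapes above.
d_model, d_ff = 512, 2048

def linear_params(n_in, n_out, bias=True):
    """Weights plus optional bias of an nn.Linear(n_in, n_out)."""
    return n_in * n_out + (n_out if bias else 0)

# MultiHeadAttention: four Linear(512, 512) projections (Q, K, V, output).
mha = 4 * linear_params(d_model, d_model)

# PositionwiseFeedForward: w1 = Linear(512, 2048), w2 = Linear(2048, 512).
ffn = linear_params(d_model, d_ff) + linear_params(d_ff, d_model)

# Two SublayerConnection norms, each a LayerNorm with scale + shift vectors.
norms = 2 * (2 * d_model)

encoder_layer = mha + ffn + norms
print(encoder_layer)  # 3152384 parameters per encoder layer
```

A `DecoderLayer` additionally carries one more `MultiHeadAttention` (`src_attn`) and a third `SublayerConnection`, i.e. roughly another 1,050,624 + 1,024 parameters on top of this figure.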


