[Normalization] RMSNorm

2023-12-21 11:57:38

RMSNorm

Input vector $x \in \mathbb{R}^m$, output vector $y \in \mathbb{R}^n$.

Linear transformation: $y_i = f(a_i + b_i)$

where:

  • weighted sum: $a_i = \displaystyle\sum_{j=1}^{m} w_{ij} x_j$, with $f$ a nonlinear activation function
  • $w_i$ is the weight vector of the $i$-th neuron and $b_i$ is its bias term
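The summation above is just a matrix-vector product $a = Wx$. A minimal sketch with toy shapes ($m = 3$ inputs, $2$ neurons; the values are arbitrary illustrations, not from the article):

```python
import torch

# w_ij laid out as a 2 x 3 matrix: row i holds the weights of neuron i
W = torch.tensor([[1.0, 0.0, 2.0],
                  [0.5, 1.0, 0.0]])
x = torch.tensor([1.0, 2.0, 3.0])

# a_i = sum_j w_ij * x_j for every neuron at once
a = W @ x
print(a)  # tensor([7.0000, 2.5000])
```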

LayerNorm

$\overline{a}_i = \dfrac{a_i - \mu}{\sigma} g_i, \qquad y_i = f(\overline{a}_i + b)$

where:

  • mean: $\mu = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} a_i$
  • standard deviation: $\sigma = \sqrt{\dfrac{1}{n} \displaystyle\sum_{i=1}^{n} (a_i - \mu)^2}$
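A quick numeric check of the two statistics above, with $g_i = 1$ and $b = 0$ on a toy vector (the values are illustrative, not from the article):

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0, 4.0])

# mu and sigma exactly as defined above (population std, divisor n)
mu = a.mean()
sigma = (a - mu).pow(2).mean().sqrt()
a_bar = (a - mu) / sigma

print(mu.item(), sigma.item())  # 2.5, ~1.1180
print(a_bar)                    # zero mean, unit variance
```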

Code implementation

class LayerNorm(torch.nn.Module):
  def __init__(self, dim, eps=1e-6):  # was misspelled __int__, so it never ran
    super().__init__()
    self.eps = eps
    self.weight = nn.Parameter(torch.ones(dim))  # gain g, initialized to 1

  def _norm(self, x):
    # subtract the mean and divide by the standard deviation over the last dim
    mean = x.mean(-1, keepdim=True)
    var = x.var(-1, unbiased=False, keepdim=True)
    return (x - mean) * torch.rsqrt(var + self.eps)

  def forward(self, x):
    output = self._norm(x)
    return output * self.weight

RMSNorm

$\overline{a}_i = \dfrac{a_i}{\mathrm{RMS}(a)} g_i, \qquad y_i = f(\overline{a}_i + b)$

where:

  • $\mathrm{RMS}(a) = \sqrt{\dfrac{1}{n} \displaystyle\sum_{i=1}^{n} a_i^2}$

Notes:

  • RMSNorm drops the re-centering (mean subtraction) step; the results are nearly identical to LayerNorm's, but it is more efficient.
  • RMSNorm is the special case of LayerNorm in which the mean is 0.
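The second note can be verified numerically: for a zero-mean vector, $\mathrm{RMS}(a)$ equals the standard deviation, so the two normalizations coincide. A small sketch with an illustrative zero-mean vector:

```python
import torch

a = torch.tensor([-3.0, -1.0, 1.0, 3.0])  # toy vector with mean exactly 0

rms = a.pow(2).mean().sqrt()              # RMS(a)
std = (a - a.mean()).pow(2).mean().sqrt() # LayerNorm's sigma

print(rms.item(), std.item())  # both sqrt(5) ~= 2.2361
```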

Code implementation

class RMSNorm(torch.nn.Module):
  def __init__(self, dim, eps=1e-6):
    super().__init__()  # required so nn.Parameter registration works
    self.eps = eps
    self.weight = nn.Parameter(torch.ones(dim))  # gain g, initialized to 1

  def _norm(self, x):
    # rsqrt(mean(x^2) + eps) is 1 / RMS(x); eps guards against division by zero
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

  def forward(self, x):
    # normalize in float32 for stability, then cast back to the input dtype
    output = self._norm(x.float()).type_as(x)
    return output * self.weight
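A usage sketch of the module above (the class is repeated here so the snippet runs standalone; `dim=4` and the input values are arbitrary toy choices). After normalization each row should have an RMS of roughly 1:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def _norm(self, x):
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

    def forward(self, x):
        output = self._norm(x.float()).type_as(x)
        return output * self.weight

norm = RMSNorm(dim=4)
x = torch.tensor([[1.0, 2.0, 3.0, 4.0]])
y = norm(x)
print(y)  # each row now has RMS close to 1
```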

Source: https://blog.csdn.net/Elvira521yan/article/details/135126033