Loss functions in the YOLO series
Reference: https://blog.csdn.net/geek0105/article/details/129549229?utm_medium=distribute.pc_relevant.none-task-blog-2defaultbaidujs_baidulandingword~default-0-129549229-blog-134171750.235v40pc_relevant_3m_sort_dl_base3&spm=1001.2101.3001.4242.1&utm_relevant_index=3
Contents
1.BCEBlurWithLogitsLoss
2.FocalLoss
3.QFocalLoss
4.APLoss
5.aLRPLoss
6.RankSortLoss
7.IOULoss
GIoU
DIoU
CIoU (Complete IoU loss)
Enhanced Complete IoU
Efficient IoU Loss
αIoU
SIoU
1.BCEBlurWithLogitsLoss
BCEBlurWithLogitsLoss is a variant of the BCE loss introduced in YOLOv5. Its goal is to weaken the negative impact of missing-label samples (objects that are present in the image but were never annotated). It does this by lowering the loss weight of suspected missing-label samples, which shrinks their share of the gradient during backpropagation. In my view, however, this may also make it harder for the model to separate real targets from confusable ones, raising the false-detection rate on confusable objects. The code is as follows:
import torch
import torch.nn as nn

class BCEBlurWithLogitsLoss(nn.Module):
    # BCEWithLogitsLoss() with reduced missing-label effects.
    def __init__(self, alpha=0.05):
        super().__init__()
        self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none')  # must be nn.BCEWithLogitsLoss()
        self.alpha = alpha

    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)
        pred = torch.sigmoid(pred)  # prob from logits
        dx = pred - true  # reduce only missing label effects
        # dx = (pred - true).abs()  # reduce missing label and false label effects
        alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4))
        loss *= alpha_factor
        return loss.mean()
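A minimal usage sketch (the logit and label values below are made up for illustration): when the network is confident an object is present (large positive logit) but the label says background, i.e. the typical missing-label case, dx is close to 1 and alpha_factor shrinks toward 0, so that element contributes almost nothing; a confidently negative prediction with the same background label keeps essentially its full weight.

criterion = BCEBlurWithLogitsLoss(alpha=0.05)
logits = torch.tensor([[8.0, -8.0]])  # confident "object" vs confident "background"
labels = torch.tensor([[0.0, 0.0]])   # both annotated as background
# the first element (a possible missing label) is almost entirely suppressed,
# the second keeps its (already tiny) BCE loss at full weight
print(criterion(logits, labels).item())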
2.FocalLoss
FocalLoss was proposed in the 2017 paper Focal Loss for Dense Object Detection by Kaiming He and colleagues.
Paper: https://arxiv.org/abs/1708.02002
The main idea: increase the weight of hard samples so that the model focuses on learning hard samples (hard_sample), preventing the overwhelming number of easy samples (easy_sample) from dominating training; this mitigates the imbalance caused by having too few hard samples. The code is as follows:
class FocalLoss(nn.Module):
    # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
        super(FocalLoss, self).__init__()
        self.loss_fcn = loss_fcn  # must be nn.BCEWithLogitsLoss()
        self.gamma = gamma
        self.alpha = alpha
        self.reduction = loss_fcn.reduction
        self.loss_fcn.reduction = 'none'  # required to apply FL to each element

    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)
        # p_t = torch.exp(-loss)
        # loss *= self.alpha * (1.000001 - p_t) ** self.gamma  # non-zero power for gradient stability
        # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py
        pred_prob = torch.sigmoid(pred)  # prob from logits
        p_t = true * pred_prob + (1 - true) * (1 - pred_prob)
        alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
        modulating_factor = (1.0 - p_t) ** self.gamma
        loss *= alpha_factor * modulating_factor

        if self.reduction == 'mean':
            return loss.mean()
        elif self.reduction == 'sum':
            return loss.sum()
        else:  # 'none'
            return loss
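The code implements the standard focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t) on top of the element-wise BCE loss. A minimal usage sketch (the logits and labels below are made up for illustration) shows the modulating factor at work: the easy positive (p_t near 1) is down-weighted by several orders of magnitude more than the hard positive (p_t well below 1), so the hard sample dominates the averaged loss.

criterion = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5, alpha=0.25)
logits = torch.tensor([[6.0, -1.0]])  # an easy positive and a hard positive
labels = torch.tensor([[1.0, 1.0]])
print(criterion(logits, labels).item())               # focal-weighted loss
print(nn.BCEWithLogitsLoss()(logits, labels).item())  # plain BCE for comparison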
3.QFocalLoss
QFocalLoss (Quality Focal Loss) comes from a 2020 paper (Generalized Focal Loss). It addresses the limitation that FocalLoss only works when labels are hard 0/1 values, as in ordinary binary or one-hot multi-class classification; tasks that use label smoothing or other soft targets cannot use it directly. The code is as follows:
class QFocalLoss(nn.Module):
    # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = QFocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
    def __init__(self, loss_fcn, gamma=1.5, alpha=0.25):
        super(QFocalLoss, self).__init__()
        self.loss_fcn = loss_fcn  # must be nn.BCEWithLogitsLoss()
        self.gamma = gamma
        self.alpha = alpha
        self.reduction = loss_fcn.reduction
        self.loss_fcn.reduction = 'none'  # required to apply FL to each element

    def forward(self, pred, true):
        loss = self.loss_fcn(pred, true)
        pred_prob = torch.sigmoid(pred)  # prob from logits
        alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha)
        modulating_factor = torch.abs(true - pred_prob) ** self.gamma
        loss *= alpha_factor * modulating_factor

        if self.reduction == 'mean':
            return loss.mean()
        elif self.reduction == 'sum':
            return loss.sum()
        else:  # 'none'
            return loss
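Because the modulating factor is |true - sigmoid(pred)| ** gamma rather than (1 - p_t) ** gamma, the target can be any value in [0, 1]. A minimal sketch with a soft (label-smoothed / quality) target of 0.7, values made up for illustration: a prediction that lands near the soft target is almost fully suppressed, while one that drifts away keeps a larger share of its loss.

criterion = QFocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5, alpha=0.25)
logits = torch.tensor([[0.85, 3.0]])      # sigmoid ~0.70 (on target) vs ~0.95 (off target)
soft_labels = torch.tensor([[0.7, 0.7]])  # soft quality targets instead of hard 0/1
print(criterion(logits, soft_labels).item())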
In my own experience on a private task, QFocalLoss brought essentially no mAP improvement and even increased false detections (recall went up). The likely cause is the same problem as with FocalLoss: raising the weight of hard samples makes the model pay too much attention to ambiguous examples. It may help on small-sample tasks, but for ordinary tasks BCEBlurWithLogitsLoss is probably the better choice.
7.IOULoss
This part covers the IoU family of bounding-box regression losses: IoU, GIoU, DIoU, CIoU, EIoU, αIoU, SIoU.
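Since the section only lists the variants, here is a minimal sketch (my own illustration, not the YOLOv5 implementation; the function name and box format are assumptions) of the plain IoU loss and its GIoU extension for axis-aligned boxes in (x1, y1, x2, y2) format. GIoU = IoU - (|C| - |union|) / |C|, where C is the smallest box enclosing both boxes, so the loss still provides a useful gradient when the predicted and target boxes do not overlap.

def iou_loss(box1, box2, giou=False, eps=1e-7):
    # box1, box2: tensors of shape (N, 4) in (x1, y1, x2, y2) format.
    # Returns 1 - IoU (or 1 - GIoU) per box pair.
    x1 = torch.max(box1[:, 0], box2[:, 0])
    y1 = torch.max(box1[:, 1], box2[:, 1])
    x2 = torch.min(box1[:, 2], box2[:, 2])
    y2 = torch.min(box1[:, 3], box2[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area1 = (box1[:, 2] - box1[:, 0]) * (box1[:, 3] - box1[:, 1])
    area2 = (box2[:, 2] - box2[:, 0]) * (box2[:, 3] - box2[:, 1])
    union = area1 + area2 - inter + eps
    iou = inter / union
    if giou:
        # GIoU penalty: shrink IoU by the fraction of the enclosing box C
        # that is not covered by the union of the two boxes
        cw = torch.max(box1[:, 2], box2[:, 2]) - torch.min(box1[:, 0], box2[:, 0])
        ch = torch.max(box1[:, 3], box2[:, 3]) - torch.min(box1[:, 1], box2[:, 1])
        c_area = cw * ch + eps
        iou = iou - (c_area - union) / c_area
    return 1.0 - iou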