
Accuracy-Boosting Tricks with Attention: Adding CBAM, GAM, and ResBlock_CBAM to YOLOv8

Editor: rootadmin
1. Attention Mechanisms in Computer Vision



Attention mechanisms are commonly grouped into four basic categories:

Channel Attention

Spatial Attention

Temporal Attention

Branch Attention

1.1 CBAM: Integrating Channel and Spatial Attention

CBAM is a lightweight convolutional attention module that combines channel and spatial attention.

Paper: CBAM: Convolutional Block Attention Module
Link: https://arxiv.org/pdf/1807.06521.pdf

CBAM consists of two sequential sub-modules, CAM (Channel Attention Module) and SAM (Spatial Attention Module), which apply attention along the channel and spatial dimensions respectively. Because both sub-modules are cheap, CBAM saves parameters and computation, and it works as a plug-and-play module that can be integrated into existing network architectures.
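In the paper, CAM squeezes the feature map with both average and max pooling and sends the two descriptors through a shared MLP before a sigmoid. A minimal sketch of that paper-faithful CAM follows (the reduction ratio r=16 follows the paper; note the YOLOv8 implementation in section 2.1 uses only average pooling):

```python
import torch
import torch.nn as nn

class PaperChannelAttention(nn.Module):
    """Channel attention as described in the CBAM paper: a shared MLP over
    avg- and max-pooled descriptors, summed before the sigmoid."""
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.mlp = nn.Sequential(                    # shared between both descriptors
            nn.Conv2d(channels, channels // r, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 1, bias=False),
        )
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        att = self.act(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * att  # rescale each channel by its attention weight in (0, 1)

x = torch.randn(2, 64, 20, 20)
y = PaperChannelAttention(64)(x)
print(y.shape)  # torch.Size([2, 64, 20, 20])
```

Because the attention weights lie in (0, 1), the module only rescales channels and never changes the tensor shape, which is what makes it plug-and-play.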

1.2 GAM: Global Attention Mechanism

A newer attention module that claims to surpass CBAM, improving accuracy at the cost of extra computation.
Paper: Global Attention Mechanism: Retain Information to Enhance Channel-Spatial Interactions
Link: https://paperswithcode.com/paper/global-attention-mechanism-retain-information

Overall, GAM is fairly similar to CBAM: it likewise chains a channel attention module and a spatial attention module. The difference lies in how each of the two is computed.
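The key difference on the channel side: CBAM pools each channel down to a single value before its MLP, whereas GAM keeps the full spatial map, permutes it to (B, H*W, C), and runs the MLP across channels at every spatial position. A minimal sketch of that permute-MLP-permute step, mirroring the code in section 2.2 (channel count and rate are arbitrary here):

```python
import torch
import torch.nn as nn

# GAM-style channel attention: the MLP sees every spatial position,
# in contrast to CBAM, which first pools to one value per channel.
c, rate = 32, 4
mlp = nn.Sequential(
    nn.Linear(c, c // rate),
    nn.ReLU(inplace=True),
    nn.Linear(c // rate, c),
)

x = torch.randn(2, c, 8, 8)                                 # (B, C, H, W)
x_perm = x.permute(0, 2, 3, 1).reshape(2, -1, c)            # (B, H*W, C)
att = mlp(x_perm).reshape(2, 8, 8, c).permute(0, 3, 1, 2)   # back to (B, C, H, W)
out = x * att
print(out.shape)  # torch.Size([2, 32, 8, 8])
```

As in the section 2.2 code, the channel branch multiplies its output in directly; only the spatial branch there passes through a sigmoid.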

1.3 ResBlock_CBAM

The idea is simply to apply the channel attention information and the spatial attention information inside a single block structure.

To implement CBAM in ResNet, pass the output of the original block through channel attention and then spatial attention before it is joined with the residual connection.

1.4 Performance Evaluation

2. Adding CBAM and GAM to YOLOv8

2.1 Add CBAM to modules.py (the counterpart of common.py in YOLOv5):

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    # Channel-attention module https://github.com/open-mmlab/mmdetection/tree/v3.0.0rc1/configs/rtmdet
    def __init__(self, channels: int) -> None:
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Conv2d(channels, channels, 1, 1, 0, bias=True)
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.act(self.fc(self.pool(x)))


class SpatialAttention(nn.Module):
    # Spatial-attention module
    def __init__(self, kernel_size=7):
        super().__init__()
        assert kernel_size in (3, 7), 'kernel size must be 3 or 7'
        padding = 3 if kernel_size == 7 else 1
        self.cv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.act = nn.Sigmoid()

    def forward(self, x):
        return x * self.act(self.cv1(torch.cat([torch.mean(x, 1, keepdim=True), torch.max(x, 1, keepdim=True)[0]], 1)))


class CBAM(nn.Module):
    # Convolutional Block Attention Module
    def __init__(self, c1, kernel_size=7):  # ch_in, kernels
        super().__init__()
        self.channel_attention = ChannelAttention(c1)
        self.spatial_attention = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial_attention(self.channel_attention(x))
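A quick sanity check of the module above: CBAM only rescales activations, so input and output shapes are identical and it can be dropped between any two layers. The snippet repeats the class definitions so it runs on its own (the input size is arbitrary):

```python
import torch
import torch.nn as nn

# Copies of the ChannelAttention / SpatialAttention / CBAM classes above,
# repeated here so the snippet is self-contained.
class ChannelAttention(nn.Module):
    def __init__(self, channels: int) -> None:
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Conv2d(channels, channels, 1, 1, 0, bias=True)
        self.act = nn.Sigmoid()

    def forward(self, x):
        return x * self.act(self.fc(self.pool(x)))

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        padding = 3 if kernel_size == 7 else 1
        self.cv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.act = nn.Sigmoid()

    def forward(self, x):
        return x * self.act(self.cv1(torch.cat(
            [torch.mean(x, 1, keepdim=True), torch.max(x, 1, keepdim=True)[0]], 1)))

class CBAM(nn.Module):
    def __init__(self, c1, kernel_size=7):
        super().__init__()
        self.channel_attention = ChannelAttention(c1)
        self.spatial_attention = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial_attention(self.channel_attention(x))

# Shape is preserved: (1, 256, 40, 40) in, (1, 256, 40, 40) out.
x = torch.randn(1, 256, 40, 40)
out = CBAM(256)(x)
print(out.shape)  # torch.Size([1, 256, 40, 40])
```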

2.2 Add GAM_Attention to modules.py:

def channel_shuffle(x, groups=2):  # shuffle channels: reshape -> transpose -> flatten
    B, C, H, W = x.size()
    out = x.view(B, groups, C // groups, H, W).permute(0, 2, 1, 3, 4).contiguous()
    out = out.view(B, C, H, W)
    return out


class GAM_Attention(nn.Module):
    # https://paperswithcode.com/paper/global-attention-mechanism-retain-information
    def __init__(self, c1, c2, group=True, rate=4):
        super(GAM_Attention, self).__init__()
        self.channel_attention = nn.Sequential(
            nn.Linear(c1, int(c1 / rate)),
            nn.ReLU(inplace=True),
            nn.Linear(int(c1 / rate), c1)
        )
        self.spatial_attention = nn.Sequential(
            nn.Conv2d(c1, c1 // rate, kernel_size=7, padding=3, groups=rate) if group else nn.Conv2d(c1, int(c1 / rate), kernel_size=7, padding=3),
            nn.BatchNorm2d(int(c1 / rate)),
            nn.ReLU(inplace=True),
            nn.Conv2d(c1 // rate, c2, kernel_size=7, padding=3, groups=rate) if group else nn.Conv2d(int(c1 / rate), c2, kernel_size=7, padding=3),
            nn.BatchNorm2d(c2)
        )

    def forward(self, x):
        b, c, h, w = x.shape
        x_permute = x.permute(0, 2, 3, 1).view(b, -1, c)
        x_att_permute = self.channel_attention(x_permute).view(b, h, w, c)
        x_channel_att = x_att_permute.permute(0, 3, 1, 2)
        # x_channel_att = channel_shuffle(x_channel_att, 4)  # last shuffle
        x = x * x_channel_att
        x_spatial_att = self.spatial_attention(x).sigmoid()
        x_spatial_att = channel_shuffle(x_spatial_att, 4)  # last shuffle
        out = x * x_spatial_att
        # out = channel_shuffle(out, 4)  # last shuffle
        return out
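The channel_shuffle helper interleaves channels across groups, as in ShuffleNet, so that the grouped 7x7 convolutions in the spatial branch can exchange information. A quick check of the permutation it produces (the tiny 4-channel input is just for illustration):

```python
import torch

def channel_shuffle(x, groups=2):
    """Reshape -> transpose -> flatten: interleaves channels across groups."""
    B, C, H, W = x.size()
    out = x.view(B, groups, C // groups, H, W).permute(0, 2, 1, 3, 4).contiguous()
    return out.view(B, C, H, W)

# With 4 channels and groups=2, channel order [0, 1, 2, 3] becomes [0, 2, 1, 3]:
x = torch.arange(4).float().view(1, 4, 1, 1)
y = channel_shuffle(x, 2)
print(y.flatten().tolist())  # [0.0, 2.0, 1.0, 3.0]
```

For groups=2 the shuffle is its own inverse: applying it twice restores the original channel order.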

2.3 Add ResBlock_CBAM to modules.py:

class ResBlock_CBAM(nn.Module):
    def __init__(self, in_places, places, stride=1, downsampling=False, expansion=4):
        super(ResBlock_CBAM, self).__init__()
        self.expansion = expansion
        self.downsampling = downsampling
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_channels=in_places, out_channels=places, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(places),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places * self.expansion, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places * self.expansion),
        )
        # The CBAM defined in 2.1 takes (c1, kernel_size); it has no c2 argument.
        self.cbam = CBAM(c1=places * self.expansion)
        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places * self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places * self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x
        out = self.bottleneck(x)
        out = self.cbam(out)
        if self.downsampling:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out
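A shape-only sketch of how the block behaves: with downsampling=True and stride=2, a 64-channel input comes out with places * expansion channels at half the spatial resolution, and the 1x1 downsample branch keeps the residual compatible. The CBAM module is replaced by nn.Identity() here purely for brevity, since attention never changes tensor shapes:

```python
import torch
import torch.nn as nn

class ResBlockShapeDemo(nn.Module):
    """Shape-only sketch of ResBlock_CBAM; nn.Identity() stands in for CBAM."""
    def __init__(self, in_places, places, stride=1, downsampling=False, expansion=4):
        super().__init__()
        self.downsampling = downsampling
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_places, places, 1, 1, bias=False),          # 1x1 reduce
            nn.BatchNorm2d(places),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(places, places, 3, stride, 1, bias=False),     # 3x3, may stride
            nn.BatchNorm2d(places),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(places, places * expansion, 1, 1, bias=False), # 1x1 expand
            nn.BatchNorm2d(places * expansion),
        )
        self.cbam = nn.Identity()  # stand-in for CBAM(c1=places * expansion)
        if downsampling:
            # 1x1 strided conv so the residual matches the bottleneck output.
            self.downsample = nn.Sequential(
                nn.Conv2d(in_places, places * expansion, 1, stride, bias=False),
                nn.BatchNorm2d(places * expansion),
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = self.downsample(x) if self.downsampling else x
        return self.relu(self.cbam(self.bottleneck(x)) + residual)

# 64 channels in, 64 * 4 = 256 channels out, spatial size halved by stride=2:
x = torch.randn(1, 64, 32, 32)
out = ResBlockShapeDemo(64, 64, stride=2, downsampling=True)(x)
print(out.shape)  # torch.Size([1, 256, 16, 16])
```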

2.4 Add CBAM, GAM_Attention, and ResBlock_CBAM to tasks.py (the counterpart of yolo.py in YOLOv5)

from ultralytics.nn.modules import (C1, C2, C3, C3TR, SPP, SPPF, Bottleneck, BottleneckCSP, C2f, C3Ghost, C3x,
                                    Classify, Concat, Conv, ConvTranspose, Detect, DWConv, DWConvTranspose2d,
                                    Ensemble, Focus, GhostBottleneck, GhostConv, Segment,
                                    CBAM, GAM_Attention, ResBlock_CBAM)

Then, inside the parse_model(d, ch, verbose=True) function, extend the module tuple:

if m in (Classify, Conv, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus,
         BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x,
         CBAM, GAM_Attention, ResBlock_CBAM):

2.5 Modify the corresponding yaml files for CBAM and GAM

2.5.1 Adding CBAM to YOLOv8

# Ultralytics YOLO 🚀, GPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]        # 12
  - [-1, 1, CBAM, [512]]
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]        # 16 (P3/8-small)
  - [-1, 1, CBAM, [256]]
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]]        # 20 (P4/16-medium)
  - [-1, 1, CBAM, [512]]
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]       # 24 (P5/32-large)
  - [-1, 1, CBAM, [1024]]
  - [[17, 21, 25], 1, Detect, [nc]]  # Detect(P3, P4, P5)

2.5.2 Adding GAM to YOLOv8

# Ultralytics YOLO 🚀, GPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]    # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]   # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]   # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]   # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]     # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]        # 12
  - [-1, 1, GAM_Attention, [512, 512]]
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]        # 16 (P3/8-small)
  - [-1, 1, GAM_Attention, [256, 256]]
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]]        # 20 (P4/16-medium)
  - [-1, 1, GAM_Attention, [512, 512]]
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]       # 24 (P5/32-large)
  - [-1, 1, GAM_Attention, [1024, 1024]]
  - [[17, 21, 25], 1, Detect, [nc]]  # Detect(P3, P4, P5)
Original link: https://www.jiuchutong.com/zhishi/289679.html (please keep this notice when reposting)
