
YOLOv5 Hyperparameter Settings and Data Augmentation Explained (YOLOv5 Hyperparameter Evolution)

1. Introduction to the YOLOv5 hyperparameter configuration files



YOLOv5 has roughly 30 hyperparameters covering the various training settings. They are defined in *.yaml files under the /data directory. Better initial guesses produce better final results, so it is important to initialize these values sensibly before running hyperparameter evolution. If in doubt, simply use the default values, which are optimized for YOLOv5 COCO training from scratch.

YOLOv5's hyperparameter files can be found at data/hyp.finetune.yaml (suited to VOC fine-tuning) or data/hyp.scratch.yaml (suited to COCO training from scratch); in the v6 layout they live under data/hyps/.
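train.py selects one of these files through its --hyp argument and reads it as plain YAML. As a quick way to see exactly which values a run will use, here is a minimal sketch (the path assumes the v6 layout described in the next section; adjust it to whichever file you train with):

import yaml

# Minimal sketch: load a shipped hyperparameter file and inspect a few values.
with open('data/hyps/hyp.scratch-low.yaml') as f:
    hyp = yaml.safe_load(f)

print(len(hyp), 'hyperparameters loaded')      # roughly 30 keys
print(hyp['lr0'], hyp['lrf'], hyp['mosaic'])   # 0.01 0.01 1.0 with the defaults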

1. yolov5/data/hyps/hyp.scratch-low.yaml (YOLOv5 COCO training from scratch, low augmentation)

# Hyperparameters for low-augmentation COCO training from scratch
# python train.py --batch 64 --cfg yolov5n6.yaml --weights '' --data coco.yaml --img 640 --epochs 300 --linear
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.01  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum / Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.5  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 1.0  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (EfficientDet default gamma=1.5)
# color-space augmentation: hue (H), saturation (S), value/brightness (V)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain), affine scaling factor
shear: 0.0  # image shear (+/- deg), affine shear coefficient
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001; 0.0 = pure affine, >0 adds perspective warping
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability), only applied when mosaic is enabled
copy_paste: 0.0  # segment copy-paste (probability), only applied when mosaic is enabled

2. yolov5/data/hyps/hyp.scratch-med.yaml (medium augmentation)

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for medium-augmentation COCO training from scratch
# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum / Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.3  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 0.7  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (EfficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.9  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.1  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)

3. yolov5/data/hyps/hyp.scratch-high.yaml (high augmentation)

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for high-augmentation COCO training from scratch
# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum / Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.3  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 0.7  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (EfficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.9  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.1  # image mixup (probability)
copy_paste: 0.1  # segment copy-paste (probability)

2. The OneCycleLR learning rate

Under the OneCycleLR policy, the learning rate of every parameter group follows the 1cycle schedule: the learning rate is first annealed from the initial learning rate up to a maximum learning rate, and then annealed from that maximum down to a minimum learning rate far below the initial one (see the original 1cycle paper for details).
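To make the lr0/lrf pair from the files above concrete, here is a minimal sketch of a cosine one-cycle decay from lr0 down to lr0 * lrf, built with LambdaLR in the spirit of YOLOv5's scheduler; the tiny linear model is only a placeholder so the snippet runs on its own:

import math
import torch

lr0, lrf, epochs = 0.01, 0.01, 300                        # values from hyp.scratch-low.yaml
model = torch.nn.Linear(10, 1)                            # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=lr0, momentum=0.937)

# cosine one-cycle factor: 1.0 at epoch 0, lrf at the final epoch
lf = lambda x: ((1 - math.cos(x * math.pi / epochs)) / 2) * (lrf - 1) + 1
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)

for epoch in range(epochs):
    # ... one epoch of training ...
    scheduler.step()                                      # LR becomes lr0 * lf(epoch + 1)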

3. Warmup

Warmup is a learning-rate scheduling technique that first appeared in the ResNet paper: a smaller learning rate is used in the early phase of training, and after a warmup period (for example 10 epochs or 10,000 steps) training continues with the preset learning rate.

Why use warmup?

At the start of training the weights are randomly initialized and the model has essentially no understanding of the data. During the first epoch the model adjusts its parameters rapidly in response to the incoming batches, and a large learning rate at this stage can easily push it in a bad direction that takes many extra epochs to undo.

Once the model has trained for a while and has acquired some prior knowledge of the data, a larger learning rate is far less likely to derail it, so it can be used to speed up training.

After the model has trained with a large learning rate for some time, its parameter distribution becomes relatively stable and it should no longer chase new patterns aggressively: continuing with a large learning rate would disturb this stability, whereas switching to a smaller learning rate lets it settle into a better optimum.
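As a rough illustration of what the warmup_* hyperparameters from the files above mean (a simplified sketch, not the project's exact code): during the first warmup_epochs, the bias learning rate ramps down from warmup_bias_lr to lr0, the remaining parameter groups ramp up from 0 to lr0, and momentum ramps from warmup_momentum to its nominal value, all by linear interpolation over the warmup iterations.

import numpy as np

lr0, warmup_bias_lr = 0.01, 0.1
momentum, warmup_momentum = 0.937, 0.8
warmup_epochs, batches_per_epoch = 3.0, 1000               # batches_per_epoch is illustrative
nw = int(warmup_epochs * batches_per_epoch)                # number of warmup iterations

def warmup_values(ni):
    """Return (bias_lr, other_lr, momentum) for global batch index ni during warmup."""
    bias_lr = np.interp(ni, [0, nw], [warmup_bias_lr, lr0])    # 0.1  -> 0.01
    other_lr = np.interp(ni, [0, nw], [0.0, lr0])              # 0.0  -> 0.01
    mom = np.interp(ni, [0, nw], [warmup_momentum, momentum])  # 0.8  -> 0.937
    return bias_lr, other_lr, mom

print(warmup_values(0))    # values at the first batch
print(warmup_values(nw))   # values once warmup has finished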


PyTorch does not ship a dedicated warmup scheduler, so the examples below use the third-party package pytorch_warmup, which can be installed with pip install pytorch_warmup.

1. When the learning-rate schedule uses the global iteration number, untuned linear warmup can be used like this:

import torch
import pytorch_warmup as warmup

optimizer = torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), weight_decay=0.01)
num_steps = len(dataloader) * num_epochs
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_steps)
warmup_scheduler = warmup.UntunedLinearWarmup(optimizer)
for epoch in range(1, num_epochs + 1):
    for batch in dataloader:
        optimizer.zero_grad()
        loss = ...
        loss.backward()
        optimizer.step()
        with warmup_scheduler.dampening():
            lr_scheduler.step()

2. If you want to use the learning-rate scheduler "chaining" supported by PyTorch 1.4.0 and later, simply call the chained schedulers inside the with statement:

lr_scheduler1 = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
lr_scheduler2 = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)
warmup_scheduler = warmup.UntunedLinearWarmup(optimizer)
for epoch in range(1, num_epochs + 1):
    for batch in dataloader:
        ...
        optimizer.step()
        with warmup_scheduler.dampening():
            lr_scheduler1.step()
            lr_scheduler2.step()

3. When the learning-rate schedule uses the epoch number, the warmup schedule can be used like this:

lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[num_epochs // 3], gamma=0.1)
warmup_scheduler = warmup.UntunedLinearWarmup(optimizer)
for epoch in range(1, num_epochs + 1):
    for i, batch in enumerate(dataloader):
        optimizer.zero_grad()
        loss = ...
        loss.backward()
        optimizer.step()
        if i < len(dataloader) - 1:
            with warmup_scheduler.dampening():
                pass
    with warmup_scheduler.dampening():
        lr_scheduler.step()

4. Warmup Schedules

1. Manual Warmup

With manual warmup, the warmup factor w(t) depends on a warmup period that must be specified by hand; linear and exponential variants are available.

1. Linear

w(t) = min(1, t / warmup_period)

warmup_scheduler = warmup.LinearWarmup(optimizer, warmup_period=2000)

2. Exponential

For the untuned exponential warmup the period is derived from Adam's beta2 parameter, warmup_period = 1 / (1 - beta2):

warmup_scheduler = warmup.UntunedExponentialWarmup(optimizer)

3. RAdam Warmup

The warmup factor depends on Adam’s beta2 parameter for RAdamWarmup. Please see the original paper for the details.

warmup_scheduler = warmup.RAdamWarmup(optimizer)

4. Apex's Adam

The Apex library provides an Adam optimizer tuned for CUDA devices, FusedAdam. The FusedAdam optimizer can be used with the warmup schedulers. For example:

optimizer = apex.optimizers.FusedAdam(params, lr=0.001, betas=(0.9, 0.999), weight_decay=0.01)
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_steps)
warmup_scheduler = warmup.UntunedLinearWarmup(optimizer)

4. YOLOv5 data augmentation (yolov5-v6\utils\datasets.py)

Related reading: "Object detection with YOLOv5 - data augmentation" and "An analysis of the YOLOv5 (v6.1) augmentation methods". Once training starts, you can inspect the effect of the augmentation policy in the train_batch*.jpg images. These images are written to your training log directory, usually yolov5/runs/train/exp; train_batch0.jpg shows the mosaics and labels of training batch 0.

5. Integrating Albumentations with YOLOv5 to add new augmentation methods

To use Albumentations, simply pip install -U albumentations and then update the augmentation pipeline as you see fit in the new Albumentations class in yolov5/utils/augmentations.py. Note that these Albumentations operations run in addition to the YOLOv5 hyperparameter augmentations, i.e. those defined in hyp.scratch.yaml.

Here’s an example that applies Blur, MedianBlur and ToGray albumentations in addition to the YOLOv5 hyperparameter augmentations normally applied to your training mosaics 😃

class Albumentations:
    # YOLOv5 Albumentations class (optional, used if package is installed)
    def __init__(self):
        self.transform = None
        try:
            import albumentations as A
            check_version(A.__version__, '1.0.3')  # version requirement

            self.transform = A.Compose([
                A.Blur(blur_limit=50, p=0.1),
                A.MedianBlur(blur_limit=51, p=0.1),
                A.ToGray(p=0.3)],
                bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))

            logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms))
        except ImportError:  # package not installed, skip
            pass
        except Exception as e:
            logging.info(colorstr('albumentations: ') + f'{e}')

    def __call__(self, im, labels, p=1.0):
        if self.transform and random.random() < p:
            new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0])  # transformed
            im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])
        return im, labels

You can also integrate additional Albumentations augmentations into the YOLOv5 dataloader; the best place to insert them is here:

if self.augment:
    # Augment imagespace
    if not mosaic:
        img, labels = random_perspective(img, labels,
                                         degrees=hyp['degrees'],
                                         translate=hyp['translate'],
                                         scale=hyp['scale'],
                                         shear=hyp['shear'],
                                         perspective=hyp['perspective'])

    # Augment colorspace
    augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v'])

    # Apply cutouts
    # if random.random() < 0.9:
    #     labels = cutout(img, labels)

Here img is the image and labels holds its bounding-box labels. Note that any Albumentations transforms you add are applied on top of the automatic YOLOv5 augmentations already defined in the hyperparameter file.
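A quick way to sanity-check such a pipeline outside the dataloader is to call the Albumentations wrapper on a dummy image and label array. This is a minimal sketch that assumes you run it from the yolov5 repo root with the albumentations package installed, that the class above is importable from utils/augmentations.py, and that labels follow YOLO's normalized [class, x_center, y_center, width, height] format:

import numpy as np
from utils.augmentations import Albumentations  # assumed import path inside the yolov5 repo

aug = Albumentations()                                           # builds the A.Compose pipeline if albumentations is installed
im = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)    # dummy image
labels = np.array([[0, 0.5, 0.5, 0.2, 0.3]], dtype=np.float32)   # one dummy box: class, x, y, w, h (normalized)
im_aug, labels_aug = aug(im, labels, p=1.0)                      # force the transform for the check
print(im_aug.shape, labels_aug)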

6. Defining the evaluation (fitness) metric

Fitness is the value we seek to maximize. In YOLOv5 the default fitness function is defined as a weighted combination of metrics: mAP@0.5 contributes 10% of the weight and mAP@0.5:0.95 contributes the remaining 90%, with Precision P and Recall R absent. You can adjust the weights to suit your needs, or keep the default fitness definition (recommended).

yolov5/utils/metrics.py, lines 12 to 16 at commit 4103ce9:

def fitness(x):
    # Model fitness as a weighted combination of metrics
    w = [0.0, 0.0, 0.1, 0.9]  # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
    return (x[:, :4] * w).sum(1)

7. Evolve (hyperparameter evolution)

# Single-GPU
python train.py --epochs 10 --data coco128.yaml --weights yolov5s.pt --cache --evolve

# Multi-GPU
for i in 0 1 2 3 4 5 6 7; do
  sleep $(expr 30 \* $i) &&  # 30-second delay (optional)
  echo 'Starting GPU '$i'...' &&
  nohup python train.py --epochs 10 --data coco128.yaml --weights yolov5s.pt --cache --device $i --evolve > evolve_gpu_$i.log &
done

# Multi-GPU bash-while (not recommended)
for i in 0 1 2 3 4 5 6 7; do
  sleep $(expr 30 \* $i) &&  # 30-second delay (optional)
  echo 'Starting GPU '$i'...' &&
  "$(while true; do nohup python train.py... --device $i --evolve 1 > evolve_gpu_$i.log; done)" &
done

An example of the results written after evolution:

# YOLOv5 Hyperparameter Evolution Results
# Best generation: 287
# Last generation: 300
# metrics/precision, metrics/recall, metrics/mAP_0.5, metrics/mAP_0.5:0.95, val/box_loss, val/obj_loss, val/cls_loss
# 0.54634, 0.55625, 0.58201, 0.33665, 0.056451, 0.042892, 0.013441
lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.2  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.5  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 1.0  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (EfficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)
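As a quick check on how the default weights behave, plugging the best-generation metrics from the example results above into fitness() gives 0.0 * 0.54634 + 0.0 * 0.55625 + 0.1 * 0.58201 + 0.9 * 0.33665 ≈ 0.361, i.e. the score is driven almost entirely by mAP@0.5:0.95.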

We recommend a minimum of 300 generations of evolution for the best results. Note that evolution is generally expensive and time-consuming: the base scenario is trained hundreds of times, which may require hundreds or even thousands of GPU hours.

8. Visualizing the hyperparameters

evolve.csv is plotted as evolve.png by utils.plots.plot_evolve() after evolution finishes with one subplot per hyperparameter showing fitness (y axis) vs hyperparameter values (x axis). Yellow indicates higher concentrations. Vertical distributions indicate that a parameter has been disabled and does not mutate. This is user selectable in the meta dictionary in train.py, and is useful for fixing parameters and preventing them from evolving.
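If you want to examine the results yourself in addition to evolve.png, a minimal sketch along these lines (the CSV path and column names are assumed to match the evolve.csv header shown in the results above) reproduces one such subplot by hand:

import pandas as pd
import matplotlib.pyplot as plt

# Assumed path of the evolution log; adjust to your own run directory.
df = pd.read_csv('runs/evolve/exp/evolve.csv')
df.columns = [c.strip() for c in df.columns]            # headers may carry stray spaces
# Default fitness: 10% mAP@0.5 + 90% mAP@0.5:0.95, matching fitness() above.
fit = 0.1 * df['metrics/mAP_0.5'] + 0.9 * df['metrics/mAP_0.5:0.95']

plt.scatter(df['lr0'], fit, c=fit, cmap='viridis', s=12)  # one hyperparameter vs fitness
plt.xlabel('lr0')
plt.ylabel('fitness')
plt.savefig('lr0_vs_fitness.png', dpi=150)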
