A Road Surface Obstacle Detection System Based on YOLOv5 (Algorithm Design)


Part 1: Understand what the YOLOv5 algorithm is

  1. Official documentation: Comprehensive Guide to Ultralytics YOLOv5 - Ultralytics YOLO Docs
  2. An introduction to the YOLO series: https://zhuanlan.zhihu.com/p/143747206

These resources explain things in great detail; read them yourself, and they will definitely be useful when writing a paper. Since it is called YOLOv5, it also has many excellent ideas worth learning from. However, compared with YOLOv3 and YOLOv4, the YOLOv5 network structure is not as easy to visualize.

1. The four YOLOv5 network structures

The official YOLOv5 code provides four versions of the object detection network: YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x. When learning a new algorithm, it is best to have a clear picture of the overall network architecture in mind. The awkward part is that the network definition files in the YOLOv5 code are in YAML format, unlike the cfg files used by YOLOv3 and YOLOv4, so the network structure cannot be viewed directly with the Netron tool, which leaves some students unsure of how to study the network.

For example, after downloading the four YOLOv5 weight models in .pt format:

(Figure: YOLOv5 network structure diagram)
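
Although the YAML model files cannot be opened directly in Netron (a common workaround is to export the .pt weights to ONNX, for example with the repository's export.py, and open the ONNX file in Netron), the four variants are easy to compare at the configuration level: they share the same layer layout and differ only in two scaling factors, depth_multiple (how many times each block is repeated) and width_multiple (how wide each layer's channels are). A minimal sketch; the values are taken from the official models/yolov5*.yaml configs and should be verified against your own checkout:

# Scaling factors that distinguish the four YOLOv5 variants
# (from models/yolov5*.yaml; verify against your own copy of the repository).
variants = {
    'yolov5s': {'depth_multiple': 0.33, 'width_multiple': 0.50},
    'yolov5m': {'depth_multiple': 0.67, 'width_multiple': 0.75},
    'yolov5l': {'depth_multiple': 1.00, 'width_multiple': 1.00},
    'yolov5x': {'depth_multiple': 1.33, 'width_multiple': 1.25},
}
for name, scale in variants.items():
    print(f"{name}: blocks repeated x{scale['depth_multiple']}, channels scaled x{scale['width_multiple']}")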

2. Core fundamentals of YOLOv5

The structure of YOLOv5 is very similar to that of YOLOv4, but there are some differences; as in Dabai's original write-up, each component is explained from the overall architecture down to the details.

The figure above is the YOLOv5 network structure diagram. As it shows, the network is still divided into four parts: the input stage, Backbone, Neck, and Prediction.

(Figure: the YOLOv5 author's algorithm performance benchmark chart.)
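
As a rough illustration of how the four stages listed above chain together, here is a toy stand-in (not the actual YOLOv5 modules, just placeholder layers showing the data flow from input through Backbone and Neck to the Prediction head):

import torch
import torch.nn as nn

class TinyDetectorSketch(nn.Module):
    # Toy stand-in for the four-part layout: the real Backbone is a CSPDarknet-style network,
    # the real Neck is an FPN+PAN fusion, and the real head predicts at three scales.
    def __init__(self, num_classes=80, num_anchors=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.SiLU())
        self.neck = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.SiLU())
        self.head = nn.Conv2d(16, num_anchors * (5 + num_classes), 1)  # box(4) + objectness(1) + class scores

    def forward(self, x):  # x: preprocessed (letterboxed, normalized) image batch
        return self.head(self.neck(self.backbone(x)))

out = TinyDetectorSketch()(torch.zeros(1, 3, 640, 640))
print(out.shape)  # torch.Size([1, 255, 320, 320])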

1. The YOLOv5 project structure

Read this blog post; you will learn a lot from it, and it is also useful when writing a paper (the blogger's link is below):

YOLOv5源码逐行超详细注释与解读(1)——项目目录结构解析 - CSDN blog (a line-by-line annotated walkthrough of the YOLOv5 source code, part 1: the project directory structure)

2. Overall YOLO architecture diagram

3. Training YOLOv5 on your own custom dataset

Train Custom Data - Ultralytics YOLO Docs
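
For ordinary (non domain-adaptive) custom training, the main prerequisite is a small dataset YAML that tells train.py where the images and labels live and what the classes are. A minimal sketch; the road-obstacle class names and paths below are made up for illustration, while the keys (path/train/val/nc/names) follow the format expected by YOLOv5's train.py:

import yaml

# Hypothetical road-obstacle dataset config, written in the layout YOLOv5 expects.
data = {
    'path': '../datasets/road_obstacles',   # dataset root (assumed layout)
    'train': 'images/train',                # training images, relative to 'path'
    'val': 'images/val',                    # validation images, relative to 'path'
    'nc': 4,                                # number of classes
    'names': ['pothole', 'crack', 'speed_bump', 'debris'],  # example class names
}
with open('road_obstacles.yaml', 'w') as f:
    yaml.safe_dump(data, f, sort_keys=False)

# Typical launch command afterwards (standard YOLOv5 workflow):
#   python train.py --img 640 --batch 16 --epochs 100 --data road_obstacles.yaml --weights yolov5s.pt

The listing that follows goes further than this standard workflow: it is the setup portion of a UMT-style (mean teacher) domain-adaptive YOLOv5 training script that maintains both a student model and a teacher model.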

# coding:utf-8
# ----------------------------------------------------------------------------
# Pytorch multi-GPU YOLOV5 based UMT
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import logging
import math
import os
import random
import sys
import time
import warnings
import yaml
import numpy as np

from copy import deepcopy
from pathlib import Path
from threading import Thread
from tqdm import tqdm

import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
import torch.utils.data
from torch.cuda import amp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.tensorboard import SummaryWriter

FILE = Path(__file__).absolute()
sys.path.append(FILE.parents[0].as_posix())  # add yolov5/ to path

import ssda_yolov5_test as test  # for end-of-epoch mAP
from models.experimental import attempt_load
from models.yolo import Model
from utils.autoanchor import check_anchors
from utils.datasets import create_dataloader
from utils.datasets_single import create_dataloader_single
from utils.google_utils import attempt_download
from utils.loss import ComputeLoss
from utils.torch_utils import ModelEMA, WeightEMA, select_device, intersect_dicts, torch_distributed_zero_first, de_parallel
from utils.wandb_logging.wandb_utils import WandbLogger, check_wandb_resume
from utils.plots import plot_images, plot_labels, plot_results, plot_evolution
from utils.metrics import fitness
from utils.general import labels_to_class_weights, increment_path, labels_to_image_weights, init_seeds, \
    strip_optimizer, get_latest_run, check_dataset, check_file, check_git_status, check_img_size, \
    check_requirements, print_mutation, set_logging, one_cycle, colorstr, \
    non_max_suppression, check_dataset_umt, xyxy2xywhn

logger = logging.getLogger(__name__)
LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1))  # https://pytorch.org/docs/stable/elastic/run.html
RANK = int(os.getenv('RANK', -1))
WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))


# hyp means path/to/hyp.yaml or hyp dictionary
def train(hyp, opt, device):
    save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, notest, nosave, workers, = \
        opt.save_dir, opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \
        opt.resume, opt.notest, opt.nosave, opt.workers
    teacher_alpha, conf_thres, iou_thres, max_gt_boxes, lambda_weight, student_weight, teacher_weight = \
        opt.teacher_alpha, opt.conf_thres, opt.iou_thres, opt.max_gt_boxes, opt.lambda_weight, \
        opt.student_weight, opt.teacher_weight
    all_shift = opt.consistency_loss

    # Directories
    save_dir = Path(save_dir)
    wdir = save_dir / 'weights'
    wdir.mkdir(parents=True, exist_ok=True)  # make dir
    last_student, last_teacher = wdir / 'last_student.pt', wdir / 'last_teacher.pt'
    best_student, best_teacher = wdir / 'best_student.pt', wdir / 'best_teacher.pt'
    results_file = save_dir / 'results.txt'

    # Hyperparameters
    if isinstance(hyp, str):
        with open(hyp) as f:  # default path data/hyps/hyp.scratch.yaml
            hyp = yaml.safe_load(f)  # load hyps dict
    logger.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))

    # Save run settings
    with open(save_dir / 'hyp.yaml', 'w') as f:
        yaml.safe_dump(hyp, f, sort_keys=False)
    with open(save_dir / 'opt.yaml', 'w') as f:
        yaml.safe_dump(vars(opt), f, sort_keys=False)

    # Configure
    plots = not evolve  # create plots
    cuda = device.type != 'cpu'
    init_seeds(1 + RANK)
    with open(data) as f:
        data_dict = yaml.safe_load(f)  # data dict

    # Loggers
    loggers = {'wandb': None, 'tb': None}  # loggers dict
    if RANK in [-1, 0]:
        # TensorBoard
        if not evolve:
            prefix = colorstr('tensorboard: ')
            logger.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/")
            loggers['tb'] = SummaryWriter(str(save_dir))
        # W&B
        opt.hyp = hyp  # add hyperparameters
        run_id = torch.load(weights).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None
        run_id = run_id if opt.resume else None  # start fresh run if transfer learning
        wandb_logger = WandbLogger(opt, save_dir.stem, run_id, data_dict)
        loggers['wandb'] = wandb_logger.wandb
        if loggers['wandb']:
            data_dict = wandb_logger.data_dict
            weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp  # may update weights, epochs if resuming

    nc = 1 if single_cls else int(data_dict['nc'])  # number of classes
    names = ['item'] if single_cls and len(data_dict['names']) != 1 else data_dict['names']  # class names
    assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, data)  # check
    is_coco = data.endswith('coco.yaml') and nc == 80  # COCO dataset

    # Model
    pretrained = weights.endswith('.pt')
    # torch.cuda.empty_cache()
    # strip_optimizer(weights)  # strip optimizers, this will apparently reduce the model size
    if pretrained:
        with torch_distributed_zero_first(RANK):
            weights = attempt_download(weights)  # download if not found locally
        ckpt = torch.load(weights, map_location=device)  # load checkpoint
        # model_student
        model_student = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
        # model_teacher
        model_teacher = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
        exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else []  # exclude keys
        state_dict = ckpt['model'].float().state_dict()  # to FP32
        state_dict = intersect_dicts(state_dict, model_student.state_dict(), exclude=exclude)  # intersect
        model_student.load_state_dict(state_dict, strict=False)  # load
        # model_teacher.load_state_dict(state_dict, strict=False)  # load
        model_teacher.load_state_dict(state_dict.copy(), strict=False)  # load
        logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model_student.state_dict()), weights))  # report
    else:
        model_student = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
        model_teacher = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create

    # Update models weights [only by this way, we can resume the old training normally...][ref models.experimental.attempt_load()]
    if student_weight != "None" and teacher_weight != "None":  # update model_student and model_teacher
        torch.cuda.empty_cache()
        ckpt_student = torch.load(student_weight, map_location=device)  # load checkpoint
        state_dict_student = ckpt_student['ema' if ckpt_student.get('ema') else 'model'].float().half().state_dict()  # to FP16
        model_student.load_state_dict(state_dict_student, strict=False)  # load
        del ckpt_student, state_dict_student
        ckpt_teacher = torch.load(teacher_weight, map_location=device)  # load checkpoint
        state_dict_teacher = ckpt_teacher['ema' if ckpt_teacher.get('ema') else 'model'].float().half().state_dict()  # to FP16
        model_teacher.load_state_dict(state_dict_teacher, strict=False)  # load
        del ckpt_teacher, state_dict_teacher

    # Dataset
    with torch_distributed_zero_first(RANK):
        # check_dataset(data_dict)  # check, needs to be re-written or commented out
        check_dataset_umt(data_dict)  # check, needs to be re-written or commented out
    train_path_source_real = data_dict['train_source_real']  # training source dataset w labels
    train_path_source_fake = data_dict['train_source_fake']  # training target-like dataset w labels
    train_path_target_real = data_dict['train_target_real']  # training target dataset w/o labels
    train_path_target_fake = data_dict['train_target_fake']  # training source-like dataset w/o labels
    test_path_target_real = data_dict['test_target_real']  # test on target dataset w labels, should not use testset to train
    # test_path_target_real = data_dict['train_target_real']  # test on target dataset w labels, remember val in 'test_target_real'

    # Freeze
    freeze_student = []  # parameter names to freeze (full or partial)
    for k, v in model_student.named_parameters():
        v.requires_grad = True  # train all layers
        if any(x in k for x in freeze_student):
            print('freezing %s' % k)
            v.requires_grad = False
    freeze_teacher = []  # parameter names to freeze (full or partial)
    for k, v in model_teacher.named_parameters():
        v.requires_grad = True  # train all layers
        if any(x in k for x in freeze_teacher):
            print('freezing %s' % k)
            v.requires_grad = False

    # Optimizer
    nbs = 64  # nominal batch size
    accumulate = max(round(nbs / batch_size), 1)  # accumulate loss before optimizing
    hyp['weight_decay'] *= batch_size * accumulate / nbs  # scale weight_decay
    logger.info(f"Scaled weight_decay = {hyp['weight_decay']}")
    pg0, pg1, pg2 = [], [], []  # optimizer parameter groups
    for k, v in model_student.named_modules():
        if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
            pg2.append(v.bias)  # biases
        if isinstance(v, nn.BatchNorm2d):
            pg0.append(v.weight)  # no decay
        elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
            pg1.append(v.weight)  # apply decay
    if opt.adam:
        student_optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
    else:
        student_optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
    student_optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']})  # add pg1 with weight_decay
    student_optimizer.add_param_group({'params': pg2})  # add pg2 (biases)
    logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
    del pg0, pg1, pg2

    # UMT algorithm
    student_detection_params = []
    for key, value in model_student.named_parameters():
        if value.requires_grad:
            student_detection_params += [value]
    teacher_detection_params = []
    for key, value in model_teacher.named_parameters():
        if value.requires_grad:
            teacher_detection_params += [value]
            value.requires_grad = False
    teacher_optimizer = WeightEMA(teacher_detection_params, student_detection_params, alpha=teacher_alpha)
    # For debugging
    # for k, v in model_student.named_parameters():
    #     print(k, v.requires_grad)
    # for k, v in model_teacher.named_parameters():
    #     print(k, v.requires_grad)

    # Scheduler https://arxiv.org/pdf/1812.01187.pdf
    # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
    if opt.linear_lr:
        lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf']  # linear
    else:
        lf = one_cycle(1, hyp['lrf'], epochs)  # cosine 1->hyp['lrf']
    scheduler = lr_scheduler.LambdaLR(student_optimizer, lr_lambda=lf)
    # plot_lr_scheduler(
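
The WeightEMA class imported from utils.torch_utils above (used to build teacher_optimizer) is not included in this excerpt. As a rough sketch of what such a mean-teacher updater typically does, assuming the usual exponential-moving-average rule (an illustration, not the repository's exact implementation):

import torch

class WeightEMASketch:
    # Illustrative mean-teacher EMA updater (assumed behavior, not the repo's exact code):
    # after each student optimizer step, every teacher parameter is nudged toward the
    # matching student parameter: teacher = alpha * teacher + (1 - alpha) * student.
    def __init__(self, teacher_params, student_params, alpha=0.99):
        self.teacher_params = list(teacher_params)
        self.student_params = list(student_params)
        self.alpha = alpha

    @torch.no_grad()
    def step(self):
        one_minus_alpha = 1.0 - self.alpha
        for t_param, s_param in zip(self.teacher_params, self.student_params):
            t_param.mul_(self.alpha)
            t_param.add_(s_param.detach() * one_minus_alpha)

Because the teacher is updated only through this EMA and its parameters have requires_grad set to False (as in the loop above), gradients flow only through the student, while the teacher provides the smoothed predictions used for the consistency loss.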
Note: this article is reposted from blog.csdn.net; the original is at https://blog.csdn.net/weixin_42380711/article/details/139428975, and copyright belongs to the original author.