[Object Detection] YOLOv5 Multi-Process / Multi-Thread Inference Acceleration Experiments

zstar-_ 2024-07-06 16:31:02

Preface

I have recently been looking into how to make YOLOv5 inference run faster. Broadly speaking, the main options are:

- Use a faster GPU, i.e. P100 -> V100 -> A100
- Multi-GPU inference
- Use a smaller model, i.e. YOLOv5x -> YOLOv5l -> YOLOv5m -> YOLOv5s -> YOLOv5n
- Half-precision (FP16) inference, i.e. python detect.py --half
- Reduce --img-size, i.e. 1280 -> 640 -> 320
- Export to ONNX or OpenVINO format for CPU acceleration
- Export to TensorRT for GPU acceleration
- Feed images to the model in batches
- Use multiple processes / threads for inference

Note: multi-GPU and multi-process/multi-thread inference do not speed up inference on a single image; they only pay off when many images are processed together.
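As a quick illustration of the half-precision and batched-input items, here is a minimal sketch using YOLOv5's torch.hub interface (not the modified detect.py used below); the image paths are placeholders, and FP16 is only worthwhile on a CUDA device.

import torch

# Minimal sketch: batched FP16 inference through the YOLOv5 torch.hub interface.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # downloads yolov5s.pt on first use
if torch.cuda.is_available():
    model = model.to('cuda').half()                       # FP16 inference on GPU

imgs = ['img1.jpg', 'img2.jpg', 'img3.jpg']               # placeholder paths, processed as one batch
results = model(imgs, size=640)                           # resized internally, like --img-size 640
results.print()                                           # per-image detection summary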

This post focuses on the last item: whether multiprocessing/multithreading can actually speed up YOLOv5 inference.

Experimental Environment

GPU: RTX 2060

torch: 1.7.1+cu110

Input image size: 1920x1080

img-size: 1920

Half-precision inference: half=True

Inference model: yolov5m.pt

Experimental Procedure

First, the experiment code (detect.py), lightly modified from the official source:

import configparser
import time
from pathlib import Path

import cv2
import torch
import threading
import sys
import multiprocessing as mp

sys.path.append("yolov5")
from models.experimental import attempt_load
from utils.datasets import LoadImages
from utils.general import check_img_size, non_max_suppression, scale_coords
from utils.plots import Annotator, colors
from utils.torch_utils import select_device
from concurrent.futures import ThreadPoolExecutor

Detect_path = 'D:/Data/detect_outputs'  # output directory for detection results


def detect(path, model_path, detect_size):
    source = path
    weights = model_path
    imgsz = detect_size
    conf_thres = 0.25
    iou_thres = 0.45
    device = ""
    augment = True
    save_img = True
    save_dir = Path(Detect_path)  # increment run
    device = select_device(device)
    half = device.type != 'cpu'  # half precision only supported on CUDA

    # Load model
    model = attempt_load(weights, map_location=device)  # load FP32 model
    stride = int(model.stride.max())  # model stride
    imgsz = check_img_size(imgsz, s=stride)  # check img_size
    if half:
        model.half()  # to FP16

    # Set Dataloader
    vid_path, vid_writer = None, None
    dataset = LoadImages(source, img_size=imgsz, stride=stride)

    # Get names and colors
    names = model.module.names if hasattr(model, 'module') else model.names

    # Run inference
    if device.type != 'cpu':
        model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once

    result_list = []
    for path, img, im0s, vid_cap in dataset:
        # Read the image and move it to the GPU
        t1 = time.time()
        img = torch.from_numpy(img).to(device)
        print("read pictures cost time:", time.time() - t1)

        t2 = time.time()
        img = img.half() if half else img.float()  # uint8 to fp16/32
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)
        print("process pictures cost time:", time.time() - t2)

        # Inference
        pred = model(img, augment=augment)[0]

        # Apply NMS
        pred = non_max_suppression(pred, conf_thres, iou_thres)

        # Process detections
        for i, det in enumerate(pred):  # detections per image
            p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0)
            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # img.jpg
            s += '%gx%g ' % img.shape[2:]  # print string
            # print(s)  # 384x640
            s_result = ''  # detection result string
            annotator = Annotator(im0, line_width=3, example=str(names))
            if len(det):
                # Rescale boxes from img_size to im0 size
                det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

                # Print results
                for c in det[:, -1].unique():
                    n = (det[:, -1] == c).sum()  # detections per class
                    # s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string
                    s += f"{n} {names[int(c)]}, "  # add to string
                    s_result += f"{n} {names[int(c)]} "

                # Write results
                for *xyxy, conf, cls in reversed(det):
                    if save_img:
                        c = int(cls)
                        # label = f'{names[int(cls)]} {conf:.2f}'
                        label = f'{names[int(cls)]}'
                        # print(label)
                        annotator.box_label(xyxy, label, color=colors(c, True))
                        # print(xyxy)

            print(f'{s}')
            # print(f'{s_result}')
            result_list.append(s_result)

            # Write progress information from the conf object to a file
            conf = configparser.ConfigParser()
            cfg_file = open("glovar.cfg", 'w')
            conf.add_section("default")  # add a section to the config file
            # arguments: section name, option name, option value
            conf.set("default", "process", str(dataset.img_count))
            conf.set("default", "total", str(dataset.nf))
            conf.write(cfg_file)
            cfg_file.close()

            im0 = annotator.result()

            # Save results (image with detections)
            t3 = time.time()
            if save_img:
                if dataset.mode == 'image':
                    cv2.imwrite(save_path, im0)
                else:  # 'video' or 'stream'
                    if vid_path != save_path:  # new video
                        vid_path = save_path
                        if isinstance(vid_writer, cv2.VideoWriter):
                            vid_writer.release()  # release previous video writer
                        if vid_cap:  # video
                            fps = vid_cap.get(cv2.CAP_PROP_FPS)
                            w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                            h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                        else:  # stream
                            fps, w, h = 30, im0.shape[1], im0.shape[0]
                            save_path += '.mp4'
                        vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
                    vid_writer.write(im0)
            print("write pictures cost time:", time.time() - t3)

    print('Done')


def run(path, model_path, detect_size):
    with torch.no_grad():
        detect(path, model_path, detect_size)

Let's start with a small batch: two images are fed in for detection.

Baseline Inference

if __name__ == '__main__':
    s_t = time.time()
    path1 = "D:/Data/image/DJI_0001_00100.jpg"
    path2 = "D:/Data/image/DJI_0001_00530.jpg"
    model_path = "../weights/best.pt"
    detect_size = 1920
    run(path1, model_path, detect_size)
    run(path2, model_path, detect_size)
    print("Total Cost Time:", time.time() - s_t)

Total Cost Time: 3.496427059173584

Thread-Pool Inference

Spin up two threads for inference:

if __name__ == '__main__':
    s_t = time.time()
    pool = ThreadPoolExecutor(max_workers=2)
    path1 = "D:/Data/image/DJI_0001_00100.jpg"
    path2 = "D:/Data/image/DJI_0001_00530.jpg"
    model_path = "../weights/best.pt"
    detect_size = 1920
    pool.submit(run, path1, model_path, detect_size)
    pool.submit(run, path2, model_path, detect_size)
    pool.shutdown(wait=True)
    print("Total Cost Time:", time.time() - s_t)

Total Cost Time: 3.2433135509490967

With two threads, the total time is almost the same as the baseline, confirming once again that Python's multithreading is "pseudo-parallel" for CPU-bound work because of the GIL.
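The effect is easy to reproduce outside of YOLOv5 with a purely CPU-bound function: run it twice sequentially and then in two threads, and the two timings come out roughly the same, because the GIL lets only one thread execute Python bytecode at a time. A standalone sketch:

import time
from concurrent.futures import ThreadPoolExecutor

def cpu_bound(n=10_000_000):
    # Pure Python loop: holds the GIL for the whole call.
    s = 0
    for i in range(n):
        s += i
    return s

if __name__ == '__main__':
    t0 = time.time()
    cpu_bound()
    cpu_bound()
    print("sequential:", time.time() - t0)

    t0 = time.time()
    with ThreadPoolExecutor(max_workers=2) as pool:
        list(pool.map(cpu_bound, [10_000_000, 10_000_000]))
    print("two threads:", time.time() - t0)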

Process-Pool Inference

Spin up two processes for inference:

if __name__ == '__main__':
    s_t = time.time()
    pool = mp.Pool(processes=2)
    path1 = "D:/Data/image/DJI_0001_00100.jpg"
    path2 = "D:/Data/image/DJI_0001_00530.jpg"
    model_path = "../weights/best.pt"
    detect_size = 1920
    pool.apply_async(run, (path1, model_path, detect_size,))
    pool.apply_async(run, (path2, model_path, detect_size,))
    pool.close()
    pool.join()
    print("Total Cost Time:", time.time() - s_t)

Total Cost Time: 6.020772695541382

Dual-Process Inference

Surprisingly, dual-process inference takes about twice as long as the baseline. Suspecting the process pool itself was the overhead, I rewrote it without a pool:

if __name__ == '__main__':
    s_t = time.time()
    path1 = "D:/Data/image/DJI_0001_00100.jpg"
    path2 = "D:/Data/image/DJI_0001_00530.jpg"
    model_path = "../weights/best.pt"
    detect_size = 1920
    p1 = mp.Process(target=run, args=(path1, model_path, detect_size,))
    p2 = mp.Process(target=run, args=(path2, model_path, detect_size,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print("Total Cost Time:", time.time() - s_t)

Total Cost Time: 6.089479446411133

Two processes are still slow. With so little data, the fixed cost of spawning each process (interpreter startup, CUDA initialization, loading the model) dominates the total time, which matches the results of my earlier experiment comparing multithreading and multiprocessing efficiency.
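To see where the time goes with only two images, the fixed per-process cost can be measured on its own. A rough sketch, assuming the same yolov5 directory layout and weights path as the script above; the absolute numbers will of course vary by machine:

import sys
import time
import multiprocessing as mp

sys.path.append("yolov5")

def load_only(model_path):
    # Do everything except detection, to isolate the fixed per-process cost.
    from models.experimental import attempt_load
    from utils.torch_utils import select_device
    t0 = time.time()
    device = select_device("")
    model = attempt_load(model_path, map_location=device)
    if device.type != 'cpu':
        model.half()
    print("CUDA init + model load inside the child process:", time.time() - t0, "s")

if __name__ == '__main__':
    t0 = time.time()
    p = mp.Process(target=load_only, args=("../weights/best.pt",))
    p.start()
    p.join()
    print("Total per-process overhead:", time.time() - t0, "s")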

So the next experiment scales the workload up to 300 images.

300 Images: Baseline Inference

if __name__ == '__main__':
    s_t = time.time()
    path1 = "D:/Data/image"
    path2 = "D:/Data/image2"
    path3 = "D:/Data/image3"
    model_path = "../weights/best.pt"
    detect_size = 1920
    run(path1, model_path, detect_size)
    run(path2, model_path, detect_size)
    run(path3, model_path, detect_size)
    print("Total Cost Time:", time.time() - s_t)

Total Cost Time: 62.02898120880127

300 Images: Multi-Process Inference

if __name__ == '__main__':
    s_t = time.time()
    path1 = "D:/Data/image"
    path2 = "D:/Data/image2"
    path3 = "D:/Data/image3"
    model_path = "../weights/best.pt"
    detect_size = 1920
    p1 = mp.Process(target=run, args=(path1, model_path, detect_size,))
    p2 = mp.Process(target=run, args=(path2, model_path, detect_size,))
    p3 = mp.Process(target=run, args=(path3, model_path, detect_size,))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
    print("Total Cost Time:", time.time() - s_t)

Total Cost Time: 47.85872721672058

As expected, once the data volume grows, multi-process inference pulls ahead of the baseline.
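To go beyond three hand-written Process objects, the same pattern can be written as a loop over a list of image folders, one process per folder. This is my generalization of the script above rather than something from the original experiments; the folder names are placeholders, and the process count should stay within the GPU memory budget discussed in the summary below.

if __name__ == '__main__':
    # One process per image folder.
    folders = ["D:/Data/image", "D:/Data/image2", "D:/Data/image3"]
    model_path = "../weights/best.pt"
    detect_size = 1920

    s_t = time.time()
    procs = [mp.Process(target=run, args=(f, model_path, detect_size)) for f in folders]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("Total Cost Time:", time.time() - s_t)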

Summary

The results of these experiments are summarized in the table below:

Images processed    Baseline (s)    Multi-thread (s)    Multi-process (s)
2                   3.49            3.24                6.08
300                 62.02           /                   47.85

Note that with multi-process inference the processes are fully independent, which means the model has to be created on the GPU once per process. You can therefore estimate the maximum number of processes a card can support from the GPU memory used by a single process.
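A rough way to make that estimate from inside one worker, after the model has been loaded and a warm-up inference has run; the 90% headroom factor is an arbitrary safety margin of mine, and since memory_reserved() does not count the per-process CUDA context, the result should be cross-checked against nvidia-smi:

import torch

def estimate_max_processes(headroom=0.9):
    # Total VRAM divided by what this single process currently reserves via PyTorch.
    # The CUDA context itself (a few hundred MB per process) is not included in
    # memory_reserved(), so treat this as an optimistic upper bound.
    total = torch.cuda.get_device_properties(0).total_memory
    per_process = torch.cuda.memory_reserved(0)
    return int(total * headroom // max(per_process, 1))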

Follow-up: Experiments on a High-End Machine

Later I got access to the group's top-spec machine (i9-13700K + RTX 4090) and repeated the experiments. The results:

Images processed    Baseline (s)    Multi-thread (s)    Multi-process (s)
2                   2.21            2.09                3.92
300                 29.23           /                   17.61

Postscript: Corrected Conclusions

On reflection, the earlier experiments were a bit hasty. Even with the GIL, multithreading can still hide I/O blocking in I/O-heavy workloads like this one and therefore give a real speedup (a sketch of this I/O-overlap pattern follows the tables). So I reran the tests with the YOLOv5s model on the RTX 4090 at different input resolutions:

Input image resolution: 1920x1080

Images    Baseline (s)    Dual-thread (s)    Dual-process (s)
2         1.92            1.85               3.92
100       7.02            4.91               6.52
200       13.07           8.10               9.66

Input image resolution: 13400x9528

Images    Baseline (s)    Dual-thread (s)    Dual-process (s)
2         6.46            4.99               7.03
100       190.85          119.43             117.12
200       410.95          239.84             239.51
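Why do two threads help here despite the GIL? At these resolutions most of the per-image time is spent decoding and writing images in OpenCV and running CUDA kernels, and both OpenCV and PyTorch release the GIL during that work, so one thread can be reading or saving an image while the other is inside the model. The same overlap can also be arranged within a single detection loop: a background thread prefetches and decodes images while the main thread runs inference. A minimal sketch; the names prefetch, consume and run_inference are mine, not part of the original script:

import queue
import threading
import cv2

def prefetch(paths, q):
    # Producer: decode images on a background thread. cv2.imread releases the
    # GIL while decoding, so this overlaps with the consumer's inference work.
    for p in paths:
        q.put((p, cv2.imread(p)))
    q.put(None)  # sentinel: no more images

def consume(paths, run_inference):
    # Consumer: pull decoded images from the queue and run inference on them.
    q = queue.Queue(maxsize=4)  # small buffer keeps memory bounded
    threading.Thread(target=prefetch, args=(paths, q), daemon=True).start()
    while True:
        item = q.get()
        if item is None:
            break
        path, im0 = item
        run_inference(path, im0)  # e.g. preprocess + model(img) as in detect.py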

