
When adding trackers, the result lags "TPEs + 1" frames #9

Open

xiaoxiaoyi689 opened this issue Dec 22, 2023 · 4 comments

Comments

@xiaoxiaoyi689

Hi @leafqycc , have you tried adding trackers such as DeepSort? When I run inference with my YOLOv6 + DeepSort model, the final result lags by "TPEs + 1" frames. I printed the detection results and the tracking results and found that the detection result of frame "TPEs + 2" is the same as that of the first frame, which causes the final error. But I don't know how to solve it.

@leafqycc
Owner

leafqycc commented Dec 22, 2023

Maybe you are referencing the same Mat object in the main thread and in a sub-thread. In the demo code below, the pictures second.jpg and third.jpg are actually written from the same object:

import cv2
import _thread
import time

def draw(img):
    # cv2.rectangle draws on img in place, so the caller's frame is modified too
    img = cv2.rectangle(img, (50, 50), (150, 150), (255, 255, 255), 2)
    cv2.imwrite("second.jpg", img)

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cv2.imwrite("first.jpg", frame)

# the sub-thread receives a reference to the same frame object, not a copy
_thread.start_new_thread(draw, (frame, ))

time.sleep(3)
cv2.imwrite("third.jpg", frame)  # third.jpg also contains the rectangle

Make sure that the Mat (frame) object you pass in each time is brand new, for example by reading a new frame from the camera on every loop iteration.
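
For reference, a minimal sketch of one way to avoid the shared reference in the demo above (same camera setup assumed): hand the sub-thread an independent copy via frame.copy(), or read a fresh frame for every piece of work.

import cv2
import _thread
import time

def draw(img):
    # drawing here no longer touches the main thread's frame
    img = cv2.rectangle(img, (50, 50), (150, 150), (255, 255, 255), 2)
    cv2.imwrite("second.jpg", img)

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cv2.imwrite("first.jpg", frame)

_thread.start_new_thread(draw, (frame.copy(), ))  # pass a copy, not the original

time.sleep(3)
cv2.imwrite("third.jpg", frame)  # third.jpg now matches first.jpg
cap.release()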

@xiaoxiaoyi689
Author

xiaoxiaoyi689 commented Jan 10, 2024

@leafqycc , thanks for your reply! I have solved the above problem. Now I would like to ask about using multi-threading to detect two videos at the same time with your method. When I add two videos (video 1 and video 2), some frames of video 1 appear in the results of video 2 and some frames of video 2 appear in the results of video 1. I don't know why this happens. Could you help me? Here is my multi-threading code:

modelPath = "./rknnModel/yolov5s_relu_tk2_RK3588_i8.rknn"
TPEs = 3
pool = rknnPoolExecutor(
    rknnModel=modelPath,
    TPEs=TPEs,
    func=myFunc)

path_video1 = '/home/rpdzkj/Yolov7-Deepsort-rknn111/16.00.00.mp4'
path_video2 = '/home/rpdzkj/Yolov7-Deepsort-rknn111/18.00.00.mp4'

tracker_thread1 = threading.Thread(target=run_tracker_in_thread, args=(path_video1, pool, TPEs, 1), daemon=True)
tracker_thread2 = threading.Thread(target=run_tracker_in_thread, args=(path_video2, pool, TPEs, 2), daemon=True)

tracker_thread1.start()
tracker_thread2.start()

tracker_thread1.join()
tracker_thread2.join()
cv2.destroyAllWindows()

and the run_tracker_in_thread method is:

def run_tracker_in_thread(filename, pool, TPEs, file_index):
    cap = cv2.VideoCapture(filename)  # Read the video file
    frame_count = 0
    fps = 0.00

    if cap.isOpened():
        for i in range(TPEs + 1):
            ret, frame = cap.read()
            if not ret:
                cap.release()
                del pool
                exit(-1)
            pool.put(frame)
    while True:
        ret, frame = cap.read()  # Read the video frames
        if not ret:
            break
        t0 = time.time()
        pool.put(frame)
        results, flag = pool.get()

        if flag == False:
            break

        im = results[0]

        t1 = (time.time() - t0)
        frame_count += 1
        fps = (fps + (1. / t1)) / 2
        cv2.imwrite("temp/" + str(file_index) + "/img_" + str(frame_count) + "_" + str(file_index) + ".jpg", im)
        print('Frame->{}, fps->{:.2f}, results have been saved ----- ThreadName {}'.format(frame_count, fps, file_index))

    cap.release()
    pool.release()

@leafqycc
Owner

You need to initialize a separate rknnPoolExecutor object for each of the two threads. In the code above, tracker_thread1 and tracker_thread2 call the same pool for object detection; because the pool's get operations run concurrently, the order in which frames come back gets mixed up, so some frames of video 1 appear in the results of video 2 and some frames of video 2 appear in the results of video 1.
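
A minimal sketch of that fix, reusing the constructor call and the run_tracker_in_thread signature from the code above: give each thread its own rknnPoolExecutor, so results can no longer be interleaved across the two videos.

modelPath = "./rknnModel/yolov5s_relu_tk2_RK3588_i8.rknn"
TPEs = 3

# one executor per video: each thread only ever gets results for its own frames
pool1 = rknnPoolExecutor(rknnModel=modelPath, TPEs=TPEs, func=myFunc)
pool2 = rknnPoolExecutor(rknnModel=modelPath, TPEs=TPEs, func=myFunc)

tracker_thread1 = threading.Thread(target=run_tracker_in_thread, args=(path_video1, pool1, TPEs, 1), daemon=True)
tracker_thread2 = threading.Thread(target=run_tracker_in_thread, args=(path_video2, pool2, TPEs, 2), daemon=True)

tracker_thread1.start()
tracker_thread2.start()
tracker_thread1.join()
tracker_thread2.join()
cv2.destroyAllWindows()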

@xiaoxiaoyi689
Author

@leafqycc , thank you very much. I have solved the problem by initializing a separate rknnPoolExecutor object for each thread.
