[QUESTION] GPU memory continues to increase #250


Open
yimi-code opened this issue May 10, 2025 · 0 comments
Labels
question Further information is requested

Comments

yimi-code commented May 10, 2025

What is your question?

When I use the stack operation, GPU memory usage keeps increasing until the GPU eventually runs out of memory.

import os

import cvcuda
import pynvml
from nvidia import nvimgcodec


class WorkContent:
    def __init__(self):
        device_id = 0
        self.cvcuda_stream = cvcuda.Stream().current
        self.decoder = nvimgcodec.Decoder(device_id=device_id)
        # Largest image dimensions seen so far.
        self.h = 0
        self.w = 0
        self.c = 0

        # Track free GPU memory via NVML.
        pynvml.nvmlInit()
        self.handle = pynvml.nvmlDeviceGetHandleByIndex(device_id)
        self.gpu_free = 0

    def __call__(self, img):
        with open(img, "rb") as f:
            img_data = f.read()

        # Decode on the CV-CUDA stream and promote grayscale images to RGB.
        image = self.decoder.decode(img_data, cuda_stream=self.cvcuda_stream)
        if len(image.shape) == 2:
            image = cvcuda.cvtcolor(image, cvcuda.ColorConversion.GRAY2RGB)

        # Wrap as an HWC tensor and stack into a batch of one.
        image = cvcuda.as_tensor(image, "HWC")
        images_data = cvcuda.stack([image])

        h, w, c = image.shape
        max_h = max(self.h, h)
        max_w = max(self.w, w)
        max_c = max(self.c, c)

        mem_info = pynvml.nvmlDeviceGetMemoryInfo(self.handle)
        gpu = mem_info.free // 1024**2

        # Log whenever a larger image appears or free memory drops by more than 500 MB.
        if self.h != max_h or self.w != max_w or self.c != max_c or self.gpu_free - gpu > 500:
            self.h = max_h
            self.w = max_w
            self.c = max_c
            self.gpu_free = gpu
            print(f"image h: {self.h}, image w: {self.w}, image c: {self.c}, gpu memory free {self.gpu_free} MB")

        return -1


work_ctx = WorkContent()

for root, _, files in os.walk("coco2017/train2017"):
    for file in files:
        if file.endswith(".jpg"):
            work_ctx(os.path.join(root, file))

[Screenshot: log output showing image sizes up to 640 and free GPU memory steadily decreasing]
As shown in the screenshot, my largest image is only 640 pixels, yet the available GPU memory keeps decreasing.

I found that if I feed the same picture repeatedly, GPU memory does not grow; the problem only appears when different pictures are fed in continuously.
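
To isolate the stack call from decoding, here is a minimal sketch of what I believe reproduces the pattern, assuming PyTorch is available to create CUDA buffers of random shapes (PyTorch is not part of my original script):

# Minimal sketch: feed cvcuda.stack tensors of varying shapes and watch free memory.
# PyTorch is only used here as a convenient way to allocate CUDA buffers (assumption).
import random

import cvcuda
import pynvml
import torch

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

for i in range(10000):
    # A new, randomly sized HWC buffer each iteration, like decoding different images.
    h, w = random.randint(200, 640), random.randint(200, 640)
    buf = torch.empty(h, w, 3, dtype=torch.uint8, device="cuda")
    tensor = cvcuda.as_tensor(buf, "HWC")
    batch = cvcuda.stack([tensor])

    if i % 1000 == 0:
        free_mb = pynvml.nvmlDeviceGetMemoryInfo(handle).free // 1024**2
        print(f"iter {i}: free GPU memory {free_mb} MB")

With a fixed shape instead of random ones I would expect free memory to stay flat, which matches the same-picture observation above.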
