How to Transfer Data Efficiently? #163

Open · mizhitian-xiaomi opened this issue May 28, 2024 · 2 comments

@mizhitian-xiaomi

Both the input image and the model's inference input are numpy.ndarray.
I have to perform many data type conversions when using cvcuda, which is very inefficient.
How can I solve this problem?
(screenshot of the conversion code attached)

mizhitian-xiaomi added the question label May 28, 2024
@bhaefnerNV (Contributor)

Hi @mizhitian-xiaomi,
Thanks for your interest in CV-CUDA!
Could you provide more information about your use case?

Is the workflow to go from a torch tensor -> cvcuda tensor -> numpy array, performing data type conversions while you have the cvcuda tensor, and then repeating this process for multiple tensors?

dsuthar-nvidia self-assigned this Apr 3, 2025
@dsuthar-nvidia (Contributor)

@mizhitian-xiaomi Generally, CV-CUDA recommends using GPU-accelerated data decoding libraries like nvimagecodec or pynvvideocodec. The CV-CUDA samples cover a broad range of pipelines that read videos and images and pass them on to a model for inference.
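
For example, a minimal sketch of such a pipeline (the nvimgcodec calls, file name, and resize step are illustrative assumptions; the hand-offs rely on `__cuda_array_interface__`, so the data stays on the GPU throughout):

```python
# Sketch: decode on the GPU with nvimagecodec, preprocess with CV-CUDA,
# hand the result to the inference framework as a torch tensor. No NumPy hop.
import cvcuda
import torch
from nvidia import nvimgcodec

decoder = nvimgcodec.Decoder()
nv_img = decoder.read("input.jpg")          # decoded directly into GPU memory (illustrative file name)

img = cvcuda.as_tensor(nv_img, "HWC")       # zero-copy wrap of the decoded image
img = cvcuda.resize(img, (224, 224, 3), cvcuda.Interp.LINEAR)  # example preprocessing step

model_input = torch.as_tensor(img.cuda(), device="cuda")  # still on the GPU
# model(model_input.permute(2, 0, 1).unsqueeze(0).float())  # e.g., if the model expects NCHW float
```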

If your input data must come from NumPy and must be converted back to NumPy before being fed into the model, then yes, you would need the data type conversions you mentioned.
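
To make that concrete, here is a minimal sketch of what that round trip can look like, assuming an HWC uint8 image and using resize as a stand-in for the real processing (the layout string, shapes, and op are illustrative; the interop goes through torch and `__cuda_array_interface__`):

```python
# Sketch: NumPy at both ends, with a single upload and a single download.
# Everything between the two copies stays on the GPU.
import numpy as np
import torch
import cvcuda

img_np = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # host image (placeholder data)

img_torch = torch.as_tensor(img_np).cuda()      # one host-to-device copy
img_cv = cvcuda.as_tensor(img_torch, "HWC")     # zero-copy wrap, no extra copy

resized = cvcuda.resize(img_cv, (224, 224, 3), cvcuda.Interp.LINEAR)  # GPU op

out_np = torch.as_tensor(resized.cuda(), device="cuda").cpu().numpy()  # one device-to-host copy
```

The wrapping steps themselves only reinterpret existing GPU memory; the real cost is in the host-to-device and device-to-host copies, so it helps to keep those to one on each side of the pipeline.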
