Feature request: download all features but only load part of the DGS Corpus at a time? #68
Comments
Possibly something like ... would work.
If that works, I wonder if it would be good to:
sigh
Giving it a try on my personal workstation.
...nope, still "Killed".
All of these crash on my workstation, using up all 33 GB:
Maybe one of these tricks can work? https://www.tensorflow.org/guide/data_performance#reducing_memory_footprint
Maybe a custom data generator? https://medium.com/analytics-vidhya/write-your-own-custom-data-generator-for-tensorflow-keras-1252b64e41c3
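A minimal sketch of the custom-generator idea, assuming the features have already been extracted to one file per example on disk (the `.npy` paths and the feature shape below are hypothetical placeholders, not something this repo produces):

```python
import numpy as np
import tensorflow as tf

def make_streaming_dataset(feature_paths):
    """Build a tf.data.Dataset that materializes one example at a time."""
    def generator():
        for path in feature_paths:
            # Only one example is ever resident in memory at once.
            yield np.load(path).astype(np.float32)

    return tf.data.Dataset.from_generator(
        generator,
        output_signature=tf.TensorSpec(shape=(None, None), dtype=tf.float32),
    )

# Hypothetical usage: stream a few examples without touching the rest of the corpus.
# ds = make_streaming_dataset(["example_0.npy", "example_1.npy"]).batch(2)
```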
Maybe something from here? tensorflow/tfjs#7801
Oh hey, this looks relevant, and I see a familiar name: huggingface/datasets#741.
Reading https://www.tensorflow.org/datasets/api_docs/python/tfds/load, maybe we can use the split argument?
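For reference, a hedged sketch of the split-slicing syntax from that page (the sign_language_datasets import is assumed to be what registers dgs_corpus with tfds). Note that this only limits what is loaded, not what gets prepared on disk:

```python
import tensorflow_datasets as tfds
import sign_language_datasets.datasets  # assumed: registers 'dgs_corpus' with tfds

# TFDS slicing syntax: load only the first 2% of the train split.
# The whole dataset is still downloaded and prepared first.
ds = tfds.load("dgs_corpus", split="train[:2%]")
```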
OK, so tf.data.Dataset does support streaming (https://stackoverflow.com/questions/63140320/how-to-use-sequence-generator-on-tf-data-dataset-object-to-fit-partial-data-into), so is the split generation where the issue comes from?
Using the "manually add print statements to the site-packages in my conda env" method I kept following it all the way down and it gets killed in here: and makes it to here, and makes it to here https://github.com/tensorflow/datasets/blob/v4.9.3/tensorflow_datasets/core/split_builder.py#L415 and gets killed around there somewhere |
https://github.com/gruns/icecream might be helpful, note to self.
Or, you know, I could look at one of these: https://stackify.com/top-5-python-memory-profilers/
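A minimal sketch of wrapping the load in one of those profilers (memray, which a later comment uses); the load call inside the tracker is just the code under investigation:

```python
import memray
import tensorflow_datasets as tfds
import sign_language_datasets.datasets  # assumed: registers 'dgs_corpus' with tfds

# Record every allocation made during preparation/loading, then inspect with:
#   memray flamegraph dgs_load.bin
with memray.Tracker("dgs_load.bin"):
    tfds.load("dgs_corpus", split="train[:2%]")
```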
I've done a lot of searching, but tfds just doesn't seem to have a way (that I can find) to stream part of a large dataset.
So I still can't figure out how to (1) download only a portion, or (2) assuming it's all successfully downloaded, load only a portion into memory without the split generation using all available memory.
Lots of comments... in the future it would be helpful if you keep editing the same comment, or a handful of them. When you run `tfds.load('dgs_corpus', split=["train[:2%]"])`, what happens is that first the entire dataset is prepared, and then only 2% of it is loaded. So you will need the exact same disk space. Now, since there are two processes here, (1) preparing the full dataset and (2) loading 2% of it, can you tell where the memory consumption is too high? My suspicion is number 1, but I don't know.
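One way to tell the two steps apart might be to run them separately through the standard TFDS builder API (a sketch, assuming dgs_corpus behaves like any other registered builder):

```python
import tensorflow_datasets as tfds
import sign_language_datasets.datasets  # assumed: registers 'dgs_corpus' with tfds

builder = tfds.builder("dgs_corpus")

# Step 1: download and split generation. If memory blows up here,
# the problem is in preparing/encoding the examples.
builder.download_and_prepare()

# Step 2: read a slice of the prepared data back. If memory blows up here,
# the problem is in loading the generated records.
ds = builder.as_dataset(split="train[:2%]")
```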
Right, sorry, I forget that I'm not the only one getting spammed by all these, apologies.
I'm also suspecting 1, based on the fact that I can sprinkle print statements all the way until https://github.com/tensorflow/datasets/blob/v4.9.3/tensorflow_datasets/core/split_builder.py#L415.
Edit: my big issue is that testing it currently requires me to run it until it crashes, which, if I'm doing it on Google Colab, means that any modifications I've made to the code are then gone. I've got a workstation I can test on locally, but I don't have access to it as conveniently.
Edit 4: OK, trying it in a "high-memory" notebook in Colab Pro, I get this:
Edit 5: full stacktrace on the high-memory notebook:
Edit: and here's how many resources were used:
Update: OK, it seems that encoding just these three files is enough to use many gigabytes.
Well, I am thoroughly stumped. I've narrowed it down to where in tfds the massive memory allocations are happening, but I still don't know why. I just don't understand why it needs nearly 30 GiB to "encode" and then "serialize" the videos.
Here's the memray report. For some reason, reading in the frames results in over 50k allocations, many GiB worth? The serialize_example calls use huge amounts of memory as well. I don't know what to do or how to fix it. I know the UCF101 dataset doesn't have this issue. If anyone has thoughts, let me know. This is all before we even get to the protobuf max-size error.
Update: well, I inspected the tmp folder that gets created, and there are indeed nearly 13k frames extracted from just one of the videos, which ends up being nearly 5 GB, legitimately.
Note: perhaps something in here may be relevant? Like maybe there's a setting in there to not load every file, but just a list of paths?
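As a rough sanity check on those numbers, holding one video's frames uncompressed in memory is already tens of GiB (the resolution below is an assumption for illustration, not a measured value from the corpus):

```python
# Back-of-the-envelope estimate for one video's frames held uncompressed in RAM.
frames = 13_000            # ~13k frames extracted from a single video (see above)
width, height = 1280, 720  # assumed resolution; the actual DGS videos may differ
bytes_per_pixel = 3        # uint8 RGB

total_bytes = frames * width * height * bytes_per_pixel
print(f"{total_bytes / 2**30:.1f} GiB")  # ~33.5 GiB under these assumptions
```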
Ok, so to me it seems like you are not using an appropriate config. The "correct" config here would be:

```python
config = DgsCorpusConfig(name="only-annotations", version="1.0.0", include_video=False, include_pose=None)
dgs_corpus = tfds.load('dgs_corpus', builder_kwargs=dict(config=config))
```

which loads only the annotations. Do you want to download the videos but load them as paths? Do you want to load poses?
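If the answer to "download the videos but load them as paths" is yes, a hedged sketch of what that could look like, assuming DgsCorpusConfig inherits a process_video flag from SignDatasetConfig and that the import paths below are correct (both are assumptions to verify against the repo):

```python
import tensorflow_datasets as tfds
import sign_language_datasets.datasets  # assumed: registers 'dgs_corpus' with tfds
from sign_language_datasets.datasets.dgs_corpus import DgsCorpusConfig  # assumed import path

config = DgsCorpusConfig(
    name="videos-as-paths",
    version="1.0.0",
    include_video=True,       # download the videos
    process_video=False,      # assumption: expose video paths instead of decoded frames
    include_pose="holistic",  # or "openpose"/None, depending on which poses are wanted
)
dgs_corpus = tfds.load("dgs_corpus", builder_kwargs=dict(config=config))
```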
When attempting to load the DGS Corpus's default configuration on either my own workstation or in Colab, I run out of memory and the process crashes.
Here are some screenshots:


This Colab notebook, for example, will crash given time: https://colab.research.google.com/drive/1_vWFvWo0ZMg5_6AFU6Ln2LPHwm9TW_Rz?usp=sharing. Is there a method such that all the features (video, pose, and gloss) can be downloaded, yet only some portion is loaded into memory at a time?
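For completeness, a minimal sketch of the default load described above, i.e. the call that eventually exhausts memory (assuming the sign_language_datasets import is what registers the dataset with tfds):

```python
import tensorflow_datasets as tfds
import sign_language_datasets.datasets  # assumed: registers 'dgs_corpus' with tfds

# Default config: videos, poses, and annotations are all decoded into the examples.
dgs_corpus = tfds.load("dgs_corpus")
```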