RAM issue while loading several PCL in threads #7222

Open
3 tasks done
avizipi opened this issue Apr 7, 2025 · 0 comments
Labels
bug Not a build issue, this is likely a bug.

Comments


avizipi commented Apr 7, 2025

Checklist

Describe the issue

I have a system that captures images every N seconds and updates both the Point Cloud (PCL) and Mesh objects. It’s a multi-threaded system. A new PCL is built from each image, and two mesh objects are merged in a daemon thread.

I’m encountering an issue with RAM not being properly freed after several iterations of this process.

To investigate further, I created a minimal Python script that repeatedly loads a PCL object from a file in background threads while monitoring RAM usage with the psutil package. I was able to reproduce the problem with this script.

I also came across several related questions on GitHub discussing similar issues, including potential problems with using threads while loading DLL packages (e.g., ref1, ref2).

Questions:

Can you please confirm whether this is a known issue?

Do you have any suggested workarounds?
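In case it helps, one generic mitigation I have seen for allocator-retention problems (not confirmed as the right fix for Open3D; the `load_isolated` and `_worker` names below are my own, and `bytes(n)` merely stands in for `o3d.io.read_point_cloud`) is to do the heavy load in a short-lived child process, so that all of its memory is returned to the OS when the child exits:

```python
import multiprocessing as mp

def _worker(n, queue):
    # Stand-in for the real load (e.g. o3d.io.read_point_cloud):
    # allocate a large buffer, push back only what is needed, then exit.
    data = bytes(n)
    queue.put(len(data))

def load_isolated(n):
    # Every byte the child allocated is reclaimed by the OS when the
    # process exits, regardless of allocator behavior inside it.
    queue = mp.Queue()
    p = mp.Process(target=_worker, args=(n, queue))
    p.start()
    result = queue.get()
    p.join()
    return result
```

The trade-off is per-load process-spawn and serialization overhead, so this only pays off when the loads are large and infrequent.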

Steps to reproduce the bug

import gc
import os
import threading
import time

import psutil
import open3d as o3d

def monitor_ram_usage(interval=5.0, duration=100):
    process = psutil.Process(os.getpid())
    start_time = time.time()

    while True:
        # Get memory info for the process
        mem_info = process.memory_info()

        # Convert to MB for better readability
        rss_mb = mem_info.rss / (1024 * 1024)
        vms_mb = mem_info.vms / (1024 * 1024)

        print(f"RAM Usage - RSS: {rss_mb:.2f} MB, VMS: {vms_mb:.2f} MB")

        # Check if monitoring duration has been reached
        if duration and (time.time() - start_time >= duration):
            print("Monitoring complete.")
            break

        time.sleep(interval)
pcls = []
lock = threading.Lock()

def load_pcl():
    ply_point_cloud = o3d.data.PLYPointCloud()
    with lock:
        pcls.append(o3d.io.read_point_cloud(ply_point_cloud.path))

monitor_thread = threading.Thread(
    target=monitor_ram_usage,
    kwargs={"interval": 1, "duration": 501},
    daemon=True  # Thread will terminate when main program exits
)

monitor_thread.start()
for i in range(500):
    print("loading_another_pcl")
    threading.Thread(target=load_pcl, daemon=True).start()
    time.sleep(1)
    with lock:
        print(f"number of pcl loaded {len(pcls)}")
    if i % 50 == 1:
        with lock:
            del pcls
            pcls = []
        gc.collect()

Error message

No response

Expected behavior

I start with a RAM usage of around 325 MB. During the first 50 iterations, each PCL object adds approximately 14 MB. After 50 iterations, the memory usage reaches about 1000 MB. Even after explicitly triggering garbage collection, the RAM usage only drops to 640 MB - not back to the original 325 MB - indicating a potential memory leak or lingering memory usage.
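For what it's worth, RSS often stays above the baseline even after gc.collect() because glibc keeps freed blocks in its arenas instead of returning the pages to the OS. On Linux, malloc_trim(0) (called here via ctypes; the `release_freed_memory` wrapper name is mine) can hand some of those pages back, which may help distinguish allocator retention from a genuine leak:

```python
import ctypes
import gc

def release_freed_memory():
    # Collect Python garbage first so freed blocks reach the C allocator.
    gc.collect()
    try:
        # glibc-specific: ask malloc to return free arena pages to the OS.
        # The argument is the amount of slack to keep; 0 trims maximally.
        ctypes.CDLL("libc.so.6").malloc_trim(0)
    except OSError:
        pass  # not glibc (e.g. macOS/Windows); nothing to trim this way
```

If RSS drops after calling this, the memory was retained by the allocator rather than leaked by Open3D.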

Open3D, Python and System information

- Operating system: Ubuntu 20.04
- Python version: Python 3.11
- Open3D version: 0.19.0
- System architecture: x86
- Is this a remote workstation?: no
- How did you install Open3D?: pip

Additional information

No response

@avizipi avizipi added the bug Not a build issue, this is likely a bug. label Apr 7, 2025
@avizipi avizipi changed the title Summarize the bug (e.g., "Segmentation Fault for Colored Point Cloud Registration") RAM issue while loading several PCL in threads Apr 7, 2025