Unit test to measure the accuracy of correspondence generation #207

Open
@mvish7

Description

Hello,

I have run this work both with Docker and without it (to debug and to understand the code).

My question is as follows:

From what I understand, the quantitative evaluation treats the generated correspondences as ground-truth labels.
In the YouTube video of the talk about this paper, it was mentioned that some manual labeling was done for the cross-instance and cross-configuration object categories.
For the single-object-within-scene category, where correspondences are generated at runtime, do you have a unit test to verify the accuracy of the generated correspondences?
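For context, here is a minimal sketch of the kind of check I had in mind, assuming the correspondences come from depth-based reprojection with known camera poses (the helper name and intrinsics here are hypothetical, not the repo's actual API). On synthetic data where the true correspondence is known in closed form, the generator's output can be asserted exactly:

```python
import numpy as np

def reproject_pixel(uv, depth, K, T_a_to_b):
    """Reproject a pixel from camera A into camera B.

    uv:       (u, v) pixel in image A
    depth:    depth of that pixel in A's frame (metres)
    K:        3x3 camera intrinsics (assumed shared by both views)
    T_a_to_b: 4x4 rigid transform from A's frame to B's frame
    """
    u, v = uv
    # Back-project the pixel to a 3D point in camera A's frame
    p_a = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Move the point into camera B's frame
    p_b = (T_a_to_b @ np.append(p_a, 1.0))[:3]
    # Project into image B
    uvw = K @ p_b
    return uvw[:2] / uvw[2]

# Illustrative intrinsics (fx = fy = 500, principal point at 320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Check 1: identity transform -- every pixel must correspond to itself
assert np.allclose(reproject_pixel((100, 50), 2.0, K, np.eye(4)), (100, 50))

# Check 2: pure x-translation of 0.1 m at depth 2 m shifts u by
# fx * t_x / z = 500 * 0.1 / 2 = 25 pixels, and leaves v unchanged
T = np.eye(4)
T[0, 3] = 0.1
assert np.allclose(reproject_pixel((100, 50), 2.0, K, T), (125, 50))
```

Something along these lines, run against the actual correspondence generator instead of this toy helper, would give a known-answer unit test for the single-object-within-scene case.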
