Don't know how to get the average latency from metric #3310

Open
gavinlichn opened this issue May 26, 2025 · 1 comment
Assignees
Labels
bug Something isn't working

Comments

@gavinlichn

gavinlichn commented May 26, 2025

Describe the bug
I need to get the average latency for each rerank request, but ovms_request_time_us_sum is always 0.
I want to clarify which metric I can use, or how to calculate it.

At first I considered ovms_request_time_us_sum / ovms_request_time_us_count, but found that ovms_request_time_us_sum is always 0.

The non-zero metrics are listed below. The largest one is ovms_graph_processing_time_us_sum; is that the total latency, including both the rerank and the tokenizer?
I'm confused about how to calculate the average rerank latency.

ovms_inference_time_us_count{name="BAAI/bge-reranker-base_rerank_model",version="1"}   20
ovms_inference_time_us_sum{name="BAAI/bge-reranker-base_rerank_model",version="1"}   2794429
ovms_inference_time_us_count{name="BAAI/bge-reranker-base_tokenizer_model",version="1"}   40
ovms_inference_time_us_sum{name="BAAI/bge-reranker-base_tokenizer_model",version="1"}   72140
ovms_wait_for_infer_req_time_us_count{name="BAAI/bge-reranker-base_rerank_model",version="1"}   20
ovms_wait_for_infer_req_time_us_sum{name="BAAI/bge-reranker-base_rerank_model",version="1"}   32
ovms_wait_for_infer_req_time_us_count{name="BAAI/bge-reranker-base_tokenizer_model",version="1"}   40
ovms_wait_for_infer_req_time_us_sum{name="BAAI/bge-reranker-base_tokenizer_model",version="1"}   32
ovms_graph_processing_time_us_count{method="Unary",name="BAAI/bge-reranker-base"}   20
ovms_graph_processing_time_us_sum{method="Unary",name="BAAI/bge-reranker-base"}   2890231
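As a sanity check, per-request averages can be derived from the counter pairs above as _us_sum / _us_count (values in microseconds, copied from the listing; this is only illustrative arithmetic, not an OVMS API):

```python
# Average latency = <metric>_us_sum / <metric>_us_count,
# using the counter values pasted above (microseconds).
counters = {
    "rerank inference":    (2794429, 20),
    "tokenizer inference": (72140, 40),
    "graph processing":    (2890231, 20),
}
for name, (total_us, count) in counters.items():
    print(f"{name}: {total_us / count / 1000:.2f} ms average")
```

This gives roughly 139.72 ms, 1.80 ms, and 144.51 ms respectively.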

To Reproduce

  1. Deploy OVMS with BAAI/bge-reranker-base
  2. Enable metrics with the parameter "--metrics_enable"
  3. Get metrics with curl http://host_ip:port/metrics
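The /metrics endpoint returns plain Prometheus text exposition, so the counters can be parsed with a few lines of code. A minimal sketch (the sample lines are copied from the values above; labels are kept as part of the key):

```python
def parse_metrics(text):
    """Parse Prometheus text exposition into {metric{labels}: value}, skipping comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, raw = line.rsplit(None, 1)  # last whitespace-separated token is the value
        values[key] = float(raw)
    return values

sample = (
    'ovms_inference_time_us_sum{name="BAAI/bge-reranker-base_rerank_model",version="1"}   2794429\n'
    'ovms_inference_time_us_count{name="BAAI/bge-reranker-base_rerank_model",version="1"}   20\n'
)
parsed = parse_metrics(sample)
print(parsed)
```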

Expected behavior
A clear and concise description of what you expected to happen.

Logs
Logs from OVMS, ideally with --log_level DEBUG. Logs from client.

Configuration

  1. OVMS version
  2. OVMS config.json file
  3. CPU, accelerator's versions if applicable
  4. Model repository directory structure
  5. Model or publicly available similar model that reproduces the issue

Additional context
Add any other context about the problem here.

@gavinlichn gavinlichn added the bug Something isn't working label May 26, 2025
@dkalinowski
Collaborator

Hello @gavinlichn

Please take a look at this documentation page describing metrics for graphs: https://docs.openvino.ai/2025/model-server/ovms_docs_metrics.html#metrics-implementation-for-mediapipe-graphs

I think you are looking for ovms_graph_processing_time_us:

Tracks duration of successfully started mediapipe graphs in us. It can represent pipeline processing time for unary calls or the session length for streamed requests.

Please let me know if you find this helpful.
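One caveat worth adding: _sum and _count are cumulative counters, so dividing them gives the lifetime average since the server started. For the average over a recent window, scrape twice and divide the deltas (the second scrape's numbers below are hypothetical, for illustration only):

```python
# Average over an interval = (sum2 - sum1) / (count2 - count1)
sum1, count1 = 2890231, 20   # first scrape of ovms_graph_processing_time_us
sum2, count2 = 3450000, 24   # second scrape (hypothetical values)
avg_us = (sum2 - sum1) / (count2 - count1)
print(f"average graph processing time over the interval: {avg_us / 1000:.1f} ms")
```

With Prometheus scraping the endpoint, the equivalent query would be rate(ovms_graph_processing_time_us_sum[5m]) / rate(ovms_graph_processing_time_us_count[5m]).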
