AI Service of Wren AI

Concepts

Please read the Wren AI documentation to understand the concepts of Wren AI Service.

Setup for Local Development

Prerequisites

  1. Python: Install Python 3.12.*

    • Recommended: Use pyenv to manage Python versions
  2. Poetry: Install Poetry 1.8.3

    curl -sSL https://install.python-poetry.org | python3 - --version 1.8.3
  3. Just: Install the Just command runner (version 1.36 or higher); example install commands follow this list
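
If you use pyenv for Python and a package manager for Just (both optional), the commands below are one way to cover prerequisites 1 and 3. They are an illustrative sketch, not the only supported path; any 3.12.x release and any Just install method at version 1.36 or higher will do.

    # Python via pyenv (any 3.12.x patch release works)
    pyenv install 3.12.4
    pyenv local 3.12.4      # pin the version inside the wren-ai-service folder

    # Just via a package manager (e.g. Homebrew on macOS) or cargo
    brew install just       # or: cargo install just
    just --version          # confirm it reports 1.36 or higher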

Step-by-Step Setup

  1. Install Dependencies:

    poetry install
  2. Generate Configuration Files:

    just init

    This creates both .env.dev and config.yaml. Use just init --non-dev to generate only config.yaml.

  3. Configure Environment:

    • Edit .env.dev to set environment variables (an illustrative sketch follows this list)
    • Modify config.yaml to configure components, pipelines, and other settings
    • Refer to AI Service Configuration for detailed setup instructions
  4. Set Up Development Environment (optional):

    • Install pre-commit hooks:

      poetry run pre-commit install
    • Run initial pre-commit checks:

      poetry run pre-commit run --all-files
  5. Run Tests (optional):

    just test
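
As mentioned in step 3, .env.dev is a plain KEY=VALUE environment file. The sketch below is illustrative only: the key names are hypothetical placeholders, not the actual variables, so keep the keys that just init generates and treat the AI Service Configuration docs as the source of truth (the same applies to the structure of config.yaml).

    # .env.dev -- hypothetical keys shown for illustration only
    EXAMPLE_LLM_API_KEY=sk-...     # credentials for your chosen LLM provider
    EXAMPLE_SERVICE_PORT=5556      # port the service listens on
    EXAMPLE_LOGGING_LEVEL=DEBUG    # verbose logging for local development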

Starting the Service

  1. Start Required Containers:

    just up
  2. Launch the AI Service:

    just start
  3. Access the Service: once just start is running, the service is reachable locally; the exact host and port come from your configuration (a smoke-test sketch follows this list)

  4. Stop the Service: When finished, stop the containers:

    just down
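
Once just start is running, a quick smoke test from another terminal can confirm the service is listening. The port below is a placeholder; substitute whatever port your .env.dev / config.yaml configure, and note that the routes actually exposed depend on that configuration.

    # placeholder port -- use the one from your own configuration
    curl -i http://localhost:5556/

Any HTTP response, even a 404, shows the process is up and accepting connections.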

This setup ensures a consistent development environment and helps maintain code quality through pre-commit hooks and tests. Follow these steps to get started with local development of the Wren AI Service.

Others

Pipeline Evaluation

For a comprehensive understanding of how to evaluate the pipelines, please refer to the evaluation framework. It provides detailed guidelines on the evaluation process, including how to set up and run evaluations, interpret results, and use the evaluation metrics effectively, so that evaluations are conducted accurately and their results are meaningful.

Estimate the Speed of the Pipeline (may be outdated)

  • to run the load test (a condensed command sequence follows this list):
    • set DATASET_NAME in .env.dev
    • adjust the test config if needed
      • adjust the user count in tests/locust/config_users.json
    • in the wren-ai-service folder, run just up to start the docker containers
    • in the wren-ai-service folder, run just start to start the ai service
    • run just load-test
    • check the reports in the outputs/locust folder; there are three files named locust_report_{test_timestamp}:
      • .json: test report in json format, including info like the llm provider and version
      • .html: test report in html format, showing tables and charts
      • .log: test log
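
Put together, a run looks roughly like this; it assumes each long-lived command gets its own terminal, all started from the wren-ai-service folder, and simply strings together the steps listed above.

    # terminal 1
    just up          # start the required docker containers
    # terminal 2
    just start       # launch the ai service (stays in the foreground)
    # terminal 3
    just load-test   # run the locust load test
    # afterwards, inspect the generated .json / .html / .log reports in the locust output folder noted above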

Contributing

Thank you for investing your time in contributing to our project! Please read the contributing guidelines for more information!