/modelSelect

Select the optimal LLM to handle your query based on Not Diamond's routing algorithm.

This endpoint analyzes your messages and returns the best-suited option from the models you specify. The router considers factors like query complexity, model capabilities, cost, and latency based on your preferences.

Key Features:

  • Intelligent routing across multiple LLM providers
  • Support for custom routers trained on your evaluation data
  • Optional cost/latency optimization
  • Function calling support for compatible models

Usage:

  1. Pass your messages in OpenAI format (array of objects with 'role' and 'content')
  2. Specify which LLM providers you want to route between
  3. Optionally provide a preference_id to use a custom router that you've trained
  4. Receive a recommended model and session_id
  5. Use the session_id to submit feedback and improve routing
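The steps above can be sketched as a minimal Python client. The request URL, field names (`llm_providers`, `preference_id`), and header format below are assumptions for illustration; consult the Not Diamond API reference for the exact schema.

```python
import json

def build_model_select_payload(messages, llm_providers, preference_id=None):
    """Assemble a modelSelect request body: OpenAI-format messages plus
    the list of candidate models to route between. Field names are
    assumed, not confirmed by this page."""
    payload = {
        "messages": messages,            # [{"role": ..., "content": ...}, ...]
        "llm_providers": llm_providers,  # candidate models to route between
    }
    if preference_id is not None:        # optional: use a custom trained router
        payload["preference_id"] = preference_id
    return payload

payload = build_model_select_payload(
    messages=[{"role": "user", "content": "Summarize this contract."}],
    llm_providers=[
        {"provider": "openai", "model": "gpt-4o"},
        {"provider": "anthropic", "model": "claude-3-5-sonnet-20240620"},
    ],
)

# To send the request (hypothetical URL and headers, for illustration only):
# import requests
# resp = requests.post(
#     "https://api.notdiamond.ai/v2/modelRouter/modelSelect",
#     headers={"Authorization": "Bearer <YOUR_API_KEY>",
#              "Content-Type": "application/json"},
#     data=json.dumps(payload),
# )
# The JSON response includes the recommended model and a session_id,
# which you can later submit with feedback to improve routing.
print(json.dumps(payload, indent=2))
```

Keeping payload construction separate from the HTTP call makes it easy to log or validate the request body before spending an API call.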

Related Endpoints:

  • POST /v2/preferences/userPreferenceCreate - Create a preference ID for personalized routing
  • POST /v2/pzn/trainCustomRouter - Train a custom router on your evaluation data

Given an array of messages and a list of LLMs you want to route between, this endpoint returns a label identifying which LLM you should call.
