/adapt

Adapt your prompt from one LLM to work optimally across different target LLMs.

This endpoint automatically optimizes your prompt (system prompt + user message template) to improve accuracy on your use case across various models. Each model has unique characteristics, and what works well for GPT-5 might not work as well for Claude or Gemini.

How Prompt Adaptation Works (a request sketch follows these steps):

  1. You provide your current prompt and optionally your current origin model
  2. You specify the target models you want to adapt your prompt to
  3. You provide evaluation examples (golden records) with expected answers
  4. The system runs optimization to find the best prompt for each target model
  5. You receive adapted prompts that perform well on your target models
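
A minimal sketch of steps 1-3 as an HTTP request in Python. Only the endpoint path, the train_goldens field, and the metric name appear on this page; the base URL, the remaining field names (system_prompt, user_message_template, origin_model, target_models, metric), the model identifiers, and the response key are assumptions for illustration, so confirm them against the API reference.

```python
import os
import requests

# Sketch only: the base URL, field names, and model identifiers below are
# assumptions; confirm the exact request schema in the API reference.
BASE_URL = "https://api.notdiamond.ai"          # assumed base URL
API_KEY = os.environ["NOTDIAMOND_API_KEY"]      # Bearer credential

payload = {
    # Step 1: your current prompt and, optionally, its origin model.
    "system_prompt": "You are a support assistant. Answer concisely.",
    "user_message_template": "Customer question: {question}",
    "origin_model": "openai/gpt-5",              # hypothetical identifier
    # Step 2: the models you want the prompt adapted to.
    "target_models": ["anthropic/claude-sonnet-4", "google/gemini-2.5-pro"],
    # Step 3: golden records with expected answers (25+ recommended).
    "train_goldens": [
        {"question": "How do I reset my password?",
         "answer": "Use the 'Forgot password' link on the sign-in page."},
        # ... more examples ...
    ],
    # Standard evaluation metric (see Evaluation Metrics below).
    "metric": "LLMaaJ:Sem_Sim_1",
}

response = requests.post(
    f"{BASE_URL}/v2/prompt/adapt",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
# Assumes the run id is returned under "adaptation_run_id".
adaptation_run_id = response.json()["adaptation_run_id"]
print("Submitted adaptation run:", adaptation_run_id)
```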

Evaluation Metrics: Choose a standard metric or provide a custom evaluation:

  • Standard metrics: LLMaaJ:Sem_Sim_1 (semantic similarity), JSON_Match
  • Custom evaluation: Provide evaluation_config with your own LLM judge, prompt, and cutoff (a sketch follows this list)
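
If you go the custom route, this page only states that evaluation_config carries your own LLM judge, judge prompt, and cutoff; the key names and judge model in the sketch below are hypothetical placeholders, not the documented schema.

```python
# Hypothetical shape of a custom evaluation_config; key names are assumptions.
evaluation_config = {
    "judge_model": "openai/gpt-4.1",  # LLM that scores each candidate response
    "judge_prompt": (
        "Score the candidate answer against the expected answer from 0 to 1.\n"
        "Expected: {answer}\nCandidate: {response}\nReturn only the number."
    ),
    "cutoff": 0.8,  # minimum judge score to count a response as correct
}

# Sent in the /v2/prompt/adapt payload in place of a standard metric, e.g.:
# payload["evaluation_config"] = evaluation_config
```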

Dataset Requirements:

  • Minimum 25 examples in train_goldens (more examples = better adaptation)
  • Prototype mode: Set prototype_mode: true to use as few as 3 examples for prototyping
    • Recommended when you don't yet have enough data and want to build a proof of concept
    • Note: Performance may be degraded compared to standard mode (25+ examples)
    • Trade-off: Faster iteration with less data vs. potentially less generalizability
  • Each example must have fields matching your template placeholders (see the example records after this list)
  • Supervised evaluation requires an 'answer' field in each golden record
  • Unsupervised evaluation can work without answers
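
For example, with a user message template like "Customer question: {question}", each golden record needs a matching question field, plus an answer field for supervised evaluation. The field names and contents below are illustrative:

```python
# Keys must match the placeholders in your user message template; 'answer' is
# required for supervised evaluation and can be omitted for unsupervised runs.
train_goldens = [
    {
        "question": "How do I reset my password?",
        "answer": "Use the 'Forgot password' link on the sign-in page.",
    },
    {
        "question": "Can I change my billing date?",
        "answer": "Yes, you can pick a new billing date under Settings > Billing.",
    },
    # ... 25+ records in standard mode, or as few as 3 with prototype_mode: true
]
```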

Training Time:

  • Processing is asynchronous and typically takes 10-30 minutes
  • Time depends on the number of target models, dataset size, and model availability
  • Use the returned adaptation_run_id to check status and retrieve results (a polling sketch follows)
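
A polling sketch in Python. The endpoint path comes from the Example Workflow below; the response shape (a "status" field with terminal values such as "completed" or "failed") is an assumption to verify against the API reference.

```python
import time
import requests

def wait_for_adaptation(adaptation_run_id: str, api_key: str,
                        base_url: str = "https://api.notdiamond.ai",
                        poll_seconds: int = 60) -> dict:
    """Poll /v2/prompt/adaptStatus/{id} until the run finishes (sketch).

    Assumes the response JSON has a "status" field with terminal values like
    "completed" or "failed"; confirm the real values in the API reference.
    """
    headers = {"Authorization": f"Bearer {api_key}"}
    while True:
        resp = requests.get(
            f"{base_url}/v2/prompt/adaptStatus/{adaptation_run_id}",
            headers=headers,
        )
        resp.raise_for_status()
        body = resp.json()
        if body.get("status") in ("completed", "failed"):
            return body
        time.sleep(poll_seconds)  # runs typically take 10-30 minutes
```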

Example Workflow:

1. POST /v2/prompt/adapt - Submit adaptation request
2. GET /v2/prompt/adaptStatus/{id} - Poll status until completed
3. GET /v2/prompt/adaptResults/{id} - Retrieve optimized prompts
4. Use the optimized prompts in production with your target models (results retrieval sketched below)
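
Once the status shows the run has completed, retrieve the adapted prompts and move them into your production calls. Only the endpoint path is taken from this page; the response keys (adapted_prompts, system_prompt, user_message_template) and the base URL are hypothetical.

```python
import requests

BASE_URL = "https://api.notdiamond.ai"   # assumed base URL
API_KEY = "nd-..."                       # your Bearer credential
adaptation_run_id = "run_123"            # returned by POST /v2/prompt/adapt

resp = requests.get(
    f"{BASE_URL}/v2/prompt/adaptResults/{adaptation_run_id}",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
results = resp.json()

# Hypothetical response shape: one adapted prompt per target model.
for model, adapted in results.get("adapted_prompts", {}).items():
    print(model)
    print("  system prompt:   ", adapted["system_prompt"])
    print("  message template:", adapted["user_message_template"])

# In production, format the adapted template with your runtime inputs and send
# it to the matching target model through that provider's own API or SDK.
```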

Related Documentation: See https://docs.notdiamond.ai/docs/adapting-prompts-to-new-models for a detailed guide.
