Model Providers API
The Model Providers API exposes the universal model system's reference data: the supported AI providers and the model definitions each one offers.
Endpoints
| Operation | Method | Endpoint |
|---|---|---|
| List Providers | GET | /api/model-providers |
| List Definitions | GET | /api/model-definitions |
List Providers
Returns all supported AI model providers. Providers are seeded reference data and cannot be created or modified via API.
GET /api/model-providers?includeModels=true
Query Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| includeModels | boolean | false | Include each provider's model definitions in the response |
Response
{
"data": [
{
"id": "550e8400-e29b-41d4-a716-446655440010",
"slug": "openai",
"name": "OpenAI",
"authType": "api_key",
"apiFormat": "openai",
"baseUrl": "https://api.openai.com/v1",
"createdAt": "2026-01-01T00:00:00Z",
"modelDefinitions": [
{
"id": "def-uuid-1",
"slug": "gpt-4o",
"displayName": "GPT-4o",
"modelId": "gpt-4o",
"category": "chat",
"tier": "premium",
"contextWindow": 128000,
"maxOutputTokens": 16384,
"inputPricePerMillion": 2.5,
"outputPricePerMillion": 10.0,
"supportsVision": true,
"supportsFunctionCalling": true,
"supportsStreaming": true,
"isActive": true,
"isDeprecated": false
}
]
},
{
"id": "550e8400-e29b-41d4-a716-446655440011",
"slug": "anthropic",
"name": "Anthropic",
"authType": "api_key",
"apiFormat": "anthropic",
"baseUrl": "https://api.anthropic.com/v1"
},
{
"id": "550e8400-e29b-41d4-a716-446655440012",
"slug": "bedrock",
"name": "AWS Bedrock",
"authType": "iam",
"apiFormat": "bedrock",
"baseUrl": null
},
{
"id": "550e8400-e29b-41d4-a716-446655440013",
"slug": "google",
"name": "Google (Gemini)",
"authType": "api_key",
"apiFormat": "google",
"baseUrl": "https://generativelanguage.googleapis.com/v1beta"
},
{
"id": "550e8400-e29b-41d4-a716-446655440014",
"slug": "xai",
"name": "xAI (Grok)",
"authType": "api_key",
"apiFormat": "openai",
"baseUrl": "https://api.x.ai/v1"
}
]
}
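Providers can share a wire format: in the sample response above, xAI reports apiFormat "openai", so a client that speaks the OpenAI format can serve both. A minimal sketch of grouping providers by apiFormat so one adapter covers each format (the helper is pure and operates on the response's data array):

```javascript
// Group provider slugs by wire format so one client adapter can serve
// every provider that speaks that format (e.g., "openai" covers both
// OpenAI and xAI in the sample response above).
function groupByApiFormat(providers) {
  const groups = {};
  for (const p of providers) {
    (groups[p.apiFormat] ??= []).push(p.slug);
  }
  return groups;
}
```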
List Model Definitions
Returns the specific models available from each provider. These are seeded reference data that include pricing and capability information.
GET /api/model-definitions?providerSlug=openai&tier=premium&activeOnly=true
Query Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| providerId | UUID | — | Filter by provider ID |
| providerSlug | string | — | Filter by provider slug (e.g., openai, anthropic) |
| tier | string | — | Filter by tier: standard, premium, or economy |
| category | string | — | Filter by category: chat, embedding, or image |
| activeOnly | boolean | true | Only return active, non-deprecated models |
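The filters above can be combined freely. A small sketch that builds the query string from the documented parameters only, skipping anything undefined so the server applies its defaults:

```javascript
// Build a /api/model-definitions URL from the documented query parameters.
// Undefined filters are omitted so the server-side defaults apply.
function buildDefinitionsUrl(filters = {}) {
  const params = new URLSearchParams();
  for (const key of ['providerId', 'providerSlug', 'tier', 'category', 'activeOnly']) {
    if (filters[key] !== undefined) params.set(key, String(filters[key]));
  }
  const query = params.toString();
  return '/api/model-definitions' + (query ? `?${query}` : '');
}
```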
Response
{
"data": [
{
"id": "def-uuid-1",
"providerId": "provider-uuid",
"slug": "gpt-4o",
"displayName": "GPT-4o",
"modelId": "gpt-4o",
"category": "chat",
"tier": "premium",
"contextWindow": 128000,
"maxOutputTokens": 16384,
"inputPricePerMillion": 2.5,
"outputPricePerMillion": 10.0,
"cachedInputPricePerMillion": 1.25,
"supportsVision": true,
"supportsFunctionCalling": true,
"supportsStreaming": true,
"isActive": true,
"isDeprecated": false,
"provider": {
"id": "provider-uuid",
"slug": "openai",
"name": "OpenAI",
"authType": "api_key",
"apiFormat": "openai"
}
},
{
"id": "def-uuid-2",
"providerId": "provider-uuid",
"slug": "gpt-4o-mini",
"displayName": "GPT-4o mini",
"modelId": "gpt-4o-mini",
"category": "chat",
"tier": "economy",
"contextWindow": 128000,
"maxOutputTokens": 16384,
"inputPricePerMillion": 0.15,
"outputPricePerMillion": 0.6,
"supportsVision": true,
"supportsFunctionCalling": true,
"supportsStreaming": true,
"isActive": true,
"isDeprecated": false,
"provider": {
"id": "provider-uuid",
"slug": "openai",
"name": "OpenAI",
"authType": "api_key",
"apiFormat": "openai"
}
}
]
}
Model Definition Fields
| Field | Type | Description |
|---|---|---|
| slug | string | URL-safe unique identifier |
| displayName | string | Human-readable name |
| modelId | string | Provider-specific identifier sent in API calls |
| category | string | chat, embedding, image, code, multimodal |
| tier | string | standard, premium, economy |
| contextWindow | integer | Maximum token capacity |
| maxOutputTokens | integer | Maximum response tokens |
| inputPricePerMillion | number | Cost per million input tokens (USD) |
| outputPricePerMillion | number | Cost per million output tokens (USD) |
| cachedInputPricePerMillion | number | Cost per million cached input tokens (USD), when the model supports caching |
| supportsVision | boolean | Can process images |
| supportsFunctionCalling | boolean | Supports tool/function calling |
| supportsStreaming | boolean | Supports streaming responses |
| isActive | boolean | Model is currently available |
| isDeprecated | boolean | Model is deprecated and may be removed |
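Because prices are quoted per million tokens, the cost of a single request is a straightforward proration. A sketch (you supply the token counts; the field names match the table above):

```javascript
// Estimate the USD cost of one request from a model definition's
// per-million-token prices. Cached input tokens are billed at
// cachedInputPricePerMillion when the definition provides it.
function estimateCostUsd(def, { inputTokens, outputTokens, cachedInputTokens = 0 }) {
  const freshInputTokens = inputTokens - cachedInputTokens;
  let cost =
    (freshInputTokens / 1_000_000) * def.inputPricePerMillion +
    (outputTokens / 1_000_000) * def.outputPricePerMillion;
  if (cachedInputTokens > 0) {
    const cachedPrice = def.cachedInputPricePerMillion ?? def.inputPricePerMillion;
    cost += (cachedInputTokens / 1_000_000) * cachedPrice;
  }
  return cost;
}
```

For the GPT-4o definition shown earlier, 10,000 input tokens and 2,000 output tokens come to 0.025 + 0.020 = 0.045 USD.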
curl Examples
List all providers with their models:
curl -X GET "https://console.rocketwavelabs.io/api/model-providers?includeModels=true" \
-H "Authorization: Bearer YOUR_SERVICE_TOKEN"
List OpenAI chat models:
curl -X GET "https://console.rocketwavelabs.io/api/model-definitions?providerSlug=openai&category=chat" \
-H "Authorization: Bearer YOUR_SERVICE_TOKEN"
JavaScript Examples
// List all providers (the Authorization header matches the curl examples above)
const response = await fetch('/api/model-providers?includeModels=true', {
  headers: { Authorization: 'Bearer YOUR_SERVICE_TOKEN' },
});
const { data: providers } = await response.json();
// Find vision-capable models
const visionModels = providers
.flatMap(p => p.modelDefinitions || [])
.filter(m => m.supportsVision);
console.log('Vision models:', visionModels.map(m => m.displayName));
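The pricing and status flags also support simple model selection. A sketch that picks the cheapest active chat model by combined input and output price (adjust the scoring to your own traffic mix):

```javascript
// Pick the cheapest active, non-deprecated chat model by combined
// input + output price per million tokens. Returns null if none match.
function cheapestChatModel(definitions) {
  const price = (m) => m.inputPricePerMillion + m.outputPricePerMillion;
  return definitions
    .filter((m) => m.category === 'chat' && m.isActive && !m.isDeprecated)
    .reduce((best, m) => (best === null || price(m) < price(best) ? m : best), null);
}
```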
Related Topics
- Model Credentials API — Manage authentication credentials
- Model Configurations API — Create ready-to-use model configs
- Settings - Universal Model System — UI documentation
- OpenAPI Spec — Full OpenAPI specification