
Model Entity

The Model entity defines AI/LLM service configurations for use in Prompt entities. Models store authentication credentials and endpoint URLs for calling language model services.

Overview

The platform supports two model systems:

  1. Universal Model System (recommended) — A structured hierarchy of providers, definitions, credentials, and configurations supporting OpenAI, Anthropic, AWS Bedrock, Google, and xAI
  2. Legacy Models — Direct URL + credential storage for backward compatibility

Universal Model System

The universal model system uses Model Configurations that combine a model definition with credentials and default parameters. See Settings - Universal Model System for details on creating and managing configurations.

Supported providers:

| Provider | Models | Auth |
| --- | --- | --- |
| OpenAI | GPT-4o, GPT-4o-mini, o1, o3-mini | API Key |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus | API Key |
| AWS Bedrock | Claude, Nova, Titan (via IAM) | IAM Role |
| Google | Gemini Pro, Gemini Flash | API Key |
| xAI | Grok-2, Grok-2-mini | API Key |

Image recognition is supported for vision-capable models (GPT-4o, Claude 3.5 Sonnet, Gemini Pro Vision). The adapters handle provider-specific image formatting automatically.
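As an illustration of what "provider-specific image formatting" means, the same base64-encoded image must be wrapped in a different content-block shape per provider. The shapes below follow the publicly documented OpenAI and Anthropic chat formats; the adapter function itself is a hypothetical sketch, not the platform's actual adapter:

```python
def image_content_block(provider: str, media_type: str, b64_data: str) -> dict:
    """Wrap a base64-encoded image in the content-block shape each
    provider's chat API expects (OpenAI and Anthropic shown)."""
    if provider == "openai":
        # OpenAI uses an image_url block with a data: URL
        return {
            "type": "image_url",
            "image_url": {"url": f"data:{media_type};base64,{b64_data}"},
        }
    if provider == "anthropic":
        # Anthropic uses an image block with an explicit base64 source
        return {
            "type": "image",
            "source": {"type": "base64", "media_type": media_type, "data": b64_data},
        }
    raise ValueError(f"no image adapter for provider: {provider}")
```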

Legacy Models

Legacy models store the URL and credentials directly. They support:

  • Bearer token authentication (OpenAI, Anthropic direct)
  • Client key/secret authentication (custom APIs)
  • AWS Bedrock via special bedrock:// URL format (IAM auth)

Model Types

User Models

Organization-specific models created by users:

| Field | Required | Description |
| --- | --- | --- |
| organizationId | Yes | Owner organization UUID |
| name | Yes | Display name |
| modelUrl | Yes | API endpoint or bedrock://model-id |
| token | Conditional | Bearer token (OR clientKey/secretKey) |
| clientKey | Conditional | Client key (requires secretKey) |
| secretKey | Conditional | Secret key (requires clientKey) |
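The conditional rule in the table (a bearer token, or a clientKey/secretKey pair that must appear together) can be expressed as a small validator. The function name and error messages are illustrative, not part of the platform API:

```python
def validate_model_credentials(payload: dict) -> list[str]:
    """Check the conditional credential fields on a user-model payload.

    A model needs either a bearer token OR a clientKey/secretKey pair;
    clientKey and secretKey must always be supplied together.
    """
    errors = []
    token = payload.get("token")
    client_key = payload.get("clientKey")
    secret_key = payload.get("secretKey")

    if client_key and not secret_key:
        errors.append("clientKey requires secretKey")
    if secret_key and not client_key:
        errors.append("secretKey requires clientKey")
    if not token and not (client_key and secret_key):
        errors.append("either token or clientKey/secretKey is required")
    return errors
```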

System Models

Platform-wide models managed by admins:

| Field | Required | Description |
| --- | --- | --- |
| organizationId | No | NULL for system models |
| modelType | Yes | Set to "system" |
| isVisible | Yes | Show in org model selectors |
| name | Yes | Display name |
| modelUrl | Yes | API endpoint or bedrock://model-id |

API Endpoints

| Operation | Method | Endpoint | Permission |
| --- | --- | --- | --- |
| List Models | GET | /api/models | models:read |
| Create Model | POST | /api/models | models:create |
| Get Model | GET | /api/models/{id} | models:read |
| Update Model | PUT | /api/models/{id} | models:update |
| Delete Model | DELETE | /api/models/{id} | models:delete |

Create User Model

POST /api/models
Content-Type: application/json

{
  "organizationId": "550e8400-e29b-41d4-a716-446655440000",
  "name": "OpenAI GPT-4o",
  "description": "Production OpenAI model",
  "modelUrl": "https://api.openai.com/v1/chat/completions",
  "token": "sk-proj-xxxxxxxxxxxxx"
}

Response:

{
  "id": "770e8400-e29b-41d4-a716-446655440001",
  "organizationId": "550e8400-e29b-41d4-a716-446655440000",
  "name": "OpenAI GPT-4o",
  "description": "Production OpenAI model",
  "modelUrl": "https://api.openai.com/v1/chat/completions",
  "modelType": "user",
  "isVisible": true,
  "hasToken": true,
  "hasClientKey": false,
  "createdAt": "2025-12-15T10:00:00.000Z",
  "updatedAt": "2025-12-15T10:00:00.000Z"
}

Security

The token, clientKey, and secretKey values are never returned in API responses; only the hasToken and hasClientKey boolean flags are.
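One way to implement this redaction before serializing a model record (field names match the API above; the helper itself is a sketch, not the platform's code):

```python
SECRET_FIELDS = ("token", "clientKey", "secretKey")

def sanitize_model(record: dict) -> dict:
    """Strip credential fields from a model record and replace them
    with the boolean presence flags the API returns instead."""
    public = {k: v for k, v in record.items() if k not in SECRET_FIELDS}
    public["hasToken"] = bool(record.get("token"))
    public["hasClientKey"] = bool(record.get("clientKey"))
    return public
```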


Create Bedrock Model

AWS Bedrock models use the special bedrock:// URL format and IAM authentication:

POST /api/models
Content-Type: application/json

{
  "organizationId": "550e8400-e29b-41d4-a716-446655440000",
  "name": "Claude 3 Sonnet",
  "description": "AWS Bedrock Claude 3 Sonnet",
  "modelUrl": "bedrock://anthropic.claude-3-sonnet-20240229-v1:0",
  "token": "iam"
}

Bedrock URL Format:

bedrock://<model-id>

Supported Bedrock Models:

| Provider | Model ID |
| --- | --- |
| Anthropic Claude 3 Sonnet | anthropic.claude-3-sonnet-20240229-v1:0 |
| Anthropic Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 |
| Amazon Nova Pro | amazon.nova-pro-v1:0 |
| Amazon Nova Lite | amazon.nova-lite-v1:0 |
| Amazon Titan Text | amazon.titan-text-express-v1 |

IAM Authentication

For Bedrock models, the token field value is ignored. Authentication uses IAM roles configured on the ECS task. Set token to any non-empty value (e.g., "iam") to satisfy validation.


Create System Model

System models are available to all organizations:

POST /api/models
Content-Type: application/json

{
  "name": "Platform Claude 3",
  "description": "Shared Bedrock model for all orgs",
  "modelUrl": "bedrock://anthropic.claude-3-sonnet-20240229-v1:0",
  "token": "iam",
  "modelType": "system",
  "isVisible": true
}

List Models

Get Organization Models

GET /api/models?organizationId=550e8400-e29b-41d4-a716-446655440000

Get Organization + System Models

Use for model selection dropdowns:

GET /api/models?organizationId=550e8400-e29b-41d4-a716-446655440000&includeSystem=true

Get Only System Models

For admin management:

GET /api/models?modelType=system
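The three queries above differ only in their query parameters. A standard-library sketch for building them (the endpoint and parameter names come from the examples above; the helper is illustrative):

```python
from urllib.parse import urlencode

def models_url(base: str = "/api/models", **params) -> str:
    """Build a list-models URL, dropping any unset parameters."""
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{base}?{query}" if query else base
```

For instance, `models_url(organizationId=org_id, includeSystem="true")` produces the dropdown query, and `models_url(modelType="system")` the admin one.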

Update Model

PUT /api/models/{id}
Content-Type: application/json

{
  "name": "Updated Model Name",
  "description": "Updated description"
}

To update credentials, include the new token or keys:

PUT /api/models/{id}
Content-Type: application/json

{
  "token": "new-api-key-here"
}

Using Models with Prompts

Attach Model to Prompt Entity

When creating a Prompt entity, reference a model by ID:

POST /api/workflow-entities
Content-Type: application/json

{
  "organizationId": "550e8400-e29b-41d4-a716-446655440000",
  "environmentId": "660e8400-e29b-41d4-a716-446655440001",
  "name": "Generate Tweet",
  "workflowEntityTypeId": "<prompt-type-uuid>",
  "modelId": "770e8400-e29b-41d4-a716-446655440001",
  "prompt": "Generate a tweet about: {{content}}",
  "tfCondition": "Single Path"
}

Model Execution at Runtime

When a Prompt entity with a model is executed:

  1. Consumer retrieves model credentials from cache
  2. Model URL determines authentication method:
    • bedrock:// → Uses IAM authentication
    • https:// + token → Bearer token auth
    • https:// + clientKey/secretKey → Custom header auth
  3. Prompt is sent to model endpoint
  4. Response stored via latestPromptResponse()
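The URL-based dispatch in step 2 can be sketched as a single function. The return labels and field names mirror the rules above but are illustrative, not the platform's actual code:

```python
def resolve_auth_method(model: dict) -> str:
    """Pick an authentication strategy from a model record:
    bedrock:// -> IAM, token -> bearer, clientKey/secretKey -> custom headers."""
    if model["modelUrl"].startswith("bedrock://"):
        return "iam"  # credentials come from the runtime's IAM role
    if model.get("token"):
        return "bearer"  # Authorization: Bearer <token>
    if model.get("clientKey") and model.get("secretKey"):
        return "custom-headers"  # key/secret sent as custom headers
    raise ValueError("model has no usable credentials")
```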

Best Practices

Security

  1. Never share tokens — Each org should have its own API keys
  2. Use Bedrock for production — IAM roles are more secure than static API tokens
  3. Rotate credentials — Update tokens periodically

Organization

  1. Descriptive names — Include provider and capability: "OpenAI GPT-4o for Tweets"
  2. Use system models — For shared platform capabilities
  3. Hide deprecated — Set isVisible: false on old models

Cost Management

  1. Choose appropriate model — Use cheaper models for simple tasks
  2. Monitor usage — Track which models are called most
  3. Set limits — Use environment variables to cap requests