Model Usage
You can select the AI model that best suits each use from the list of models offered by the platform, or plug in your own custom AI model. Supported usages include:
- Conversation
- Image Analysis
- Image Generation
- Embeddings Generation
- Audio Transcription
- Text to Speech
- Document Summarization
- Speech to Speech (Conversation / Realtime)
- Moderation
Models used for Conversation must support both Tools and streaming in the same request.
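As a minimal sketch of what this means in practice, the request below passes a tool definition and enables streaming at the same time, using the openai Python client. The get_weather tool is hypothetical, the model name is one of the Current platform models listed later in this page, and an OPENAI_API_KEY environment variable is assumed.

```python
# Sketch: a Conversation model must accept tool definitions and stream the
# response in the same request. Assumes the `openai` package and an
# OPENAI_API_KEY environment variable; get_weather is a hypothetical tool.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

stream = client.chat.completions.create(
    model="gpt-5.1",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,   # tool support
    stream=True,   # streaming support, in the same request
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```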
How Praxis AI Uses Models
Praxis AI can orchestrate multiple providers and models in parallel using a unified interface:
- Configure several providers in Configuration → Personalization and AI Models.
- Assign preferred models to each Model Use (Conversation, Images, Audio, etc.).
- Override the Conversation provider for each Assistant.
Model Selection
Default for Your Digital Twin
Each Digital Twin in Praxis AI can use different models optimized for its domain. To select or change models:
- Go to the Admin section.
- Edit your Digital Twin.
- Open the Personalization and AI Models section.
- Review or change the model used for each Model Use (Conversation, Images, Audio, etc.).

Conversation at Runtime
At runtime, you can switch the LLM used for Conversation from Settings in the Side Bar panel, and review model capabilities by clicking the Model Options detail.


Specific to Each Assistant
You can specify which Conversation model to use for each Assistant. Model selection can take into account:
- Use Case
- Assistant-specific model
- Token budget and cost constraints
- Model availability and latency
- User preferences and history
Platform Models
Praxis AI middleware offers access to a broad catalog of state-of-the-art AI models. You can select the model that best fits your needs based on performance, cost, and capabilities. The default model is configured to use the latest, most capable model available on the platform. In most cases, keep the default selected unless you have a specific requirement (for example, strict cost control, a specific provider, or latency constraints).
Models can be accessed using:
- The OpenAI Client
- Amazon Bedrock
Provider-Based Models
Praxis AI exposes conversation and related capabilities (vision, audio, embeddings, moderation, realtime) through two main provider types:
- Amazon Bedrock
- OpenAI-Compatible Clients
Amazon Bedrock
Providers available through Amazon Bedrock include:
- Anthropic
- Amazon
- OpenAI (Open Source)
- Meta
- Cohere
- Mistral
Anthropic models via Bedrock are the platform's models of choice, primarily for Conversation and Image Analysis.
| Model Name | Status | Capabilities | Input Context (tokens) | Output Context (tokens) | Typical Uses |
|---|---|---|---|---|---|
| us.anthropic.claude-sonnet-4-5-20250929-v1:0 | Default | Tools, Streaming, Vision | 1,000,000 | 64,000 | Conversation, Image Analysis, Summary |
| us.anthropic.claude-sonnet-4-20250514-v1:0 | Default | Tools, Streaming, Vision | 1,000,000 | 64,000 | Conversation, Image Analysis, Summary |
| us.anthropic.claude-opus-4-1-20250805-v1:0 | Deprecated | Tools, Streaming, Vision | 200,000 | 32,000 | Conversation, Image Analysis |
| us.anthropic.claude-opus-4-20250514-v1:0 | Deprecated | Tools, Streaming, Vision | 200,000 | 32,000 | Conversation, Image Analysis |
| us.anthropic.claude-3-7-sonnet-20250219-v1:0 | Deprecated | Tools, Streaming, Vision | 200,000 | 64,000 | Conversation, Image Analysis, Summary |
| us.anthropic.claude-3-5-sonnet-20241022-v2:0 | Deprecated | Tools, Streaming, Vision | 200,000 | 8,192 | Conversation, Image Analysis, Summary |
| us.anthropic.claude-3-5-haiku-20241022-v1:0 | Deprecated | Tools, Streaming, Vision | 200,000 | 8,192 | Conversation, Image Analysis, Summary |
Deprecated models will be removed; migrate to a newer model.
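As a sketch, here is how the default Claude model from the table above can be invoked through the Bedrock Converse API with boto3. AWS credentials are assumed to be configured, and the region is an assumption; substitute your own.

```python
# Sketch: call the default Anthropic model from the table above via the
# Bedrock Converse API. Assumes boto3 with AWS credentials configured;
# the region is an assumption.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="us.anthropic.claude-sonnet-4-5-20250929-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize the key points of photosynthesis."}],
    }],
    inferenceConfig={"maxTokens": 1024},
)
print(response["output"]["message"]["content"][0]["text"])
```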
OpenAI-Compatible Clients
Providers available through OpenAI-compatible clients include:
- OpenAI
- Google Gemini
- Mistral (Direct API)
- xAI
- Anthropic (Direct API)
- Cohere (Direct API)
These models are configured against the OpenAI API and used across Conversation, Image Analysis, Summary, Audio, TTS, Moderation, and Realtime.
Conversation / Vision / Summary
| Model Name | Status | Capabilities | Input Context (tokens) | Output Context (tokens) | Typical Uses |
|---|---|---|---|---|---|
| gpt-5.1 | Current | Tools, Streaming, Vision | 400,000 | 128,000 | Conversation, Image Analysis, Summary |
| gpt-5-2025-08-07 | Deprecated | Tools, Streaming, Vision | 400,000 | 128,000 | Conversation, Image Analysis, Summary |
| gpt-5-mini | Current | Tools, Streaming, Vision | 400,000 | 128,000 | Conversation, Image Analysis, Summary |
| gpt-5-nano-2025-08-07 | Current | Tools, Streaming, Vision | 400,000 | 128,000 | Conversation, Image Analysis, Summary |
| gpt-5 | Deprecated | Tools, Streaming, Vision | 400,000 | 128,000 | Conversation, Image Analysis, Summary |
| gpt-4.1 | Current | Tools, Streaming, Vision | 1,047,576 | 32,768 | Conversation, Image Analysis, Summary |
| gpt-4o | Deprecated | Tools, Streaming, Vision | 128,000 | 16,384 | Conversation, Image Analysis, Summary |
| gpt-4o-mini | Deprecated | Tools, Streaming, Vision | 128,000 | 16,384 | Conversation, Image Analysis, Summary |
| o4-mini-deep-research | Spec. | Streaming, Vision | 200,000 | 100,000 | Deep research, Image Analysis |
| o4-mini | Current | Tools, Streaming, Vision | 200,000 | 100,000 | Conversation, Image Analysis |
| o3-deep-research | Spec. | Streaming, Vision | 200,000 | 100,000 | Deep research, Image Analysis |
| o3-pro | Deprecated | Tools, Streaming, Vision | 200,000 | 100,000 | Conversation, Image Analysis |
| o3 | Deprecated | Tools, Streaming, Vision | 200,000 | 100,000 | Conversation, Image Analysis |
| o3-mini | Deprecated | Tools, Streaming, Vision | 200,000 | 100,000 | Conversation, Image Analysis |
| o1 | Deprecated | Tools, Streaming, Vision | 200,000 | 100,000 | Conversation, Image Analysis |
Image Generation
| Model Name | Capabilities | Typical Uses |
|---|---|---|
| gpt-image-1 | Vision | Image Generation |
| gpt-image-1-mini | Vision | Image Generation |
| dall-e-3 | Vision | Image Generation |
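A minimal sketch of generating an image with one of the models above, using the openai Python client; the prompt and output file name are illustrative.

```python
# Sketch: generate an image with gpt-image-1, which returns base64-encoded
# image data. Assumes the `openai` package and OPENAI_API_KEY.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1",
    prompt="A line drawing of a lighthouse at dawn",
)
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```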
Embeddings
| Model Name | Typical Uses |
|---|---|
| text-embedding-3-small | Embeddings |
| text-embedding-3-large | Embeddings |
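A minimal sketch of generating embeddings with the smaller model above; the input strings are illustrative.

```python
# Sketch: embed two strings with text-embedding-3-small. Assumes the
# `openai` package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

resp = client.embeddings.create(
    model="text-embedding-3-small",
    input=["How do I reset my password?", "Password reset instructions"],
)
vectors = [item.embedding for item in resp.data]
print(len(vectors), len(vectors[0]))  # 2 vectors of 1536 dimensions each
```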
Audio Transcription and Translation
| Model Name | Input (Hz) | Output (tokens) | Typical Uses |
|---|---|---|---|
| whisper-1 | - | - | Audio Analysis |
| gpt-4o-mini-transcribe | 16,000 | 2,000 | Audio Analysis |
| gpt-4o-transcribe | 16,000 | 2,000 | Audio Analysis |
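A minimal sketch of transcribing an audio file with one of the models above; lecture.mp3 is a placeholder file name.

```python
# Sketch: transcribe a local audio file with gpt-4o-transcribe. Assumes the
# `openai` package and OPENAI_API_KEY; lecture.mp3 is a placeholder.
from openai import OpenAI

client = OpenAI()

with open("lecture.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=audio,
    )
print(transcript.text)
```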
Text-to-Speech (TTS)
| Model Name | Typical Uses |
|---|---|
| tts-1 | TTS |
| tts-1-hd | TTS |
| gpt-4o-mini-tts | TTS |
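A minimal sketch of synthesizing speech with one of the TTS models above; the voice and text are illustrative.

```python
# Sketch: synthesize speech to an MP3 file with gpt-4o-mini-tts, streaming
# the audio to disk. Assumes the `openai` package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

with client.audio.speech.with_streaming_response.create(
    model="gpt-4o-mini-tts",
    voice="alloy",
    input="Welcome back! Your next assignment is due Friday.",
) as response:
    response.stream_to_file("welcome.mp3")
```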
Moderation
| Model Name | Typical Uses |
|---|---|
| omni-moderation-latest | Moderation |
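A minimal sketch of running a moderation check with the model above; the input text is illustrative.

```python
# Sketch: classify text with omni-moderation-latest. Assumes the `openai`
# package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(
    model="omni-moderation-latest",
    input="You are all going to regret this.",
)
print(result.results[0].flagged)     # True if any category triggered
print(result.results[0].categories)  # per-category booleans
```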
Real-Time Speech-to-Speech (RT / STS)
| Model Name | Input Tokens | Output Tokens | Typical Uses |
|---|---|---|---|
| gpt-realtime | 32,000 | 4,096 | Realtime voice agent |
| gpt-realtime-mini | 32,000 | 4,096 | Realtime voice agent |
| gpt-4o-realtime-preview | 32,000 | 4,096 | Realtime voice agent |
| gpt-4o-mini-realtime-preview | 16,000 | 4,096 | Realtime voice agent |
More information:
https://platform.openai.com/docs/models
Bring Your Own AI Model (BYOM)
You can connect your own hosted LLM (for example, a model deployed on Google Vertex AI, a private OpenAI-compatible endpoint, or a Bedrock-hosted custom model) and use it as a replacement for any of the supported usages.
Configure a Custom Model
To add a custom model for Conversation (or any other use):
- In the Admin UI, edit your Digital Twin.
- Under Personalization and AI Models, click Add AI Model.

- In the Add AI Model panel, enter the properties required to connect to your LLM:

- Model Name: The exact model identifier published by your hosting platform. This value is case sensitive and must match your provider's model name, for example: gemini-flash or projects/my-proj/locations/us/models/my-model.
- Status: Active models are considered by the system for routing and selection. Inactive models are ignored but kept in configuration.
- Description: A human-readable description of the LLM for admins and authors using this Digital Twin.
- Model Use: The specific usage for this model (for example, Conversation, Image Generation, Document Summarization). This determines which internal calls will use this model.
- Client Library Type: Choose Open AI for OpenAI-compatible endpoints (including many custom or Vertex AI gateways exposing an OpenAI-style API), or Bedrock for Amazon Bedrock-hosted models. Most Gemini-based models connected through an OpenAI-compatible proxy should use Open AI.
- API URL: The base public URL of your model endpoint, for example: https://ai.my-school.edu or your Bedrock-compatible endpoint. Typically, the model name or ID is appended to this base URL when interacting with the LLM.
- API Key: The secret key used to authenticate requests to your endpoint. Keep this key secure and confidential; rotate it periodically.
- Click Save to register the new custom AI model.

- The model appears in the list of custom AI models.
- For its configured Model Use, it will replace the platform default model.
- All conversations or tasks mapped to that Model Use will start using your custom model without any client-side code changes.
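Before relying on that routing, it can help to smoke-test your endpoint with the same values you entered in the Add AI Model panel. Below is a hedged sketch for an OpenAI-compatible endpoint, reusing the example API URL and model name from above; the /v1 path suffix is an assumption and varies by gateway.

```python
# Sketch: verify an OpenAI-compatible custom endpoint before registering it.
# base_url and model reuse the examples from the Add AI Model panel above;
# the /v1 suffix is an assumption and differs between gateways.
from openai import OpenAI

client = OpenAI(
    base_url="https://ai.my-school.edu/v1",
    api_key="YOUR_API_KEY",  # the same secret entered as API Key
)

stream = client.chat.completions.create(
    model="gemini-flash",  # must match your provider's model name exactly
    messages=[{"role": "user", "content": "Reply with OK if you can hear me."}],
    stream=True,           # Conversation models must support streaming
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```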
End-to-End Workflow
1. Configure Provider Credentials
Go to Configuration → Personalization and AI Models and enter API keys and endpoints for each provider you plan to use (OpenAI-compatible, Bedrock, or custom gateways).
2. Select Models per Usage
For each Model Use (Conversation, Image, Audio, etc.), select the preferred model from the list of available platform and custom models.
3. Enable and Test Your Digital Twin
Use the Test or preview mode to run conversations against your updated configuration. Validate:
- Response quality
- Latency
- Tool and streaming support (for Conversation models)
4. Monitor and Optimize
Use Analytics to track token usage, latency, and error rates per model. Adjust your model selection or routing preferences to balance performance and cost.
5. Scale to Production
Once validated, deploy your Digital Twin to users through LMS integration (e.g., Canvas), Web SDK, or REST APIs; no additional code changes are required when switching models.
6. Connect New Digital Twins
Repeat the configuration for any additional Digital Twins so they can connect to the same custom LLM.
Need help choosing models or configuring BYOM?
Praxis AI supports multi-LLM orchestration and can route across OpenAI, Anthropic, Amazon, Google, Mistral, and your own hosted models in a single Digital Twin configuration.