1. Click Admin: Navigate to the Admin section of your dashboard.
2. Click Instances: Select the Instances tab from the admin menu.
3. Click Edit: Choose the instance you want to personalize and click Edit.
4. Click Personalization and AI Models: Access the Personalization and AI Models configuration tab.
The Personalization and AI Models tab is divided into two sections: Personalization for identity and branding, and AI Models for model selection and inference settings.

Personalization (Digital Twin)

Configure the identity, appearance, and behavior of your Digital Twin.
The display name for your Digital Twin. This name appears in conversations, the gallery, and throughout the interface. Defaults to Pria if left blank. Choose a name that reflects the persona or expertise your Digital Twin represents (e.g., “Professor Adams”, “TechBot”, “Campus Guide”).
The avatar image for your Digital Twin. Enter a URL to an image that will be displayed as the profile picture in conversations and the gallery. A preview of the current image is shown next to the field. If left blank, the default Pria logo is used.
An animated avatar for your Digital Twin. Enter a URL to a GIF that adds visual personality to the conversation interface. This animated image is displayed alongside or in place of the static picture during interactions. If left blank, the static A.I. Picture is used as a fallback.
A background image displayed behind the conversation area in light mode. Use this to create a branded or themed environment for your Digital Twin. A preview thumbnail is shown next to the field.
A separate background image used when the interface is in dark mode. This allows you to provide an appropriate background for both light and dark themes. If left blank, the default dark mode background is used.
A summary description of your Digital Twin that appears in the gallery and can optionally be used as the welcome screen content. This is a free-text area where you describe the Twin’s purpose, expertise, and capabilities. Click the Generate button to automatically create an about description. The generator analyzes the Digital Twin’s system instructions and the assistants configured for this instance to produce a unique summary that reflects the Twin’s actual skills and capabilities.
The Generate button builds the about text by examining your Digital Twin’s prompt instructions and all active assistants, producing a description unique to the skills and expertise available in your instance. Update your instructions and assistants first, then generate the about text for the most accurate result.
Use the Copy button to copy the about text to your clipboard.
When enabled, the About this Digital Twin text and the A.I. Picture are displayed on the login/welcome screen instead of the default welcome content. This creates a personalized landing experience for users.
Enables a minimal interface that removes the Profile menu, Sidebar, and Digital Twin selection from the UI. Use this for publicly facing Digital Twins deployed via the Web SDK or embedded widgets, where you want a clean, focused conversation experience without navigation elements.
When enabled, conversation history is compressed using the summary AI model before each request, reducing token usage while preserving context accuracy. This helps keep conversations within the model’s context window during long interactions. Enabled by default.
The system prompt that governs your Digital Twin’s personality, expertise, behavior, and response patterns. This is the most important configuration for shaping how your Digital Twin interacts with users. Use the Onboarding Questions feature to help generate this prompt through a guided interview process. Use the Copy button to copy the instructions to your clipboard.
For a comprehensive methodology on writing effective Digital Twin instructions — including frameworks like CRISPE and ALERT, behavioral guidelines, and starter templates — see Crafting Digital Twin Instructions.
Inject custom CSS to further customize the look and feel of your Digital Twin interface. Use this for branding, font adjustments, color overrides, and layout modifications. For example, you can modify the default font size:
:root {
  font-size: 14px; /* base size that rem units scale from */
}
body {
  font-size: 1rem; /* inherit the root size for body text */
}
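The same mechanism can be used for branding overrides such as fonts and colors. The selectors and values below are illustrative only; consult the starter template linked above the field for the class names your instance actually exposes:

```css
/* Illustrative branding overrides; actual class names are listed
   in the starter template (View template link above the field). */
body {
  font-family: "Georgia", serif; /* replace the default typeface */
  background-color: #f5f2ea;     /* themed page background */
  color: #2b2b2b;                /* body text color */
}
a {
  color: #0a5cad;                /* brand link color */
}
```

Because injected CSS loads after the default stylesheet, rules here take precedence over the built-in styles of equal specificity.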
A starter template with all available CSS classes is available at the View template link above the field.
For the full CSS template reference with all available classes and example customizations, see UI Customization.
A configuration key used for branding customization. This allows the system to apply instance-specific branding rules based on the key value.
This field is only visible to super administrators.

AI Models

Select which AI models power each capability of your Digital Twin. Each dropdown lists the models available to your instance from the system-level model catalog.
Custom AI Models can be defined in the AI Models admin section to override the default system-level models listed below. When a custom model is active for a capability, the corresponding system-level dropdown is hidden.
The primary LLM used for all conversational interactions. This is the model that generates responses to user messages, processes tool results, and drives the Digital Twin’s core intelligence.
The model used for analyzing images uploaded by users. Supports visual question-answering, image description, and content extraction from screenshots, diagrams, and photos.
The model used for generating images from text prompts (e.g., DALL-E). When a user asks the Digital Twin to create an image, this model handles the generation.
The model used to generate vector embeddings for RAG (Retrieval-Augmented Generation). These embeddings power document search and knowledge retrieval from the IP Vault.
The model used for transcribing audio and video files uploaded to the IP Vault. Converts spoken content into searchable text for RAG indexing.
The model used for converting text responses to spoken audio. This powers the read-aloud functionality in the conversation interface.
The model used for summarizing uploaded documents during RAG ingestion and for compacting conversation history when Compact History is enabled.
The voice provider for Convo Mode (real-time speech-to-speech). Options include:
  • OpenAI GPT-Realtime — Default provider with client-side voice selection, VAD control, tool calling, and MCP support
  • ElevenLabs — Alternative provider with custom voice clones and dashboard-managed configuration
When set to ElevenLabs, you must also configure the ElevenLabs Agent ID and API Key in the Integrations section. See Convo Mode for a provider comparison.
The model used for content moderation when Enable Moderation is turned on in the Configuration settings. Evaluates user messages for policy violations.

Inference Settings

These settings control how the AI models generate responses across all conversations in this instance.
Sets the maximum number of tokens the conversation model can generate per response. Options are grouped by provider:
  • Unspecified (LLM Default) — Let the model determine the optimal response length (recommended)
  • Auto — System-managed token allocation
  • OpenAI values — 1,024 to 65,536 tokens
  • Anthropic values — 1,000 to 64,000 tokens
Anthropic models activate thinking mode when max_tokens is set to at least 4,000 tokens. Models accessed through Bedrock face additional throttling constraints: each request reserves five times the specified token count toward a system-wide limit (for example, a max_tokens of 8,000 reserves 40,000 tokens per request). If users encounter “Too many tokens” errors, reduce this value or set it to Unspecified.
Controls how much thinking/reasoning the AI model performs before responding. Higher effort levels produce more thorough analysis but increase latency and token usage:
  • None — Disable thinking (fastest, lowest cost)
  • Low — Minimal reasoning
  • Medium — Balanced reasoning
  • High — Thorough reasoning
  • Max — Maximum reasoning depth (highest latency and cost)
Only applies to models that support thinking (e.g., Claude 3.7+, Sonnet 4+, Opus 4+, OpenAI o-series/GPT-5+, Gemini 2.5+). Individual AI models can override this default. See AI Models for details.
Enables the 1 million token context window for supported Claude models (Opus 4.6, Sonnet 4.5, Sonnet 4). Standard context is 200K tokens.
Extended context incurs premium pricing from Anthropic. Enable only when your use case requires processing very large documents or maintaining extremely long conversation histories.