Instances are the core of your Praxis AI experience, allowing you to manage and interact with various digital twins.
This feature is only available to Digital Twin Instance Admins.
Start by clicking the Admin dashboard, located at the top of the main user interface. The Instances tab displays all instances you are entitled to manage and, if you have the appropriate entitlement, also lets you create a new instance. Each instance is listed with its name, status, and other relevant details.
Instance Details
Clicking on an instance will provide you with detailed information, including its configuration, members, and activity logs.
Depending on your administrative rights, your editing privileges may be limited.
The Configuration section allows you to manage your instance settings. Configure these key features to optimize your digital twin’s performance and accessibility.
Name
Set your instance name using consistent naming conventions.
Credits
The credit balance for this instance. New instances receive 50 starter credits. Purchase additional credits at https://pria.praxislxp.com/my-profile/pricing or contact your institution’s IT administrator for enterprise options.
This field is editable only by super administrators.
Status
Controls instance availability:
Active — Instance is operational for all users
Inactive — Instance is temporarily disabled
This field is editable only by super administrators.
Public ID
Unique identifier automatically generated during instance creation. Used for API integrations, Chat Completions, and system connections. This field is read-only — use the copy button to copy it to your clipboard.
Contact Emails
Specify administrator email addresses (comma-separated) for instance monitoring and notifications. Used for moderation alerts and system notifications.
Account
Links your instance to an institutional account for centralized credit management and distribution across multiple digital twins. Select from available accounts in the dropdown.
Allow Joining
Controls how new users can join this instance:
Disabled — Users cannot self-join; they must be added by an administrator
Account only — Only users within the same institutional account can join
Everyone (public) — Anyone can join this instance
This field is editable only by super administrators.
Joining Admin Only
When enabled, only administrators can join the instance — regular users are blocked from self-joining. This option appears only when Allow Joining is set to “Account only” or “Everyone (public)”.
This field is editable only by super administrators.
Pool Credits
Enable credit sharing across all instance users. Each interaction consumes credits based on complexity. When credits are depleted, users cannot interact until credits are replenished according to your service agreement.
When Pool Credits is disabled, requests from users draw from each user’s personal credit pool instead.
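The two billing modes can be sketched as follows. This is an illustrative Python sketch, not Pria’s actual implementation; the function and parameter names are assumptions.

```python
def debit_credits(cost: int, pool_enabled: bool,
                  instance_balance: int, user_balance: int) -> tuple[int, int]:
    """Return (instance_balance, user_balance) after one interaction.

    Raises RuntimeError when the relevant pool cannot cover the cost,
    mirroring the rule that users cannot interact until credits are
    replenished.
    """
    if pool_enabled:
        # Pool Credits on: every interaction draws from the shared instance pool.
        if instance_balance < cost:
            raise RuntimeError("Shared pool depleted; replenish per service agreement")
        return instance_balance - cost, user_balance
    # Pool Credits off: each user draws from their personal balance.
    if user_balance < cost:
        raise RuntimeError("Personal balance depleted")
    return instance_balance, user_balance - cost
```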
Credits Awarded to New Users
The number of credits automatically granted to new users when they first join this instance. Set to 0 to disable automatic credit awards. This field is only visible when Pool Credits is disabled (i.e., users manage their own credit balance).
Grant Entitlements to New Admins
Automatically grants full entitlements to new administrators, providing complete dashboard access and customization capabilities. For specific entitlement questions, contact humans@praxis-ai.com.
Content Moderation
Enable AI-powered content moderation to screen user input before it reaches the AI model. How it works:
When enabled, every user message is sent to the moderation API before being processed by the conversation model
If the message is flagged for harmful content (harassment, self-harm, violence, etc.), the system:
Blocks the message from reaching the AI model
Returns a safe response to the user
Sends an email alert to the instance’s Contact Emails with details about the flagged content
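The flow above can be sketched in a few lines. This is assumed logic, not Pria’s actual code; in production the verdict would come from a moderation API such as OpenAI’s Moderations endpoint (e.g., `client.moderations.create(model="omni-moderation-latest", input=text)`).

```python
def route_message(flagged: bool, categories: dict[str, bool]) -> dict:
    """Decide whether a user message may reach the conversation model."""
    if not flagged:
        return {"action": "forward"}  # safe: pass through to the AI model
    hits = sorted(name for name, hit in categories.items() if hit)
    return {
        "action": "block",  # message never reaches the model
        "user_reply": "This message can't be processed.",
        "alert_categories": hits,  # included in the email to Contact Emails
    }
```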
Configuration fields:
Enable Moderation — Turns content screening on or off. Default: Off
Moderation Model — The moderation model to use (e.g., omni-moderation-latest, text-moderation-stable). Default: omni-moderation-latest
Content moderation adds a small amount of latency to each request (typically 100-300ms) because the moderation check runs before the AI generates a response. For most use cases, this is imperceptible.
Configure Contact Emails on your instance to receive moderation alerts. Without contact emails, flagged content is still blocked but no notification is sent.
Cite Sources
When enabled, the Digital Twin always includes source citations in its responses. This instructs the AI to reference the documents, files, or knowledge base entries it used to generate each answer, improving transparency and traceability.
Don't Prompt to Personalize
Disables the initial personalization popup for new users, allowing direct access to the conversation interface without being prompted to set preferences first.
Use Location
Enables location-based responses for queries about local services, weather, campus information, and regional content. Improves response accuracy and relevance for location-dependent requests.
Enable Convo (Speech to Speech)
Activates real-time voice communication capabilities. Users can speak directly to the Digital Twin and receive verbal responses in real time. See Convo Mode for details. When enabled, additional Convo settings appear below.
Convo Admin Only
Restricts Convo Mode to administrators only — regular users will not see the Convo button. Visible only when Enable Convo is turned on.
Convo Enable Text Input
Allows users to type text messages during a Convo session in addition to speaking. Visible only when Enable Convo is turned on.
Voice
Selects the default voice for Convo Mode when using the OpenAI GPT-Realtime provider. Options include Cedar, Marin, Alloy, Ash, Ballad, Coral, Echo, Sage, Shimmer, and Verse. Set to User’s Choice to let each user pick their preferred voice. This setting is hidden when the Convo voice provider is set to ElevenLabs; in that case, the voice is configured in the ElevenLabs Agent dashboard. Visible only when Enable Convo is turned on.
Voice Activity Detection Eagerness
Controls how aggressively the system detects when the user starts or stops speaking during Convo Mode:
Low — Less sensitive; waits longer before detecting speech boundaries (fewer interruptions)
Medium — Balanced detection (default)
High — More sensitive; responds faster to speech boundaries (may cut in sooner)
This setting is hidden when the Convo voice provider is set to ElevenLabs. Visible only when Enable Convo is turned on.
Display Tools Details
Shows detailed information about the tool calls and agent actions made to respond to queries. When enabled, users can see which tools were invoked and their results in the conversation interface.
Prevent Switching to Other Instances
When enabled, users are locked to this instance and cannot navigate to or switch to other Digital Twin instances. Useful for kiosk-style deployments or single-purpose integrations.
Disable Clipboard for Users
Prevents users from using the copy-to-clipboard functionality on AI responses. Useful in exam or assessment environments where content must not be easily copied.
Disable Add Credits
Hides the option for users to purchase or add credits to their account. Use this when credit management is handled exclusively by administrators or through institutional agreements.
History Compaction
Controls automatic conversation history management to keep conversations within AI model context limits. When enabled, Pria uses LLM summarization to compress older messages in long conversations. This allows conversations to continue indefinitely without losing context — earlier messages are summarized rather than truncated.
Compact History — Enable automatic history compaction. Default: On
K-Mean Score — Similarity threshold (0-1) for determining when messages should be compacted; lower values compact more aggressively. Default: 0.5
How it works:
As a conversation grows, Pria monitors the total token count
When approaching the model’s context limit, older messages are grouped and summarized by an LLM
The summaries replace the original messages, preserving key context while reducing token usage
Base64-encoded data (images, files) is automatically stripped from compacted history
Tool results are enriched with human-readable summaries before compaction
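The summarize-then-replace idea above can be sketched as follows. The real algorithm (including the K-Mean similarity scoring, message grouping, and base64 stripping) is internal to Pria; this minimal Python sketch only shows the core loop under simple assumptions.

```python
def compact_history(messages, token_limit, count_tokens, summarize):
    """Replace the older half of an over-long history with an LLM summary.

    count_tokens and summarize are injected so the sketch stays testable;
    in production, summarize would be an LLM call.
    """
    if sum(count_tokens(m) for m in messages) <= token_limit:
        return messages  # within the context limit: nothing to do
    cut = len(messages) // 2
    summary = summarize(messages[:cut])  # compress the older messages
    return [{"role": "system",
             "content": f"Summary of earlier turns: {summary}"}] + messages[cut:]
```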
History compaction is recommended for instances where users have long, ongoing conversations (e.g., research projects, multi-session tutoring). For short Q&A interactions, it’s unnecessary.
These settings control fine-grained behavior of your instance. Most administrators can leave these at their default values.
Tool Result Size Limit
Field: toolResultsMaxChars. Default: 40000. Maximum number of characters returned by a single tool call. Longer results are truncated. Increase this for instances that work with large documents or detailed API responses. Decrease to reduce token usage.
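As a sketch, the truncation rule amounts to a simple character clip (illustrative only; the function name is assumed):

```python
def clip_tool_result(text: str, max_chars: int = 40000) -> str:
    """Truncate a tool call's output to the configured character limit."""
    return text if len(text) <= max_chars else text[:max_chars]
```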
Maximum Files
Field: maxFiles. Default: 300. Maximum number of files a user can upload to their IP Vault on this instance. This limit applies per user, not per instance.
Assistant Access Controls
Fine-grained control over who can use and create assistants on this instance.
Disable Assistants for User — When enabled, regular users cannot access the Assistant Library or use custom assistants.
Disable Create Assistants for User — When enabled, regular users can use assistants but cannot create new ones.
Enable Assistants for Email — Whitelist specific email addresses that are allowed to use assistants (comma-separated). When set, only these users see the Assistant Library.
UI Customization
Theme — dark or light; sets the default UI theme for the instance. Default: light
Custom CSS — Inject custom CSS to style the Pria interface. Use this for branding (colors, fonts, logos). Default: empty
Custom CSS is injected as-is into the page. Test thoroughly to avoid breaking the layout.
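A minimal branding override might look like the following. The selectors here are hypothetical; inspect the live Pria DOM to find the real class names before styling them.

```css
/* Illustrative only -- these selectors are hypothetical placeholders. */
:root {
  --brand-primary: #8a0c2c;
  --brand-font: "Georgia", serif;
}

/* Hypothetical header class: replace with the actual class from the DOM. */
.app-header {
  background: var(--brand-primary);
  font-family: var(--brand-font);
}
```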
The Integrations section enables you to connect your digital twin instance with various platforms and services to extend functionality and streamline workflows.
All inbound requests are validated against this authorized URL list. Only requests originating from these URLs will be permitted to access your instance.
LTI (Learning Tools Interoperability) contexts are automatically populated when you connect an LTI placement to your digital twin instance. Key Features:
Automatic Population: Contexts are created when LTI placements are established
Instance Memory: The system remembers which instance to launch when accessing your digital twin through LTI or Canvas theme-based integration
Easy Management: Remove placements directly from this list or through your profile page
To disconnect an LTI integration:
Remove the placement from the LTI Contexts list, or
Navigate to your profile page for additional management options
Enable Google Workspace services (Gmail, Drive, Calendar, Sheets, Docs, Slides, Meet, Classroom) for your Digital Twin at the institution level. Rather than entering Client ID/Secret manually, the admin connects a Google account through an OAuth consent flow directly from this page.
Master toggle that enables Google Workspace services for users of this Digital Twin. When disabled, no Google tools are available regardless of other settings.
Use Digital Twin Identity
When enabled, all users access Google services through this Digital Twin’s configured Google account (institution-shared). When disabled, each user connects their own Google account individually from their Profile settings.
Institution-shared mode is useful for sharing a departmental Drive, Gmail, or Calendar with all users — or for simplifying onboarding when users don’t have institutional Google accounts.
Service Selection
When both toggles above are enabled, a grid of Google services appears. Select which services to authorize before connecting:
Enabling a child service automatically enables its parent. Service selection is locked once the account is connected — to change services, disconnect and reconnect.
Connect Google Account
Click Connect Google Account to initiate the OAuth flow. You will be redirected to Google’s consent screen to authorize the selected services. Once authorized:
The connected email address and last refresh timestamp are displayed
All user-level Google OAuth tokens in this institution are cleared (users now use the shared account)
Tokens are automatically refreshed in the background
To disconnect, click the Disconnect button. This revokes the token with Google and clears the institution-level credentials.
When institution-shared credentials are active, they take precedence over any personal Google credentials a user may have configured. If an admin connects a shared Gmail, all users will use that shared Gmail — even if they’ve also authorized their own personal account.
See the Google Workspace Integration guide for the full authorization model, service dependencies, and troubleshooting.
Connect your Digital Twin to an ElevenLabs Conversational AI agent to enable voice capabilities — both for Pria’s built-in Convo Mode and for embeddable client widgets.
In the Integrations section of your instance settings, configure:
ElevenLabs Agent ID — The Agent ID from your ElevenLabs Conversational AI agent (found in the agent settings on the ElevenLabs dashboard)
ElevenLabs API Key — The API key used to authenticate requests between ElevenLabs and your Digital Twin. This must match the key configured in your ElevenLabs agent’s Custom LLM settings.
If no institution-level API key is set, the system falls back to the platform-wide ElevenLabs key (if configured).
Once the credentials are configured, you can select ElevenLabs as the voice provider for Pria’s Convo Mode (speech-to-speech).
Navigate to Personalization and AI Models for your Digital Twin instance
Under Convo Mode, select ElevenLabs from the list of supported voice providers
The Convo widget will now use your ElevenLabs agent for real-time speech-to-speech conversations
Voice selection is not available in ElevenLabs Convo Mode. The voice used is the one configured in your agent’s settings on the ElevenLabs dashboard. To change the voice, update it there.
See the ElevenLabs Voice Agent integration guide for the full end-to-end setup, including creating the ElevenLabs agent, configuring the Custom LLM connection, and deploying client widgets.
Connect your Digital Twin to Canvas LMS so it can query courses, assignments, grades, and more through the Canvas REST APIs. If these credentials are not configured, the call_canvas and search_canvas tools will be unavailable.
Canvas Client ID
The numeric identifier from your Canvas Developer Key (e.g., 10000000000217). This is generated when a Canvas administrator creates an API-type Developer Key under Admin → Developer Keys. One Developer Key can be shared across multiple Digital Twin instances within the same institution.
Canvas Client Secret
The secret key paired with the Client ID above. Click Show Key on your Canvas Developer Key to reveal and copy it. Together with the Client ID, this enables OAuth2 authentication so users can authorize the Digital Twin to access Canvas on their behalf.
Canvas Faculty Access Token
A personal access token generated by a registered course instructor. This is optional but required for teacher-level operations such as posting grade comments, creating announcements, or performing bulk grading — actions that require instructor-scoped permissions even when the end user is a student. To generate one: in Canvas, go to Account → Settings → New Access Token.
This token grants instructor-level access to the courses the faculty member is enrolled in. Store it securely and rotate it periodically.
Canvas API Scopes
A whitelist of Canvas REST API endpoints your Digital Twin is allowed to call. Each scope follows the syntax:
url:<HTTP_METHOD>|<API_ENDPOINT>
For example: url:GET|/api/v1/courses/:course_id/smartsearch
Scopes must be configured in two places: on the Canvas Developer Key and here in your Digital Twin instance. If a scope is missing from either location, the API call will be rejected.
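A quick validator for the scope syntax can be written as follows. The regex reflects the documented url:<HTTP_METHOD>|<API_ENDPOINT> shape; the accepted set of methods is an assumption based on standard Canvas REST verbs.

```python
import re

# Matches the documented scope shape: url:<HTTP_METHOD>|<API_ENDPOINT>
SCOPE_RE = re.compile(r"^url:(GET|POST|PUT|DELETE)\|(/api/v1/\S+)$")

def parse_scope(scope: str) -> tuple[str, str]:
    """Split a scope string into (method, endpoint), rejecting malformed input."""
    m = SCOPE_RE.match(scope)
    if not m:
        raise ValueError(f"Malformed scope: {scope!r}")
    return m.group(1), m.group(2)
```

A check like this is handy when hand-editing the JSON scope file before uploading it.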
During development, disable scope enforcement on your Canvas Developer Key to discover which endpoints your Digital Twin needs. Monitor the Agent Details in your dialog history to see which API calls are made, then add those scopes here for production. Use the Upload/Download buttons to manage the list from a JSON file.
See Canvas Scopes Reference for the full list of available scopes and a starter configuration.
Authenticate with Canvas on SDK Login
When enabled, users must authenticate with Canvas and generate an authorization token before accessing the Digital Twin through the Web SDK or LTI integration. This adds an extra security layer by requiring a valid Canvas session upfront, rather than prompting for authorization only when a Canvas tool is first invoked.
For a complete walkthrough — including creating the Developer Key in Canvas, configuring scopes, testing your integration, and troubleshooting — see the Canvas Pria Tools integration guide.
Connect your Digital Twin to the Kaltura Video Platform to enable video search, playback, and media-related interactions within conversations. When configured, your Digital Twin can search and reference video content hosted in your Kaltura account.
Kaltura Partner ID
Your Kaltura account’s numeric Partner ID. Found in the Kaltura Management Console under Settings → Integration Settings. This identifies which Kaltura account the Digital Twin connects to.
Kaltura Secret
The API secret used to authenticate with the Kaltura API. Found alongside the Partner ID in Settings → Integration Settings. This can be either an Admin Secret or a User Secret depending on the access level you want to grant.
This is a sensitive credential. It is stored encrypted and excluded from API responses by default.
Is Admin Secret
Toggle this on if the secret you provided is an Admin Secret rather than a User Secret. Admin secrets provide full administrative access to the Kaltura account, including content management and analytics. User secrets provide limited, read-only access. When this is enabled, the Kaltura Client ID field is hidden, since admin secrets do not require a separate user identifier.
Kaltura Client ID
The Kaltura User ID associated with the secret. This identifies which user’s permissions and content library the Digital Twin operates under. Only visible when Is Admin Secret is disabled (i.e., when using a User Secret).
Connect your Digital Twin to your own AWS services — primarily Amazon Bedrock for AI model access and Amazon S3 for storage. Administrators configure their own IAM credentials so the Digital Twin uses their AWS account directly, allowing access to Bedrock foundation models (e.g., Claude, Titan, Llama) and S3 storage under your organization’s billing and security policies.
AWS Region
The AWS region where your services are deployed. Select the region closest to your users or where your Bedrock models and S3 buckets are provisioned. Common choices:
us-east-1 (N. Virginia) — Broadest Bedrock model availability
us-west-2 (Oregon) — Good Bedrock coverage
eu-central-1 (Frankfurt) — European data residency
ap-southeast-1 (Singapore) — Asia-Pacific
Not all Bedrock foundation models are available in every region. Check the AWS Bedrock model availability page for your region.
AWS Access Key ID
The access key ID from your AWS IAM credentials. Generate this in the AWS IAM Console by creating an IAM user or role with the necessary permissions for Bedrock and/or S3.
AWS Secret Access Key
The secret access key paired with the Access Key ID above. This is shown only once when you create the IAM credentials in AWS — store it securely.
This is a sensitive credential. It is stored encrypted and excluded from API responses by default. Never share it in client-side code or logs.
IAM Permissions: Your IAM user or role needs the following policies:
Amazon Bedrock — bedrock:InvokeModel, bedrock:InvokeModelWithResponseStream, and bedrock:ListFoundationModels
Amazon S3 — s3:GetObject, s3:PutObject, and s3:ListBucket on the relevant buckets
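A minimal IAM policy matching the list above might look like this. The bucket ARNs are placeholders; in production, scope the Resource entries as tightly as your setup allows.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:ListFoundationModels"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET",
        "arn:aws:s3:::YOUR_BUCKET/*"
      ]
    }
  ]
}
```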
By default, your Digital Twin uses Praxis-managed credits to access AI models. When you provide your own API keys, the Digital Twin routes requests directly to that provider using your account and token pool — giving you full control over billing, rate limits, and model availability. This is especially useful for organizations that already have enterprise agreements with AI providers or want to manage usage and costs independently.
OpenAI API Key
Connects to OpenAI models including GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo, and o-series reasoning models. Also enables OpenAI-powered features such as DALL-E image generation and GPT-Realtime voice (Convo Mode). Generate your key at platform.openai.com/api-keys.
Anthropic API Key
Connects to Anthropic Claude models including Claude Opus, Sonnet, and Haiku. Enables extended thinking and reasoning capabilities when available. Generate your key at console.anthropic.com/settings/keys.
Gemini API Key
Connects to Google Gemini models including Gemini Pro, Gemini Flash, and Gemini Ultra. Uses the Google GenAI SDK directly for native tool calling and multimodal support. Generate your key at aistudio.google.com/apikey.
xAI API Key
Connects to xAI Grok models including Grok-2 and Grok-3. Provides access to Grok’s reasoning and real-time knowledge capabilities. Generate your key at the xAI Console.
Mistral API Key
Connects to Mistral AI models including Mistral Large, Medium, and Small. Offers multilingual and code-focused model options. Generate your key at console.mistral.ai/api-keys.
DeepSeek API Key
Connects to DeepSeek models including DeepSeek-V3 and DeepSeek-R1 (reasoning). Provides high-performance models with competitive pricing. Generate your key at platform.deepseek.com/api_keys.
This field is only visible to super administrators.
Cohere API Key
Connects to Cohere models including Command R and Command R+. Enables Cohere’s retrieval-augmented generation and enterprise search capabilities. Generate your key at dashboard.cohere.com/api-keys.
How it works: When an API key is configured for a provider, any AI model from that provider selected in your AI Models configuration will use your key instead of Praxis-managed credits. If no key is set, the system falls back to the platform’s shared pool.
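The resolution rule can be sketched in a couple of lines. This is an illustrative sketch of the described behavior, not Pria’s actual code; the function name and return shape are assumptions.

```python
def resolve_billing(provider: str, instance_keys: dict[str, str]):
    """Return which billing path a request takes for a given provider.

    A configured instance-level key routes the request on your own
    account; otherwise the request falls back to Praxis-managed credits.
    """
    key = instance_keys.get(provider)
    if key:
        return ("own-key", key)        # your account, your token pool
    return ("praxis-credits", None)    # platform's shared pool
```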
Security: All API keys are stored encrypted and excluded from API responses by default. They are never exposed in client-side code or logs. Only administrators with the appropriate entitlements can view or modify these values.