Choose Your Model
Praxis AI operates on a flexible pay-as-you-go or subscription model designed to fit your needs, and every new user receives 50 complimentary credits to explore the platform’s capabilities. Because each message sent within Praxis AI consumes credits, you can maintain uninterrupted access to AI-powered learning and productivity tools by purchasing discounted credit bundles, either individually or through your institution.
Per Credit Usage
The Per Credit Usage model allows you to buy bundles of credits and apply them to your personal account or to the Digital Twins you manage for your school or organization. Each user or administrator interaction typically consumes one or more credits, depending on the complexity of the query.
Bring Your Own API Keys (BYOK)
Customers who provide their own Large Language Model API credentials (OpenAI, Anthropic, Gemini, etc.) receive a substantial 40% discount on our standard pricing. This lets you maintain direct control over AI usage costs while leveraging your existing provider relationships and any enterprise agreements you may have. The discount applies to base service fees; standard setup and support charges remain unchanged.
Credit Bundles
- Personal Credits
- Digital Twin Credits
Packages for Personal Use
Select in the Gallery
Personal Use
Each user receives a default personal account called Pria, which serves as your dedicated digital assistant for handling personal tasks and inquiries. This account is designed specifically for your individual needs and can be managed independently by purchasing and adding credits directly to maintain its functionality and access to various services.
| Level | Price | Credits |
|---|---|---|
| 1 | $10 | 75 credits - never expire |
| 2 | $25 | 200 credits - never expire |
| 3 | $50 | 500 credits - never expire |
| 4 | $120 | 1,300 credits - never expire |
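As a quick sanity check on the bundles above, the effective price per credit drops at each level. A short sketch (the bundle figures come from the table; the calculation itself is simple arithmetic):

```python
# Effective price per credit for each personal bundle (values from the table above).
bundles = [
    {"level": 1, "price_usd": 10, "credits": 75},
    {"level": 2, "price_usd": 25, "credits": 200},
    {"level": 3, "price_usd": 50, "credits": 500},
    {"level": 4, "price_usd": 120, "credits": 1300},
]

for b in bundles:
    per_credit = b["price_usd"] / b["credits"]
    print(f"Level {b['level']}: ${per_credit:.3f} per credit")
```

Larger bundles are therefore the better value when you expect sustained usage.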

Packages for Personal Credits
By default, your personal account is awarded 50 credits to start.
Caching Discounts
Praxis AI’s middleware takes advantage of LLM provider caching to reduce overall credit usage and stay cost-competitive. When parts of a prompt are cached by the model, they are billed at a lower rate than non-cached input. However, writing content to the cache carries a higher upfront cost, so the net benefit is not always straightforward to estimate in advance.
For Praxis AI’s credit system, this translates into reduced credit consumption when using models that support caching (for example, OpenAI GPT-5, GPT-Realtime, or Claude Sonnet v4, which typically provide stronger caching behavior). These savings also apply in Conversation Mode. Any effective caching discounts are surfaced in both the Dialog Report Card and the Admin → History panel, so administrators can review and validate the realized credit reductions.
Token and Credit Calculation
When an interaction runs, several token metrics are tracked and used to determine the final number of credits billed.
Definitions
Tokens
Total number of tokens processed by the model, including both input and output.
Input Tokens
Number of tokens sent to the model for the interaction (prompt, system instructions, tools, etc.).
Input Cached Discount
Portion of the input tokens that are recognized as cached and therefore discounted from the billable input. These tokens still count toward usage, but not fully toward cost.
Discount Ratio
Percentage discount applied to the cached portion of the input tokens. A higher ratio means a larger cost reduction from caching.
Baseline Tokens
Final effective token count used to compute credits after applying the caching discount. This is the value that is converted into credits.
Example
In the example below, without any caching benefit, the interaction would have cost 5 credits. With caching enabled:
- Some of the input is recognized as cached
- The Input Cached Discount is applied
- The Discount Ratio determines how much of those cached tokens are discounted
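The steps above can be sketched as follows. This is an illustrative model, not Praxis AI’s actual billing code: the function names, the tokens-per-credit rate, and the ceiling rounding rule are assumptions made for the example.

```python
import math

# Assumed conversion rate for this sketch; the real Praxis AI rate may differ.
TOKENS_PER_CREDIT = 1000

def baseline_tokens(input_tokens, cached_tokens, discount_ratio, output_tokens):
    """Effective token count after discounting the cached portion of the input."""
    discounted_input = input_tokens - cached_tokens * discount_ratio
    # Completion (output) tokens are never cached, so they are added in full.
    return discounted_input + output_tokens

def credits_billed(input_tokens, cached_tokens, discount_ratio, output_tokens):
    baseline = baseline_tokens(input_tokens, cached_tokens, discount_ratio, output_tokens)
    return math.ceil(baseline / TOKENS_PER_CREDIT)

# Without caching: 4,000 input + 1,000 output tokens -> 5 credits
print(credits_billed(4000, 0, 0.0, 1000))     # 5
# With 3,000 input tokens cached at a 50% discount ratio -> 4 credits
print(credits_billed(4000, 3000, 0.5, 1000))  # 4
```

Under these assumptions, a larger cached portion or a higher discount ratio lowers the baseline token count and therefore the credits billed.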
Completion
Content generated by the LLM counts toward Completion tokens (output). For most models that support caching, providers typically price completion tokens around 10× higher than input tokens. Note that caching applies only to input tokens; completion tokens are never cached by the underlying models. To your direct benefit, Praxis AI does not introduce any surcharge or special markup for completion tokens:
- Completion tokens are billed using the same pricing curve as input tokens.
- Caching discounts apply only to input tokens when the underlying model supports input caching.
- As a result, you get transparent, predictable pricing for all generated output, without hidden multipliers or extra completion-specific fees.
Credit Optimization
See Credits Optimization for ways to optimize your credit usage.
Named User Subscription
The Named User Subscription model is an annual contract under which the Client purchases a per-named-user subscription that can be used anytime, anywhere, with an aggregated maximum number of credits based on the size of the population, calculated as one thousand (1,000) credits × the number of named users. For example, a population of 500 named users would share a combined 500,000 credits each year. This model is ideal for schools, corporations, and other entities that want to pool credits on behalf of their users. Below is the volume breakdown:
| Tier | Users | Description |
|---|---|---|
| 1 | Up to 1,000 | Annual subscription per named user |
| 2 | 1,001 to 5,000 | Annual subscription per named user |
| 3 | 5,001 to 10,000 | Annual subscription per named user |
| 4 | 10,001 to 15,000 | Annual subscription per named user |
| 5 | 15,001 to 50,000 | Annual subscription per named user |
| 6 | 50,001 to 75,000 | Annual subscription per named user |
| 7 | 75,000+ | Annual subscription per named user |
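The pooled-credit calculation described above can be sketched in a few lines; the function name is illustrative, and the 1,000-credits-per-user rate is the one stated in this section:

```python
# Pooled annual credits for a named-user subscription:
# 1,000 credits per named user, shared across the whole population.
CREDITS_PER_NAMED_USER = 1000

def pooled_annual_credits(named_users: int) -> int:
    return named_users * CREDITS_PER_NAMED_USER

print(pooled_annual_credits(500))  # 500000, matching the 500-user example above
```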
