```shell
curl --request POST \
  --url https://pria.praxislxp.com/api/user/embedding/{id}/sanitize \
  --header 'x-access-token: <api-key>'
```

Example response:

```json
{
  "success": true,
  "sanitizedText": "This is the cleaned paragraph with noise removed and formatting normalized...",
  "tokensUsed": 142,
  "model": "anthropic.claude-3-haiku-20250514-v1:0"
}
```

Sends the chunk text to the institution's summary model for AI-powered cleanup. Removes noise (navigation, boilerplate, encoding artifacts), normalizes whitespace, and fixes broken formatting, without summarizing or shortening the content.
The sanitized text is returned for preview but not persisted. To save the
cleaned text, call PUT /api/user/embedding/{id} with the returned sanitizedText.
Token usage is automatically tallied to the parent Upload’s tokens_used counter.
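Since the preview is not persisted, saving requires a second call. A minimal sketch of the preview-then-save flow in shell, illustrated here with the sample response from above instead of a live call (the `sanitizedText` field name in the PUT body is an assumption based on the note above; check the actual PUT /api/user/embedding/{id} schema, and `jq` is assumed to be installed):

```shell
#!/bin/sh
# Sample preview response; in practice, capture it with:
#   RESPONSE=$(curl -s -X POST "https://pria.praxislxp.com/api/user/embedding/$ID/sanitize" \
#                -H "x-access-token: $API_KEY")
RESPONSE='{"success":true,"sanitizedText":"This is the cleaned paragraph with noise removed and formatting normalized...","tokensUsed":142,"model":"anthropic.claude-3-haiku-20250514-v1:0"}'

# Extract the cleaned text; the preview is NOT saved server-side
CLEANED=$(printf '%s' "$RESPONSE" | jq -r '.sanitizedText')

# Build the PUT payload to persist it (field name is an assumption, not confirmed by this page)
PAYLOAD=$(jq -n --arg t "$CLEANED" '{sanitizedText: $t}')
echo "$PAYLOAD"

# Then persist with:
#   curl -s -X PUT "https://pria.praxislxp.com/api/user/embedding/$ID" \
#     -H "x-access-token: $API_KEY" -H "Content-Type: application/json" --data "$PAYLOAD"
```

Using `jq -n --arg` to build the payload avoids hand-quoting JSON, which matters because the sanitized text can contain quotes and newlines.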
Headers

- x-access-token (required): JWT token passed in the x-access-token header.

Path parameters

- id (required): Embedding chunk ID to sanitize.

Response 200: Segment sanitized successfully (preview only; not saved)

- success (boolean): Whether the sanitization succeeded.
  Example: true
- sanitizedText (string): The AI-cleaned segment text. Not persisted; use PUT /embedding/{id} to save.
  Example: "This is the cleaned paragraph with noise removed and formatting normalized..."
- tokensUsed (integer): Token count consumed by the sanitization LLM call (tallied to the parent Upload).
  Example: 142
- model (string): The LLM model used for sanitization.
  Example: "anthropic.claude-3-haiku-20250514-v1:0"