Request:

curl --request POST \
  --url https://pria.praxislxp.com/api/ai/experimental/personal/qanda-v2 \
  --header 'Content-Type: application/json' \
  --header 'x-access-token: <api-key>' \
  --data '{
    "id": 1750660464754,
    "requestArgs": {
      "userISODate": "2025-06-23T06:34:24.754Z",
      "userTimezone": "America/New_York",
      "socketId": "DhXE7OVjCtfUmDFTAAAB",
      "assistantId": "6856fa89cbafcff8d98680f5",
      "selectedCourse": {
        "course_id": 1750532703472,
        "course_name": "Research Discussion",
        "assistant": {
          "_id": "6856fa89cbafcff8d98680f5"
        }
      },
      "ragOnly": false,
      "ragIgnore": false
    },
    "inputs": [
      "What is machine learning?"
    ],
    "outputs": []
  }'

Response:

{
  "success": true,
  "streamingFailed": false,
  "outputs": [
    "Machine learning is a subset of artificial intelligence..."
  ],
  "usage": 1234,
  "credits": 5,
  "query_duration_ms": 2500,
  "model": "gpt-4o"
}

Second-generation experimental variant of /api/ai/personal/qanda.
Identical request / response contract to the V1 experimental endpoint, but uses the Soul Document V2 prompt generator (getSoulDocumentPromptV2 / getSoulDocumentPromptV2Bypassed), which refines the cohesive-anchor architecture introduced in V1. Used for three-way A/B testing alongside the classic /api/ai/personal/qanda and the V1 experimental endpoint.

Behavior is otherwise identical to /api/ai/experimental/personal/qanda:

- Same inputs[] + requestArgs request shape (QandARequest), with Socket.IO streaming keyed by requestArgs.socketId.
- Same pipeline steps (creditCheck, contentFilterCheck, creditPayment, saveToHistory, sendResponse).
- Same User.findOne({_id}) re-load: the resolveInstitution middleware still runs upstream, but the handler operates on the freshly re-loaded user object.
- req.locals.experimental = true and req.locals.promptMode = 'soul-v2' are stamped on the history record so the A/B harness can distinguish V1 from V2 runs.

See docs/plans/2026-02-01-soul-document-experiment-design.md and the three-way comparison runner in routes/test/soul-comparison/.
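As a sketch, the request body above can be assembled programmatically. The helper below is illustrative only (it is not part of any official SDK); it builds a dict matching the QandARequest shape shown in the sample request, which you would then POST with any HTTP client, sending the JWT in the x-access-token header.

```python
import time


def build_qanda_request(question, assistant_id, course_id, course_name,
                        socket_id, iso_date, timezone):
    """Build a QandARequest-shaped body for /api/ai/experimental/personal/qanda-v2.

    Field names mirror the sample request in the docs; `id` is a
    client-generated epoch-millisecond timestamp.
    """
    return {
        "id": int(time.time() * 1000),           # client-generated request id
        "requestArgs": {
            "userISODate": iso_date,
            "userTimezone": timezone,
            "socketId": socket_id,               # Socket.IO connection used for streaming
            "assistantId": assistant_id,
            "selectedCourse": {
                "course_id": course_id,
                "course_name": course_name,
                "assistant": {"_id": assistant_id},
            },
            "ragOnly": False,
            "ragIgnore": False,
        },
        "inputs": [question],                    # user messages to send to the AI
        "outputs": [],                           # previous AI responses, for continuity
    }


body = build_qanda_request(
    question="What is machine learning?",
    assistant_id="6856fa89cbafcff8d98680f5",
    course_id=1750532703472,
    course_name="Research Discussion",
    socket_id="DhXE7OVjCtfUmDFTAAAB",
    iso_date="2025-06-23T06:34:24.754Z",
    timezone="America/New_York",
)
# POST this dict as JSON with headers:
#   Content-Type: application/json
#   x-access-token: <JWT>
```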
Request (payload for the AI Q&A conversation):

- x-access-token (header): JWT token passed in the x-access-token header.
- inputs: User messages to send to the AI, e.g. ["What is machine learning?"].
- id: Client-generated request identifier (epoch timestamp), e.g. 1750660464754.
- requestArgs: Context arguments for AI conversation requests.
- outputs: Previous AI responses (for context continuity), e.g. [].

Response (AI response generated successfully, using the Soul Document V2 system prompt):

- success: Whether the request completed successfully.
- streamingFailed: True if Socket.IO streaming failed (the response still contains the full output). An error message is also returned when streaming fails.
- outputs: AI-generated response messages.
- usage: Total tokens consumed (input + output).
- credits: Credits consumed for this request. The user's total credits consumed and remaining credit balance are also reported.
- query_duration_ms: Total processing time in milliseconds.
- model: AI model identifier used for generation, e.g. "gpt-4o".
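A minimal sketch of consuming the response fields above, assuming the JSON body has been parsed into a dict (the function name is illustrative, not part of any official SDK). Because the docs state the full output is present even when streaming fails, outputs[] can always be treated as the source of truth:

```python
def extract_answer(resp: dict) -> str:
    """Return the AI answer text from a qanda-v2 response dict.

    Even when Socket.IO streaming fails (streamingFailed is true),
    the response body still contains the full output, so we read
    outputs[] in both cases.
    """
    if not resp.get("success"):
        raise RuntimeError("qanda-v2 request did not complete successfully")
    # streamingFailed only means streamed delivery failed; the full
    # answer is still in outputs[], so no special handling is needed here.
    outputs = resp.get("outputs") or []
    return "\n".join(outputs)


sample = {
    "success": True,
    "streamingFailed": False,
    "outputs": ["Machine learning is a subset of artificial intelligence..."],
    "usage": 1234,
    "credits": 5,
    "query_duration_ms": 2500,
    "model": "gpt-4o",
}
answer = extract_answer(sample)
```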