curl --request POST \
  --url https://pria.praxislxp.com/api/ai/experimental/personal/qanda \
  --header 'Content-Type: application/json' \
  --header 'x-access-token: <api-key>' \
  --data '
{
  "id": 1750660464754,
  "requestArgs": {
    "userISODate": "2025-06-23T06:34:24.754Z",
    "userTimezone": "America/New_York",
    "socketId": "DhXE7OVjCtfUmDFTAAAB",
    "assistantId": "6856fa89cbafcff8d98680f5",
    "selectedCourse": {
      "course_id": 1750532703472,
      "course_name": "Research Discussion",
      "assistant": {
        "_id": "6856fa89cbafcff8d98680f5"
      }
    },
    "ragOnly": false,
    "ragIgnore": false
  },
  "inputs": [
    "What is machine learning?"
  ],
  "outputs": []
}
'

Response:

{
  "success": true,
  "streamingFailed": false,
  "outputs": [
    "Machine learning is a subset of artificial intelligence..."
  ],
  "usage": 1234,
  "credits": 5,
  "query_duration_ms": 2500,
  "model": "gpt-4o"
}

Experimental variant of /api/ai/personal/qanda. Identical request /
response contract and Socket.IO streaming semantics, but the system
prompt is generated by the Soul Document V1 prompt generator
(getSoulDocumentPrompt / getSoulDocumentPromptBypassed) instead of
the production fragmented-prompt generator. Used for A/B testing
prompt architectures against the classic production endpoint.
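The prompt-generator switch can be sketched as follows. The names getSoulDocumentPrompt and promptMode come from this page; the function bodies, signatures, and selection logic here are illustrative assumptions, not the production implementation:

```javascript
// Hypothetical sketch of how the experimental route might pick the
// Soul Document V1 generator over the production fragmented generator.
// Signatures and prompt contents are assumed for illustration.
function getSoulDocumentPrompt(assistant) {
  return `You are ${assistant.name}. (Soul Document V1 system prompt...)`;
}

function getProductionFragmentedPrompt(assistant) {
  return `You are ${assistant.name}. (fragmented production system prompt...)`;
}

// promptMode is stamped on req.locals by the experimental route.
function selectSystemPrompt(reqLocals, assistant) {
  return reqLocals.promptMode === 'soul'
    ? getSoulDocumentPrompt(assistant)
    : getProductionFragmentedPrompt(assistant);
}
```

Stamping the mode on req.locals (rather than branching inline) is what lets the saved history record later identify which prompt architecture produced each answer.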
Behavior parity with /api/ai/personal/qanda:
- Same inputs[] + requestArgs request shape.
- Same Socket.IO streaming semantics: set requestArgs.socketId to receive RECEIVE_STREAM chunks, or omit it for a synchronous response.
- Same processing pipeline (creditCheck, contentFilterCheck, creditPayment, saveToHistory, sendResponse).

Differences vs. /api/ai/personal/qanda:
- req.locals.experimental = true and req.locals.promptMode = 'soul' are stamped on the history record for downstream comparison.
- Errors carry an EXPERIMENTAL ENDPOINT label so they can be filtered out of production triage.
- The handler re-loads the user via User.findOne({_id}) and populates institution + institution.account directly, instead of consuming req.locals.resolvedUser produced upstream. The resolveInstitution middleware still runs (so institutionPublicId is validated and the institution switch is persisted), but the user object the handler operates on is fresh from this re-load, matching the soul-comparison harness's expectations.

See docs/plans/2026-02-01-soul-document-experiment-design.md and routes/test/soul-comparison/runner.js for the A/B harness that drives this endpoint.
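A client-side sketch of the request contract described above. The endpoint URL, header, and field names come from the curl example on this page; buildQandaRequest itself is a hypothetical helper, not part of any published SDK:

```javascript
// Hypothetical helper that assembles the request body for
// POST /api/ai/experimental/personal/qanda.
function buildQandaRequest({ question, assistantId, course, socketId, history = [] }) {
  const body = {
    id: Date.now(),                    // client-generated epoch-ms identifier
    requestArgs: {
      userISODate: new Date().toISOString(),
      userTimezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
      assistantId,
      selectedCourse: course,
      ragOnly: false,
      ragIgnore: false,
    },
    inputs: [question],                // user messages to send to the AI
    outputs: history,                  // previous AI responses for continuity
  };
  // Include socketId only when you want RECEIVE_STREAM chunks over
  // Socket.IO; omit it for a synchronous JSON response.
  if (socketId) body.requestArgs.socketId = socketId;
  return body;
}

// Usage (network call shown for shape only; not executed here):
// fetch('https://pria.praxislxp.com/api/ai/experimental/personal/qanda', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json', 'x-access-token': '<api-key>' },
//   body: JSON.stringify(buildQandaRequest({
//     question: 'What is machine learning?',
//     assistantId: '6856fa89cbafcff8d98680f5',
//     course: { course_id: 1750532703472, course_name: 'Research Discussion' },
//   })),
// });
```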
Authentication:
- x-access-token (header): JWT token passed in x-access-token header.

Request body (request payload for AI Q&A conversation):
- id: Client-generated request identifier (epoch timestamp). Example: 1750660464754
- requestArgs: Context arguments for AI conversation requests.
- inputs: User messages to send to the AI. Example: ["What is machine learning?"]
- outputs: Previous AI responses (for context continuity). Example: []

Response 200 (AI response generated successfully, Soul Document V1 system prompt):
- success: Whether the request completed successfully.
- streamingFailed: True if Socket.IO streaming failed (response still contains full output).
- Error message if streaming failed.
- outputs: AI-generated response messages.
- usage: Total tokens consumed (input + output).
- credits: Credits consumed for this request.
- User's total credits consumed.
- User's remaining credit balance.
- query_duration_ms: Total processing time in milliseconds.
- model: AI model identifier used for generation. Example: "gpt-4o"
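Given the response fields above, a consumer can defensively validate the payload before rendering. This validator is a sketch built from the documented fields, not part of the API or any SDK:

```javascript
// Hypothetical parser for the documented response shape of
// POST /api/ai/experimental/personal/qanda.
function parseQandaResponse(json) {
  if (typeof json.success !== 'boolean') {
    throw new Error('missing success flag');
  }
  if (!Array.isArray(json.outputs)) {
    throw new Error('outputs must be an array of response messages');
  }
  // Per the docs, streamingFailed=true still carries the full output
  // in outputs[], so callers can fall back to the synchronous text.
  return {
    text: json.outputs.join('\n'),
    degradedToSync: json.streamingFailed === true,
    tokens: json.usage,              // total tokens (input + output)
    creditsCharged: json.credits,    // credits consumed for this request
    latencyMs: json.query_duration_ms,
    model: json.model,
  };
}
```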