curl --request POST \
  --url https://pria.praxislxp.com/api/ai/personal/qanda \
  --header 'Content-Type: application/json' \
  --header 'x-access-token: <api-key>' \
  --data '
{
  "id": 1750660464754,
  "requestArgs": {
    "userISODate": "2025-06-23T06:34:24.754Z",
    "userTimezone": "America/New_York",
    "socketId": "DhXE7OVjCtfUmDFTAAAB",
    "institutionPublicId": "f831501f-b645-481a-9cbb-331509aaf8c1",
    "assistantId": "6856fa89cbafcff8d98680f5",
    "selectedCourse": {
      "course_id": 1750532703472,
      "course_name": "Research Discussion",
      "assistant": {
        "_id": "6856fa89cbafcff8d98680f5"
      }
    },
    "ragOnly": false,
    "ragIgnore": false
  },
  "inputs": [
    "What is machine learning?"
  ],
  "outputs": []
}
'

Example response:

{
  "success": true,
  "streamingFailed": false,
  "outputs": [
    "Machine learning is a subset of artificial intelligence..."
  ],
  "usage": 1234,
  "credits": 5,
  "query_duration_ms": 2500,
  "model": "gpt-4o"
}

Processes conversational AI requests with full context awareness including:
Streaming: For real-time response streaming, include a valid socketId in requestArgs.
The AI will stream chunks via Socket.IO to the RECEIVE_STREAM event.
Without Streaming: If socketId is omitted, the full response is returned synchronously.
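The streaming/synchronous choice above comes down to whether socketId appears in requestArgs. The sketch below builds a request body accordingly; the build_qanda_payload helper is purely illustrative (not part of the API), while the field names are taken from the example request above.

```python
import time
from datetime import datetime, timezone

def build_qanda_payload(inputs, assistant_id, institution_id, socket_id=None):
    """Build a request body for /api/ai/personal/qanda.

    Including socket_id opts into Socket.IO streaming; omitting it
    requests a synchronous response. Illustrative helper only.
    """
    request_args = {
        "userISODate": datetime.now(timezone.utc).isoformat(timespec="milliseconds").replace("+00:00", "Z"),
        "userTimezone": "America/New_York",
        "institutionPublicId": institution_id,
        "assistantId": assistant_id,
        "ragOnly": False,
        "ragIgnore": False,
    }
    if socket_id is not None:
        # Presence of socketId is what enables streaming mode.
        request_args["socketId"] = socket_id
    return {
        "id": int(time.time() * 1000),  # client-generated epoch-millisecond identifier
        "requestArgs": request_args,
        "inputs": inputs,
        "outputs": [],  # previous AI responses; empty for a new conversation
    }

# Synchronous mode: no socketId in requestArgs.
sync_payload = build_qanda_payload(
    ["What is machine learning?"],
    assistant_id="6856fa89cbafcff8d98680f5",
    institution_id="f831501f-b645-481a-9cbb-331509aaf8c1",
)
print("socketId" in sync_payload["requestArgs"])  # → False
```

The returned dict can be sent as the JSON body of the POST shown in the curl example.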
Documentation Index
Fetch the complete documentation index at: https://docs.praxis-ai.com/llms.txt
Use this file to discover all available pages before exploring further.
Headers
- x-access-token: JWT token passed in the x-access-token header.

Body (Request payload for AI Q&A conversation)
- id: Client-generated request identifier (epoch timestamp). Example: 1750660464754
- requestArgs: Context arguments for AI conversation requests.
- inputs: User messages to send to the AI. Example: ["What is machine learning?"]
- outputs: Previous AI responses (for context continuity). Example: []

Response (200: AI response generated successfully)
Response from AI Q&A conversation.
- success: Whether the request completed successfully.
- streamingFailed: True if Socket.IO streaming failed (the response still contains the full output).
- An error message if streaming failed.
- outputs: AI-generated response messages.
- usage: Total tokens consumed (input + output).
- credits: Credits consumed for this request.
- The user's total credits consumed.
- The user's remaining credit balance.
- query_duration_ms: Total processing time in milliseconds.
- model: AI model identifier used for generation. Example: "gpt-4o"