Processes conversational AI requests with full context awareness.
Streaming: to receive the response in real time, include a valid socketId in requestArgs; the AI streams chunks via Socket.IO to the RECEIVE_STREAM event.
Without streaming: if socketId is omitted, the full response is returned synchronously.
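The streaming/non-streaming choice above comes down to whether requestArgs carries a socketId. A minimal sketch of building such a request body, assuming field names (messages, requestId, requestArgs, socketId, previousMessages) that mirror the descriptions in this document but are not a confirmed schema:

```javascript
// Hypothetical helper: assembles the Q&A request payload described above.
// Field names are assumptions based on this document, not a verified API schema.
function buildQaRequest(messages, { socketId, previousMessages = [] } = {}) {
  const body = {
    messages,                       // user messages to send to the AI
    requestId: Date.now(),          // client-generated epoch-timestamp identifier
    requestArgs: { previousMessages },
  };
  if (socketId) {
    // Streaming mode: the server pushes chunks to the RECEIVE_STREAM event.
    body.requestArgs.socketId = socketId;
  }
  // Omitting socketId means the full response is returned synchronously.
  return body;
}
```

For example, `buildQaRequest(["What is machine learning?"], { socketId: socket.id })` would request a streamed response, while dropping the second argument requests a synchronous one.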
JWT passed in the x-access-token header
Request payload for AI Q&A conversation
User messages to send to the AI
Example: ["What is machine learning?"]
Client-generated request identifier (epoch timestamp)
Example: 1750660464754
Context arguments for AI conversation requests
Previous AI responses (for context continuity)
Example: []
AI response generated successfully
Response from AI Q&A conversation
Whether the request completed successfully
True if Socket.IO streaming failed (response still contains full output)
Error message if streaming failed
AI-generated response messages
Total tokens consumed (input + output)
Credits consumed for this request
User's total credits consumed
User's remaining credit balance
Total processing time in milliseconds
AI model identifier used for generation
Example: "gpt-4o"
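Because the full output is still present even when streaming fails, a client can fall back to rendering the complete messages array. A sketch of that handling, assuming response field names (success, streamingFailed, streamError, messages) that mirror the descriptions above but are not a confirmed schema:

```javascript
// Hypothetical client-side handling of the response flags described above.
// Field names are assumptions drawn from this document, not a verified schema.
function resolveMessages(response) {
  if (!response.success) {
    // The request itself failed; surface the error.
    throw new Error(response.streamError || "AI request failed");
  }
  if (response.streamingFailed) {
    // Socket.IO streaming broke mid-request, but the response still
    // contains the full output, so fall back to the complete messages.
    console.warn("Streaming failed, using full response:", response.streamError);
  }
  return response.messages;
}
```

The design point is that streamingFailed is a degraded-delivery signal, not a request failure: only success=false should be treated as an error.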