roboto.ai.chat#
Package Contents#
- type roboto.ai.chat.AgentContent = AgentTextContent | AgentToolUseContent | AgentToolResultContent | AgentErrorContent#
Type alias for all possible content types within agent messages.
- class roboto.ai.chat.AgentContentType#
Bases:
roboto.compat.StrEnum
Enumeration of different types of content within agent messages.
Defines the various content types that can be included in agent messages.
- ERROR = 'error'#
Error information when message generation fails.
- TEXT = 'text'#
Plain text content from users or AI responses.
- TOOL_RESULT = 'tool_result'#
Results returned from tool executions.
- TOOL_USE = 'tool_use'#
Tool invocation requests from the AI assistant.
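A typical consumer switches on the `content_type` value when rendering a message. The sketch below illustrates that dispatch using plain dicts as hypothetical stand-ins for the SDK's pydantic content models; only the string values mirror `AgentContentType`.

```python
# Sketch: rendering content blocks by their content_type value.
# The dicts are hypothetical stand-ins for the SDK's pydantic models.

def render_block(block: dict) -> str:
    """Format one content block for display based on its type."""
    kind = block["content_type"]
    if kind == "text":
        return block["text"]
    if kind == "tool_use":
        return f"[tool call: {block['tool_name']}]"
    if kind == "tool_result":
        return f"[tool result: {block['tool_name']} -> {block['status']}]"
    if kind == "error":
        return f"[error: {block['error_message']}]"
    return "[unknown content]"

blocks = [
    {"content_type": "text", "text": "Looking that up."},
    {"content_type": "tool_use", "tool_name": "query_datasets"},
    {"content_type": "tool_result", "tool_name": "query_datasets", "status": "success"},
]
for b in blocks:
    print(render_block(b))
```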
- class roboto.ai.chat.AgentErrorContent(/, **data)#
Bases:
pydantic.BaseModel
Error content within an agent message.
Used when message generation fails due to an error or is cancelled by the user.
- Parameters:
data (Any)
- content_type: Literal[AgentContentType.ERROR]#
- error_code: str | None = None#
Optional error code for programmatic handling.
- error_message: str#
User-friendly error message describing what went wrong.
- type roboto.ai.chat.AgentEvent = Union[AgentStartTextEvent, AgentTextDeltaEvent, AgentTextEndEvent, AgentToolUseEvent, AgentToolResultEvent]#
Type alias for the streaming events emitted while an agent message is generated.
- class roboto.ai.chat.AgentMessage(/, **data)#
Bases:
pydantic.BaseModel
A single message within an agent session.
Represents one message in the conversation, containing the sender role, content blocks, and generation status. Messages can contain multiple content blocks of different types (text, tool use, tool results).
- Parameters:
data (Any)
- content: list[AgentContent]#
List of content blocks that make up this message.
- created: datetime.datetime = None#
Timestamp when this message was created.
- is_complete()#
Check if message generation is complete.
- Returns:
True if the message status is COMPLETED, False otherwise.
- Return type:
bool
- is_unsuccessful()#
Check if message generation failed or was cancelled.
- Returns:
True if the message status is FAILED or CANCELLED, False otherwise.
- Return type:
bool
- status: AgentMessageStatus#
Current generation status of this message.
- classmethod text(text, role=AgentRole.USER)#
Create a simple text message.
Convenience method for creating a message containing only text content.
- Parameters:
text (str) – The text content for the message.
role (AgentRole) – The role of the message sender. Defaults to USER.
- Returns:
AgentMessage instance containing the text content.
- Return type:
AgentMessage
- class roboto.ai.chat.AgentMessageStatus#
Bases:
roboto.compat.StrEnum
Enumeration of possible message generation states.
Tracks the lifecycle of message generation from initiation to completion.
- CANCELLED = 'cancelled'#
Message generation was cancelled by the user.
- COMPLETED = 'completed'#
Message generation has finished and content is complete.
- FAILED = 'failed'#
Message generation failed due to an error.
- GENERATING = 'generating'#
Message content is currently being generated.
- NOT_STARTED = 'not_started'#
Message has been queued but generation has not begun.
- is_terminal()#
Check if the message generation is in a terminal state.
- Returns:
True if the message is in a terminal state, False otherwise.
- Return type:
bool
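The terminal-state check reduces to membership in the three states from which generation cannot progress. A minimal sketch, using a stdlib `Enum` as a hypothetical stand-in for the `roboto.compat.StrEnum`-based class, with the documented member values:

```python
from enum import Enum

# Hypothetical stand-in for AgentMessageStatus, mirroring the
# documented values.
class MessageStatus(str, Enum):
    NOT_STARTED = "not_started"
    GENERATING = "generating"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELLED = "cancelled"

    def is_terminal(self) -> bool:
        # Terminal means generation will not progress further:
        # it completed, failed, or was cancelled.
        return self in {
            MessageStatus.COMPLETED,
            MessageStatus.FAILED,
            MessageStatus.CANCELLED,
        }
```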
- class roboto.ai.chat.AgentRole#
Bases:
roboto.compat.StrEnum
Enumeration of possible roles in an agent session.
Defines the different participants that can send messages in a session.
- ASSISTANT = 'assistant'#
AI agent responding to user queries and requests.
- ROBOTO = 'roboto'#
Roboto system providing tool results and system information.
- USER = 'user'#
Human user sending messages to the agent.
- class roboto.ai.chat.AgentSession(/, **data)#
Bases:
pydantic.BaseModel
Complete record of an agent session.
Contains all the persistent data for a session including metadata, message history, and synchronization state.
- Parameters:
data (Any)
- property chat_id: str#
Backwards-compatible alias for session_id; serialized as chat_id in API responses.
- Return type:
str
- continuation_token: str#
Token used for incremental updates and synchronization.
- created: datetime.datetime#
Timestamp when this agent session was created.
- created_by: str#
User ID of the person who created this agent session.
- messages: list[AgentMessage] = None#
Complete list of messages in the conversation.
- model_profile: str | None = None#
Model profile used for this agent session (e.g., ‘standard’, ‘advanced’).
- org_id: str#
Organization ID that owns this agent session.
- session_id: str = None#
Unique identifier for this agent session.
- status: AgentSessionStatus#
Current status of this agent session.
- title: str | None = None#
Title of this agent session.
- class roboto.ai.chat.AgentSessionDelta(/, **data)#
Bases:
pydantic.BaseModel
Incremental update to an agent session.
Contains only the changes since the last synchronization, used for efficient real-time updates without transferring the entire session history.
- Parameters:
data (Any)
- continuation_token: str#
Updated token for the next incremental synchronization.
- messages_by_idx: dict[int, AgentMessage]#
New or updated messages indexed by their position in the conversation.
- status: AgentSessionStatus | None = None#
Updated status of the agent session.
- title: str | None = None#
Updated title of the agent session.
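Applying a delta amounts to inserting or overwriting messages at the positions given by `messages_by_idx`. A minimal sketch, using plain strings as stand-ins for `AgentMessage` objects (the merge logic, not the message type, is the point):

```python
# Sketch: merging an incremental delta into a locally cached
# message list, as a client synchronizing via AgentSessionDelta might.

def apply_delta(messages: list, messages_by_idx: dict) -> list:
    """Insert or overwrite messages at the indicated positions."""
    for idx, msg in sorted(messages_by_idx.items()):
        if idx < len(messages):
            messages[idx] = msg      # update an in-progress message
        elif idx == len(messages):
            messages.append(msg)     # append the next new message
        else:
            raise ValueError(f"gap before index {idx}")
    return messages

history = ["hello", "partial reply"]
updated = apply_delta(history, {1: "full reply", 2: "follow-up"})
print(updated)  # ['hello', 'full reply', 'follow-up']
```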
- class roboto.ai.chat.AgentSessionStatus#
Bases:
roboto.compat.StrEnum
Enumeration of possible agent session states.
Tracks the overall status of an agent session from creation to termination.
- CLIENT_TOOL_TURN = 'client_tool_turn'#
Client must execute pending tool uses and submit results.
- NOT_STARTED = 'not_started'#
Session has been created but no messages have been sent.
- ROBOTO_TURN = 'roboto_turn'#
Roboto is generating a message.
- USER_TURN = 'user_turn'#
User has the turn to send a message.
- class roboto.ai.chat.AgentStartTextEvent(/, **data)#
Bases:
pydantic.BaseModel
Signals the beginning of text generation in a chat response.
- Parameters:
data (Any)
- class roboto.ai.chat.AgentTextContent(/, **data)#
Bases:
pydantic.BaseModel
Text content within an agent message.
- Parameters:
data (Any)
- text: str#
The actual text content of the message.
- class roboto.ai.chat.AgentTextDeltaEvent(/, **data)#
Bases:
pydantic.BaseModel
Contains incremental text content as the AI generates its response.
- Parameters:
data (Any)
- text: str#
Text fragment from the streaming response.
- class roboto.ai.chat.AgentTextEndEvent(/, **data)#
Bases:
pydantic.BaseModel
Signals the completion of text generation in a chat response.
- Parameters:
data (Any)
- class roboto.ai.chat.AgentToolResultContent(/, **data)#
Bases:
pydantic.BaseModel
Tool execution result content within an agent message.
- Parameters:
data (Any)
- content_type: Literal[AgentContentType.TOOL_RESULT]#
- raw_response: dict[str, Any] | None = None#
Raw, unparsed response payload from tool execution.
- runtime_ms: int#
Wall-clock execution time of the tool in milliseconds.
- status: str#
Outcome of the tool execution (e.g. ‘success’, ‘error’).
- tool_name: str#
Name of the tool that was executed.
- tool_use_id: str#
Identifier of the tool invocation this result corresponds to.
- class roboto.ai.chat.AgentToolResultEvent(/, **data)#
Bases:
pydantic.BaseModel
Contains the result of a tool invocation.
- Parameters:
data (Any)
- name: str#
Name of the tool that was invoked.
- success: bool#
Whether the tool invocation succeeded.
- tool_use_id: str#
Unique identifier for this tool invocation.
- class roboto.ai.chat.AgentToolUseContent(/, **data)#
Bases:
pydantic.BaseModel
Tool usage request content within an agent message.
- Parameters:
data (Any)
- content_type: Literal[AgentContentType.TOOL_USE]#
- input: dict[str, Any] | None = None#
Parsed tool input parameters chosen by the LLM (provider-agnostic).
- raw_request: dict[str, Any] | None = None#
Raw, unparsed request payload for this tool invocation.
- tool_name: str#
Name of the tool the LLM is requesting to invoke.
- tool_use_id: str#
Unique identifier for this tool invocation, used to correlate with its result.
- class roboto.ai.chat.AgentToolUseEvent(/, **data)#
Bases:
pydantic.BaseModel
Signals that the AI is invoking a tool to gather information.
- Parameters:
data (Any)
- name: str#
Name of the tool being invoked.
- tool_use_id: str#
Unique identifier for this tool invocation.
- class roboto.ai.chat.Chat(record, roboto_client=None)#
An interactive AI chat session within the Roboto platform.
A Chat represents a conversational interface with Roboto’s AI assistant, enabling users to ask questions, request data analysis, and interact with their robotics data through natural language. Chat sessions maintain conversation history and support streaming responses for real-time interaction.
Chat sessions are stateful and persistent, allowing users to continue conversations across multiple interactions. Each chat maintains a sequence of messages between the user, AI assistant, and Roboto system, with support for tool usage and structured responses.
The Chat class provides methods for starting new conversations, sending messages, streaming responses, and managing conversation state. It integrates with Roboto’s broader ecosystem to provide contextual assistance with data analysis, platform navigation, and robotics workflows.
- Parameters:
record (roboto.ai.chat.record.AgentSession)
roboto_client (Optional[roboto.http.RobotoClient])
- await_user_turn(tick=0.2, timeout=None)#
Wait for the conversation to reach a state where user input is expected.
Polls the chat session until the AI assistant has finished generating its response and is ready for the next user message. This method is useful for synchronous interaction patterns where you need to wait for the assistant to complete before proceeding.
- Parameters:
tick (float) – Polling interval in seconds between status checks.
timeout (Optional[float]) – Maximum time to wait in seconds. If None, waits indefinitely.
- Returns:
Self for method chaining.
- Raises:
TimeoutError – If the timeout is reached before the user turn is ready.
- Return type:
Chat
Examples
Wait for the assistant to finish responding:
>>> chat = Chat.start("Analyze my latest dataset")
>>> chat.await_user_turn(timeout=30.0)
>>> chat.send_text("What were the key findings?")
Wait for the assistant to finish responding, as a one-liner:
>>> chat = Chat.start("Analyze my latest dataset").await_user_turn()
>>> chat.send_text("What were the key findings?").await_user_turn()
Use in a synchronous conversation loop:
>>> chat = Chat.start("Hello")
>>> while True:
...     chat.await_user_turn()
...     user_input = input("You: ")
...     if user_input.lower() == "quit":
...         break
...     chat.send_text(user_input)
- property chat_id: str#
Unique identifier for this chat session.
- Return type:
str
- classmethod from_id(chat_id, roboto_client=None, load_messages=True)#
Retrieve an existing chat session by its unique identifier.
Loads a previously created chat session from the Roboto platform, allowing users to resume conversations and access message history. This method is useful for continuing interrupted conversations or accessing chat sessions from different contexts.
- Parameters:
chat_id (str) – Unique identifier for the chat session.
roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.
load_messages (bool) – Whether to load the chat’s messages. If False, the chat’s messages will be empty.
- Returns:
Chat instance representing the existing chat session.
- Raises:
RobotoNotFoundException – If the chat session does not exist.
RobotoUnauthorizedException – If the caller lacks permission to access the chat.
- Return type:
Chat
Examples
Resume an existing chat session:
>>> chat = Chat.from_id("chat_abc123")
>>> print(f"Chat has {len(chat.messages)} messages")
Chat has 5 messages
Resume a chat and continue the conversation:
>>> chat = Chat.from_id("chat_abc123")
>>> chat.send_text("What was my previous question?")
>>> for text in chat.stream():
...     print(text, end="", flush=True)
- is_client_tool_turn()#
Check if the session is waiting for client-side tool execution.
Returns True when the assistant has completed a message ending with a tool-use block that the client must execute and submit results for.
- Returns:
True if the client must execute tools and submit results, False otherwise.
- Return type:
bool
- is_roboto_turn()#
Check if Roboto is currently generating a response.
- Returns:
True if Roboto is actively generating, False otherwise.
- Return type:
bool
- is_user_turn()#
Check if the conversation is ready for user input.
Determines whether the AI assistant has finished generating its response and is waiting for the next user message. This is true when the latest message is a completed text response from the assistant.
- Returns:
True if it’s the user’s turn to send a message, False otherwise.
- Return type:
bool
Examples
Check conversation state before sending a message:
>>> chat = Chat.start("Hello")
>>> if chat.is_user_turn():
...     chat.send_text("How are you?")
... else:
...     print("Assistant is still responding...")
Use in a polling loop (which you’d more typically use await_user_turn() for):
>>> chat = Chat.start("Analyze my data")
>>> while not chat.is_user_turn():
...     time.sleep(0.1)
>>> print("Assistant finished responding")
- property latest_message: roboto.ai.chat.record.AgentMessage | None#
The most recent message in the conversation, or None if no messages exist.
- Return type:
Optional[roboto.ai.chat.record.AgentMessage]
- property messages: list[roboto.ai.chat.record.AgentMessage]#
Complete list of messages in the conversation in chronological order.
- Return type:
list[roboto.ai.chat.record.AgentMessage]
- refresh()#
Update the chat session with the latest messages and status.
Fetches any new messages or updates from the server and updates the local chat state.
- Returns:
Self for method chaining.
- Return type:
Chat
Examples
Manually refresh chat state:
>>> chat = Chat.from_id("chat_abc123", load_messages=False)
>>> print(f"Chat has {len(chat.messages)} messages")
>>> chat.refresh()
>>> print(f"Chat now has {len(chat.messages)} messages")
- send(message, context=None)#
Send a structured message to the chat session.
Sends an AgentMessage object to the conversation. The message will be processed by the AI assistant, and a response will be generated.
- Parameters:
message (roboto.ai.chat.record.AgentMessage) – AgentMessage object containing the message content and metadata.
context (Optional[roboto.ai.core.RobotoLLMContext]) – Optional context to include with the message.
- Returns:
Self for method chaining.
- Raises:
RobotoInvalidRequestException – If the message format is invalid.
RobotoUnauthorizedException – If the caller lacks permission to send messages.
- Return type:
Chat
Examples
Send a structured message:
>>> from roboto.ai.chat import AgentMessage, AgentRole
>>> message = AgentMessage.text("What's in my latest dataset?", AgentRole.USER)
>>> chat.send(message)
>>> for text in chat.stream():
...     print(text, end="", flush=True)
- send_text(text, context=None)#
Send a text message to the chat session.
Convenience method for sending a simple text message without needing to construct an AgentMessage object. The text will be sent as a user message and processed by the AI assistant.
- Parameters:
text (str) – Text content to send to the assistant.
context (Optional[roboto.ai.core.RobotoLLMContext]) – Optional context to include with the message.
- Returns:
Self for method chaining.
- Raises:
RobotoInvalidRequestException – If the text is empty or invalid.
RobotoUnauthorizedException – If the caller lacks permission to send messages.
- Return type:
Chat
Examples
Send a simple text message:
>>> chat = Chat.start("Hello")
>>> chat.await_user_turn()
>>> chat.send_text("What datasets do I have access to?")
>>> for response in chat.stream():
...     print(response, end="", flush=True)
- classmethod start(message, context=None, system_prompt=None, model_profile=None, org_id=None, roboto_client=None)#
Start a new chat session with an initial message.
Creates a new chat session and sends the initial message to begin the conversation. The AI assistant will process the message and generate a response, which can be retrieved using streaming or polling methods, or
await_user_turn().
- Parameters:
message (Union[str, roboto.ai.chat.record.AgentMessage, collections.abc.Sequence[roboto.ai.chat.record.AgentMessage]]) – Initial message to start the conversation. Can be a simple text string, a structured AgentMessage object, or a sequence of AgentMessage objects for multi-turn initialization.
context (Optional[roboto.ai.core.RobotoLLMContext]) – Optional context to scope the AI assistant’s knowledge for this conversation (e.g., specific datasets or resources).
system_prompt (Optional[str]) – Optional system prompt to customize the AI assistant’s behavior and context for this conversation.
model_profile (Optional[str]) – Optional model profile ID (e.g. “standard”, “advanced”). Determines which underlying model is used. Defaults to the deployment’s default profile.
org_id (Optional[str]) – Organization ID to create the chat in. If None, uses the caller’s default organization.
roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.
- Returns:
Chat instance representing the newly created chat session.
- Raises:
RobotoInvalidRequestException – If the message format is invalid.
RobotoUnauthorizedException – If the caller lacks permission to create chats.
- Return type:
Chat
Examples
Start a simple chat with a text message:
>>> chat = Chat.start("What datasets do I have access to?")
>>> for text in chat.stream():
...     print(text, end="", flush=True)
- property status: roboto.ai.chat.record.AgentSessionStatus#
Current status of the chat session.
- Return type:
roboto.ai.chat.record.AgentSessionStatus
- stream(tick=0.2, timeout=None)#
Stream the AI assistant’s response in real-time.
Continuously polls the chat session and yields text content as it becomes available from the AI assistant. This provides a streaming experience, letting you consume partial content while a potentially long-running response is still being generated.
The generator will continue yielding text until the assistant completes its response and the conversation reaches a user turn state.
- Parameters:
tick (float) – Polling interval in seconds between checks for new content.
timeout (Optional[float]) – Maximum time to wait in seconds. If None, waits indefinitely.
- Yields:
Text content from the AI assistant’s response as it becomes available.
- Raises:
TimeoutError – If the timeout is reached before the response completes.
- Return type:
collections.abc.Generator[str, None, None]
Examples
Stream a response and print it in real-time:
>>> chat = Chat.start("Explain machine learning")
>>> for text in chat.stream():
...     print(text, end="", flush=True)
>>> print()  # New line after streaming completes
Stream with timeout and error handling:
>>> try:
...     for text in chat.stream(timeout=30.0):
...         print(text, end="", flush=True)
... except TimeoutError:
...     print("Response timed out")
- stream_events(tick=0.2, timeout=None)#
Stream events from the chat session in real-time.
Continuously polls the chat session and yields AgentEvent objects as they become available, letting you observe partial progress, including tool activity, while a potentially long-running response is still being generated.
- Parameters:
tick (float) – Polling interval in seconds between checks for new content.
timeout (Optional[float]) – Maximum time to wait in seconds. If None, waits indefinitely.
- Yields:
AgentEvent objects (AgentStartTextEvent, AgentTextDeltaEvent, AgentTextEndEvent, AgentToolUseEvent, AgentToolResultEvent) as they become available.
- Return type:
collections.abc.Generator[roboto.ai.chat.event.AgentEvent, None, None]
Examples
Stream events and handle them by type:
>>> chat = Chat.start("Hello")
>>> for event in chat.stream_events():
...     if isinstance(event, AgentTextDeltaEvent):
...         print(event.text, end="", flush=True)
- property transcript: str#
Human-readable transcript of the entire conversation.
Returns a formatted string containing all messages in the conversation, with role indicators and message content clearly separated.
- Return type:
str
- roboto.ai.chat.ChatContent#
- roboto.ai.chat.ChatContentType#
- roboto.ai.chat.ChatErrorContent#
- roboto.ai.chat.ChatEvent#
- roboto.ai.chat.ChatMessage#
- roboto.ai.chat.ChatMessageStatus#
- roboto.ai.chat.ChatRecord#
- roboto.ai.chat.ChatRecordDelta#
- roboto.ai.chat.ChatRole#
- roboto.ai.chat.ChatStartTextEvent#
- roboto.ai.chat.ChatStatus#
- roboto.ai.chat.ChatTextContent#
- roboto.ai.chat.ChatTextDeltaEvent#
- roboto.ai.chat.ChatTextEndEvent#
- class roboto.ai.chat.ChatToolDetailResponse(/, **data)#
Bases:
pydantic.BaseModel
Unsanitized tool request and response details for a chat tool invocation.
- Parameters:
data (Any)
- tool_result: roboto.ai.core.record.AgentToolResultContent#
- roboto.ai.chat.ChatToolResultContent#
- roboto.ai.chat.ChatToolResultEvent#
- roboto.ai.chat.ChatToolUseContent#
- roboto.ai.chat.ChatToolUseEvent#
- class roboto.ai.chat.ClientToolResult(/, **data)#
Bases:
pydantic.BaseModel
Result of executing a client-side tool.
- Parameters:
data (Any)
- output: dict[str, Any] | None = None#
Structured output returned by the tool.
- runtime_ms: int#
Wall-clock execution time of the tool in milliseconds.
- status: ClientToolResultStatus#
Outcome of the tool execution.
- tool_name: str#
Name of the tool that was executed.
- tool_use_id: str#
Identifier of the tool invocation this result corresponds to.
- class roboto.ai.chat.ClientToolResultStatus#
Bases:
roboto.compat.StrEnum
Outcome of executing a client-side tool.
- DECLINED = 'declined'#
- ERROR = 'error'#
- SUCCESS = 'success'#
- class roboto.ai.chat.ClientToolSpec(/, **data)#
Bases:
pydantic.BaseModel
Declarative specification for a client-side tool.
Unlike AgentTool (which is an ABC with a __call__ method for server-side execution), ClientToolSpec is a plain data model. The backend includes it in the LLM’s tool list but never executes it — the client is responsible for execution and submitting the result.
- Parameters:
data (Any)
- description: str#
- input_schema: dict[str, Any]#
- name: str#
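A tool spec pairs a name and description with a JSON Schema describing its input. The sketch below shows the shape of such a declaration as plain data; the tool name, description, and parameters are hypothetical, and the `input_schema` follows JSON Schema conventions:

```python
# Sketch: the shape of a client-side tool declaration. Field names
# mirror ClientToolSpec; the tool itself is hypothetical.
get_local_file_spec = {
    "name": "get_local_file",
    "description": "Read a file from the client's local disk.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Absolute file path."},
        },
        "required": ["path"],
    },
}
```

The backend advertises such a spec in the LLM's tool list but never runs it; when the session reaches CLIENT_TOOL_TURN, the client executes the tool locally and submits a ClientToolResult.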
- class roboto.ai.chat.SendMessageRequest(/, **data)#
Bases:
pydantic.BaseModel
Request payload for sending a message to a chat session.
Contains the message content and optional context for the AI assistant.
- Parameters:
data (Any)
- client_tools: list[roboto.ai.core.record.ClientToolSpec] | None = None#
Optional client-side tools available for this invocation.
- context: roboto.ai.core.RobotoLLMContext | None = None#
Optional context to include with the message.
- message: roboto.ai.core.record.AgentMessage#
Message content to send.
- class roboto.ai.chat.StartChatRequest(/, **data)#
Bases:
pydantic.BaseModel
Request payload for starting a new chat session.
Contains the initial messages and configuration for creating a new chat conversation.
- Parameters:
data (Any)
- client_tools: list[roboto.ai.core.record.ClientToolSpec] | None = None#
Optional client-side tools available for this invocation.
- context: roboto.ai.core.RobotoLLMContext | None = None#
Optional context to include with the message.
- messages: list[roboto.ai.core.record.AgentMessage]#
Initial messages to start the conversation with.
- model_profile: str | None = None#
Optional model profile ID for the session (e.g. ‘standard’, ‘advanced’).
- system_prompt: str | None = None#
Optional system prompt to customize AI assistant behavior.
- class roboto.ai.chat.SubmitToolResultsRequest(/, **data)#
Bases:
pydantic.BaseModel
Request payload for submitting client-side tool execution results.
- Parameters:
data (Any)
- client_tools: list[roboto.ai.core.record.ClientToolSpec] | None = None#
Optional updated client-side tools for the next invocation.
- tool_results: list[ClientToolResult]#
Tool results from client-side execution.
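Assembled as plain data, a submission carries one result per pending tool use, each correlated back by tool_use_id. The sketch below shows that payload shape with field names mirroring SubmitToolResultsRequest and ClientToolResult; the id, tool name, and output are hypothetical:

```python
# Sketch: payload shape for submitting client-side tool results.
# All identifiers and values here are illustrative.
submit_payload = {
    "tool_results": [
        {
            "tool_use_id": "toolu_123",     # hypothetical id from the tool_use block
            "tool_name": "get_local_file",  # hypothetical client-side tool
            "status": "success",            # a ClientToolResultStatus value
            "output": {"contents": "hello world"},
            "runtime_ms": 12,
        }
    ],
}
```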