roboto.ai#

Package Contents#

class roboto.ai.AISummary(/, **data)#

Bases: pydantic.BaseModel

A wire-transmissible representation of an AI summary.

Parameters:

data (Any)

created: datetime.datetime#

The time at which the summary was created.

status: AISummaryStatus#

The status of the summary.

summary_id: str#

The ID of the summary.

text: str#

The text of the summary.
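Because AISummary is a pydantic model, a summary arrives over the wire as a JSON object whose keys mirror the attributes above. The following stdlib-only sketch shows that wire shape being parsed; the payload values are illustrative, not real API output, and `status` values depend on the AISummaryStatus enum:

```python
import json
from datetime import datetime

# Illustrative wire payload mirroring AISummary's fields; values are made up.
payload = json.dumps({
    "created": "2024-01-01T00:00:00+00:00",
    "status": "complete",
    "summary_id": "summary_abc123",
    "text": "The dataset contains 12 drive logs.",
})

record = json.loads(payload)
# The created field is an ISO 8601 timestamp.
created = datetime.fromisoformat(record["created"])
print(record["summary_id"], created.year)
```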

class roboto.ai.Chat(record, roboto_client=None)#

An interactive AI chat session within the Roboto platform.

A Chat represents a conversational interface with Roboto’s AI assistant, enabling users to ask questions, request data analysis, and interact with their robotics data through natural language. Chat sessions maintain conversation history and support streaming responses for real-time interaction.

Chat sessions are stateful and persistent, allowing users to continue conversations across multiple interactions. Each chat maintains a sequence of messages between the user, AI assistant, and Roboto system, with support for tool usage and structured responses.

The Chat class provides methods for starting new conversations, sending messages, streaming responses, and managing conversation state. It integrates with Roboto’s broader ecosystem to provide contextual assistance with data analysis, platform navigation, and robotics workflows.

Parameters:
  • record – Chat record containing the conversation's state and messages.

  • roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.

await_user_turn(tick=0.2, timeout=None)#

Wait for the conversation to reach a state where user input is expected.

Polls the chat session until the AI assistant has finished generating its response and is ready for the next user message. This method is useful for synchronous interaction patterns where you need to wait for the assistant to complete before proceeding.

Parameters:
  • tick (float) – Polling interval in seconds between status checks.

  • timeout (Optional[float]) – Maximum time to wait in seconds. If None, waits indefinitely.

Returns:

Self for method chaining.

Raises:

TimeoutError – If the timeout is reached before the user turn is ready.

Return type:

Chat

Examples

Wait for the assistant to finish responding:

>>> chat = Chat.start("Analyze my latest dataset")
>>> chat.await_user_turn(timeout=30.0)
>>> chat.send_text("What were the key findings?")

Wait for the assistant to finish responding, as a one-liner:

>>> chat = Chat.start("Analyze my latest dataset").await_user_turn()
>>> chat.send_text("What were the key findings?").await_user_turn()

Use in a synchronous conversation loop:

>>> chat = Chat.start("Hello")
>>> while True:
...     chat.await_user_turn()
...     user_input = input("You: ")
...     if user_input.lower() == "quit":
...         break
...     chat.send_text(user_input)
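The tick/timeout semantics of await_user_turn() can be sketched with a stdlib-only polling helper. This is not the Roboto implementation; the `is_ready` predicate is a hypothetical stand-in for `chat.is_user_turn()`:

```python
import time

def await_condition(is_ready, tick=0.2, timeout=None):
    """Poll is_ready() every `tick` seconds; raise TimeoutError once `timeout` elapses."""
    deadline = None if timeout is None else time.monotonic() + timeout
    while not is_ready():
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError("condition not met before timeout")
        time.sleep(tick)

# Simulate an assistant that finishes responding after three polls.
state = {"polls": 0}
def is_ready():
    state["polls"] += 1
    return state["polls"] >= 3

await_condition(is_ready, tick=0.01, timeout=5.0)
print(state["polls"])
```

With `timeout=None` the helper, like await_user_turn(), polls indefinitely.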
property chat_id: str#

Unique identifier for this chat session.

Return type:

str

classmethod from_id(chat_id, roboto_client=None, load_messages=True)#

Retrieve an existing chat session by its unique identifier.

Loads a previously created chat session from the Roboto platform, allowing users to resume conversations and access message history. This method is useful for continuing interrupted conversations or accessing chat sessions from different contexts.

Parameters:
  • chat_id (str) – Unique identifier for the chat session.

  • roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.

  • load_messages (bool) – Whether to load the chat’s messages. If False, the chat’s messages will be empty.

Returns:

Chat instance representing the existing chat session.

Return type:

Chat

Examples

Resume an existing chat session:

>>> chat = Chat.from_id("chat_abc123")
>>> print(f"Chat has {len(chat.messages)} messages")
Chat has 5 messages

Resume a chat and continue the conversation:

>>> chat = Chat.from_id("chat_abc123")
>>> chat.send_text("What was my previous question?")
>>> for text in chat.stream():
...     print(text, end="", flush=True)
is_user_turn()#

Check if the conversation is ready for user input.

Determines whether the AI assistant has finished generating its response and is waiting for the next user message. This is true when the latest message is a completed text response from the assistant.

Returns:

True if it’s the user’s turn to send a message, False otherwise.

Return type:

bool

Examples

Check conversation state before sending a message:

>>> chat = Chat.start("Hello")
>>> if chat.is_user_turn():
...     chat.send_text("How are you?")
... else:
...     print("Assistant is still responding...")

Use in a polling loop (a case more typically handled by await_user_turn()):

>>> chat = Chat.start("Analyze my data")
>>> while not chat.is_user_turn():
...     time.sleep(0.1)
>>> print("Assistant finished responding")
property latest_message: roboto.ai.chat.record.ChatMessage | None#

The most recent message in the conversation, or None if no messages exist.

Return type:

Optional[roboto.ai.chat.record.ChatMessage]

property messages: list[roboto.ai.chat.record.ChatMessage]#

Complete list of messages in the conversation in chronological order.

Return type:

list[roboto.ai.chat.record.ChatMessage]

refresh()#

Update the chat session with the latest messages and status.

Fetches any new messages or updates from the server and updates the local chat state.

Returns:

Self for method chaining.

Return type:

Chat

Examples

Manually refresh chat state:

>>> chat = Chat.from_id("chat_abc123", load_messages=False)
>>> print(f"Chat has {len(chat.messages)} messages")
>>> chat.refresh()
>>> print(f"Chat now has {len(chat.messages)} messages")
send(message, context=None)#

Send a structured message to the chat session.

Sends a ChatMessage object to the conversation. The message will be processed by the AI assistant, and a response will be generated.

Parameters:
  • message (roboto.ai.chat.record.ChatMessage) – ChatMessage object to send to the conversation.

  • context (Optional[roboto.ai.core.RobotoLLMContext]) – Optional context to include with the message.

Returns:

Self for method chaining.

Return type:

Chat

Examples

Send a structured message:

>>> from roboto.ai.chat import ChatMessage, ChatRole
>>> message = ChatMessage.text("What's in my latest dataset?", ChatRole.USER)
>>> chat.send(message)
>>> for text in chat.stream():
...     print(text, end="", flush=True)
send_text(text, context=None)#

Send a text message to the chat session.

Convenience method for sending a simple text message without needing to construct a ChatMessage object. The text will be sent as a user message and processed by the AI assistant.

Parameters:
  • text (str) – Text content to send to the assistant.

  • context (Optional[roboto.ai.core.RobotoLLMContext]) – Optional context to include with the message.

Returns:

Self for method chaining.

Return type:

Chat

Examples

Send a simple text message:

>>> chat = Chat.start("Hello")
>>> chat.await_user_turn()
>>> chat.send_text("What datasets do I have access to?")
>>> for response in chat.stream():
...     print(response, end="", flush=True)
classmethod start(message, context=None, system_prompt=None, org_id=None, roboto_client=None)#

Start a new chat session with an initial message.

Creates a new chat session and sends the initial message to begin the conversation. The AI assistant will process the message and generate a response, which can be retrieved using streaming or polling methods, or await_user_turn().

Parameters:
  • message (Union[str, roboto.ai.chat.record.ChatMessage, collections.abc.Sequence[roboto.ai.chat.record.ChatMessage]]) – Initial message to start the conversation. Can be a simple text string, a structured ChatMessage object, or a sequence of ChatMessage objects for multi-turn initialization.

  • system_prompt (Optional[str]) – Optional system prompt to customize the AI assistant’s behavior and context for this conversation.

  • org_id (Optional[str]) – Organization ID to create the chat in. If None, uses the caller’s default organization.

  • roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.

  • context (Optional[roboto.ai.core.RobotoLLMContext]) – Optional context to include with the initial message.

Returns:

Chat instance representing the newly created chat session.

Return type:

Chat

Examples

Start a simple chat with a text message:

>>> chat = Chat.start("What datasets do I have access to?")
>>> for text in chat.stream():
...     print(text, end="", flush=True)
property status: roboto.ai.chat.record.ChatStatus#

Current status of the chat session.

Return type:

roboto.ai.chat.record.ChatStatus

stream(tick=0.2, timeout=None)#

Stream the AI assistant’s response in real-time.

Continuously polls the chat session and yields text content as it becomes available from the AI assistant. This provides a real-time streaming experience, letting you receive partial content as it is generated during potentially long-running conversational AI processing.

The generator will continue yielding text until the assistant completes its response and the conversation reaches a user turn state.

Parameters:
  • tick (float) – Polling interval in seconds between checks for new content.

  • timeout (Optional[float]) – Maximum time to wait in seconds. If None, waits indefinitely.

Yields:

Text content from the AI assistant’s response as it becomes available.

Raises:

TimeoutError – If the timeout is reached before the response completes.

Return type:

collections.abc.Generator[str, None, None]

Examples

Stream a response and print it in real-time:

>>> chat = Chat.start("Explain machine learning")
>>> for text in chat.stream():
...     print(text, end="", flush=True)
>>> print()  # New line after streaming completes

Stream with timeout and error handling:

>>> try:
...     for text in chat.stream(timeout=30.0):
...         print(text, end="", flush=True)
... except TimeoutError:
...     print("Response timed out")
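The streaming behavior can be approximated with a stdlib-only generator that polls a growing text buffer and yields only the newly appended portion each tick. This is a sketch of the polling pattern, not Roboto's implementation; `get_text` and `is_done` are hypothetical stand-ins for the chat's server-side state:

```python
import time
from collections.abc import Generator

def stream_text(get_text, is_done, tick=0.2, timeout=None) -> Generator[str, None, None]:
    """Yield newly appended text until is_done(); raise TimeoutError on timeout."""
    deadline = None if timeout is None else time.monotonic() + timeout
    seen = 0
    while True:
        text = get_text()
        if len(text) > seen:
            yield text[seen:]  # only the portion not yet emitted
            seen = len(text)
        if is_done():
            return
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError("response did not complete before timeout")
        time.sleep(tick)

# Simulate a response that arrives in chunks.
chunks = ["Machine ", "learning ", "is fun."]
buffer = []
def get_text():
    if len(buffer) < len(chunks):
        buffer.append(chunks[len(buffer)])
    return "".join(buffer)

result = "".join(stream_text(get_text, is_done=lambda: len(buffer) == len(chunks), tick=0.01))
print(result)
```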
stream_events(tick=0.2, timeout=None)#

Stream events from the chat session in real-time.

Continuously polls the chat session and yields ChatRecordDelta objects as they become available. This provides a real-time streaming experience, letting you receive partial content as it is generated during potentially long-running conversational AI processing.

Parameters:
  • tick (float) – Polling interval in seconds between checks for new content.

  • timeout (Optional[float]) – Maximum time to wait in seconds. If None, waits indefinitely.

Yields:

ChatRecordDelta objects containing new messages and updates as they become available.

Return type:

collections.abc.Generator[roboto.ai.chat.event.ChatEvent, None, None]

Examples

Stream events and print them in real-time:

>>> chat = Chat.start("Hello")
>>> for delta in chat.stream_events():
...     for idx in sorted(delta.messages_by_idx.keys()):
...         print(f"Message {idx}: {delta.messages_by_idx[idx]}")
property transcript: str#

Human-readable transcript of the entire conversation.

Returns a formatted string containing all messages in the conversation, with role indicators and message content clearly separated.

Return type:

str

class roboto.ai.PromptRequest(/, **data)#

Bases: pydantic.BaseModel

A generic request for a natural-language-powered endpoint that accepts a human-readable prompt.

Parameters:

data (Any)

prompt: str#

The prompt to send to the AI model.

class roboto.ai.SetSummaryRequest(/, **data)#

Bases: pydantic.BaseModel

A request to set the summary of an entity.

Parameters:

data (Any)

summary: str#

The summary to set.