roboto.domain.files.file#

Module Contents#

class roboto.domain.files.file.File(record, roboto_client=None)#

Represents a file within the Roboto platform.

Files are the fundamental data storage unit in Roboto. They can be uploaded to datasets, imported from external sources, or created as outputs from actions. Once in the platform, files can be tagged with metadata, post-processed by actions, added to collections, visualized in the web interface, and searched using the query system.

Files contain structured data that can be ingested into topics for analysis and visualization. Common file formats include ROS bags, MCAP files, ULOG files, CSV files, and many others. Each file has an associated ingestion status that tracks whether its data has been processed and made available for querying.

Files are versioned entities - each modification creates a new version while preserving the history. Files are associated with datasets and inherit access permissions from their parent dataset.

The File class provides methods for downloading, updating metadata, managing tags, accessing topics, and performing other file operations. It serves as the primary interface for file manipulation in the Roboto SDK.

Parameters:
  • record (roboto.domain.files.record.FileRecord) – File record containing the file's attributes and state.

  • roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.
static construct_s3_obj_arn(bucket, key, partition='aws')#

Construct an S3 object ARN from bucket and key components.

Parameters:
  • bucket (str) – S3 bucket name.

  • key (str) – S3 object key (path within the bucket).

  • partition (str) – AWS partition name, defaults to “aws”.

Returns:

Complete S3 object ARN string.

Return type:

str

Examples

>>> arn = File.construct_s3_obj_arn("my-bucket", "path/to/file.bag")
>>> print(arn)
arn:aws:s3:::my-bucket/path/to/file.bag
static construct_s3_obj_uri(bucket, key, version=None)#

Construct an S3 object URI from bucket, key, and optional version.

Parameters:
  • bucket (str) – S3 bucket name.

  • key (str) – S3 object key (path within the bucket).

  • version (Optional[str]) – Optional S3 object version ID.

Returns:

Complete S3 object URI string.

Return type:

str

Examples

>>> uri = File.construct_s3_obj_uri("my-bucket", "path/to/file.bag")
>>> print(uri)
s3://my-bucket/path/to/file.bag
>>> versioned_uri = File.construct_s3_obj_uri("my-bucket", "path/to/file.bag", "abc123")
>>> print(versioned_uri)
s3://my-bucket/path/to/file.bag?versionId=abc123
property created: datetime.datetime#

Timestamp when this file was created.

Returns the UTC datetime when this file was first uploaded or created in the Roboto platform. This timestamp is immutable.

Return type:

datetime.datetime

property created_by: str#

Identifier of the user who created this file.

Returns the user ID or identifier of the person or service that originally uploaded or created this file in the Roboto platform.

Return type:

str

property dataset_id: str#

Identifier of the dataset that contains this file.

Returns the unique identifier of the dataset that this file belongs to. Files are always associated with exactly one dataset.

Return type:

str

delete()#

Delete this file from the Roboto platform.

Permanently removes the file and all its associated data, including topics and metadata. This operation cannot be undone.

For files that were imported from customer S3 buckets (read-only BYOB integrations), this method does not delete the file content from S3. It only removes the metadata and references within the Roboto platform.

Return type:

None

Examples

>>> file = File.from_id("file_abc123")
>>> file.delete()
>>> # File is now permanently deleted
property description: str | None#

Human-readable description of this file.

Returns the optional description text that provides details about the file’s contents, purpose, or context. Can be None if no description was provided.

Return type:

Optional[str]

download(local_path, credential_provider=None, progress_monitor_factory=NoopProgressMonitorFactory())#

Download this file to a local path.

Downloads the file content from cloud storage to the specified local path. The parent directories are created automatically if they don’t exist.

Parameters:
  • local_path (pathlib.Path) – Local filesystem path where the file should be saved.

  • credential_provider (Optional[roboto.domain.files.file_creds.CredentialProvider]) – Custom credentials for accessing the file storage. If None, uses default credentials for the file’s dataset.

  • progress_monitor_factory (roboto.domain.files.progress.ProgressMonitorFactory) – Factory for creating progress monitors to track download progress. Defaults to no progress monitoring.

Raises:
  • RobotoUnauthorizedException – Caller lacks permission to download the file.

  • FileNotFoundError – File content is not available in storage.

Examples

>>> import pathlib
>>> file = File.from_id("file_abc123")
>>> local_path = pathlib.Path("/tmp/downloaded_file.bag")
>>> file.download(local_path)
>>> print(f"Downloaded to {local_path}")
>>> # Download with progress monitoring
>>> from roboto.domain.files.progress import TqdmProgressMonitorFactory
>>> progress_factory = TqdmProgressMonitorFactory()
>>> file.download(local_path, progress_monitor_factory=progress_factory)
property file_id: str#

Unique identifier for this file.

Returns the globally unique identifier assigned to this file when it was created. This ID is immutable and used to reference the file across the Roboto platform.

Return type:

str

classmethod from_id(file_id, version_id=None, roboto_client=None)#

Create a File instance from a file ID.

Retrieves file information from the Roboto platform using the provided file ID and optionally a specific version.

Parameters:
  • file_id (str) – Unique identifier for the file.

  • version_id (Optional[int]) – Specific version of the file to retrieve. If None, gets the latest version.

  • roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.

Returns:

File instance representing the requested file.

Return type:

File

Examples

>>> file = File.from_id("file_abc123")
>>> print(file.relative_path)
data/sensor_logs.bag
>>> old_version = File.from_id("file_abc123", version_id=1)
>>> print(old_version.version)
1
classmethod from_path_and_dataset_id(file_path, dataset_id, version_id=None, roboto_client=None)#

Create a File instance from a file path and dataset ID.

Retrieves file information using the file’s relative path within a specific dataset. This is useful when you know the file’s location within a dataset but not its file ID.

Parameters:
  • file_path (Union[str, pathlib.Path]) – Relative path of the file within the dataset.

  • dataset_id (str) – ID of the dataset containing the file.

  • version_id (Optional[int]) – Specific version of the file to retrieve. If None, gets the latest version.

  • roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.

Returns:

File instance representing the requested file.

Return type:

File

Examples

>>> file = File.from_path_and_dataset_id("logs/session1.bag", "ds_abc123")
>>> print(file.file_id)
file_xyz789
>>> import pathlib
>>> file = File.from_path_and_dataset_id(pathlib.Path("data/sensors.csv"), "ds_abc123")
>>> print(file.relative_path)
data/sensors.csv
static generate_s3_client(credential_provider, tcp_keepalive=True)#

Generate a configured S3 client using Roboto credentials.

Creates an S3 client with refreshable credentials obtained from the provided credential provider. The client is configured with the appropriate region and connection settings.

Parameters:
  • credential_provider (roboto.domain.files.file_creds.CredentialProvider) – Function that returns AWS credentials for S3 access.

  • tcp_keepalive (bool) – Whether to enable TCP keepalive for the S3 connection.

Returns:

Configured boto3 S3 client instance.

Examples

>>> from roboto.domain.files.file_creds import FileCredentialsHelper
>>> helper = FileCredentialsHelper(roboto_client)
>>> cred_provider = helper.get_dataset_download_creds_provider("ds_123", "bucket")
>>> s3_client = File.generate_s3_client(cred_provider)
generate_summary()#

Generate a new AI-generated summary of this file. If a summary already exists, it will be overwritten. The results of this call are persisted and can be retrieved with get_summary().

Returns:

An AISummary object containing the summary text and the creation timestamp.

Return type:

roboto.ai.summary.AISummary

Example

>>> from roboto import File
>>> fl = File.from_id("fl_abc123")
>>> summary = fl.generate_summary()
>>> print(summary.text)
This file contains ...

get_signed_url(override_content_type=None, override_content_disposition=None)#

Generate a signed URL for direct access to this file.

Creates a time-limited URL that allows direct access to the file content without requiring Roboto authentication. Useful for sharing files or integrating with external systems.

Parameters:
  • override_content_type (Optional[str]) – Custom MIME type to set in the response headers.

  • override_content_disposition (Optional[str]) – Custom content disposition header value (e.g., “attachment; filename=myfile.bag”).

Returns:

Signed URL string that provides temporary access to the file.

Raises:

RobotoUnauthorizedException – Caller lacks permission to access the file.

Return type:

str

Examples

>>> file = File.from_id("file_abc123")
>>> url = file.get_signed_url()
>>> print(f"Direct access URL: {url}")
>>> # Force download with custom filename
>>> download_url = file.get_signed_url(
...     override_content_disposition="attachment; filename=data.bag"
... )
get_summary()#

Get the latest AI-generated summary of this file. If no summary exists, one will be generated, equivalent to a call to generate_summary().

After the first summary for a file is generated, it will be persisted and returned by this method until generate_summary() is explicitly called again. This applies even if the file or its topics/metadata change.

Returns:

An AISummary object containing the summary text and the creation timestamp.

Return type:

roboto.ai.summary.AISummary

Example

>>> from roboto import File
>>> fl = File.from_id("fl_abc123")
>>> summary = fl.get_summary()
>>> print(summary.text)
This file contains ...

get_summary_sync(timeout=60, poll_interval=2)#

Poll the summary endpoint until a summary’s status is COMPLETED, or raise an exception if the status is FAILED or the configurable timeout is reached.

This method will call get_summary() repeatedly until the summary reaches a terminal status. If no summary exists when this method is called, one will be generated automatically.

Parameters:
  • timeout (float) – The maximum amount of time, in seconds, to wait for the summary to complete. Defaults to 1 minute.

  • poll_interval (roboto.waiters.Interval) – The amount of time, in seconds, to wait between polling iterations. Defaults to 2 seconds.

Returns:

An AISummary object containing a full LLM summary of the file.

Return type:

roboto.ai.summary.AISummary

Example

>>> from roboto import File
>>> fl = File.from_id("fl_abc123")
>>> summary = fl.get_summary_sync(timeout=60)
>>> print(summary.text)
This file contains ...
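
The poll-until-terminal behavior described above can be sketched as a generic loop. The names here (`poll_until`, `check`) are illustrative and not part of the SDK:

```python
import time

def poll_until(check, timeout=60.0, poll_interval=2.0):
    # Call `check` until it returns a non-None result (a terminal state),
    # raising TimeoutError once `timeout` seconds have elapsed.
    deadline = time.monotonic() + timeout
    while True:
        result = check()
        if result is not None:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("summary did not complete in time")
        time.sleep(poll_interval)

# Simulate a summary that reaches COMPLETED on the third poll
attempts = {"n": 0}
def fake_check():
    attempts["n"] += 1
    return "COMPLETED" if attempts["n"] >= 3 else None

print(poll_until(fake_check, timeout=5.0, poll_interval=0.01))  # COMPLETED
```
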
get_topic(topic_name)#

Get a specific topic from this file by name.

Retrieves a topic with the specified name that is associated with this file. Topics contain the structured data extracted from the file during ingestion.

Parameters:

topic_name (str) – Name of the topic to retrieve (e.g., “/camera/image”, “/imu/data”).

Returns:

Topic instance for the specified topic name.

Return type:

roboto.domain.topics.Topic

Examples

>>> file = File.from_id("file_abc123")
>>> camera_topic = file.get_topic("/camera/image")
>>> print(f"Topic schema: {camera_topic.schema}")
>>> # Access topic data
>>> for record in camera_topic.get_data():
...     print(f"Timestamp: {record['timestamp']}")
get_topics(include=None, exclude=None)#

Get all topics associated with this file, with optional filtering.

Retrieves all topics that were extracted from this file during ingestion. Topics can be filtered by name using include/exclude patterns.

Parameters:
  • include (Optional[collections.abc.Sequence[str]]) – If provided, only topics with names in this sequence are yielded.

  • exclude (Optional[collections.abc.Sequence[str]]) – If provided, topics with names in this sequence are skipped.

Yields:

Topic instances associated with this file, filtered according to the parameters.

Return type:

collections.abc.Generator[roboto.domain.topics.Topic, None, None]

Examples

>>> file = File.from_id("file_abc123")
>>> for topic in file.get_topics():
...     print(f"Topic: {topic.name}")
Topic: /camera/image
Topic: /imu/data
Topic: /gps/fix
>>> # Only get camera topics
>>> camera_topics = list(file.get_topics(include=["/camera/image", "/camera/info"]))
>>> print(f"Found {len(camera_topics)} camera topics")
>>> # Exclude diagnostic topics
>>> data_topics = list(file.get_topics(exclude=["/diagnostics"]))
classmethod import_batch(requests, roboto_client=None, caller_org_id=None)#

Import files from customer S3 bring-your-own buckets into Roboto datasets.

This is the ingress point for importing data stored in customer-owned S3 buckets that have been registered as read-only bring-your-own bucket (BYOB) integrations with Roboto. Files remain in their original S3 locations while metadata is registered with Roboto for discovery, processing, and analysis.

This method only works with S3 URIs from buckets that have been properly registered as BYOB integrations for your organization. It performs batch operations to efficiently import multiple files in a single API call, reducing overhead and improving performance.

Parameters:
  • requests (collections.abc.Sequence[roboto.domain.files.operations.ImportFileRequest]) – Sequence of import requests, each specifying file details and metadata.

  • roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.

  • caller_org_id (Optional[str]) – Organization ID of the caller. Required for multi-org users.

Returns:

Sequence of File objects representing the imported files.

Raises:
  • RobotoInvalidRequestException – If any URI is not a valid S3 URI, if the batch exceeds 500 items, or if bucket integrations are not properly configured.

  • RobotoUnauthorizedException – If the caller lacks upload permissions for target datasets or if buckets don’t belong to the caller’s organization.

Return type:

collections.abc.Sequence[File]

Notes

  • Only works with S3 URIs from registered read-only BYOB integrations

  • Files are not copied; only metadata is imported into Roboto

  • Batch size is limited to 500 items per request

  • All S3 buckets must be registered to the caller’s organization

Examples

>>> from roboto.domain.files import ImportFileRequest
>>> requests = [
...     ImportFileRequest(
...         dataset_id="ds_abc123",
...         relative_path="logs/session1.bag",
...         uri="s3://my-bucket/data/session1.bag",
...         size=1024000
...     ),
...     ImportFileRequest(
...         dataset_id="ds_abc123",
...         relative_path="logs/session2.bag",
...         uri="s3://my-bucket/data/session2.bag",
...         size=2048000
...     )
... ]
>>> files = File.import_batch(requests)
>>> print(f"Imported {len(files)} files")
Imported 2 files
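
Because a single import_batch call is capped at 500 items, larger imports must be split across calls. A minimal chunking helper (the name `chunked` is illustrative, not part of the SDK) could look like:

```python
def chunked(requests, size=500):
    # Yield successive slices of at most `size` items
    for i in range(0, len(requests), size):
        yield requests[i:i + size]

# e.g. 1200 import requests split into batches of 500, 500, and 200
batches = list(chunked(list(range(1200))))
print([len(b) for b in batches])  # [500, 500, 200]
```

Each batch would then be passed to File.import_batch in its own call.
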
classmethod import_one(dataset_id, relative_path, uri, description=None, tags=None, metadata=None, roboto_client=None)#

Import a single file from an external bucket into a Roboto dataset. This currently only supports AWS S3.

This is a convenience method for importing a single file from customer-owned buckets that have been registered as bring-your-own bucket (BYOB) integrations with Roboto. Unlike import_batch(), this method automatically determines the file size by querying the object store and verifies that the object actually exists before importing, providing additional validation and convenience for single-file operations.

The file remains in its original location while metadata is registered with Roboto for discovery, processing, and analysis. This method currently only works with S3 URIs from buckets that have been properly registered as BYOB integrations for your organization.

Parameters:
  • dataset_id (str) – ID of the dataset to import the file into.

  • relative_path (str) – Path of the file relative to the dataset root (e.g., logs/session1.bag).

  • uri (str) – URI where the file is located (e.g., s3://my-bucket/path/to/file.bag). Must be from a registered BYOB integration.

  • description (Optional[str]) – Optional human-readable description of the file.

  • tags (Optional[list[str]]) – Optional list of tags for file discovery and organization.

  • metadata (Optional[dict[str, Any]]) – Optional key-value metadata pairs to associate with the file.

  • roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.

Returns:

File object representing the imported file.

Return type:

File

Notes

  • Only works with S3 URIs from registered BYOB integrations

  • File size is automatically determined from the object metadata

  • The file is not copied; only metadata is imported into Roboto

  • For importing multiple files efficiently, use import_batch() instead

Examples

Import a single ROS bag file:

>>> from roboto.domain.files import File
>>> file = File.import_one(
...     dataset_id="ds_abc123",
...     relative_path="logs/session1.bag",
...     uri="s3://my-bucket/data/session1.bag"
... )
>>> print(f"Imported file: {file.relative_path}")
Imported file: logs/session1.bag

Import a file with metadata and tags:

>>> file = File.import_one(
...     dataset_id="ds_abc123",
...     relative_path="sensors/lidar_data.pcd",
...     uri="s3://my-bucket/sensors/lidar_data.pcd",
...     description="LiDAR point cloud from highway test",
...     tags=["lidar", "highway", "test"],
...     metadata={"sensor_type": "Velodyne", "resolution": "high"}
... )
>>> print(f"File size: {file.size} bytes")
property ingestion_status: roboto.domain.files.record.IngestionStatus#

Current ingestion status of this file.

Returns the status indicating whether this file has been processed and its data extracted into topics. Used to track ingestion pipeline progress.

Return type:

roboto.domain.files.record.IngestionStatus

mark_ingested()#

Mark this file as fully ingested and ready for post-processing.

Updates the file’s ingestion status to indicate that all data has been successfully processed and extracted into topics. This enables triggers and other automated workflows that depend on complete ingestion.

Returns:

Updated File instance with ingestion status set to Ingested.

Raises:

RobotoUnauthorizedException – Caller lacks permission to update the file.

Return type:

File

Notes

This method is typically called by ingestion actions after they have successfully processed all data in the file. Once marked as ingested, the file becomes eligible for additional post-processing actions.

Examples

>>> file = File.from_id("file_abc123")
>>> print(file.ingestion_status)
IngestionStatus.NotIngested
>>> updated_file = file.mark_ingested()
>>> print(updated_file.ingestion_status)
IngestionStatus.Ingested
property metadata: dict[str, Any]#

Custom metadata associated with this file.

Returns the file’s metadata dictionary containing arbitrary key-value pairs for storing custom information. Supports nested structures and dot notation for accessing nested fields.

Return type:

dict[str, Any]
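
Dot notation for nested fields (e.g. "calibration.camera_fx" in queries) maps onto ordinary nested dictionaries. A small helper, shown here purely for illustration and not part of the SDK, demonstrates the idea:

```python
from typing import Any

def get_by_dot_path(metadata: dict[str, Any], path: str) -> Any:
    # Walk a nested dict following "a.b.c" style keys
    value: Any = metadata
    for part in path.split("."):
        value = value[part]
    return value

meta = {"calibration": {"camera_fx": 525.0}, "vehicle_id": "vehicle_001"}
print(get_by_dot_path(meta, "calibration.camera_fx"))  # 525.0
```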

property modified: datetime.datetime#

Timestamp when this file was last modified.

Returns the UTC datetime when this file’s metadata, tags, or other properties were most recently updated. The file content itself is immutable, but metadata can be modified.

Return type:

datetime.datetime

property modified_by: str#

Identifier of the user who last modified this file.

Returns the user ID or identifier of the person who most recently updated this file’s metadata, tags, or other mutable properties.

Return type:

str

property org_id: str#

Organization identifier that owns this file.

Returns the unique identifier of the organization that owns and has primary access control over this file.

Return type:

str

put_metadata(metadata)#

Add or update metadata fields for this file.

Adds new metadata fields or updates existing ones. Existing fields not specified in the metadata dict are preserved.

Parameters:

metadata (dict[str, Any]) – Dictionary of metadata key-value pairs to add or update.

Returns:

Updated File instance with the new metadata.

Raises:

RobotoUnauthorizedException – Caller lacks permission to update the file.

Return type:

File

Examples

>>> file = File.from_id("file_abc123")
>>> updated_file = file.put_metadata({
...     "vehicle_id": "vehicle_001",
...     "session_type": "highway_driving",
...     "weather": "sunny"
... })
>>> print(updated_file.metadata["vehicle_id"])
vehicle_001
put_tags(tags)#

Add or update tags for this file.

Replaces the file’s current tags with the provided list. To add tags while preserving existing ones, retrieve current tags first and combine them.

Parameters:

tags (list[str]) – List of tag strings to set on the file.

Returns:

Updated File instance with the new tags.

Raises:

RobotoUnauthorizedException – Caller lacks permission to update the file.

Return type:

File

Examples

>>> file = File.from_id("file_abc123")
>>> updated_file = file.put_tags(["sensor-data", "highway", "sunny"])
>>> print(updated_file.tags)
['sensor-data', 'highway', 'sunny']
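
Since put_tags replaces the tag list wholesale, preserving existing tags means merging before the call. A sketch of that merge (`merge_tags` is an illustrative helper, not an SDK function):

```python
def merge_tags(existing: list[str], new: list[str]) -> list[str]:
    # Keep original order, drop duplicates
    merged: list[str] = []
    for tag in existing + new:
        if tag not in merged:
            merged.append(tag)
    return merged

print(merge_tags(["sensor-data", "highway"], ["highway", "rainy"]))
# ['sensor-data', 'highway', 'rainy']
```

The merged list would then be passed to file.put_tags(...).
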
classmethod query(spec=None, roboto_client=None, owner_org_id=None)#

Query files using a specification with filters and pagination.

Searches for files matching the provided query specification. Results are returned as a generator that automatically handles pagination, yielding File instances as they are retrieved from the API.

Parameters:
  • spec (Optional[roboto.query.QuerySpecification]) – Query specification with filters, sorting, and pagination options. If None, returns all accessible files.

  • roboto_client (Optional[roboto.http.RobotoClient]) – HTTP client for API communication. If None, uses the default client.

  • owner_org_id (Optional[str]) – Organization ID to scope the query. If None, uses caller’s org.

Yields:

File instances matching the query specification.

Raises:
  • ValueError – Query specification references unknown file attributes.

  • RobotoUnauthorizedException – Caller lacks permission to query files.

Return type:

collections.abc.Generator[File, None, None]

Examples

>>> from roboto.query import Comparator, Condition, QuerySpecification
>>> spec = QuerySpecification(
...     condition=Condition(
...         field="tags",
...         comparator=Comparator.Contains,
...         value="sensor-data"
...     ))
>>> for file in File.query(spec):
...     print(f"Found file: {file.relative_path}")
Found file: logs/sensors_2024_01_01.bag
Found file: logs/sensors_2024_01_02.bag
>>> # Query with metadata filter
>>> spec = QuerySpecification(
...     condition=Condition(
...         field="metadata.vehicle_id",
...         comparator=Comparator.Equals,
...         value="vehicle_001"
...     ))
>>> files = list(File.query(spec))
>>> print(f"Found {len(files)} files for vehicle_001")
property record: roboto.domain.files.record.FileRecord#

Underlying data record for this file.

Returns the raw FileRecord that contains all the file’s data fields. This provides access to the complete file state as stored in the platform.

Return type:

roboto.domain.files.record.FileRecord

refresh()#

Refresh this file instance with the latest data from the platform.

Fetches the current state of the file from the Roboto platform and updates this instance’s data. Useful when the file may have been modified by other processes or users.

Returns:

This File instance with refreshed data.

Return type:

File

Examples

>>> file = File.from_id("file_abc123")
>>> # File may have been updated by another process
>>> refreshed_file = file.refresh()
>>> print(f"Current version: {refreshed_file.version}")
property relative_path: str#

Path of this file relative to its dataset root.

Returns the file path within the dataset, using forward slashes as separators regardless of the operating system. This path uniquely identifies the file within its dataset.

Return type:

str
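
Because relative_path always uses forward slashes, an OS-native path (notably on Windows) should be normalized before comparing against it. pathlib handles this directly (illustrative snippet):

```python
import pathlib

# Normalize a Windows-style path to the forward-slash form used by relative_path
native = pathlib.PureWindowsPath(r"data\sensors.csv")
print(native.as_posix())  # data/sensors.csv
```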

rename_file(file_id, new_path)#

Rename this file to a new path within its dataset.

Changes the relative path of the file within its dataset. This updates the file’s location identifier but does not move the actual file content.

Parameters:
  • file_id (str) – File ID (currently unused, kept for API compatibility).

  • new_path (str) – New relative path for the file within the dataset.

Returns:

Updated FileRecord with the new path.

Return type:

roboto.domain.files.record.FileRecord

Examples

>>> file = File.from_id("file_abc123")
>>> print(file.relative_path)
old_logs/session1.bag
>>> updated_record = file.rename_file("file_abc123", "logs/session1.bag")
>>> print(updated_record.relative_path)
logs/session1.bag
property tags: list[str]#

List of tags associated with this file.

Returns the list of string tags that have been applied to this file for categorization and filtering purposes.

Return type:

list[str]

to_association()#

Convert this file to an Association reference.

Creates an Association object that can be used to reference this file in other contexts, such as when creating collections or specifying action inputs.

Returns:

Association object referencing this file and its current version.

Return type:

roboto.association.Association

Examples

>>> file = File.from_id("file_abc123")
>>> association = file.to_association()
>>> print(f"Association: {association.association_type}:{association.association_id}")
Association: file:file_abc123
to_dict()#

Convert this file to a dictionary representation.

Returns the file’s data as a JSON-serializable dictionary containing all file attributes and metadata.

Returns:

Dictionary representation of the file data.

Return type:

dict[str, Any]

Examples

>>> file = File.from_id("file_abc123")
>>> file_dict = file.to_dict()
>>> print(file_dict["relative_path"])
logs/session1.bag
>>> print(file_dict["metadata"])
{'vehicle_id': 'vehicle_001', 'session_type': 'highway'}
update(description=NotSet, metadata_changeset=NotSet, ingestion_complete=NotSet)#

Update this file’s properties.

Updates various properties of the file including description, metadata, and ingestion status. Only specified parameters are updated; others remain unchanged.

Parameters:
  • description (Optional[Union[str, roboto.sentinels.NotSetType]]) – New description for the file. Use NotSet to leave unchanged.

  • metadata_changeset (Union[roboto.updates.MetadataChangeset, roboto.sentinels.NotSetType]) – Metadata changes to apply (add, update, or remove fields/tags). Use NotSet to leave metadata unchanged.

  • ingestion_complete (Union[Literal[True], roboto.sentinels.NotSetType]) – Set to True to mark the file as fully ingested. Use NotSet to leave ingestion status unchanged.

Returns:

Updated File instance with the new properties.

Raises:

RobotoUnauthorizedException – Caller lacks permission to update the file.

Return type:

File

Examples

>>> file = File.from_id("file_abc123")
>>> updated_file = file.update(description="Updated sensor data from highway test")
>>> print(updated_file.description)
Updated sensor data from highway test
>>> # Update metadata and mark as ingested
>>> from roboto.updates import MetadataChangeset
>>> changeset = MetadataChangeset(put_fields={"processed": True})
>>> updated_file = file.update(
...     metadata_changeset=changeset,
...     ingestion_complete=True
... )
property uri: str#

Storage URI for this file’s content.

Returns the storage location URI where the file’s actual content is stored. This is typically an S3 URI or similar cloud storage reference.

Return type:

str

property version: int#

Version number of this file.

Returns the version number that increments each time the file’s metadata or properties are updated. The file content itself is immutable, but metadata changes create new versions.

Return type:

int