API#

Python API Reference#

Cluster([name, software, container, ...])

Create a Dask cluster with Coiled

cluster_logs(cluster_id[, account, ...])

Returns cluster logs as a dictionary, with a key for the scheduler and each worker.

create_software_environment([name, account, ...])

Create a software environment

delete_cluster(name[, account, workspace])

Delete a cluster

delete_software_environment(name[, account, ...])

Delete a software environment

diagnostics([account])

Run a diagnostic check aimed at helping support with any issues.

get_billing_activity([account, cluster, ...])

Retrieve Billing information.

get_software_info(name[, account, workspace])

Retrieve solved spec for a Coiled software environment

list_clusters([account, workspace, max_pages])

List clusters

list_core_usage([account])

Get a list of used cores.

list_gpu_types()

List allowed GPU Types.

list_instance_types([backend, min_cores, ...])

List allowed instance types for the cloud provider configured on your account.

BackendOptions

A dictionary with the following key/value pairs

AWSOptions

A dictionary with the following key/value pairs plus any pairs in BackendOptions

GCPOptions

A dictionary with GCP specific key/value pairs plus any pairs in BackendOptions

list_local_versions()

Get information about local versions.

list_performance_reports([account])

List performance reports stored on Coiled Cloud

list_software_environments([account, workspace])

List software environments

list_user_information()

List information about your user.

performance_report([filename, private, account])

Generates a static performance report and saves it to Coiled Cloud

set_backend_options([account, workspace, ...])

Configure workspace-level settings for cloud provider and container registry.

function(*[, software, container, vm_type, ...])

Decorate a function to run on cloud infrastructure

Software Environments#

coiled.create_software_environment(name=None, *, account=None, workspace=None, conda=None, pip=None, container=None, log_output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, force_rebuild=False, use_entrypoint=True, gpu_enabled=False, arm=False, architecture=ArchitectureTypesEnum.X86_64, region_name=None, include_local_code=False, ignore_local_packages=None, use_uv_installer=True)[source]#

Create a software environment

See also

By default, your local environment is automatically replicated in your cluster (see Manage Software).

Parameters:
  • name (Optional[str]) – Name of software environment. Name can’t contain uppercase letters.

  • account (Optional[str]) – DEPRECATED. Use workspace instead.

  • workspace (Optional[str]) – The workspace in which to create the software environment, if not given in the name.

  • conda (Union[list, CondaEnvSchema, str, Path, None]) – Specification for packages to install into the software environment using conda. Can be a list of packages, a dictionary, or a path to a conda environment YAML file. Can be used together with pip keyword argument, cannot be used together with container.

  • pip (Union[list, str, Path, None]) – Packages to install into the software environment using pip. Can be a list of packages or a path to a pip requirements file. Can be used together with conda keyword argument, cannot be used together with container.

  • container (Optional[str]) – Reference to a custom container image. For images in Docker Hub, you can reference by name, for example, daskdev/dask:latest. For images in other registries, you need to reference by registry URL, for example, 789111821368.dkr.ecr.us-east-2.amazonaws.com/prod/coiled. Custom container image cannot be used together with conda or pip.

  • log_output – Stream to output logs to. Defaults to sys.stdout.

  • force_rebuild (bool) – By default, if a software environment with the same name and dependencies already exists, the rebuild is aborted. If this is set to True, those checks are skipped and the environment will be rebuilt. Defaults to False.

  • use_entrypoint (bool) – Whether to use (or override) entrypoint set on container.

  • gpu_enabled (bool) – Set CUDA version for Conda

  • arm (bool) – Build software environment for ARM CPU architecture; defaults to False; if True, this takes precedence over architecture.

  • architecture (ArchitectureTypesEnum) – CPU architecture of the software environment. Defaults to x86_64; specify aarch64 for ARM.

  • region_name (Optional[str]) – The AWS or GCP region name to use to store the software environment. If not provided, defaults to us-east-1 for AWS and us-east1 for GCP.

  • include_local_code (bool) – Whether to include local code in the software environment. Defaults to False. Local code means any editable installs of packages, and any importable python files.

  • ignore_local_packages (Optional[List[str]]) – A list of package names to ignore when including local code. Defaults to None.

Return type:

Optional[SoftwareEnvironmentAlias]
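
For example, a minimal sketch that builds an environment from a short pip list (the environment name and package list are placeholders):

>>> import coiled
>>> coiled.create_software_environment(
...     name="my-pip-env",
...     pip=["dask[complete]", "xarray", "s3fs"],
... )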

coiled.delete_software_environment(name, account=None, workspace=None)[source]#

Delete a software environment

Parameters:

name – Name of software environment to delete.

coiled.get_software_info(name, account=None, workspace=None)[source]#

Retrieve solved spec for a Coiled software environment

Parameters:
  • name (str) – Software environment name

  • workspace (Optional[str]) – The workspace in which the software environment is located

Returns:

Coiled software environment information

Return type:

results

coiled.inspect(name)[source]#

View the details of a Coiled software environment

Parameters:

name (str) –

Identifier of the software environment to use, in the format (<account>/)<name>. If the software environment is owned by the same account as that passed into “account”, the (<account>/) prefix is optional.

For example, suppose your account is “wondercorp”, but your friends at “friendlycorp” have an environment named “xgboost” that you want to use; you can specify this with “friendlycorp/xgboost”. If you simply entered “xgboost”, this is shorthand for “wondercorp/xgboost”.

The “name” portion of (<account>/)<name> can only contain ASCII letters, hyphens and underscores.

Examples

>>> import coiled
>>> coiled.inspect("coiled/default")
coiled.list_software_environments(account=None, workspace=None)[source]#

List software environments

Parameters:

account (Optional[str]) – Name of the Coiled account whose software environments to list. If not provided, will use the coiled.account configuration value.

Returns:

Dictionary with information about each software environment in the specified account. Keys in the dictionary are names of software environments, while the values contain information about the corresponding software environment.
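
A short sketch listing environments and printing their names (assumes you are already logged in to Coiled):

>>> import coiled
>>> envs = coiled.list_software_environments()
>>> for name, info in envs.items():
...     print(name)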

Clusters#

class coiled.Cluster(name=None, *, software=None, container=None, n_workers=None, worker_class=None, worker_options=None, worker_vm_types=None, worker_cpu=None, worker_memory=None, worker_disk_size=None, worker_disk_throughput=None, worker_gpu=None, worker_gpu_type=None, scheduler_options=None, scheduler_vm_types=None, scheduler_cpu=None, scheduler_memory=None, scheduler_disk_size=None, scheduler_gpu=None, asynchronous=False, cloud=None, account=None, workspace=None, shutdown_on_close=None, idle_timeout=None, no_client_timeout=<object object>, use_scheduler_public_ip=None, use_dashboard_https=None, dashboard_custom_subdomain=None, credentials='local', credentials_duration_seconds=None, timeout=None, environ=None, tags=None, send_dask_config=True, unset_single_threading_variables=None, backend_options=None, show_widget=True, custom_widget=None, configure_logging=None, wait_for_workers=None, package_sync=None, package_sync_strict=False, package_sync_conda_extras=None, package_sync_ignore=None, package_sync_only=None, package_sync_fail_on='critical-only', package_sync_use_uv_installer=True, private_to_creator=None, use_best_zone=True, allow_cross_zone=False, compute_purchase_option=None, spot_policy=None, extra_worker_on_scheduler=None, _n_worker_specs_per_host=None, scheduler_port=None, allow_ingress_from=None, allow_ssh_from=None, allow_ssh=None, allow_spark=None, open_extra_ports=None, jupyter=None, mount_bucket=None, region=None, arm=None, batch_job_ids=None, batch_job_container=None)#

Create a Dask cluster with Coiled

Parameters:
  • n_workers (Union[int, List[int], None]) – Number of workers in this cluster. Can either be an integer for a static number of workers, or a list specifying the lower and upper bounds for adaptively scaling up/down workers depending on the amount of work submitted. Defaults to n_workers=[4, 20] which adaptively scales between 4 and 20 workers.

  • name (Optional[str]) – Name to use for identifying this cluster. Defaults to None.

  • software (Optional[str]) – Name of the software environment to use; this allows you to use and re-use existing Coiled software environments. Specifying this argument will disable package sync, and it cannot be combined with container.

  • container (Optional[str]) – Name or URI of container image to use; when using a pre-made container image with Coiled, this allows you to skip the step of explicitly creating a Coiled software environment from that image. Specifying this argument will disable package sync, and it cannot be combined with software.

  • worker_class (Optional[str]) – Worker class to use. Defaults to distributed.nanny.Nanny.

  • worker_options (Optional[dict]) – Mapping with keyword arguments to pass to worker_class. Defaults to {}.

  • worker_vm_types (Optional[list]) – List of instance types that you would like workers to use; the default instance type selected contains 4 cores. You can use the command coiled.list_instance_types() to see a list of allowed types.

  • worker_cpu (Union[int, List[int], None]) – Number, or range, of CPUs requested for each worker. Specify a range by using a list of two elements, for example: worker_cpu=[2, 8].

  • worker_memory (Union[str, List[str], None]) – Amount of memory to request for each worker; Coiled will use a +/- 10% buffer from the memory that you specify. You may specify a range of memory by using a list of two elements, for example: worker_memory=["2GiB", "4GiB"].

  • worker_disk_size (Union[int, str, None]) – Non-default size of persistent disk attached to each worker instance, specified as string with units or integer for GiB.

  • worker_disk_throughput (Optional[int]) – EXPERIMENTAL. For AWS, non-default throughput (in MB/s) for EBS gp3 volumes attached to workers.

  • worker_gpu (Union[int, bool, None]) – Number of GPUs to attach to each worker. Default is 0, True is interpreted as 1. Note that this is ignored if you’re explicitly specifying an instance type which includes a fixed number of GPUs.

  • worker_gpu_type (Optional[str]) – For GCP, this lets you specify type of guest GPU for instances. Should match the way the cloud provider specifies the GPU, for example: worker_gpu_type="nvidia-tesla-t4". By default, Coiled will request NVIDIA T4 if GPU type isn’t specified. For AWS, if you want GPU other than T4, you’ll need to explicitly specify the VM instance type (e.g., p3.2xlarge for instance with one NVIDIA Tesla V100).

  • scheduler_options (Optional[dict]) – Mapping with keyword arguments to pass to the Scheduler __init__. Defaults to {}.

  • scheduler_vm_types (Optional[list]) – List of instance types that you would like the scheduler to use; the default instance type selected contains 4 cores. You can use the command coiled.list_instance_types() to see a list of allowed types.

  • scheduler_cpu (Union[int, List[int], None]) – Number, or range, of CPUs requested for the scheduler. Specify a range by using a list of two elements, for example: scheduler_cpu=[2, 8].

  • scheduler_memory (Union[str, List[str], None]) – Amount of memory to request for the scheduler; Coiled will use a +/-10% buffer from the memory that you specify. You may specify a range of memory by using a list of two elements, for example: scheduler_memory=["2GiB", "4GiB"].

  • scheduler_gpu (Optional[bool]) – Whether to attach GPU to scheduler; this would be a single NVIDIA T4. The best practice for Dask is to have a GPU on the scheduler if you are using GPUs on your workers, so if you don’t explicitly specify, Coiled will follow this best practice and give you a scheduler GPU just in case you have worker_gpu set.

  • asynchronous (bool) – Set to True if using this Cloud within async/await functions or within Tornado gen.coroutines. Otherwise this should remain False for normal use. Default is False.

  • cloud (Optional[CloudV2]) – Cloud object to use for interacting with Coiled. This object contains user/authentication/account information. If this is None (default), we look for a recently-cached Cloud object, and if none exists create one.

  • account (Optional[str]) – DEPRECATED. Use workspace instead.

  • workspace (Optional[str]) – The Coiled workspace (previously “account”) to use. If not specified, will check the coiled.workspace or coiled.account configuration values, or will use your default workspace if those aren’t set.

  • shutdown_on_close (Optional[bool]) – Whether or not to shut down the cluster when it finishes. Defaults to True, unless name points to an existing cluster.

  • idle_timeout (Optional[str]) – Shut down the cluster after this duration if no activity has occurred, for example "30 minutes". Default: "20 minutes".

  • no_client_timeout (Union[str, None, object]) – Shut down the cluster after this duration after all clients have disconnected. When shutdown_on_close is False this is disabled, since shutdown_on_close=False usually means you want to keep cluster up after disconnecting so you can later connect a new client. Default: “2 minutes”, or idle_timeout if there’s a non-default idle timeout

  • use_scheduler_public_ip (Optional[bool]) – Whether the Python client connects to the Dask scheduler using the scheduler machine’s public IP address. When set to True (the default behaviour), the client connects to the scheduler using its public IP address, which means traffic will be routed over the public internet. When set to False, traffic will be routed over the local network the scheduler lives in, so make sure the scheduler’s private IP address is routable from where this function call is made.

  • use_dashboard_https (Optional[bool]) – When public IP address is used for dashboard, we’ll enable HTTPS + auth by default. You may want to disable this if using something that needs to connect directly to the scheduler dashboard without authentication, such as jupyter dask-labextension<=6.1.0.

  • credentials (Optional[str]) – Which credentials to use for Dask operations and forward to Dask clusters – options are “local”, or None. The default behavior is to use local credentials if available. NOTE: credential handling currently only works with AWS credentials.

  • credentials_duration_seconds (Optional[int]) – For “local” credentials shipped to cluster as STS token, set the duration of STS token. If not specified, the AWS default will be used.

  • timeout (Union[int, float, None]) – Timeout in seconds to wait for a cluster to start, will use default_cluster_timeout set on parent Cloud by default.

  • environ (Optional[Dict[str, str]]) – Dictionary of environment variables. Values will be transmitted to Coiled; for private environment variables (e.g., passwords or access keys you use for data access), send_private_envs() is recommended.

  • send_dask_config (bool) – Whether to send a frozen copy of local dask.config to the cluster.

  • unset_single_threading_variables (Optional[bool]) – By default, Dask sets environment variables such as OMP_NUM_THREADS and MKL_NUM_THREADS so that relevant libraries use a single thread per Dask worker (by default there are as many Dask workers as CPU cores). In some cases this is not what you want, so this option overrides the default Dask behavior.

  • backend_options (Union[AWSOptions, GCPOptions, None]) – Dictionary of backend specific options.

  • show_widget (bool) – Whether to use the rich-based widget display in IPython/Jupyter (ignored if not in those environments). For use cases involving multiple Clusters at once, show_widget=False is recommended. (Default: True)

  • custom_widget (Optional[ClusterWidget]) – Use the rich-based widget display outside of IPython/Jupyter (Default: False)

  • tags (Optional[Dict[str, str]]) – Dictionary of tags.

  • wait_for_workers (Union[int, float, bool, None]) – Whether to wait for a number of workers before returning control of the prompt back to the user. Usually, computations will run better if you wait for most workers before submitting tasks to the cluster. You can wait for all workers by passing True, or not wait for any by passing False. You can pass a fraction of the total number of workers requested as a float (like 0.6), or a fixed number of workers as an int (like 13). If None, the value from coiled.wait-for-workers in your Dask config will be used. Default: 0.3. If the requested number of workers doesn’t launch within 10 minutes, the cluster will be shut down and a TimeoutError raised.

  • package_sync (Union[bool, List[str], None]) – DEPRECATED – Always enabled when container and software are not given. Synchronize package versions between your local environment and the cluster. Cannot be used with the container or software options. Passing specific packages as a list of strings will attempt to synchronize only those packages, use with caution. (Deprecated: use package_sync_only instead.) We recommend reading the additional documentation for this feature

  • package_sync_conda_extras (Optional[List[str]]) – A list of conda package names (available on conda-forge) to include in the environment that are not in your local environment. Use with caution, as this can lead to dependency conflicts with local packages. Note, this will only work for conda package with platform-specific builds (i.e., not “noarch” packages).

  • package_sync_ignore (Optional[List[str]]) – A list of package names to exclude from the environment. Note their dependencies may still be installed, or they may be installed by another package that depends on them!

  • package_sync_only (Optional[List[str]]) –

    A list of package names to only include from the environment. Use with caution. We recommend reading the additional documentation for this feature

  • package_sync_strict (bool) – Only allow exact package matches; not recommended unless your client platform/architecture matches the cluster platform/architecture.

  • package_sync_use_uv_installer (bool) – Use uv to install pip packages when building the software environment. This should only be disabled if you are experiencing issues with uv and need to use pip instead. (Default: True)

  • private_to_creator (Optional[bool]) – Only allow the cluster creator, not other members of team account, to connect to this cluster.

  • use_best_zone (bool) – Allow the cloud provider to pick the zone (in your specified region) that has best availability for your requested instances. We’ll keep the scheduler and workers all in a single zone in order to avoid any cross-zone network traffic (which would be billed).

  • allow_cross_zone (bool) – Allow the cluster to have VMs in distinct zones. There’s a cost for cross-zone traffic (usually pennies per GB), so this is a bad choice for shuffle-heavy workloads, but can be a good choice for large embarrassingly parallel workloads.

  • spot_policy (Optional[Literal['on-demand', 'spot', 'spot_with_fallback']]) – Purchase option to use for workers in your cluster, options are “on-demand”, “spot”, and “spot_with_fallback”; by default this is “on-demand”. (Google Cloud refers to this as “provisioning model” for your instances.) Spot instances are much cheaper, but can have more limited availability and may be terminated while you’re still using them if the cloud provider needs more capacity for other customers. On-demand instances have the best availability and are almost never terminated while still in use, but they’re significantly more expensive than spot instances. For most workloads, “spot_with_fallback” is likely to be a good choice: Coiled will try to get as many spot instances as we can, and if we get less than you requested, we’ll try to get the remaining instances as on-demand. For AWS, when we’re notified that an active spot instance is going to be terminated, we’ll attempt to get a replacement instance (spot if available, but could be on-demand if you’ve enabled “fallback”). Dask on the active instance will attempt a graceful shutdown before the instance is terminated so that computed results won’t be lost.

  • scheduler_port (Optional[int]) – Specify a port other than the default (443) for communication with Dask scheduler. Usually the default is the right choice; Coiled supports using 443 concurrently for scheduler comms and for scheduler dashboard.

  • allow_ingress_from (Optional[str]) – Control the CIDR from which cluster firewall allows ingress to scheduler; by default this is open to any source address (0.0.0.0/0). You can specify CIDR, or “me” for just your IP address.

  • allow_ssh_from (Optional[str]) – Allow connections to scheduler over port 22 (used for SSH) for a specified IP address or CIDR.

  • allow_ssh (Optional[bool]) – Allow connections to scheduler over port 22, used for SSH.

  • allow_spark (Optional[bool]) – Allow (secured) connections to scheduler on port 15003 used by Spark Connect. By default, this port is open.

  • jupyter (Optional[bool]) – Start a Jupyter server in the same process as Dask scheduler. The Jupyter server will be behind HTTPS with authentication (unless you disable use_dashboard_https, which we strongly recommend against). Note that jupyterlab will need to be installed in the software environment used on the cluster (or in your local environment if using package sync). Once the cluster is running, you can use jupyter_link to get link to access the Jupyter server.

  • mount_bucket (Union[str, List[str], None]) – Optional name or list of names of buckets to mount. For example, "s3://my-s3-bucket" will mount the S3 bucket my-s3-bucket, using your forwarded AWS credentials, and "gs://my-gcs-bucket" will mount the GCS bucket my-gcs-bucket using your forwarded Google Application Default Credentials. Buckets are mounted to subdirectories in both /mount and ./mount (relative to working directory for Dask), subdirectory name will be taken from bucket name.

  • region (Optional[str]) – The cloud provider region in which to run the cluster.

  • arm (Optional[bool]) – Use ARM instances for cluster; default is x86 (Intel) instances.
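
For example, a minimal sketch of creating a cluster, connecting a Dask client, and shutting the cluster down when finished (the name, worker count, memory, and region are illustrative):

>>> import coiled
>>> cluster = coiled.Cluster(
...     name="example-cluster",   # optional; placeholder name
...     n_workers=10,
...     worker_memory="16GiB",
...     region="us-east-1",
... )
>>> client = cluster.get_client()
>>> # ... run Dask computations against `client` ...
>>> cluster.shutdown()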

adapt(Adaptive=<class 'coiled.cluster.CoiledAdaptive'>, *, minimum=1, maximum=200, target_duration='3m', wait_count=24, interval='5s', **kwargs)[source]#

Dynamically scale the number of workers in the cluster based on scaling heuristics.

Parameters:
  • minimum (int) – Minimum number of workers that the cluster should have while on low load, defaults to 1.

  • maximum (int) – Maximum numbers of workers that the cluster should have while on high load.

  • wait_count (int) – Number of consecutive times that a worker should be suggested for removal before the cluster removes it.

  • interval (timedelta or str) – Milliseconds between checks, defaults to 5000 ms.

  • target_duration (timedelta or str) – Amount of time we want a computation to take. This affects how aggressively the cluster scales up.

Return type:

Adaptive
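
For example, a sketch enabling adaptive scaling on an existing cluster (the bounds are illustrative, and cluster is assumed to be a running coiled.Cluster):

>>> cluster.adapt(minimum=2, maximum=50)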

property asynchronous#

Are we running in the event loop?

close(force_shutdown=False, reason=None)[source]#

Close the cluster.

Return type:

Optional[Awaitable[None]]

property details_url#

URL for cluster on the web UI at cloud.coiled.io.

get_client()[source]#

Return client for the cluster

If a client has already been initialized for the cluster, return that; otherwise initialize a new client object.

get_logs(scheduler=True, workers=True)[source]#

Return logs for the scheduler and workers

Parameters:
  • scheduler (bool) – Whether or not to collect logs for the scheduler

  • workers (bool) – Whether or not to collect logs for the workers

Returns:

logs – A dictionary of logs, with one item for the scheduler and one for the workers

Return type:

Dict[str]

get_spark(block_till_ready=True, spark_connect_config=None, executor_memory_factor=None, worker_memory_factor=None)[source]#

Get a spark client. Experimental and subject to change without notice.

To use this, start the cluster with coiled.spark.get_spark_cluster.

spark_connect_config:

Optional dictionary of additional config options. For example, {"spark.foo": "123"} would be equivalent to --config spark.foo=123 when running spark-submit --class spark-connect.

executor_memory_factor:

Determines spark.executor.memory based on the available memory; can be any value between 0 and 1. Default is 1.0, giving all available memory to the executor.

worker_memory_factor:

Determines --memory for org.apache.spark.deploy.worker.Worker; can be any value between 0 and 1. Default is 1.0.

async recommendations(target)[source]#

Make scale up/down recommendations based on current state and target.

Return a recommendation of one of the following forms:
  • {"status": "same"}
  • {"status": "up", "n": <desired number of total workers>}
  • {"status": "down", "workers": <list of workers to close>}

Return type:

dict

scale(n, force_stop=True)[source]#

Scale cluster to n workers

Parameters:
  • n (int) – Number of workers to scale cluster size to.

  • force_stop (bool) – Stop the VM even if scheduler did not retire the worker; for example, if worker has unique data that could not be moved to another worker.

Return type:

Optional[Awaitable[None]]
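
A quick sketch, assuming cluster is a running coiled.Cluster:

>>> cluster.scale(20)              # request 20 workers
>>> cluster.wait_for_workers(20)   # optionally block until they have arrived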

async scale_down(workers, reason=None, force_stop=True)[source]#

Remove specified workers from the cluster.

Parameters:
  • workers (Iterable[str]) – Iterable of worker names

  • reason (Optional[str]) – Optional reason for why these workers are being removed (e.g., adaptive scaling)

  • force_stop (bool) – Stop the VM even if scheduler did not retire the worker; for example, if worker has unique data that could not be moved to another worker.

Return type:

None

async scale_up(n, reason=None)[source]#

Scales up to a target number of n workers

As documented in dask/distributed, scale_up should scale up to a certain target, not scale up BY a certain amount.

Return type:

None

send_credentials(automatic_refresh=False)[source]#

Manually trigger sending STS token to cluster.

Usually the STS token is automatically sent and refreshed; this allows you to manually force a refresh in case that’s needed for any reason.

send_private_envs(env)[source]#

Send potentially private environment variables to be set on scheduler and all workers.

You can use this to send secrets (passwords, auth tokens) that you can use in code running on cluster. Unlike environment variables set with coiled.Cluster(environ=...), the values will be transmitted directly to your cluster without being transmitted to Coiled, logged, or written to disk.

The Dask scheduler will ensure that these environment variables are set on any new workers you add to the cluster.
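
For instance, a sketch sending a hypothetical secret (the variable name and value are placeholders; the argument is assumed to be a dictionary of names to values, like the environ parameter on coiled.Cluster):

>>> cluster.send_private_envs({"MY_API_TOKEN": "placeholder-secret-value"})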

set_keepalive(keepalive)[source]#

Set how long to keep cluster running if all the clients have disconnected.

This is a way to shut down a no-longer-used cluster, in addition to the Dask idle timeout. With no keepalive set, the cluster will not shut down on account of clients going away.

Parameters:

keepalive – duration string like “30s” or “5m”
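
For example, assuming cluster is a running coiled.Cluster:

>>> cluster.set_keepalive("10 minutes")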

shutdown()[source]#

Shut down the cluster; useful when shutdown_on_close is False.

Return type:

Optional[Awaitable[None]]

sync(func, *args, asynchronous=None, callback_timeout=None, **kwargs)[source]#

Call func with args synchronously or asynchronously depending on the calling context

Return type:

Union[TypeVar(_T), Coroutine[Any, Any, TypeVar(_T)]]

wait_for_workers(n_workers, timeout=None)[source]#

Blocking call to wait for n workers before continuing

Parameters:
  • n_workers (int) – The number of workers

  • timeout (number, optional) – Time in seconds after which to raise a dask.distributed.TimeoutError

Return type:

None

async workers_to_close(target)[source]#

Determine which, if any, workers should potentially be removed from the cluster.

Notes

Cluster.workers_to_close dispatches to Scheduler.workers_to_close(), but may be overridden in subclasses.

Return type:

List of worker addresses to close, if any

See also

Scheduler.workers_to_close

write_files_for_dask(files, symlink_dirs=None)[source]#

Use Dask to write files to scheduler and all workers.

files:

Dictionary of files to write, for example, {"/path/to/file": "text to write"}.
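
A small sketch writing a file to the scheduler and every worker (the path and contents are placeholders):

>>> cluster.write_files_for_dask({"/tmp/settings.yaml": "refresh: 30s\n"})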

coiled.cluster_logs(cluster_id, account=None, workspace=None, scheduler=True, workers=True, errors_only=False)[source]#

Returns cluster logs as a dictionary, with a key for the scheduler and each worker.

Changed in version 0.2.0: cluster_name is no longer accepted, use cluster_id instead.
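
A sketch fetching only error-level logs for a cluster (the cluster ID is a placeholder; you can find IDs with coiled.list_clusters()):

>>> import coiled
>>> logs = coiled.cluster_logs(cluster_id=1234, errors_only=True)
>>> list(logs)  # one key for the scheduler plus one per worker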

coiled.delete_cluster(name, account=None, workspace=None)[source]#

Delete a cluster

Parameters:

name (str) – Name of cluster to delete.

coiled.list_clusters(account=None, workspace=None, max_pages=None)[source]#

List clusters

Parameters:

account – Name of the Coiled account whose clusters to list. If not provided, will use the coiled.account configuration value.

Returns:

Dictionary with information about each cluster in the specified account. Keys in the dictionary are names of clusters, while the values contain information about the corresponding cluster.

coiled.list_core_usage(account=None)[source]#

Get a list of used cores.

Returns a table that shows the limit of cores that the user can use and a breakdown of the core usage split up between account, user and clusters.

Parameters:
  • account (Optional[str]) – Name of the Coiled workspace (account) to list core usage. If not provided, will use the coiled.workspace or coiled.account configuration values.

  • json – If set to True, it will return this list in json format instead of a table.

Return type:

dict

coiled.list_gpu_types()[source]#

List allowed GPU Types.

For AWS the GPU types are tied to the instance type, but for GCP you can add different GPU types to GPU enabled instances. Please refer to GPUs for more information.

Parameters:

json – if set to True, it will return this list in json format instead of a table.

Return type:

Dict

coiled.list_instance_types(backend=None, min_cores=None, min_gpus=None, min_memory=None, cores=None, memory=None, gpus=None, arch=None)[source]#

List allowed instance types for the cloud provider configured on your account.

This command allows you to get all instance types available for a backend or a filtered list of instance types that match your requirements by using the available keyword arguments. Please refer to VM Size and Type for more information.

Parameters:
  • backend (Optional[str]) – Relevant cloud provider (aws or gcp) to get a list of allowed instance types. If not provided, the list will show the instances for your account’s cloud provider.

  • min_cores (Optional[int]) – Filter results on the minimum number of required cores

  • min_gpus (Optional[int]) – Filter results on the minimum number of required GPUs

  • min_memory (Union[int, str, float, None]) – Filter results on the minimum amount of memory

  • cores (Union[int, List[int], None]) – The exact number of cores to filter by, for example cores=1, or a list containing the minimum and maximum number of cores to filter instances by, for example cores=[2, 8].

  • memory (Union[int, str, float, List[int], List[str], List[float], None]) – The exact amount of memory or a list containing the minimum and maximum amount of memory to filter instances by.

  • gpus (Union[int, List[int], None]) – The exact number of GPUs to filter by, or a list containing the minimum and maximum number of GPUs to filter instances by.

  • arch (Optional[Literal['x86_64', 'arm64']]) – CPU architecture, defaults to x86_64. There’s no way to get both x86_64 and arm64 instances in a single call.

Return type:

Dict[str, VmType]
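
For example, a sketch filtering to instance types with 2–8 cores and 8–32 GiB of memory:

>>> import coiled
>>> types = coiled.list_instance_types(cores=[2, 8], memory=["8GiB", "32GiB"])
>>> sorted(types)[:5]  # first few matching instance type names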

class coiled.BackendOptions[source]#

A dictionary with the following key/value pairs

Parameters:
  • region_name – Region name to launch cluster in. For example: us-east-2

  • zone_name – Zone name to launch cluster in. For example: us-east-2a

  • firewall – Deprecated; use ingress instead.

  • ingress – Allows you to specify multiple CIDR blocks (and corresponding ports) to open for ingress on the scheduler firewall.

  • spot – Whether to request spot instances.

  • spot_on_demand_fallback – If requesting spot, whether to request non-spot instances if we get fewer spot instances than desired.

  • spot_replacement – By default we’ll attempt to replace interrupted spot instances; set to False to disable.

  • multizone – Tell the cloud provider to pick zone with best availability; all VMs will be in a single zone unless you also use multizone_allow_cross_zone.

  • multizone_allow_cross_zone – By default, “multizone” cluster is still in a single zone (which zone is picked by cloud provider). This option allows the cluster to have VMs in distinct zones. There’s a cost for cross-zone traffic (usually pennies per GB), so this is a bad choice for shuffle-heavy workloads, but can be a good choice for large embarrassingly parallel workloads.

  • use_dashboard_public_ip – Public IP is used by default, lets you choose to use private IP for dashboard link.

  • use_dashboard_https – When public IP address is used for dashboard, we’ll enable HTTPS + auth by default. You may want to disable this if using something that needs to connect directly to the scheduler dashboard without authentication, such as jupyter dask-labextension.

  • network_volumes – Very experimental option to allow mounting SMB volume on cluster nodes.

  • docker_shm_size – Non-default value for shm_size.

class coiled.AWSOptions#

A dictionary with the following key/value pairs plus any pairs in BackendOptions

Parameters:
  • keypair_name – AWS Keypair to assign worker/scheduler instances. This would need to be an existing keypair in your account, and needs to be in the same region as your cluster. Note that Coiled can also manage adding a unique, ephemeral keypair for SSH access to your cluster; see How to SSH into your cluster for more information.

  • use_placement_group – If possible, this will attempt to put workers in the same cluster placement group (in theory this can result in better networking between workers, since they’d be physically close to each other in the datacenter, though we haven’t seen this have much benefit in practice).

class coiled.GCPOptions#

A dictionary with GCP specific key/value pairs plus any pairs in BackendOptions

class coiled.FirewallOptions#

A dictionary with the following key/value pairs

Parameters:
  • ports – List of ports to open to cidr on the scheduler. For example, [22, 8786] opens port 22 for SSH and 8786 for client to Dask connection.

  • cidr – CIDR block from which to allow access. For example 0.0.0.0/0 allows access from any IP address.
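
Putting these options together, a sketch passing backend options when creating a cluster (the region, CIDR block, and worker count are illustrative):

>>> import coiled
>>> cluster = coiled.Cluster(
...     n_workers=5,
...     backend_options={
...         "region_name": "us-east-2",
...         "spot": True,
...         "spot_on_demand_fallback": True,
...         "ingress": [{"ports": [8786, 8787], "cidr": "10.2.0.0/16"}],
...     },
... )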

Labs#

coiled.function(*, software=None, container=None, vm_type=None, cpu=None, memory=None, gpu=None, account=None, workspace=None, region=None, arm=None, disk_size=None, allow_ingress_from=None, shutdown_on_close=True, spot_policy=None, idle_timeout='6 hours', keepalive='30 seconds', package_sync_ignore=None, environ=None, threads_per_worker=1, local=False, name=None, tags=None, n_workers=None, extra_kwargs=None)[source]#

Decorate a function to run on cloud infrastructure

This creates a Function object that executes its code on a remote cluster with the hardware and software specified in the arguments to the decorator. It can run either as a normal function, or it can return Dask Futures for parallel computing.

Parameters:
  • software (Optional[str]) – Name of the software environment to use; this allows you to use and re-use existing Coiled software environments, and should not be used with package sync or when specifying a container to use for this specific cluster.

  • container (Optional[str]) – Name or URI of container image to use; when using a pre-made container image with Coiled, this allows you to skip the step of explicitly creating a Coiled software environment from that image. Note that this should not be used with package sync or when specifying an existing Coiled software environment.

  • vm_type (Union[str, list[str], None]) – Instance type, or list of instance types, that you would like to use. You can use coiled.list_instance_types() to see a list of allowed types.

  • cpu (Union[int, list[int], None]) – Number, or range, of CPUs requested. Specify a range by using a list of two elements, for example: cpu=[2, 8].

  • memory (Union[str, list[str], None]) – Amount of memory to request for each VM, Coiled will use a +/- 10% buffer from the memory that you specify. You may specify a range of memory by using a list of two elements, for example: memory=["2GiB", "4GiB"].

  • disk_size (Union[int, str, None]) – Size of persistent disk attached to each VM instance, specified as string with units or integer for GiB.

  • gpu (Optional[bool]) – Whether to attach a GPU; this would be a single NVIDIA T4.

  • account (Optional[str]) – DEPRECATED. Use workspace instead.

  • workspace (Optional[str]) – The Coiled workspace (previously “account”) to use. If not specified, will check the coiled.workspace or coiled.account configuration values, or will use your default workspace if those aren’t set.

  • region (Optional[str]) – The cloud provider region in which to run the cluster.

  • arm (Optional[bool]) – Whether to use ARM instances for cluster; default is x86 (Intel) instances.

  • keepalive – Keep your cluster running for the specified time, even if your Python session closes. Default is “30 seconds”.

  • spot_policy (Optional[str]) –

    Purchase option to use for workers in your cluster, options are “on-demand”, “spot”, and “spot_with_fallback”; by default this is “spot_with_fallback” for Coiled Functions. (Google Cloud refers to this as “provisioning model” for your instances.) Note that even with this option, the first VM is always on-demand. This only applies to any additional VMs when running Coiled Functions in parallel across multiple VMs with the .map() and .submit() methods. When running on a single VM, an on-demand instance will be used.

    Spot instances are much cheaper, but can have more limited availability and may be terminated while you’re still using them if the cloud provider needs more capacity for other customers. On-demand instances have the best availability and are almost never terminated while still in use, but they’re significantly more expensive than spot instances. For most workloads, “spot_with_fallback” is likely to be a good choice: Coiled will try to get as many spot instances as we can, and if we get less than you requested, we’ll try to get the remaining instances as on-demand. For AWS, when we’re notified that an active spot instance is going to be terminated, we’ll attempt to get a replacement instance (spot if available, but could be on-demand if you’ve enabled “fallback”). Dask on the active instance will attempt a graceful shutdown before the instance is terminated so that computed results won’t be lost.

  • idle_timeout (str) – Shut down the cluster after this duration if no activity has occurred. Default is “6 hours”.

  • package_sync_ignore (Optional[list[str]]) – A list of package names to exclude from the cloud VM environment. This is useful when you have large libraries installed locally that aren’t needed for the function being run. Note the packages listed here may still be installed by another package that depends on them.

  • environ (Optional[Dict[str, str]]) – Dictionary of environment variables to securely pass to the cloud VM environment.

  • threads_per_worker (Optional[int]) – Number of threads to run concurrent tasks in for each VM. -1 can be used to run as many concurrent tasks as there are CPU cores. Default is 1.

  • allow_ingress_from (Optional[str]) – Control the CIDR from which cluster firewall allows ingress to scheduler; by default this is open to any source address (0.0.0.0/0). You can specify CIDR, or “me” for just your IP address.

  • local (bool) – Whether or not to run this function locally or on cloud VMs. If True, this function will be run on your local machine, which can be useful for debugging or during development. Default is False.

  • name (Optional[str]) – Name for the Coiled cluster on which this function will run. If not specified, VM specification parameters like vm_type, disk_size, etc. will be used to produce a unique, deterministic name. Note that name is used for sharing cloud VMs among Coiled Functions with the same hardware and software specification, so please use this parameter with care. Defaults to None.

  • tags (Optional[Dict[str, str]]) – Dictionary of tags.

  • n_workers (Union[int, List[int], None]) – Number of VMs to provision for parallel function execution. Can either be an integer for a static number of machines, or a list specifying the lower and upper bounds for adaptively scaling up/down machines depending on the amount of work submitted. Defaults to n_workers=[0, 500] which adaptively scales between 0 and 500 machines.

  • extra_kwargs (Optional[dict]) – Dictionary of any additional keyword arguments to pass to coiled.Cluster(). Note that any cluster arguments controlled by other @coiled.function keyword arguments will take precedence over the kwargs in this dictionary.

See the coiled.Cluster docstring for additional parameter descriptions.

Examples

>>> import coiled
>>> @coiled.function()
... def f(x):
...    return x + 1
>>> f(10)  # calling the function blocks until finished
11
>>> f.submit(10)  # immediately returns a future
<Future: pending, key=f-1234>
>>> f.submit(10).result()  # Call .result to get result
11
>>> futures = [f(i) for i in range(1000)]  # parallelize with a for loop
>>> [future.result() for future in futures]
...

Performance Reports#

coiled.list_performance_reports(account=None)[source]#

List performance reports stored on Coiled Cloud

Returns a list of dicts that contain information about Coiled Cloud hosted performance reports

Parameters:

account – Name of the Coiled workspace (previously “account”) from which to get report. If not specified, will use the current or default workspace.

Return type:

List[Dict]

coiled.performance_report(filename='dask-report.html', private=False, account=None)[source]#

Generates a static performance report and saves it to Coiled Cloud

This context manager lightly wraps Dask’s performance_report. It generates a static performance report and uploads it to Coiled Cloud. After uploading, it prints out the URL where the report is hosted. For a list of hosted performance reports, use coiled.list_performance_reports(). Note that each user is limited to 5 hosted reports, each with a maximum file size of 10 MB.

The context manager yields an object that will have the url as an attribute, though the URL is not available inside the context but only after (see example).

Example:

with coiled.performance_report("filename") as perf_url:
    dask.compute(...)

assert isinstance(perf_url["url"], str)
Parameters:
  • filename – The file name of the performance report file.

  • private – If set to True, the uploaded performance report is only accessible to logged in Coiled users who are members of the current / default or specified account.

  • account – Name of the Coiled workspace (previously “account”) to use.

Return type:

Generator[PerformanceReportURL, None, None]

Backend#

coiled.set_backend_options(account=None, workspace=None, backend='aws', ingress=None, firewall=None, network=None, aws_region='us-east-1', aws_access_key_id=None, aws_secret_access_key=None, gcp_service_creds_file=None, gcp_service_creds_dict=None, gcp_project_id=None, gcp_region=None, gcp_zone=None, instance_service_account=None, zone=None, registry_type='ecr', registry_namespace=None, registry_access_token=None, registry_uri='docker.io', registry_username=None, log_output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, **kwargs)[source]#

Configure workspace-level settings for cloud provider and container registry.

This method configures workspace-level backend settings for cloud providers and container registries, and sets up a workspace-level VPC for running clusters and other Coiled managed resources.

Parameters:
  • account (Optional[str]) – DEPRECATED. Use workspace instead.

  • workspace (Optional[str]) – The Coiled workspace (previously “account”) to configure. If not specified, will check the coiled.workspace or coiled.account configuration values, or will use your default workspace if those aren’t set.

  • backend (Literal['aws', 'gcp']) – Supported backends such as AWS VM (aws) and GCP VM (gcp).

  • ingress (Optional[List[Dict]]) – Specification of the ingress rules for the firewall/security group that Coiled creates for the cluster scheduler. This is a list of ingress rules, each of which is a dictionary with a list of ports and a CIDR block from which to allow ingress on those ports to the scheduler. For example, [{"ports": [8787], "cidr": "0.0.0.0/0"}, {"ports": [8786], "cidr": "10.2.0.0/16"}] would allow the dashboard on 8787 to be accessed from any IP address, and the scheduler comm on 8786 to only be accessed from IP addresses in the 10.2.0.0/16 local network block.

  • firewall (Optional[Dict]) – A single ingress rule for the scheduler firewall/security group; this is deprecated and ingress rules should be specified with ingress instead.

  • network (Optional[Dict]) – Specification for your network/subnets, dictionary can take ID(s) for existing network and/or subnet(s).

  • aws_region (str) – The region which Coiled cloud resources will be deployed to and where other resources such as the docker registry are located or where a specified VPC will be created.

  • aws_access_key_id (Optional[str]) – For the AWS backend, this argument is required to create or use an existing Coiled managed VPC.

  • aws_secret_access_key (Optional[str]) – For the AWS backend, this argument is required to create or use an existing Coiled managed VPC.

  • use_scheduler_public_ip – Determines if the client connects to the Dask scheduler using its public or internal address.

  • gcp_service_creds_file (Optional[str]) – A string filepath to a Google Cloud Compute service account json credentials file used for creating and managing a Coiled VPC.

  • gcp_service_creds_dict (Optional[dict]) – A dictionary of the contents of a Google Cloud Compute service account json credentials file used for creating a VPC to host Coiled Cloud related assets.

  • gcp_project_id (Optional[str]) – The Google Cloud Compute project id in which a VPC will be created to host Coiled Cloud related assets.

  • gcp_region (Optional[str]) – The Google Cloud Compute region name in which a VPC will be created.

  • instance_service_account (Optional[str]) – Email for optional service account to attach to cluster instances; using this is the best practice for granting access to your data stored in Google Cloud services. This should be a scoped service instance with only the permissions needed to run your computations.

  • zone (Optional[str]) – Optional; used to specify zone to use for clusters (for either AWS or GCP).

  • registry_type (Literal['ecr', 'docker_hub', 'gar']) – Custom software environments are stored in a docker container registry. By default, container images will be stored in AWS ECR. Users are able to store container images on a private registry by providing additional registry_* configuration arguments and specifying registry_type=’docker_hub’. To use Google Artifact Registry, pass registry_type=’gar’, gcp_project_id, gcp_region, and one of gcp_service_creds_dict or gcp_service_creds_file.

  • registry_uri (str) – The container registry URI. Defaults to docker.io. Only required if registry_type=’docker_hub’.

  • registry_username (Optional[str]) – A registry username (should be lowercased). Only required if registry_type=’docker_hub’.

  • registry_namespace (Optional[str]) – A namespace for storing the container images. Defaults to username if not specified. More information about docker namespaces can be found here: https://docs.docker.com/docker-hub/repos/create/. Only required if registry_type=’docker_hub’.

  • registry_access_token (Optional[str]) – A token to access registry images. More information about access tokens can be found here: https://docs.docker.com/docker-hub/access-tokens/. Only required if registry_type=’docker_hub’.

Information#

coiled.diagnostics(account=None)[source]#

Run a diagnostic check aimed at helping support with any issues.

This command calls other diagnostic commands to dump information that could help in troubleshooting issues. It returns JSON that is easy to share with the Coiled support team if needed.

Parameters:

account (Optional[str]) – Name of the Coiled workspace (previously “account”) to use. If not provided, will use the coiled.workspace or coiled.account configuration values.

Return type:

dict

coiled.get_billing_activity(account=None, cluster=None, cluster_id=None, start_time=None, end_time=None, kind=None, page=None)[source]#

Retrieve Billing information.

Parameters:
  • account (Optional[str]) – The workspace (previously “account”) to retrieve billing information from. If not specified, will use the current or default workspace.

  • cluster (Optional[str]) – Cluster name. Filter billing events to this cluster. Defaults to None.

  • cluster_id (Optional[int]) – Cluster id. Filter billing events to this cluster by id. Defaults to None.

  • start_time (Optional[str]) – Filter events after this datetime (isoformat). Defaults to None.

  • end_time (Optional[str]) – Filter events before this datetime (isoformat). Defaults to None.

  • kind (Optional[Literal['instance', 'monthly_grant', 'manual_adjustment', 'payg_payment']]) – Filter events to this kind of event. Defaults to None.

  • page (Optional[int]) – Grab events from this page. Defaults to None.

Return type:

Dict

coiled.list_local_versions()[source]#

Get information about local versions.

Returns the versions of Python, Coiled, Dask and Distributed that are installed locally. This information could be useful when troubleshooting issues.

Parameters:

json – If set to True, it will return this list in json format instead of a table.

Return type:

dict

coiled.list_user_information()[source]#

List information about your user.

This command will give you more information about your account, which teams you are part of and any limits that your account might have.

Return type:

dict

Command Line API Reference#

coiled login#

Configure your Coiled account credentials

coiled login [OPTIONS]

Options

-t, --token <token>#

Coiled user token

-a, --account, --workspace <account>#

Coiled workspace (uses default workspace if not specified). Note: --account is deprecated, please use --workspace instead.

--retry, --no-retry#

Whether or not to automatically ask for a new token if an invalid token is entered

--browser, --no-browser#

Open browser with page where you grant access

Deprecated since version 0.4.0: The following options for coiled env create have been deprecated: --post-build, --conda-env-name, --private, --environ. You can continue to use this feature by downgrading your Coiled version, but it will be removed in the future.

coiled env create#

Create a Coiled software environment

coiled env create [OPTIONS]

Options

-n, --name <name>#

Name of the software environment; it must be lowercase.

--container <container>#

Base docker image to use.

--ignore-container-entrypoint#

Ignore the ENTRYPOINT when using specified container.

--conda <conda>#

Conda environment file.

--pip <pip>#

Pip requirements file.

--force-rebuild#

Skip checks for an existing software environment build.

--account, --workspace <account>#

Workspace to use for creating this software environment. Note: --account is deprecated, please use --workspace instead.

--gpu-enabled#

Set CUDA virtual package for Conda

Default:

False

--arm#

Use ARM CPU architecture; takes precedence over --architecture option.

Default:

False

--architecture <architecture>#

CPU architecture to use for the software environment

Default:

'x86_64'

Options:

x86_64 | aarch64

--region-name <region_name>#

AWS or GCP region to use for storing this software environment.

--include-local-code#

Include local code in the software environment build. This includes editable installs and importable python files.

-i, --ignore-local-package <ignore_local_package>#

Ignore a local package in the software environment build. Only applies to packages included by the include-local-code option. Specify multiple times for multiple packages. Example: -i coiled -i pytorch

--disable-uv-installer#

Do not use uv to install PyPI packages when building this environment.

coiled env delete#

Delete a Coiled software environment

coiled env delete [OPTIONS] NAME

Options

--workspace <workspace>#

Coiled workspace (uses default workspace if not specified).

Arguments

NAME#

Required argument

coiled env list#

List the Coiled software environments in a workspace

coiled env list [OPTIONS]

Options

--workspace <workspace>#

Coiled workspace (uses default workspace if not specified).

coiled env inspect#

View the details of a Coiled software environment

coiled env inspect [OPTIONS] NAME

Arguments

NAME#

Required argument

coiled cluster list#

List the Coiled clusters in a workspace

coiled cluster list [OPTIONS]

Options

--workspace <workspace>#

Coiled workspace (uses default workspace if not specified).

--just-mine#

Show only my clusters

--max-pages <max_pages>#

Maximum number of pages to show (where each page is 100 clusters)

coiled cluster ssh#

coiled cluster ssh [OPTIONS] [CLUSTER]

Options

--account, --workspace <account>#

Coiled workspace (uses default workspace if not specified). Note: --account is deprecated, please use --workspace instead.

--private#

Use private IP address of scheduler (default is DNS hostname for public IP)

--by-ip#

Use public IP address of scheduler directly, not using DNS hostname

--worker <worker>#

Connect to worker with specified name or private IP address (default is to connect to scheduler)

--add-key#

Just add ssh key to local OpenSSH agent, no lifetime/expiration set

--delete-key#

Just delete ssh key from local OpenSSH agent

--dask#

Attach to shell in Dask container rather than shell on host machine.

Arguments

CLUSTER#

Optional argument

coiled cluster logs#

coiled cluster logs [OPTIONS] [CLUSTER_ARG]

Options

--account, --workspace <account>#

Coiled workspace (uses default workspace if not specified). Note: --account is deprecated, please use --workspace instead.

--cluster <cluster>#

Cluster for which to show logs, default is most recent

--no-scheduler#

Don’t include scheduler logs

--workers <workers>#

All worker logs are included by default; specify ‘none’ or a comma-delimited list of names, states, or internal IP addresses

--label <label>#
Options:

private_ip_address | name | id | public_ip_address | none

--system#

Just show system logs

--combined#

Show combined system and dask logs

--tail#

Keep tailing logs

--since <since>#

Show logs starting from this time; by default, shows logs from the start of the cluster (or 30s ago if tailing)

--until <until>#

Show logs up to and including this time; by default, goes through the present time.

--filter <filter>#

Filter log messages

--color, --no-color#

Whether to use color in logs

--show-all-timestamps#

Prepend datetime to all log messages

--interval <interval>#

Tail polling interval

Arguments

CLUSTER_ARG#

Optional argument

coiled cluster azure-logs#

coiled cluster azure-logs [OPTIONS] [CLUSTER_ARG]

Options

--account, --workspace <account>#

Coiled workspace (uses default workspace if not specified). Note: --account is deprecated, please use --workspace instead.

--cluster <cluster>#

Cluster for which to show logs, default is most recent

--no-scheduler#

Don’t include scheduler logs

--workers <workers>#

All worker logs are included by default; specify ‘none’ or a comma-delimited list of names, states, or internal IP addresses

--label <label>#
Options:

private_ip_address | name | id | public_ip_address | none

--system#

Just show system logs

--combined#

Show combined system and dask logs

--tail#

Keep tailing logs

--since <since>#

By default, shows logs from the start of the cluster (or from 30s ago if tailing)

--until <until>#

Show logs up to and including this time; by default, logs run through the present time.

--filter <filter>#

Filter log messages

--color, --no-color#

Whether to use color in log output

--show-all-timestamps#

Prepend datetime to all log messages

--interval <interval>#

Tail polling interval

Arguments

CLUSTER_ARG#

Optional argument
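
For example, to show only the system logs for a cluster named my-cluster (a placeholder name), skipping worker logs:

coiled cluster azure-logs --system --workers none my-cluster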

coiled package-sync scan#

coiled package-sync scan [OPTIONS]

Options

--csv#

Output as CSV

-v, --verbose#

Output files that will end up in wheels
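
For example, to write the scanned package list to a CSV file for inspection (packages.csv is a placeholder filename):

coiled package-sync scan --csv > packages.csv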

coiled package-sync debug#

coiled package-sync debug [OPTIONS]

Options

--csv#

Output as CSV
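
Similarly, the debug output can be captured as CSV (debug.csv is a placeholder filename):

coiled package-sync debug --csv > debug.csv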

coiled notebook start#

Launch or re-open a notebook session, with optional file syncing.

If a notebook session with the same name already exists, it’s not re-created. If file sync was initially not enabled, running coiled notebook start --sync will begin file sync without re-launching the notebook.

coiled notebook start [OPTIONS]

Options

--name <name>#

Cluster name. If not given, defaults to a hash based on current working directory.

--account, --workspace <account>#

Coiled workspace (uses default workspace if not specified). Note: --account is deprecated, please use --workspace instead.

--sync#

Sync the working directory with the filesystem on the notebook. Requires mutagen.

--allow-sync-from <allow_sync_from>#

IP address or CIDR from which connections to port 22 (SSH) are open; can also be specified as ‘everyone’ (0.0.0.0/0) or ‘me’ (automatically determines the public IP detected for your local client). Note that --sync relies on an SSH connection.

--software <software>#

Software environment name to use. If neither software nor container is specified, all the currently-installed Python packages are replicated on the VM using package sync.

--container <container>#

Container image to use. If neither software nor container is specified, all the currently-installed Python packages are replicated on the VM using package sync.

--vm-type <vm_type>#

VM type to use. Specify multiple times to provide multiple options.

--cpu <cpu>#

Number of CPUs requested for this notebook.

--memory <memory>#

Amount of memory for this notebook; Coiled will use a +/-10% buffer around the memory you specify.

--gpu#

Use GPU notebook server.

--disk-size <disk_size>#

Use larger-than-default disk on VM, specified in GiB.

--region <region>#

The cloud provider region in which to run the notebook.

--open#

Whether to open the notebook in the default browser once it’s launched.

--block, --no-block#

Whether to block while the notebook is running.

--include-vcs#

Include version control files when syncing (e.g., .git).

--idle-timeout <idle_timeout>#

Period of idleness after which to automatically shut down the notebook, e.g. ‘20 minutes’ or ‘1 hour’ or ‘10 days’ (default is ‘4 hours’). The notebook is considered active (not idle) if a browser is connected, even if no code is running.

--allow-ingress-from <allow_ingress_from>#

Control the CIDR from which the cluster firewall allows ingress to the scheduler; by default this is open to any source address (0.0.0.0/0). You can specify a CIDR, or ‘me’ for just your IP address.

-e, --env <env>#

Environment variables securely transmitted to run command environment. Format is KEY=val, multiple vars can be set with separate --env for each.

-t, --tag <tag>#

Tags. Format is KEY=val, multiple vars can be set with separate --tag for each.

--sync-ignore <sync_ignore>#

Paths to not sync when using --sync.

--mount-bucket <mount_bucket>#

S3 or GCS bucket(s) to mount as volumes.
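
As an illustrative invocation (the name, CPU count, and region are placeholders), the following starts a synced notebook session with 4 CPUs in us-east-1 and opens it in your browser:

coiled notebook start --name my-notebook --sync --cpu 4 --region us-east-1 --open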

coiled notebook stop#

Shut down a notebook session

coiled notebook stop [OPTIONS] NAME

Options

--account, --workspace <account>#

Coiled workspace (uses default workspace if not specified). Note: --account is deprecated, please use --workspace instead.

Arguments

NAME#

Required argument
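
For example, to shut down a notebook session named my-notebook (a placeholder name):

coiled notebook stop my-notebook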

coiled run#

Run a command on the cloud.

coiled run [OPTIONS] [COMMAND]...

Options

--name <name>#

Run name. This identifier controls whether coiled run invocations are dispatched to the same cloud VM or not. Use the same name to run multiple commands on the same VM. Defaults to a unique name with no VM reuse.

--account, --workspace <account>#

Coiled workspace (uses default workspace if not specified). Note: --account is deprecated, please use --workspace instead.

--software <software>#

Software environment name to use. If neither software nor container is specified, all the currently-installed Python packages are replicated on the VM using package sync.

--container <container>#

Container image to use. If neither software nor container is specified, all the currently-installed Python packages are replicated on the VM using package sync.

--vm-type <vm_type>#

VM type to use. Specify multiple times to provide multiple options.

--gpu#

Have a GPU available.

--region <region>#

The cloud provider region in which to run the command.

--disk-size <disk_size>#

Use larger-than-default disk on VM, specified in GiB.

--keepalive <keepalive>#

Keep your VM running for the specified time, even after your command completes. Values are in seconds (--keepalive 60) unless you specify units (--keepalive 3m for 3 minutes). Defaults to shutting down immediately after the command finishes.

-f, --file <file>#

Local files required to run command. Can be either individual file or entire directory. Multiple values can be specified, such as --file foo.txt --file my-subdir/.

-e, --env <env>#

Environment variables securely transmitted to run command environment. Format is KEY=val, multiple vars can be set with separate --env for each.

--subdomain <subdomain>#

Custom subdomain for the VM hostname.

--allow-ssh-from <allow_ssh_from>#

IP address or CIDR from which connections to port 22 (SSH) are open; can also be specified as ‘everyone’ (0.0.0.0/0) or ‘me’ (automatically determines the public IP detected for your local client). Note that coiled run relies on an SSH connection for executing commands on the VM.

--port <port>#

Open extra ports in network firewall for inbound connections (multiple ports can be set with separate --port for each).

-it, --interactive#

Open an interactive session, e.g., coiled run --interactive bash or coiled run --interactive python.

--detach#

Start the run in the background, don’t wait for the results.

--sync#

Sync files between local working directory and /scratch/synced.

--root#

Act as root in Docker container.

--no-credential-forwarding#

Disable automatic forwarding of local cloud credentials; use if you want your code to use credentials attached to the VM (e.g., AWS Instance Profile).

--forward-gcp-adc#

Forward long-lived Google Cloud Application Default Credentials to VM for data access.

-t, --tag <tag>#

Tags. Format is KEY=val, multiple vars can be set with separate --tag for each.

--sync-ignore <sync_ignore>#

Paths to not sync when using --sync.

--mount-bucket <mount_bucket>#

S3 or GCS bucket(s) to mount as volumes.

--package-sync-strict#

Require exact package version matches when using package sync.

--package-sync-conda-extras <package_sync_conda_extras>#

A list of conda package names (available on conda-forge) to include in the environment that are not in your local environment.

Arguments

COMMAND#

Optional argument(s)
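
As a sketch (the file name and keepalive duration are placeholders), the following uploads a local script, runs it on a cloud VM, and keeps the VM around for five minutes for follow-up commands; the second example opens an interactive shell:

coiled run --file process.py --keepalive 5m python process.py

coiled run --interactive bash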

coiled batch run#

Submit a batch job to run on Coiled.

Batch Jobs is currently an experimental feature.

coiled batch run [OPTIONS] [COMMAND]...

Options

--workspace <workspace>#

Coiled workspace (uses default workspace if not specified).

--software <software>#

Existing Coiled software environment (Coiled will sync local Python software environment if neither software nor container is specified).

--container <container>#

Docker container in which to run the batch job tasks; this does not need to have Dask (or even Python), only what your task needs in order to run.

-e, --env <env>#

Environment variables transmitted to run command environment. Format is KEY=val, multiple vars can be set with separate --env for each.

--secret-env <secret_env>#

Environment variables transmitted to run command environment. Format is KEY=val, multiple vars can be set with separate --secret-env for each. Unlike environment variables specified with --env, these are only stored in our database temporarily.

-t, --tag <tag>#

Tags. Format is KEY=val, multiple vars can be set with separate --tag for each.

--vm-type <vm_type>#

VM type to use. Specify multiple times to provide multiple options.

--arm#

Use ARM VM type.

--cpu <cpu>#

Number of cores per VM.

--memory <memory>#

Memory per VM.

--gpu#

Have a GPU available.

--region <region>#

The cloud provider region in which to run the job.

--disk-size <disk_size>#

Use larger-than-default disk on VM, specified in GiB.

--ntasks, --n-tasks <ntasks>#

Number of tasks to run. Tasks will have IDs from 0 to n-1; the COILED_ARRAY_TASK_ID environment variable for each task is set to that task's ID.

--task-on-scheduler#

Run task with lowest job ID on scheduler node.

--array <array>#

Specify array of tasks to run with specific IDs (instead of using --ntasks to array from 0 to n-1). You can specify list of IDs, a range, or a list with IDs and ranges. For example, --array 2,4-6,8-10.

--scheduler-task-array <scheduler_task_array>#

Which tasks in the array to run on the scheduler node. In most cases you’ll probably want to use --task-on-scheduler instead to run the task with the lowest ID on the scheduler node.

-N, --max-workers <max_workers>#

Maximum number of worker nodes (by default, there will be as many worker nodes as tasks).

--wait-for-ready-cluster#

Only assign tasks once the full cluster is ready.

--forward-aws-credentials#

Forward STS token from local AWS credentials.

--package-sync-strict#

Require exact package version matches when using package sync.

--package-sync-conda-extras <package_sync_conda_extras>#

A list of conda package names (available on conda-forge) to include in the environment that are not in your local environment.

Arguments

COMMAND#

Optional argument(s)
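
As an illustrative submission (the script name and sizes are placeholders), the following runs a 100-task array job with 4 cores per VM and at most 10 VMs; each task can read its ID from the COILED_ARRAY_TASK_ID environment variable:

coiled batch run --ntasks 100 --cpu 4 --max-workers 10 python process.py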