API Reference#

Python API Reference#

coiled.Cluster([name, software, n_workers, ...])

Create a Dask cluster with Coiled

coiled.cluster_logs(cluster_id[, account, ...])

Returns cluster logs as a dictionary, with a key for the scheduler and each worker.

coiled.create_software_environment([name, ...])

Create a software environment

coiled.delete_cluster(name[, account])

Delete a cluster

coiled.delete_software_environment(name[, ...])

Delete a software environment

coiled.diagnostics([account])

Run a diagnostic check to help support troubleshoot any issues.

coiled.get_billing_activity([account, ...])

Retrieve Billing information.

coiled.get_notifications([json, account, ...])

Get a list of all recent notifications.

coiled.get_software_info(name[, account])

Retrieve solved spec for a Coiled software environment

coiled.install(name)

Create a Coiled software environment locally

coiled.list_clusters([account])

List clusters

coiled.list_core_usage([account])

Get a list of used cores.

coiled.list_gpu_types()

List allowed GPU Types.

coiled.list_instance_types([backend, ...])

List allowed instance types for the cloud provider configured on your account.

coiled.BackendOptions

A dictionary with the following key/value pairs

coiled.AWSOptions

A dictionary with the following key/value pairs plus any pairs in BackendOptions

coiled.GCPOptions

coiled.list_local_versions()

Get information about local versions.

coiled.list_performance_reports([account])

List performance reports stored on Coiled Cloud

coiled.list_software_environments([account])

List software environments

coiled.list_user_information()

List information about your user.

coiled.performance_report([filename, ...])

Generates a static performance report and saves it to Coiled Cloud

coiled.set_backend_options([account, ...])

Configure account level settings for cloud provider and container registry.

Software Environments#

coiled.create_software_environment(name=None, *, account=None, conda=None, pip=None, container=None, log_output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, post_build=None, conda_env_name=None, backend_options=None, private=False, force_rebuild=False, environ=None, use_entrypoint=True)[source]#

Create a software environment

Parameters
  • name (Optional[str]) – Name of software environment. Name can’t contain uppercase letters.

  • account (Optional[str]) – The account in which to create the software environment, if not given in the name.

  • conda (Union[list, dict, str, None]) – Specification for packages to install into the software environment using conda. Can be a list of packages, a dictionary, or a path to a conda environment YAML file.

  • pip (Union[list, str, None]) – Packages to install into the software environment using pip. Can be a list of packages or a path to a pip requirements file.

  • container (Optional[str]) – Docker image to use for the software environment. Must be the name of a docker image on Docker hub. Defaults to coiled/default.

  • post_build (Union[list, str, None]) – List of commands or path to a local executable script to run after pip and conda packages have been installed.

  • log_output – Stream to output logs to. Defaults to sys.stdout.

  • conda_env_name (Optional[str]) – Name of conda environment to install packages into. Note that this should only be used when specifying a non-default value for container and when the non-default Docker image used expects commands to run in a conda environment not named “coiled”. Defaults to “coiled”.

  • backend_options (Optional[Dict]) – Dictionary of backend specific options (e.g. {'region': 'us-east-2'}). Any options specified with this keyword argument will take precedence over those stored in the coiled.backend-options configuration value.

  • private (bool) – Whether this software environment is private or public. Defaults to False

  • force_rebuild (bool) – By default, if an existing software environment with the same name and dependencies already exists, a rebuild is aborted. If this is set to True, those checks are skipped and the environment will be rebuilt. Defaults to False

  • environ (Optional[Dict]) – Dictionary of environment variables.

  • use_entrypoint (bool) – Whether to use (or override) entrypoint set on container.

Return type

None
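As a sketch, an environment can be built from an in-memory conda specification plus a few pip packages. The environment name and package pins below are hypothetical; the remote build only runs when you call the function with a logged-in Coiled account.

```python
# Hypothetical package pins; adapt to your own stack.
conda_spec = {
    "channels": ["conda-forge"],
    "dependencies": ["python=3.9", "dask"],
}
pip_packages = ["s3fs", "pyarrow"]


def build_environment():
    """Requires `pip install coiled` and a logged-in Coiled account."""
    import coiled

    coiled.create_software_environment(
        name="my-dask-env",  # hypothetical; must be lowercase
        conda=conda_spec,
        pip=pip_packages,
        environ={"MY_SETTING": "1"},  # optional environment variables
    )
```

Calling `build_environment()` triggers a remote build; pass `force_rebuild=True` to rebuild even when an identical environment already exists.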

coiled.delete_software_environment(name, account=None)[source]#

Delete a software environment

Parameters

name – Name of software environment to delete.

coiled.get_software_info(name, account=None)[source]#

Retrieve solved spec for a Coiled software environment

Parameters

name (str) – Software environment name

Returns

Coiled software environment information

Return type

results

coiled.inspect(name)[source]#

View the details of a Coiled software environment

Parameters

name (str) –

Identifier of the software environment to use, in the format (<account>/)<name>. If the software environment is owned by the same account as that passed into “account”, the (<account>/) prefix is optional.

For example, suppose your account is “wondercorp”, but your friends at “friendlycorp” have an environment named “xgboost” that you want to use; you can specify this with “friendlycorp/xgboost”. If you simply entered “xgboost”, this is shorthand for “wondercorp/xgboost”.

The “name” portion of (<account>/)<name> can only contain ASCII letters, hyphens and underscores.

Examples

>>> import coiled
>>> coiled.inspect("coiled/default")
coiled.install(name)[source]#

Create a Coiled software environment locally

Parameters

name

Identifier of the software environment to use, in the format (<account>/)<name>. If the software environment is owned by the same account as that passed into “account”, the (<account>/) prefix is optional.

For example, suppose your account is “wondercorp”, but your friends at “friendlycorp” have an environment named “xgboost” that you want to use; you can specify this with “friendlycorp/xgboost”. If you simply entered “xgboost”, this is shorthand for “wondercorp/xgboost”.

The “name” portion of (<account>/)<name> can only contain ASCII letters, hyphens and underscores.

Examples

>>> import coiled
>>> coiled.install("coiled/default")
coiled.list_software_environments(account=None)[source]#

List software environments

Parameters

account – Name of the Coiled account to list software environments for. If not provided, will use the coiled.account configuration value.

Returns

Dictionary with information about each software environment in the specified account. Keys in the dictionary are names of software environments, while the values contain information about the corresponding software environment.
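Because the result is keyed by environment name, it can be processed like any dictionary. A small sketch; the sample data here is made up, standing in for a real coiled.list_software_environments() result:

```python
def env_names(envs):
    """Return the sorted environment names from a
    list_software_environments() result (keys are names, values are metadata)."""
    return sorted(envs)


# Fabricated stand-in for a real result:
sample = {
    "alice/ml-env": {"container": None, "private": False},
    "alice/pandas-env": {"container": None, "private": True},
}
print(env_names(sample))  # ['alice/ml-env', 'alice/pandas-env']
```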

Clusters#

class coiled.Cluster(name=None, *, software=None, n_workers=4, worker_class=None, worker_options=None, worker_vm_types=None, worker_cpu=None, worker_memory=None, worker_disk_size=None, worker_gpu=None, worker_gpu_type=None, scheduler_class=None, scheduler_options=None, scheduler_vm_types=None, scheduler_cpu=None, scheduler_memory=None, scheduler_gpu=None, asynchronous=False, cloud=None, account=None, shutdown_on_close=None, use_scheduler_public_ip=None, credentials='local', timeout=None, environ=None, tags=None, send_dask_config=True, backend_options=None, show_widget=True, configure_logging=False, wait_for_workers=None, package_sync=False, package_sync_strict=False, package_sync_fail_on='critical-only', private_to_creator=None, use_best_zone=True, scheduler_port=None, allow_ingress_from=None, allow_ssh=None)#

Create a Dask cluster with Coiled

Parameters
  • n_workers (int) – Number of workers in this cluster. Defaults to 4.

  • name (Optional[str]) – Name to use for identifying this cluster. Defaults to None.

  • software (Optional[str]) – Name of the software environment to use.

  • worker_class (Optional[str]) – Worker class to use. Defaults to distributed.nanny.Nanny.

  • worker_options (Optional[dict]) – Mapping with keyword arguments to pass to worker_class. Defaults to {}.

  • worker_vm_types (Optional[list]) – List of instance types that you would like workers to use; the default instance type selected has 2 cores. You can use the command coiled.list_instance_types() to see a list of allowed types.

  • worker_cpu (Union[int, List[int], None]) – Number, or range, of CPUs requested for each worker. Specify a range by using a list of two elements, for example: worker_cpu=[2, 8].

  • worker_memory (Union[str, List[str], None]) – Amount of memory to request for each worker; Coiled will use a +/- 10% buffer from the memory that you specify. You may specify a range of memory by using a list of two elements, for example: worker_memory=["2GiB", "4GiB"].

  • worker_disk_size (Optional[int]) – Non-default size of persistent disk attached to each worker instance, specified in GB.

  • worker_gpu (Optional[int]) – For instance types that don’t come with a fixed number of GPUs, the number of GPUs to attach. This only applies to GCP, and will default to 1 if you specify worker_gpu_type. Coiled currently only supports a single GPU per instance.

  • worker_gpu_type (Optional[str]) – For instance types that don’t always come with GPU, the type of GPU to attach. This only applies to GCP. Should match the way the cloud provider specifies the GPU, for example: worker_gpu_type="nvidia-tesla-t4".

  • scheduler_class (Optional[str]) – Scheduler class to use. Defaults to distributed.scheduler.Scheduler.

  • scheduler_options (Optional[dict]) – Mapping with keyword arguments to pass to scheduler_class. Defaults to {}.

  • scheduler_vm_types (Optional[list]) – List of instance types that you would like the scheduler to use; the default instance type selected has 2 cores. You can use the command coiled.list_instance_types() to see a list of allowed types.

  • scheduler_cpu (Union[int, List[int], None]) – Number, or range, of CPUs requested for the scheduler. Specify a range by using a list of two elements, for example: scheduler_cpu=[2, 8].

  • scheduler_memory (Union[str, List[str], None]) – Amount of memory to request for the scheduler; Coiled will use a +/- 10% buffer from the memory that you specify. You may specify a range of memory by using a list of two elements, for example: scheduler_memory=["2GiB", "4GiB"].

  • scheduler_gpu (Optional[bool]) – Whether to attach GPU to scheduler. This will affect instance type (if not specified explicitly). For Google Cloud, it will also add a single “guest” T4 to the scheduler. It’s recommended to use GPU on scheduler for GPU clusters.

  • asynchronous (bool) – Set to True if using this Cloud within async/await functions or within Tornado gen.coroutines. Otherwise this should remain False for normal use. Default is False.

  • cloud (Optional[CloudBeta]) – Cloud object to use for interacting with Coiled. This object contains user/authentication/account information. If this is None (default), we look for a recently-cached Cloud object, and if none exists create one.

  • account (Optional[str]) – Name of Coiled account to use. If not provided, will default to the user account for the cloud object being used.

  • shutdown_on_close – Whether or not to shut down the cluster when it finishes. Defaults to True, unless name points to an existing cluster.

  • use_scheduler_public_ip (Optional[bool]) – Boolean value that determines if the Python client connects to the Dask scheduler using the scheduler machine’s public IP address. The default behaviour when set to True is to connect to the scheduler using its public IP address, which means traffic will be routed over the public internet. When set to False, traffic will be routed over the local network the scheduler lives in, so make sure the scheduler private IP address is routable from where this function call is made when setting this to False.

  • credentials (Optional[str]) – Which credentials to use for Dask operations and forward to Dask clusters – options are “account”, “local”, or “none”. The default behavior is to use local credentials if available. NOTE: credential handling currently only works with AWS credentials.

  • timeout (Union[int, float, None]) – Timeout in seconds to wait for a cluster to start, will use default_cluster_timeout set on parent Cloud by default.

  • environ (Optional[Dict[str, str]]) – Dictionary of environment variables.

  • send_dask_config (bool) – Whether to send a frozen copy of local dask.config to the cluster.

  • backend_options (Union[AWSOptions, GCPOptions, None]) – Dictionary of backend specific options.

  • tags (Optional[Dict[str, str]]) – Dictionary of tags.

  • wait_for_workers (Union[int, float, bool, None]) – Whether to wait for a number of workers before returning control of the prompt back to the user. Usually, computations will run better if you wait for most workers before submitting tasks to the cluster. You can wait for all workers by passing True, or not wait for any by passing False. You can pass a fraction of the total number of workers requested as a float (like 0.6), or a fixed number of workers as an int (like 13). If None, the value from coiled.wait-for-workers in your Dask config will be used. Default: 0.3. If the requested number of workers doesn’t launch within 10 minutes, the cluster will be shut down and a TimeoutError raised.

  • package_sync (Union[bool, List[str]]) – Attempt to synchronize package versions between your local environment and the cluster. Cannot be used with the software option. Passing True will sync all packages (recommended). Passing specific packages as a list of strings will attempt to synchronize only those packages, use with caution. We strongly recommend reading the additional documentation for this feature (see https://docs.coiled.io/user_guide/package_sync.html)!

  • package_sync_strict (bool) – Only allow exact packages matches, not recommended unless your client platform/architecture matches the cluster platform/architecture

  • private_to_creator (Optional[bool]) – Only allow the cluster creator, not other members of team account, to connect to this cluster.

  • scheduler_port (Optional[int]) – Specify a port other than the default (8786) for communication with Dask scheduler; this is useful if your client is on a network that blocks 8786.

  • allow_ingress_from (Optional[str]) – Control the CIDR from which cluster firewall allows ingress to scheduler; by default this is open to any source address (0.0.0.0/0). You can specify CIDR, or “me” for just your IP address.

  • allow_ssh (Optional[bool]) – Allow connections to scheduler over port 22, used for SSH.
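A minimal usage sketch follows. The worker counts, memory range, and region are illustrative assumptions, not recommendations, and the cloud call only runs when the function is invoked with a Coiled account configured.

```python
# Illustrative cluster configuration; tune for your workload.
cluster_kwargs = {
    "n_workers": 10,
    "worker_memory": ["8GiB", "16GiB"],  # acceptable memory range per worker
    "package_sync": True,                # mirror local package versions
    "backend_options": {"region_name": "us-east-2"},
}


def run_cluster():
    """Requires `pip install coiled dask distributed` and a Coiled account."""
    import coiled
    from dask.distributed import Client

    cluster = coiled.Cluster(**cluster_kwargs)
    client = Client(cluster)
    try:
        print(client.dashboard_link)
        # ... submit work here ...
    finally:
        cluster.close()  # shuts the cluster down unless shutdown_on_close=False
```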

adapt(Adaptive=<class 'coiled.cluster.CoiledAdaptive'>, **kwargs)[source]#

Dynamically scale the number of workers in the cluster based on scaling heuristics.

Parameters
  • minimum (int) – Minimum number of workers that the cluster should have while on low load, defaults to 1.

  • maximum (int) – Maximum numbers of workers that the cluster should have while on high load. If maximum is not set, this value will be based on your core count limit. This value is also capped by your core count limit.

  • wait_count (int) – Number of consecutive times that a worker should be suggested for removal before the cluster removes it, defaults to 60.

  • interval (timedelta or str) – Milliseconds between checks, defaults to 5000 ms.

  • target_duration (timedelta or str) – Amount of time we want a computation to take. This affects how aggressively the cluster scales up, defaults to 5s.

Return type

Adaptive
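For instance, a sketch of attaching adaptive scaling to a cluster; the bounds are illustrative assumptions:

```python
# Illustrative adaptive bounds: never fewer than 2 workers, never more than 20.
adapt_kwargs = {"minimum": 2, "maximum": 20}


def enable_adaptive():
    """Requires `pip install coiled` and a Coiled account."""
    import coiled

    cluster = coiled.Cluster(n_workers=2)
    cluster.adapt(**adapt_kwargs)
```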

property asynchronous#

Are we running in the event loop?

close(force_shutdown=False)[source]#

Close the cluster.

Return type

Optional[Awaitable[None]]

get_logs(scheduler=True, workers=True)[source]#

Return logs for the scheduler and workers

Parameters
  • scheduler (bool) – Whether or not to collect logs for the scheduler

  • workers (bool) – Whether or not to collect logs for the workers

Returns

logs – A dictionary of logs, with one item for the scheduler and one for the workers

Return type

Dict[str]
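The returned mapping can be post-processed locally; for example, a small helper that pulls out error lines. The sample log text below is fabricated, standing in for a real get_logs() result:

```python
def error_lines(logs):
    """From a get_logs()-style {name: log text} mapping, keep only lines
    mentioning ERROR."""
    return {
        name: [line for line in text.splitlines() if "ERROR" in line]
        for name, text in logs.items()
    }


# Fabricated sample logs:
sample = {
    "scheduler": "INFO scheduler up\nERROR worker lost",
    "worker-0": "INFO worker ready",
}
print(error_lines(sample))
```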

async recommendations(target)[source]#

Make scale up/down recommendations based on current state and target

Return type

dict

scale(n)[source]#

Scale cluster to n workers

Parameters

n (int) – Number of workers to scale cluster size to.

Return type

Optional[Awaitable[None]]

async scale_up(n)#

Scales up to a target number of n workers

It’s documented that scale_up should scale up to a certain target, not scale up BY a certain amount:

https://github.com/dask/distributed/blob/main/distributed/deploy/adaptive_core.py#L60

Return type

None

shutdown()[source]#

Shutdown the cluster; useful when shutdown_on_close is False.

Return type

Optional[Awaitable[None]]

sync(func, *args, asynchronous=None, callback_timeout=None, **kwargs)[source]#

Call func with args synchronously or asynchronously depending on the calling context

Return type

Union[TypeVar(_T), Awaitable[TypeVar(_T)]]

async workers_to_close(target)[source]#

Determine which, if any, workers should potentially be removed from the cluster.

Notes

Cluster.workers_to_close dispatches to Scheduler.workers_to_close(), but may be overridden in subclasses.

Return type

List of worker addresses to close, if any

See also

Scheduler.workers_to_close

coiled.cluster_logs(cluster_id, account=None, scheduler=True, workers=True, errors_only=False)[source]#

Returns cluster logs as a dictionary, with a key for the scheduler and each worker.

Changed in version 0.2.0: cluster_name is no longer accepted, use cluster_id instead.

coiled.delete_cluster(name, account=None)[source]#

Delete a cluster

Parameters

name (str) – Name of cluster to delete.

coiled.list_clusters(account=None)[source]#

List clusters

Parameters

account – Name of the Coiled account to list clusters for. If not provided, will use the coiled.account configuration value.

Returns

Dictionary with information about each cluster in the specified account. Keys in the dictionary are names of clusters, while the values contain information about the corresponding cluster.

coiled.list_core_usage(account=None)[source]#

Get a list of used cores.

Returns a table that shows the limit of cores that the user can use and a breakdown of the core usage split up between account, user and clusters.

Parameters
  • account (Optional[str]) – Name of the Coiled account to list core usage for. If not provided, will use the coiled.account configuration value.

  • json – If set to True, it will return this list in json format instead of a table.

Return type

dict

coiled.list_gpu_types()[source]#

List allowed GPU Types.

For AWS the GPU types are tied to the instance type, but for GCP you can add different GPU types to GPU enabled instances. Please refer to GPUs for more information.

Parameters

json – if set to True, it will return this list in json format instead of a table.

Return type

Dict

coiled.list_instance_types(backend=None, min_cores=None, min_gpus=None, min_memory=None, cores=None, memory=None, gpus=None)[source]#

List allowed instance types for the cloud provider configured on your account.

This command allows you to get all instance types available for a backend or a filtered list of instance types that match your requirements by using the available keyword arguments. Please refer to Selecting Instance Types for more information.

Parameters
  • backend (Optional[str]) – Relevant cloud provider (aws or gcp) to get a list of allowed instance types for. If not provided, the list will show the instance types for your account’s configured cloud provider.

  • min_cores (Optional[int]) – Filter results on the minimum number of required cores

  • min_gpus (Optional[int]) – Filter results on the minimum number of required GPUs

  • min_memory (Union[int, str, float, None]) – Filter results on the minimum amount of memory

  • cores (Union[int, list[int], None]) – The exact number of cores to filter by, for example cores=1, or a list containing the minimum and maximum number of cores to filter instances by, for example cores=[2,8].

  • memory (Union[int, str, float, list[int], list[str], list[float], None]) – The exact amount of memory or a list containing the minimum and maximum amount of memory to filter instances by.

  • gpus (Union[int, list[int], None]) – The exact number of GPUs to filter by, or a list containing the minimum and maximum number of GPUs to filter instances by.

Return type

dict[str, VmType]
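For example, a sketch of filtering for mid-sized instances; the bounds are arbitrary assumptions, and the call only runs when the function is invoked:

```python
# Arbitrary filter bounds: 2-8 cores and at least 16 GiB of memory.
filters = {"cores": [2, 8], "min_memory": "16 GiB"}


def show_instance_types():
    """Requires `pip install coiled` and a Coiled account."""
    import coiled

    types = coiled.list_instance_types(backend="aws", **filters)
    for name in sorted(types):
        print(name)
```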

class coiled.BackendOptions[source]#

A dictionary with the following key/value pairs

Parameters
  • region_name (Optional[str]) – Region name to launch cluster in. For example: us-east-2

  • zone_name (Optional[str]) – Zone name to launch cluster in. For example: us-east-2a

  • firewall (Optional[FirewallOptions]) – Allows you to specify firewall for scheduler; see FirewallOptions for details.

  • ingress (Optional[List[FirewallOptions]]) – Allows you to specify multiple CIDR blocks (and corresponding ports) to open for ingress on the scheduler firewall.

  • spot (Optional[bool]) – Whether to request spot instances.

  • spot_on_demand_fallback (Optional[bool]) – If requesting spot, whether to request non-spot instances if we get fewer spot instances than desired.

  • multizone (Optional[bool]) – Tell the cloud provider to pick the zone with the best availability; we’ll keep workers all in the same zone, and the scheduler may or may not be in that zone as well.

class coiled.AWSOptions#

A dictionary with the following key/value pairs plus any pairs in BackendOptions

Parameters

keypair_name (Optional[str]) – AWS Keypair to assign worker/scheduler instances

class coiled.GCPOptions#
class coiled.FirewallOptions#

A dictionary with the following key/value pairs

Parameters
  • ports (List[int]) – List of ports to open to cidr on the scheduler. For example, [22, 8786] opens port 22 for SSH and 8786 for client to Dask connection.

  • cidr (str) – CIDR block from which to allow access. For example 0.0.0.0/0 allows access from any IP address.
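Putting these together, an illustrative backend options dictionary; the region, zone, and CIDR values are examples, not recommendations:

```python
# Example only: pin a region and zone, prefer spot instances with on-demand
# fallback, and allow ingress to the scheduler solely from a private block.
backend_options = {
    "region_name": "us-east-2",
    "zone_name": "us-east-2a",
    "spot": True,
    "spot_on_demand_fallback": True,
    "ingress": [
        {"ports": [22, 8786], "cidr": "10.2.0.0/16"},
    ],
}
```

A dictionary like this would be passed as coiled.Cluster(backend_options=backend_options).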

Performance Reports#

coiled.list_performance_reports(account=None)[source]#

List performance reports stored on Coiled Cloud

Returns a list of dicts that contain information about Coiled Cloud hosted performance reports

Parameters

account – Account to get reports from. If not specified, the current / default account will be used.

Return type

List[Dict]

coiled.performance_report(filename='dask-report.html', private=False, account=None)[source]#

Generates a static performance report and saves it to Coiled Cloud

This context manager lightly wraps Dask’s performance_report. It generates a static performance report and uploads it to Coiled Cloud. After uploading, it prints out the URL where the report is hosted. For a list of hosted performance reports, use coiled.list_performance_reports(). Note that each user is limited to 5 hosted reports, each with a maximum file size of 10 MB.

The context manager yields an object that will have the url as an attribute, though the URL is not available inside the context but only after (see example).

Example:

with coiled.performance_report("filename") as perf_url:
    dask.compute(...)

assert isinstance(perf_url.url, str)
Parameters
  • filename – The file name of the performance report file.

  • private – If set to True, the uploaded performance report is only accessible to logged in Coiled users who are members of the current / default or specified account.

  • account – Account to upload the report to. If not specified, the current / default account will be used.

Return type

Generator[PerformanceReportURL, None, None]

Backend#

coiled.set_backend_options(account=None, backend='aws', ingress=None, firewall=None, network=None, aws_region='us-east-1', aws_access_key_id=None, aws_secret_access_key=None, gcp_service_creds_file=None, gcp_service_creds_dict=None, gcp_project_id=None, gcp_region=None, gcp_zone=None, instance_service_account=None, zone=None, registry_type='ecr', registry_namespace=None, registry_access_token=None, registry_uri='docker.io', registry_username=None, log_output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, **kwargs)[source]#

Configure account level settings for cloud provider and container registry.

This method configures account level backend settings for cloud providers, container registries, and setting up an account-level VPC for running clusters and other Coiled managed resources.

Parameters
  • account (Optional[str]) – Coiled account to configure if user has access. If not specified, current / default account will be utilized.

  • backend (Literal[‘aws’, ‘gcp’]) – Supported backends such as AWS VM (aws) and GCP VM (gcp).

  • ingress (Optional[List[Dict]]) – Specification of the ingress rules for the firewall/security group that Coiled creates for the cluster scheduler. This is a list of ingress rules; each rule is a dictionary with a list of ports and a CIDR block from which to allow ingress on those ports to the scheduler. For example, [{"ports": [8787], "cidr": "0.0.0.0/0"}, {"ports": [8786], "cidr": "10.2.0.0/16"}] would allow the dashboard on 8787 to be accessed from any IP address, and the scheduler comm on 8786 to only be accessed from IP addresses in the 10.2.0.0/16 local network block.

  • firewall (Optional[Dict]) – A single ingress rule for the scheduler firewall/security group; this is deprecated and ingress rules should be specified with ingress instead.

  • network (Optional[Dict]) – Specification for your network/subnets, dictionary can take ID(s) for existing network and/or subnet(s).

  • aws_region (str) – The region which Coiled cloud resources will be deployed to and where other resources such as the docker registry are located or where a specified VPC will be created.

  • aws_access_key_id (Optional[str]) – For AWS support backend, this argument is required to create or use an existing Coiled managed VPC.

  • aws_secret_access_key (Optional[str]) – For AWS support backend, this argument is required to create or use an existing Coiled managed VPC.

  • use_scheduler_public_ip – Determines if the client connects to the Dask scheduler using its public or internal address.

  • gcp_service_creds_file (Optional[str]) – A string filepath to a Google Cloud Compute service account json credentials file used for creating and managing a Coiled VPC.

  • gcp_service_creds_dict (Optional[dict]) – A dictionary of the contents of a Google Cloud Compute service account json credentials file used for creating a VPC to host Coiled Cloud related assets.

  • gcp_project_id (Optional[str]) – The Google Cloud Compute project id in which a VPC will be created to host Coiled Cloud related assets.

  • gcp_region (Optional[str]) – The Google Cloud Compute region name in which a VPC will be created.

  • instance_service_account (Optional[str]) – Email for optional service account to attach to cluster instances; using this is the best practice for granting access to your data stored in Google Cloud services. This should be a scoped service instance with only the permissions needed to run your computations.

  • zone (Optional[str]) – Optional; used to specify zone to use for clusters (for either AWS or GCP).

  • registry_type (Literal[‘ecr’, ‘docker_hub’, ‘gar’]) – Custom software environments are stored in a Docker container registry. By default, container images will be stored in AWS ECR. Users can store container images in a private registry by providing the additional registry_* configuration arguments and specifying registry_type=’docker_hub’. To use Google Artifact Registry, pass registry_type=’gar’, gcp_project_id, gcp_region, and one of gcp_service_creds_dict or gcp_service_creds_file.

  • registry_uri (str) – The container registry URI. Defaults to docker.io. Only required if registry_type=’docker_hub’.

  • registry_username (Optional[str]) – A registry username (should be lowercased). Only required if registry_type=’docker_hub’.

  • registry_namespace (Optional[str]) – A namespace for storing the container images. Defaults to username if not specified. More information about docker namespaces can be found here: https://docs.docker.com/docker-hub/repos/#creating-a-repository. Only required if registry_type=’docker_hub’.

  • registry_access_token (Optional[str]) – A token to access registry images. More information about access tokens can be found here: https://docs.docker.com/docker-hub/access-tokens/. Only required if registry_type=’docker_hub’.
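As a sketch, configuring a GCP backend with two ingress rules; the project id, region, and credentials path are placeholders, and the call only runs when the function is invoked:

```python
ingress_rules = [
    {"ports": [8787], "cidr": "0.0.0.0/0"},    # dashboard open to any address
    {"ports": [8786], "cidr": "10.2.0.0/16"},  # scheduler comm kept private
]


def configure_backend():
    """Requires `pip install coiled` and a Coiled account with GCP access."""
    import coiled

    coiled.set_backend_options(
        backend="gcp",
        gcp_project_id="my-project",                  # placeholder
        gcp_region="us-central1",
        gcp_service_creds_file="service-creds.json",  # placeholder path
        ingress=ingress_rules,
    )
```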

Information#

coiled.diagnostics(account=None)[source]#

Run a diagnostic check to help support troubleshoot any issues.

This command calls several other commands to dump information that can help in troubleshooting. It returns a JSON object that you can easily share with the Coiled support team if needed.

Parameters

account (Optional[str]) – Name of the Coiled account to use. If not provided, will use the coiled.account configuration value.

Return type

dict

coiled.get_billing_activity(account=None, cluster=None, cluster_id=None, start_time=None, end_time=None, kind=None, page=None)[source]#

Retrieve Billing information.

Parameters
  • account (Optional[str]) – The account to retrieve billing information from. If not provided, will default to Cloud.account configuration value.

  • cluster (Optional[str]) – Cluster name. Filter billing events to this cluster. Defaults to None.

  • cluster_id (Optional[int]) – Cluster id. Filter billing events to this cluster by id. Defaults to None.

  • start_time (Optional[str]) – Filter events after this datetime (isoformat). Defaults to None.

  • end_time (Optional[str]) – Filter events before this datetime (isoformat). Defaults to None.

  • kind (Optional[Literal[‘instance’, ‘monthly_grant’, ‘manual_adjustment’, ‘payg_payment’]]) – Filter events to this kind of event. Defaults to None.

  • page (Optional[int]) – Grab events from this page. Defaults to None.

Return type

Dict

coiled.get_notifications(json=False, account=None, limit=100, level=0, event_type=None)[source]#

Get a list of all recent notifications.

Parameters
  • account (Optional[str]) – Name of the Coiled account to list notifications for. If not provided, will use the coiled.account configuration value.

  • json (bool) – If set to True, it will return this list in json format instead of a table.

  • limit (int) – The max number of notifications to return.

  • level (Union[int, str]) – A constant from the standard python logging library (e.g., logging.INFO), or a string of one of the following: debug, info, warning, error, or critical. This will be used to filter the returned notifications.

  • event_type (Optional[Literal[‘add_role_to_profile’, ‘attach_gateway_to_router’, ‘attach_subnet_to_router’, ‘create_vm’, ‘create_machine_image’, ‘create_scheduler’, ‘create_worker’, ‘delete_machine_image’, ‘create_fw_rule’, ‘create_fw’, ‘create_network_cidr’, ‘create_subnet’, ‘create_network’, ‘create_log_sink’, ‘create_router’, ‘create_iam_role’, ‘create_log_bucket’, ‘create_storage_bucket’, ‘create_instance_profile’, ‘check_log_sink_exists’, ‘check_or_attach_cloudwatch_policy’, ‘delete_vm’, ‘delete_route’, ‘get_firewall’, ‘get_network’, ‘get_subnet’, ‘get_policy_arn’, ‘get_log_group’, ‘gcp_instance_create’, ‘net_gateways_get_or_create’, ‘scale’]]) – The event_type that you wish to get notifications for. For example, you might want to see only vm_event types.

Return type

Optional[List[dict]]
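For example, a sketch of fetching only warnings and above; the limit and json flag are illustrative choices, and the call only runs when the function is invoked:

```python
# Only warnings and above, capped at 25 entries (illustrative choices).
notification_kwargs = {"level": "warning", "limit": 25, "json": True}


def recent_warnings():
    """Requires `pip install coiled` and a Coiled account."""
    import coiled

    return coiled.get_notifications(**notification_kwargs)
```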

coiled.list_local_versions()[source]#

Get information about local versions.

Returns the versions of Python, Coiled, Dask and Distributed that are installed locally. This information could be useful when troubleshooting issues.

Parameters

json – If set to True, it will return this list in json format instead of a table.

Return type

dict

coiled.list_user_information()[source]#

List information about your user.

This command will give you more information about your account, which teams you are part of and any limits that your account might have.

Return type

dict

Command Line API Reference#

coiled login#

Configure your Coiled account credentials

coiled login [OPTIONS]

Options

-s, --server <server>#

Coiled server to use

-t, --token <token>#

Coiled user token

-a, --account <account>#

Coiled account

--retry, --no-retry#

Whether or not to automatically ask for a new token if an invalid token is entered

coiled install#

Create Coiled conda software environment locally

coiled install [OPTIONS] NAME

Arguments

NAME#

Required argument

coiled env create#

Create a Coiled software environment

coiled env create [OPTIONS]

Options

-n, --name <name>#

Name of software environment, it must be lowercase.

--container <container>#

Base docker image to use.

--conda <conda>#

Conda environment file.

--pip <pip>#

Pip requirements file.

--post-build <post_build>#

Post-build script.

--conda-env-name <conda_env_name>#

Name of conda environment to install packages into. Only use when using --container, and the image expects commands to run in a conda environment not named “coiled”

--private#

Flag to set software environment private.

--force-rebuild#

Skip checks for an existing software environment build.

-e, --environ <environ>#

Custom environment variable(s).

--account <account>#

Account to use for creating this software environment.

coiled env delete#

Delete a Coiled software environment

coiled env delete [OPTIONS] NAME

Arguments

NAME#

Required argument

coiled env list#

List the Coiled software environments in an account

coiled env list [OPTIONS] [ACCOUNT]

Arguments

ACCOUNT#

Optional argument

coiled env inspect#

View the details of a Coiled software environment

coiled env inspect [OPTIONS] NAME

Arguments

NAME#

Required argument

coiled cluster ssh#

coiled cluster ssh [OPTIONS] CLUSTER

Options

--private#

Use private IP address of scheduler (default is public IP address)

--worker <worker>#

Connect to worker with specified name or private IP address (default is to connect to scheduler)

Arguments

CLUSTER#

Required argument

coiled cluster logs#

coiled cluster logs [OPTIONS] CLUSTER

Options

--account <account>#

Coiled account (uses default account if not specified)

--scheduler#

Get scheduler logs

--workers <workers>#

Get worker logs (‘any’, ‘all’, or comma-delimited list of names, states, or internal IP addresses)

--follow#

Passed directly to aws logs tail, see aws cli docs for details.

--filter <filter>#

Passed directly to aws logs tail, see aws cli docs for details.

--since <since>#

For follow, uses aws logs tail default (10m), otherwise defaults to start time of cluster.

--format <format>#

Passed directly to aws logs tail, see aws cli docs for details.

--profile <profile>#

Passed directly to aws logs tail, see aws cli docs for details.

Arguments

CLUSTER#

Required argument