API Reference#
Python API Reference#
Software Environments#
- coiled.create_software_environment(name=None, *, account=None, conda=None, pip=None, container=None, log_output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, post_build=None, conda_env_name=None, backend_options=None, private=False, force_rebuild=False, environ=None, use_entrypoint=True)[source]#
Create a software environment
- Parameters
name (Optional[str]) – Name of software environment. Name can't contain uppercase letters.
account (Optional[str]) – The account in which to create the software environment, if not given in the name.
conda (Union[list, dict, str, None]) – Specification for packages to install into the software environment using conda. Can be a list of packages, a dictionary, or a path to a conda environment YAML file.
pip (Union[list, str, None]) – Packages to install into the software environment using pip. Can be a list of packages or a path to a pip requirements file.
container (Optional[str]) – Docker image to use for the software environment. Must be the name of a docker image on Docker Hub. Defaults to coiled/default.
post_build (Union[list, str, None]) – List of commands or path to a local executable script to run after pip and conda packages have been installed.
log_output – Stream to output logs to. Defaults to sys.stdout.
conda_env_name (Optional[str]) – Name of conda environment to install packages into. Note that this should only be used when specifying a non-default value for container and when the non-default Docker image used expects commands to run in a conda environment not named "coiled". Defaults to "coiled".
backend_options (Optional[Dict]) – Dictionary of backend specific options (e.g. {'region': 'us-east-2'}). Any options specified with this keyword argument will take precedence over those stored in the coiled.backend-options configuration value.
private (bool) – Whether this software environment is private or public. Defaults to False.
force_rebuild (bool) – By default, if an existing software environment with the same name and dependencies already exists, a rebuild is aborted. If this is set to True, those checks are skipped and the environment will be rebuilt. Defaults to False.
environ (Optional[Dict]) – Dictionary of environment variables.
use_entrypoint (bool) – Whether to use (or override) the entrypoint set on the container.
- Return type
None
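As an illustration, a minimal sketch of building an environment from conda and pip package lists; the environment name and packages are placeholders, and the dictionary form of conda is assumed to mirror a conda environment.yml file:
>>> import coiled
>>> coiled.create_software_environment(
...     name="my-env",  # placeholder name
...     conda={"channels": ["conda-forge"], "dependencies": ["python=3.9", "dask"]},
...     pip=["s3fs"],
... )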
- coiled.delete_software_environment(name, account=None)[source]#
Delete a software environment
- Parameters
name – Name of software environment to delete.
- coiled.get_software_info(name, account=None)[source]#
Retrieve solved spec for a Coiled software environment
- Parameters
name (str) – Software environment name
- Returns
Coiled software environment information
- Return type
results
- coiled.inspect(name)[source]#
View the details of a Coiled software environment
- Parameters
name (str) – Identifier of the software environment to use, in the format (<account>/)<name>. If the software environment is owned by the same account as that passed into "account", the (<account>/) prefix is optional.
For example, suppose your account is “wondercorp”, but your friends at “friendlycorp” have an environment named “xgboost” that you want to use; you can specify this with “friendlycorp/xgboost”. If you simply entered “xgboost”, this is shorthand for “wondercorp/xgboost”.
The “name” portion of (<account>/)<name> can only contain ASCII letters, hyphens and underscores.
Examples
>>> import coiled
>>> coiled.inspect("coiled/default")
- coiled.install(name)[source]#
Create a Coiled software environment locally
- Parameters
name –
Identifier of the software environment to use, in the format (<account>/)<name>. If the software environment is owned by the same account as that passed into “account”, the (<account>/) prefix is optional.
For example, suppose your account is “wondercorp”, but your friends at “friendlycorp” have an environment named “xgboost” that you want to use; you can specify this with “friendlycorp/xgboost”. If you simply entered “xgboost”, this is shorthand for “wondercorp/xgboost”.
The “name” portion of (<account>/)<name> can only contain ASCII letters, hyphens and underscores.
Examples
>>> import coiled
>>> coiled.install("coiled/default")
- coiled.list_software_environments(account=None)[source]#
List software environments
- Parameters
account – Name of the Coiled account to list software environments. If not provided, will use the coiled.account configuration value.
- Returns
Dictionary with information about each software environment in the specified account. Keys in the dictionary are names of software environments, while the values contain information about the corresponding software environment.
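For example, a short sketch that lists the environments in your default account and prints their names:
>>> import coiled
>>> envs = coiled.list_software_environments()  # dict keyed by environment name
>>> list(envs)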
Clusters#
- class coiled.Cluster(name=None, *, software=None, n_workers=4, worker_class=None, worker_options=None, worker_vm_types=None, worker_cpu=None, worker_memory=None, worker_disk_size=None, worker_gpu=None, worker_gpu_type=None, scheduler_class=None, scheduler_options=None, scheduler_vm_types=None, scheduler_cpu=None, scheduler_memory=None, asynchronous=False, cloud=None, account=None, shutdown_on_close=None, use_scheduler_public_ip=None, credentials='account', timeout=None, environ=None, tags=None, backend_options=None, show_widget=True, configure_logging=False, wait_for_workers=None)#
Create a Dask cluster with Coiled
- Parameters
n_workers (int) – Number of workers in this cluster. Defaults to 4.
name (Optional[str]) – Name to use for identifying this cluster. Defaults to None.
software (Optional[str]) – Name of the software environment to use.
worker_class (Optional[str]) – Worker class to use. Defaults to "dask.distributed.Nanny".
worker_options (Optional[dict]) – Mapping with keyword arguments to pass to worker_class. Defaults to {}.
worker_vm_types (Optional[list]) – List of instance types that you would like workers to use; the default instance type selected contains 2 cores. You can use the command coiled.list_instance_types() to see a list of allowed types.
worker_cpu (Union[int, List[int], None]) – Number of CPUs allocated for each worker. You may specify a range of cores by using a list of two elements, for example: worker_cpu=[2, 8].
worker_memory (Union[str, List[str], None]) – Amount of memory to allocate for each worker. Coiled will use a +/- 10% buffer from the memory that you specify. You may specify a range of memory by using a list of two elements, for example: worker_memory=["2GiB", "4GiB"].
worker_disk_size (Optional[int]) – Non-default size of persistent disk attached to each worker instance, specified in GB.
worker_gpu (Optional[int]) – For instance types that don't come with a fixed number of GPUs, the number of GPUs to attach. This only applies to GCP, and will default to 1 if you specify worker_gpu_type. Coiled currently only supports a single GPU per instance.
worker_gpu_type (Optional[str]) – For instance types that don't always come with a GPU, the type of GPU to attach. This only applies to GCP. Should match the way the cloud provider specifies the GPU, for example: worker_gpu_type="nvidia-tesla-t4".
scheduler_class (Optional[str]) – Scheduler class to use. Defaults to "dask.distributed.Scheduler".
scheduler_options (Optional[dict]) – Mapping with keyword arguments to pass to scheduler_class. Defaults to {}.
scheduler_vm_types (Optional[list]) – List of instance types that you would like the scheduler to use; the default instance type selected contains 2 cores. You can use the command coiled.list_instance_types() to see a list of allowed types.
scheduler_cpu (Union[int, List[int], None]) – Number of CPUs allocated for the scheduler. You may specify a range of cores by using a list of two elements, for example: scheduler_cpu=[2, 8].
scheduler_memory (Union[str, List[str], None]) – Amount of memory to allocate for the scheduler. Coiled will use a +/- 10% buffer from the memory that you specify. You may specify a range of memory by using a list of two elements, for example: scheduler_memory=["2GiB", "4GiB"].
asynchronous (bool) – Set to True if using this Cloud within async/await functions or within Tornado gen.coroutines. Otherwise this should remain False for normal use. Default is False.
cloud (Optional[CloudBeta]) – Cloud object to use for interacting with Coiled. This object contains user/authentication/account information. If this is None (default), we look for a recently-cached Cloud object, and if none exists create one.
account (Optional[str]) – Name of Coiled account to use. If not provided, will default to the user account for the cloud object being used.
shutdown_on_close – Whether or not to shut down the cluster when it finishes. Defaults to True, unless name points to an existing cluster.
use_scheduler_public_ip (Optional[bool]) – Boolean value that determines if the Python client connects to the Dask scheduler using the scheduler machine's public IP address. The default behavior when set to True is to connect to the scheduler using its public IP address, which means traffic will be routed over the public internet. When set to False, traffic will be routed over the local network the scheduler lives in, so make sure the scheduler private IP address is routable from where this function call is made when setting this to False.
credentials (Optional[str]) – Which credentials to use for Dask operations and forward to Dask clusters – options are "account", "local", or "none". The default behavior is to prefer credentials associated with the Coiled Account, if available, then try to use local credentials, if available. NOTE: credential handling currently only works with AWS credentials.
timeout (Union[int, float, None]) – Timeout in seconds to wait for a cluster to start; will use default_cluster_timeout set on the parent Cloud by default.
environ (Optional[Dict[str, str]]) – Dictionary of environment variables.
backend_options (Union[AWSOptions, GCPOptions, None]) – Dictionary of backend specific options.
tags (Optional[Dict[str, str]]) – Dictionary of tags.
wait_for_workers – Whether or not to wait for a number of workers before returning control of the prompt back to the user. Usually, computations will run better if you wait for most workers before submitting tasks to the cluster. You can wait for all workers by passing True, or not wait for any by passing False. You can pass a fraction of the total number of workers requested as a float (like 0.6), or a fixed number of workers as an int (like 13). If None, the value from coiled.wait-for-workers in your Dask config will be used. Default: 0.3. If the requested number of workers don't launch within 10 minutes, the cluster will be shut down and a TimeoutError is raised.
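As a quick orientation before the method listing, here is a minimal sketch of launching a cluster and connecting a Dask client to it; the software environment name is a placeholder, and dask.distributed is assumed to be installed locally:
>>> import coiled
>>> from dask.distributed import Client
>>> cluster = coiled.Cluster(
...     n_workers=4,
...     software="my-env",  # placeholder software environment name
...     worker_cpu=2,
...     worker_memory="8GiB",
... )
>>> client = Client(cluster)  # run Dask computations on the cluster
>>> client.close()
>>> cluster.close()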
- adapt(Adaptive=<class 'coiled.cluster.CoiledAdaptive'>, **kwargs)[source]#
Dynamically scale the number of workers in the cluster based on scaling heuristics.
- Parameters
minimum (int) – Minimum number of workers that the cluster should have while on low load, defaults to 1.
maximum (int) – Maximum number of workers that the cluster should have while on high load. If maximum is not set, this value will be based on your core count limit. This value is also capped by your core count limit.
wait_count (int) – Number of consecutive times that a worker should be suggested for removal before the cluster removes it, defaults to 60.
interval (timedelta or str) – Milliseconds between checks, defaults to 5000 ms.
target_duration (timedelta or str) – Amount of time we want a computation to take. This affects how aggressively the cluster scales up, defaults to 5s.
- Return type
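For example, a short sketch of enabling adaptive scaling on an existing cluster object (the bounds shown are illustrative):
>>> cluster.adapt(minimum=2, maximum=20)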
- property asynchronous#
Are we running in the event loop?
- get_logs(scheduler=True, workers=True)[source]#
Return logs for the scheduler and workers
- Parameters
scheduler (bool) – Whether or not to collect logs for the scheduler
workers (bool) – Whether or not to collect logs for the workers
- Returns
logs – A dictionary of logs, with one item for the scheduler and one for the workers
- Return type
Dict[str]
- async recommendations(target)[source]#
Make scale up/down recommendations based on current state and target
- Return type
dict
- scale(n)[source]#
Scale cluster to n workers
- Parameters
n (int) – Number of workers to scale cluster size to.
- Return type
Optional[Awaitable[None]]
- async scale_up(n)#
Scales up to a target number of n workers
It's documented that scale_up should scale up to a certain target, not scale up BY a certain amount:
https://github.com/dask/distributed/blob/main/distributed/deploy/adaptive_core.py#L60
- Return type
None
- sync(func, *args, asynchronous=None, callback_timeout=None, **kwargs)[source]#
Call func with args synchronously or asynchronously depending on the calling context
- Return type
Union[TypeVar(_T), Awaitable[TypeVar(_T)]]
- async workers_to_close(target)[source]#
Determine which, if any, workers should potentially be removed from the cluster.
Notes
Cluster.workers_to_close dispatches to Scheduler.workers_to_close(), but may be overridden in subclasses.
- Return type
List of worker addresses to close, if any
See also
Scheduler.workers_to_close
- coiled.cluster_logs(cluster_id, account=None, scheduler=True, workers=True, errors_only=False)[source]#
Returns cluster logs as a dictionary, with a key for the scheduler and each worker.
Changed in version 0.2.0: cluster_name is no longer accepted, use cluster_id instead.
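A brief sketch of fetching only the error lines for a cluster by id (the id value is illustrative):
>>> import coiled
>>> logs = coiled.cluster_logs(cluster_id=1234, errors_only=True)
>>> list(logs)  # one key for the scheduler plus one per worker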
- coiled.delete_cluster(name, account=None)[source]#
Delete a cluster
- Parameters
name (str) – Name of cluster to delete.
- coiled.list_clusters(account=None)[source]#
List clusters
- Parameters
account – Name of the Coiled account to list clusters. If not provided, will use the coiled.account configuration value.
- Returns
Dictionary with information about each cluster in the specified account. Keys in the dictionary are names of clusters, while the values contain information about the corresponding cluster.
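For instance, a small sketch that lists the clusters in your default account and then deletes one by name (the cluster name is hypothetical):
>>> import coiled
>>> clusters = coiled.list_clusters()  # dict keyed by cluster name
>>> coiled.delete_cluster(name="my-cluster")  # hypothetical cluster name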
- coiled.list_core_usage(account=None)[source]#
Get a list of used cores.
Returns a table that shows the limit of cores that the user can use and a breakdown of the core usage split up between account, user and clusters.
- Parameters
account (Optional[str]) – Name of the Coiled account to list core usage. If not provided, will use the coiled.account configuration value.
json – If set to True, it will return this list in json format instead of a table.
- Return type
dict
- coiled.list_gpu_types()[source]#
List allowed GPU Types.
For AWS the GPU types are tied to the instance type, but for GCP you can add different GPU types to GPU enabled instances. Please refer to Selecting GPU Types for more information.
- Parameters
json – If set to True, it will return this list in json format instead of a table.
- Return type
Dict
- coiled.list_instance_types(backend=None, min_cores=None, min_gpus=None, min_memory=None, cores=None, memory=None, gpus=None)[source]#
List allowed instance types for the cloud provider configured on your account.
This command allows you to get all instance types available for a backend or a filtered list of instance types that match your requirements by using the available keyword arguments. Please refer to Selecting Instance Types for more information.
- Parameters
backend – Relevant cloud provider (aws or gcp) to get a list of allowed instance types. If not provided the list will show the instances for your account cloud provider.
min_cores – Filter results on the minimum number of required cores
min_gpus – Filter results on the minimum number of required GPUs
min_memory – Filter results on the minimum amount of memory
cores – The exact number of cores to filter for, for example cores=1, or a list containing the minimum and maximum number of cores to filter instances by, for example cores=[2,8].
memory – The exact amount of memory or a list containing the minimum and maximum amount of memory to filter instances by.
gpus – The exact number of GPUs to filter for, or a list containing the minimum and maximum number of GPUs to filter instances by.
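For example, a quick sketch of filtering instance types, first by minimum cores and GPUs, then by a range of cores:
>>> import coiled
>>> coiled.list_instance_types(min_cores=4, min_gpus=1)
>>> coiled.list_instance_types(cores=[2, 8])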
- class coiled.BackendOptions(*args, **kwargs)[source]#
A dictionary with the following key/value pairs
- Parameters
region_name (Optional[str]) – Region name to launch cluster in. For example: us-east-2
zone_name (Optional[str]) – Zone name to launch cluster in. For example: us-east-2a
firewall (Optional[FirewallOptions]) – Allows you to specify firewall for scheduler; see FirewallOptions for details.
ingress (Optional[List[FirewallOptions]]) – Allows you to specify multiple CIDR blocks (and corresponding ports) to open for ingress on the scheduler firewall.
spot (Optional[bool]) – Whether to request spot instances.
spot_on_demand_fallback (Optional[bool]) – If requesting spot, whether to request non-spot instances if we get fewer spot instances than desired.
- class coiled.AWSOptions(*args, **kwargs)#
A dictionary with the following key/value pairs plus any pairs in BackendOptions
- Parameters
keypair_name (Optional[str]) – AWS Keypair to assign worker/scheduler instances
- class coiled.GCPOptions(*args, **kwargs)#
- class coiled.FirewallOptions(*args, **kwargs)#
A dictionary with the following key/value pairs
- Parameters
ports (List[int]) – List of ports to open to cidr on the scheduler. For example, [22, 8786] opens port 22 for SSH and 8786 for client to Dask connection.
cidr (str) – CIDR block from which to allow access. For example, 0.0.0.0/0 allows access from any IP address.
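As an illustration of how these option dictionaries fit together, the sketch below passes backend options, including a scheduler firewall, to a cluster. Since these classes are typed dictionaries, a plain dict with the same keys is assumed to work, and the region and CIDR values are placeholders:
>>> import coiled
>>> options = {
...     "region_name": "us-east-2",
...     "spot": True,
...     "firewall": {"ports": [22, 8786], "cidr": "10.0.0.0/16"},
... }
>>> cluster = coiled.Cluster(n_workers=2, backend_options=options)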
Performance Reports#
- coiled.list_performance_reports(account=None)[source]#
List performance reports stored on Coiled Cloud
Returns a list of dicts that contain information about Coiled Cloud hosted performance reports
- Parameters
account – Associated account for which the user wishes to get reports from. If not specified, current / default account will be utilized.
- Return type
List[Dict]
- coiled.performance_report(filename='dask-report.html', private=False, account=None)[source]#
Generates a static performance report and saves it to Coiled Cloud
This context manager lightly wraps Dask's performance_report. It generates a static performance report and uploads it to Coiled Cloud. After uploading, it prints out the url where the report is hosted. For a list of hosted performance reports, use coiled.list_performance_reports(). Note that each user is limited to 5 hosted reports, each with a maximum file size of 10 MB.
- Parameters
filename – The file name of the performance report file.
private – If set to True, the uploaded performance report is only accessible to logged in Coiled users who are members of the current / default or specified account.
account – The account which the user wishes to upload to. If not specified, the current / default account will be used.
- Return type
Generator[None, None, None]
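A minimal sketch of wrapping a computation in the context manager (the Dask work shown is just a placeholder):
>>> import coiled
>>> import dask
>>> with coiled.performance_report(filename="my-report.html"):
...     dask.delayed(sum)([1, 2, 3]).compute()  # any Dask work here is captured in the report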
Backend#
- coiled.set_backend_options(use_coiled_defaults=False, account=None, backend='aws', customer_hosted=False, create_vpc=False, firewall=None, network=None, aws_region='us-east-1', aws_access_key_id=None, aws_secret_access_key=None, gcp_service_creds_file=None, gcp_service_creds_dict=None, gcp_project_id=None, gcp_region=None, gcp_zone=None, instance_service_account=None, zone=None, registry_type='ecr', registry_namespace=None, registry_access_token=None, registry_uri='docker.io', registry_username=None, log_output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, **kwargs)[source]#
Configure account level settings for cloud provider and container registry.
This method configures account-level backend settings for cloud providers and container registries, and sets up an account-level VPC for running clusters and other Coiled managed resources.
- Parameters
use_coiled_defaults – Boolean to reset backend options to Coiled default settings, which are the AWS VM backend with a default region of us-east-1.
account (Optional[str]) – Coiled account to configure if the user has access. If not specified, the current / default account will be used.
backend – Supported backends such as AWS VM (vm_aws) and GCP VM (vm_gcp).
firewall (Optional[Dict]) – Specification for your firewall/security group. This dictionary takes a list of ports and a CIDR block value which will be used to configure the firewall/security group that Coiled creates when creating clusters. For example: {"ports": [22], "cidr": "0.0.0.0/0"}; you don't need to specify both keys if you don't wish.
network (Optional[Dict]) – Specification for your network/subnets; the dictionary can take ID(s) for an existing network and/or subnet(s).
customer_hosted (bool) – Set to True when setting up a backend that will be hosted in your own cloud provider account. Note that supplying valid credentials for your cloud provider is required for Coiled to manage setup, configuration, and access policies for users of this account.
create_vpc (bool) – Deprecated; use customer_hosted instead.
aws_region (str) – The region which Coiled cloud resources will be deployed to and where other resources such as the docker registry are located or where a specified VPC will be created.
aws_access_key_id (Optional[str]) – For the AWS backend, this argument is required to create or use an existing Coiled managed VPC.
aws_secret_access_key (Optional[str]) – For the AWS backend, this argument is required to create or use an existing Coiled managed VPC.
use_scheduler_public_ip – Determines if the client connects to the Dask scheduler using its public or internal address.
gcp_service_creds_file (Optional[str]) – A string filepath to a Google Cloud Compute service account json credentials file used for creating and managing a Coiled VPC.
gcp_service_creds_dict (Optional[dict]) – A dictionary of the contents of a Google Cloud Compute service account json credentials file used for creating a VPC to host Coiled Cloud related assets.
gcp_project_id (Optional[str]) – The Google Cloud Compute project id in which a VPC will be created to host Coiled Cloud related assets.
gcp_region (Optional[str]) – The Google Cloud Compute region name in which a VPC will be created.
gcp_zone (Optional[str]) – The Google Cloud Compute zone name in which clusters will be created (the zone should be <region>-<zone letter>; for example, zone C for us-east1 should be specified as us-east1-c). Deprecated; zone will override this.
instance_service_account (Optional[str]) – The service account to be attached to the instances created by Coiled. This should be a scoped service account with only the permissions needed to run your computations.
zone (Optional[str]) – Optional; used to specify the zone to use for clusters (for either AWS or GCP).
registry_type (Literal['ecr', 'docker_hub', 'gar', 'acr']) – Custom software environments are stored in a docker container registry. By default, container images will be stored in AWS ECR. Users are able to store containers in a private registry by providing additional registry_* arguments and specifying registry_type='docker_hub'. To use Google Artifact Registry, pass registry_type='gar', gcp_project_name, gcp_region_name, and one of gcp_service_creds_dict or gcp_service_creds_file.
registry_uri (str) – The container registry URI. Defaults to docker.io. Only required if registry_type='docker_hub'.
registry_username (Optional[str]) – A registry username (should be lowercased). Only required if registry_type='docker_hub'.
registry_namespace (Optional[str]) – A namespace for storing the container images. Defaults to username if not specified. More information about docker namespaces can be found here: https://docs.docker.com/docker-hub/repos/#creating-repositories. Only required if registry_type='docker_hub'.
registry_access_token (Optional[str]) – A token to access registry images. More information about access tokens can be found here: https://docs.docker.com/docker-hub/access-tokens/. Only required if registry_type='docker_hub'.
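As a rough sketch, configuring an AWS backend hosted in your own cloud account might look like the following; the region and credential values are placeholders:
>>> import coiled
>>> coiled.set_backend_options(
...     customer_hosted=True,
...     aws_region="us-east-2",
...     aws_access_key_id="AKIA...",  # placeholder
...     aws_secret_access_key="...",  # placeholder
... )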
Information#
- coiled.diagnostics(account=None)[source]#
Run a diagnostic check aimed at helping support with any issues.
This command calls other commands to dump information that could help in troubleshooting issues. It returns a JSON object that is easy to share with the Coiled support team if needed.
- Parameters
account (Optional[str]) – Name of the Coiled account to run the diagnostic check against. If not provided, will use the coiled.account configuration value.
- Return type
dict
- coiled.get_billing_activity(account=None, cluster=None, cluster_id=None, start_time=None, end_time=None, kind=None, page=None)[source]#
Retrieve Billing information.
- Parameters
account (Optional[str]) – The account to retrieve billing information from. If not provided, will default to the Cloud.account configuration value.
cluster (Optional[str]) – Cluster name. Filter billing events to this cluster. Defaults to None.
cluster_id (Optional[int]) – Cluster id. Filter billing events to this cluster by id. Defaults to None.
start_time (Optional[str]) – Filter events after this datetime (isoformat). Defaults to None.
end_time (Optional[str]) – Filter events before this datetime (isoformat). Defaults to None.
kind (Optional[Literal['instance', 'monthly_grant', 'manual_adjustment', 'payg_payment']]) – Filter events to this kind of event. Defaults to None.
page (Optional[int]) – Grab events from this page. Defaults to None.
- Return type
Dict
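For example, a small sketch of pulling instance-related billing events for a single cluster (the cluster name is hypothetical):
>>> import coiled
>>> events = coiled.get_billing_activity(cluster="my-cluster", kind="instance")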
- coiled.get_notifications(json=False, account=None, limit=100, level=0, event_type=None)[source]#
Get a list of all recent notifications.
- Parameters
account (Optional[str]) – Name of the Coiled account to list notifications. If not provided, will use the coiled.account configuration value.
json (bool) – If set to True, it will return this list in json format instead of a table.
limit (int) – The max number of notifications to return.
level (Union[int, str]) – A constant from the standard python logging library (e.g., logging.INFO), or a string of one of the following: debug, info, warning, error, or critical. This will be used to filter the returned notifications.
event_type (Optional[Literal['add_role_to_profile', 'attach_gateway_to_router', 'attach_subnet_to_router', 'create_vm', 'create_machine_image', 'create_scheduler', 'create_worker', 'delete_machine_image', 'create_fw_rule', 'create_fw', 'create_network_cidr', 'create_subnet', 'create_network', 'create_log_sink', 'create_router', 'create_iam_role', 'create_log_bucket', 'create_storage_bucket', 'create_instance_profile', 'check_log_sink_exists', 'check_or_attach_cloudwatch_policy', 'delete_vm', 'delete_route', 'get_firewall', 'get_network', 'get_subnet', 'get_policy_arn', 'get_log_group', 'gcp_instance_create', 'net_gateways_get_or_create', 'scale']]) – The event_type that you wish to get notifications for. For example, you might want to see only vm_event types.
- Return type
Optional[List[dict]]
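For instance, a short sketch that fetches only the most recent error-level notifications as JSON:
>>> import coiled
>>> coiled.get_notifications(level="error", limit=20, json=True)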
- coiled.list_local_versions()[source]#
Get information about local versions.
Returns the versions of Python, Coiled, Dask and Distributed that are installed locally. This information could be useful when troubleshooting issues.
- Parameters
json – If set to True, it will return this list in json format instead of a table.
- Return type
dict
Command Line API Reference#
coiled login#
Configure your Coiled account credentials
coiled login [OPTIONS]
Options
- -s, --server <server>#
Coiled server to use
- -t, --token <token>#
Coiled user token
- -a, --account <account>#
Coiled account
- --retry, --no-retry#
Whether or not to automatically ask for a new token if an invalid token is entered
coiled install#
Create a Coiled conda software environment locally
coiled install [OPTIONS] NAME
Arguments
- NAME#
Required argument
coiled env create#
Create a Coiled software environment
coiled env create [OPTIONS]
Options
- -n, --name <name>#
Name of the software environment; it must be lowercase.
- --container <container>#
Base docker image to use.
- --conda <conda>#
Conda environment file.
- --pip <pip>#
Pip requirements file.
- --post-build <post_build>#
Post-build script.
- --conda-env-name <conda_env_name>#
Name of conda environment to install packages into. Only use when using --container, and the image expects commands to run in a conda environment not named "coiled"
- --private#
Flag to set software environment private.
- --force-rebuild#
Skip checks for an existing software environment build.
- -e, --environ <environ>#
Custom environment variable(s).
- --account <account>#
Account to use for creating this software environment.
coiled env delete#
Delete a Coiled software environment
coiled env delete [OPTIONS] NAME
Arguments
- NAME#
Required argument
coiled env list#
List the Coiled software environments in an account
coiled env list [OPTIONS] [ACCOUNT]
Arguments
- ACCOUNT#
Optional argument
coiled env inspect#
View the details of a Coiled software environment
coiled env inspect [OPTIONS] NAME
Arguments
- NAME#
Required argument