Dask Clusters
Dask is a general-purpose library for parallel computing. Dask can be used on its own to parallelize Python code, or with integrations to other popular libraries to scale out common workflows.
Coiled makes it easy to deploy Dask clusters on the cloud. Coiled lets you:
- Run near your cloud data
- Use any cloud hardware (e.g. GPU, bigger machine)
- Scale to thousands of cloud VMs
Quickstart
Start a Dask cluster on the cloud with a few lines of code:
from coiled import Cluster
cluster = Cluster()
client = cluster.get_client()
If you just want to experiment, simpler examples also work well with a local Dask cluster:
from dask.distributed import LocalCluster
cluster = LocalCluster(processes=False)
client = cluster.get_client()
Once you have a Dask cluster, you can run Python code on it. Here is the simplest code you could run:
def inc(x):
    return x + 1
future = client.submit(inc, 10)
future.result() # returns 11
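You can also submit many tasks at once and gather the results when they finish. A minimal sketch, reusing the client and the inc function from above (the input range is illustrative):

# Run inc on each input; each call becomes a task on the cluster
futures = client.map(inc, range(100))
results = client.gather(futures)  # returns [1, 2, ..., 100]

# Clean up when finished (Coiled clusters also shut down after an idle timeout)
client.close()
cluster.close()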
Configuration
Manage Hardware
You can select the number of workers, the region, and any VM type available on your cloud (see VM Size and Type). For example:
import coiled

cluster = coiled.Cluster(
    n_workers=100,
    region="us-east-2",
    worker_memory="16 GiB",
)
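The same parameters accept ranges and GPU options (both documented in the API section below). For example, here is a sketch of a cluster that adaptively scales GPU workers; the bounds and region are illustrative:

import coiled

cluster = coiled.Cluster(
    n_workers=[4, 16],  # [min, max] bounds for adaptive scaling
    worker_gpu=1,       # attach one GPU to each worker
    region="us-west-2",
)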
Manage Software
By default, Coiled will automatically synchronize your local software environment. This works well in most cases, but you can also specify a software environment manually:
cluster = coiled.Cluster(
    software="my-software-environment",
)
Or a Docker image:
cluster = coiled.Cluster(
    container="daskdev/dask:latest",
)
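If you'd rather build the named environment up front, Coiled provides coiled.create_software_environment. A minimal sketch; the environment name and package list here are illustrative:

import coiled

# Build a reusable Coiled software environment from a list of pip packages
coiled.create_software_environment(
    name="my-software-environment",
    pip=["dask[complete]", "pandas", "pyarrow"],
)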
Examples
Here are some examples that use a Coiled cluster:
Learn More
For more information about Dask, visit the Dask documentation (https://docs.dask.org).
API
- coiled.Cluster(name=None, *, software=None, container=None, ignore_container_entrypoint=None, n_workers=None, worker_class=None, worker_options=None, worker_vm_types=None, worker_cpu=None, worker_memory=None, worker_disk_size=None, worker_disk_throughput=None, worker_gpu=None, worker_gpu_type=None, scheduler_options=None, scheduler_vm_types=None, scheduler_cpu=None, scheduler_memory=None, scheduler_disk_size=None, scheduler_gpu=None, asynchronous=False, cloud=None, account=None, workspace=None, shutdown_on_close=None, idle_timeout=None, no_client_timeout=<object object>, use_scheduler_public_ip=None, use_dashboard_https=None, dashboard_custom_subdomain=None, credentials='local', credentials_duration_seconds=None, timeout=None, environ=None, tags=None, send_dask_config=True, unset_single_threading_variables=None, backend_options=None, show_widget=None, custom_widget=None, configure_logging=None, wait_for_workers=None, package_sync=None, package_sync_strict=False, package_sync_conda_extras=None, package_sync_ignore=None, package_sync_only=None, package_sync_fail_on='critical-only', package_sync_use_uv_installer=True, private_to_creator=None, use_best_zone=True, allow_cross_zone=False, compute_purchase_option=None, spot_policy=None, extra_worker_on_scheduler=None, _n_worker_specs_per_host=None, scheduler_port=None, allow_ingress_from=None, allow_ssh_from=None, allow_ssh=None, allow_spark=None, open_extra_ports=None, jupyter=None, mount_bucket=None, host_setup_script=None, region=None, arm=None, batch_job_ids=None, batch_job_container=None, pause_on_exit=None)
Create a Dask cluster with Coiled.
- Parameters:
  - n_workers (Union[int, List[int], None]) – Number of workers in this cluster. Can either be an integer for a static number of workers, or a [min, max] list specifying the lower and upper bounds for adaptively scaling workers up/down depending on the amount of work submitted. For adaptive scaling, you can also specify an initial number of workers as [min, initial, max]. Defaults to n_workers=[4, 20], which adaptively scales between 4 and 20 workers and initially starts with 4 workers.
  - name (str | None) – Name to use for identifying this cluster. Defaults to None.
  - software (str | None) – Name of the software environment to use; this allows you to use and re-use existing Coiled software environments. Specifying this argument will disable package sync, and it cannot be combined with container.
  - container (str | None) – Name or URI of the container image to use; when using a pre-made container image with Coiled, this allows you to skip the step of explicitly creating a Coiled software environment from that image. Specifying this argument will disable package sync, and it cannot be combined with software.
  - ignore_container_entrypoint (bool | None) – Ignore the entrypoint for the specified Docker container (like docker run --entrypoint); default is to use the entrypoint (if any) set on the image.
  - worker_class (str | None) – Worker class to use. Defaults to distributed.nanny.Nanny.
  - worker_options (dict | None) – Mapping with keyword arguments to pass to worker_class. Defaults to {}.
  - worker_vm_types (list | None) – List of instance types that you would like workers to use; the default instance type selected contains 4 cores. You can use the command coiled.list_instance_types() to see a list of allowed types.
  - worker_cpu (Union[int, List[int], None]) – Number, or range, of CPUs requested for each worker. Specify a range by using a list of two elements, for example: worker_cpu=[2, 8].
  - worker_memory (Union[str, List[str], None]) – Amount of memory to request for each worker; Coiled will use a +/- 10% buffer from the memory that you specify. You may specify a range of memory by using a list of two elements, for example: worker_memory=["2GiB", "4GiB"].
  - worker_disk_size (Union[int, str, None]) – Non-default size of the persistent disk attached to each worker instance, specified as a string with units or an integer for GiB.
  - worker_disk_throughput (int | None) – EXPERIMENTAL. For AWS, non-default throughput (in MB/s) for EBS gp3 volumes attached to workers.
  - worker_gpu (Union[int, bool, None]) – Number of GPUs to attach to each worker. Default is 0; True is interpreted as 1. Note that this is ignored if you're explicitly specifying an instance type which includes a fixed number of GPUs.
  - worker_gpu_type (str | None) – For GCP, this lets you specify the type of guest GPU for instances. Should match the way the cloud provider specifies the GPU, for example: worker_gpu_type="nvidia-tesla-t4". By default, Coiled will request an NVIDIA T4 if the GPU type isn't specified. For AWS, if you want a GPU other than a T4, you'll need to explicitly specify the VM instance type (e.g., g6.xlarge for an instance with one NVIDIA L4 GPU).
  - scheduler_options (dict | None) – Mapping with keyword arguments to pass to the Scheduler __init__. Defaults to {}.
  - scheduler_vm_types (list | None) – List of instance types that you would like the scheduler to use; the default instance type selected contains 4 cores. You can use the command coiled.list_instance_types() to see a list of allowed types.
  - scheduler_cpu (Union[int, List[int], None]) – Number, or range, of CPUs requested for the scheduler. Specify a range by using a list of two elements, for example: scheduler_cpu=[2, 8].
  - scheduler_memory (Union[str, List[str], None]) – Amount of memory to request for the scheduler; Coiled will use a +/- 10% buffer from the memory that you specify. You may specify a range of memory by using a list of two elements, for example: scheduler_memory=["2GiB", "4GiB"].
  - scheduler_gpu (bool | None) – Whether to attach a GPU to the scheduler; this would be a single NVIDIA T4. The best practice for Dask is to have a GPU on the scheduler if you are using GPUs on your workers, so if you don't explicitly specify, Coiled will follow this best practice and give you a scheduler GPU whenever you have worker_gpu set.
  - asynchronous (bool) – Set to True if using this Cloud within async/await functions or within Tornado gen.coroutines. Otherwise this should remain False for normal use. Default is False.
  - cloud (CloudV2 | None) – Cloud object to use for interacting with Coiled. This object contains user/authentication/account information. If this is None (default), we look for a recently-cached Cloud object, and if none exists create one.
  - account (str | None) – DEPRECATED. Use workspace instead.
  - workspace (str | None) – The Coiled workspace (previously "account") to use. If not specified, will check the coiled.workspace or coiled.account configuration values, or will use your default workspace if those aren't set.
  - shutdown_on_close (bool | None) – Whether or not to shut down the cluster when it finishes. Defaults to True, unless name points to an existing cluster.
  - idle_timeout (str | None) – Shut down the cluster after this duration if no activity has occurred, e.g. "30 minutes". Default: "20 minutes".
  - no_client_timeout (str | None | object) – Shut down the cluster after this duration once all clients have disconnected. When shutdown_on_close is False this is disabled, since shutdown_on_close=False usually means you want to keep the cluster up after disconnecting so you can later connect a new client. Default: "2 minutes", or idle_timeout if there's a non-default idle timeout.
  - use_scheduler_public_ip (bool | None) – Boolean value that determines if the Python client connects to the Dask scheduler using the scheduler machine's public IP address. The default behaviour when set to True is to connect to the scheduler using its public IP address, which means traffic will be routed over the public internet. When set to False, traffic will be routed over the local network the scheduler lives in, so make sure the scheduler's private IP address is routable from where this function call is made when setting this to False.
  - use_dashboard_https (bool | None) – When a public IP address is used for the dashboard, we'll enable HTTPS + auth by default. You may want to disable this if using something that needs to connect directly to the scheduler dashboard without authentication, such as jupyter dask-labextension<=6.1.0.
  - credentials (str | None) – Which credentials to use for Dask operations and forward to Dask clusters – options are "local" or None. The default behavior is to use local credentials if available. NOTE: credential handling currently only works with AWS credentials.
  - credentials_duration_seconds (int | None) – For "local" credentials shipped to the cluster as an STS token, set the duration of the STS token. If not specified, the AWS default will be used.
  - timeout (Union[int, float, None]) – Timeout in seconds to wait for a cluster to start; will use default_cluster_timeout set on the parent Cloud by default.
  - environ (Optional[Dict[str, str]]) – Dictionary of environment variables. Values will be transmitted to Coiled; for private environment variables (e.g., passwords or access keys you use for data access), send_private_envs() is recommended.
  - send_dask_config (bool) – Whether to send a frozen copy of local dask.config to the cluster.
  - unset_single_threading_variables (bool | None) – By default, Dask sets environment variables such as OMP_NUM_THREADS and MKL_NUM_THREADS so that relevant libraries use a single thread per Dask worker (by default there are as many Dask workers as CPU cores). In some cases this is not what you want, so this option overrides the default Dask behavior.
  - backend_options (Union[AWSOptions, GCPOptions, None]) – Dictionary of backend-specific options.
  - show_widget (bool | None) – Whether to use the rich-based widget display. By default, the widget will show in IPython/Jupyter; specify show_widget=True to make the widget show even when not in IPython/Jupyter (for example, when making a cluster in code invoked via CLI). For use cases involving multiple clusters at once, show_widget=False is recommended.
  - custom_widget (ClusterWidget | None) – Use the rich-based widget display outside of IPython/Jupyter (Default: False).
  - tags (Optional[Dict[str, str]]) – Dictionary of tags. Can also be set using the coiled.tags Dask configuration option. Tags specified for the cluster using the keyword argument take precedence over those from Dask configuration.
  - wait_for_workers (Union[int, float, bool, None]) – Whether to wait for a number of workers before returning control of the prompt back to the user. Usually, computations will run better if you wait for most workers before submitting tasks to the cluster. You can wait for all workers by passing True, or not wait for any by passing False. You can pass a fraction of the total number of workers requested as a float (like 0.6), or a fixed number of workers as an int (like 13). If None, the value from coiled.wait-for-workers in your Dask config will be used. Default: 0.3. If the requested number of workers don't launch within 10 minutes, the cluster will be shut down, then a TimeoutError is raised.
  - package_sync (Union[bool, List[str], None]) – DEPRECATED – Always enabled when container and software are not given. Synchronize package versions between your local environment and the cluster. Cannot be used with the container or software options. Passing specific packages as a list of strings will attempt to synchronize only those packages; use with caution. (Deprecated: use package_sync_only instead.) We recommend reading the additional documentation for this feature.
  - package_sync_conda_extras (Optional[List[str]]) – A list of conda package names (available on conda-forge) to include in the environment that are not in your local environment. Use with caution, as this can lead to dependency conflicts with local packages. Note, this will only work for conda packages with platform-specific builds (i.e., not "noarch" packages).
  - package_sync_ignore (Optional[List[str]]) – A list of package names to exclude from the environment. Note their dependencies may still be installed, or they may be installed by another package that depends on them!
  - package_sync_only (Optional[List[str]]) – A list of package names to only include from the environment. Use with caution. We recommend reading the additional documentation for this feature.
  - package_sync_strict (bool) – Only allow exact package matches; not recommended unless your client platform/architecture matches the cluster platform/architecture.
  - package_sync_use_uv_installer (bool) – Use uv to install pip packages when building the software environment. This should only be disabled if you are experiencing issues with uv and need to use pip instead. (Default: True)
  - private_to_creator (bool | None) – Only allow the cluster creator, not other members of the team account, to connect to this cluster.
  - use_best_zone (bool) – Allow the cloud provider to pick the zone (in your specified region) that has the best availability for your requested instances. We'll keep the scheduler and workers all in a single zone in order to avoid any cross-zone network traffic (which would be billed).
  - allow_cross_zone (bool) – Allow the cluster to have VMs in distinct zones. There's a cost for cross-zone traffic (usually pennies per GB), so this is a bad choice for shuffle-heavy workloads, but can be a good choice for large embarrassingly parallel workloads.
  - spot_policy (Optional[Literal['on-demand', 'spot', 'spot_with_fallback']]) – Purchase option to use for workers in your cluster; options are "on-demand", "spot", and "spot_with_fallback"; by default this is "on-demand". (Google Cloud refers to this as the "provisioning model" for your instances.) Spot instances are much cheaper, but can have more limited availability and may be terminated while you're still using them if the cloud provider needs more capacity for other customers. On-demand instances have the best availability and are almost never terminated while still in use, but they're significantly more expensive than spot instances. For most workloads, "spot_with_fallback" is likely to be a good choice: Coiled will try to get as many spot instances as we can, and if we get less than you requested, we'll try to get the remaining instances as on-demand. For AWS, when we're notified that an active spot instance is going to be terminated, we'll attempt to get a replacement instance (spot if available, but could be on-demand if you've enabled "fallback"). Dask on the active instance will attempt a graceful shutdown before the instance is terminated so that computed results won't be lost. See the example after this list.
  - scheduler_port (int | None) – Specify a port other than the default (443) for communication with the Dask scheduler. Usually the default is the right choice; Coiled supports using 443 concurrently for scheduler comms and for the scheduler dashboard.
  - allow_ingress_from (str | None) – Control the CIDR from which the cluster firewall allows ingress to the scheduler; by default this is open to any source address (0.0.0.0/0). You can specify a CIDR, or "me" for just your IP address.
  - allow_ssh_from (str | None) – Allow connections to the scheduler over port 22 (used for SSH) for a specified IP address or CIDR.
  - allow_ssh (bool | None) – Allow connections to the scheduler over port 22, used for SSH.
  - allow_spark (bool | None) – Allow (secured) connections to the scheduler on port 15003, used by Spark Connect. By default, this port is open.
  - jupyter (bool | None) – Start a Jupyter server in the same process as the Dask scheduler. The Jupyter server will be behind HTTPS with authentication (unless you disable use_dashboard_https, which we strongly recommend against). Note that jupyterlab will need to be installed in the software environment used on the cluster (or in your local environment if using package sync). Once the cluster is running, you can use jupyter_link to get the link to access the Jupyter server.
  - mount_bucket (Union[str, List[str], None]) – Optional name or list of names of buckets to mount. For example, "s3://my-s3-bucket" will mount the S3 bucket my-s3-bucket, using your forwarded AWS credentials, and "gs://my-gcs-bucket" will mount the GCS bucket my-gcs-bucket, using your forwarded Google Application Default Credentials. Buckets are mounted to subdirectories in both /mount and ./mount (relative to the working directory for Dask); the subdirectory name will be taken from the bucket name. By default, mounting times out after 30 s. You can manually configure a different timeout using the coiled.mount-bucket.timeout configuration value.
  - host_setup_script (str | None) – Script to run on the host VM during the setup process. You can specify it either as the text of the script to run, or as the path to a local script file to copy and run.
  - region (str | None) – The cloud provider region in which to run the cluster.
  - arm (bool | None) – Use ARM instances for the cluster; default is x86 (Intel) instances.
  - pause_on_exit (bool | None) – Pause the cluster instead of shutting it down when exiting.
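Putting a few of these parameters together (as referenced from spot_policy above), here is a sketch of a cluster that prefers spot instances with on-demand fallback, mounts an S3 bucket, and extends the idle timeout; the bucket name, worker counts, and region are illustrative:

import coiled

cluster = coiled.Cluster(
    n_workers=[10, 50],                # adaptively scale between 10 and 50 workers
    spot_policy="spot_with_fallback",  # cheaper spot VMs, on-demand as fallback
    mount_bucket="s3://my-s3-bucket",  # mounted at /mount/my-s3-bucket on each VM
    idle_timeout="1 hour",             # shut down after an hour of inactivity
    region="us-east-1",
)
client = cluster.get_client()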