Google Cloud Platform (GCP)#

IAM custom role#

Coiled requires a limited set of IAM permissions to provision infrastructure and compute resources in your GCP account (see the guide on creating a service account).

For accessing data, you can use a more limited set of IAM permissions to access your data while running a computation (see guide on creating a service account for data access). You can use the following YAML file for the IAM role, which has scope for submitting logs and accessing Google Storage, adding or removing permissions as needed.
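The role file itself is not reproduced here; as a sketch (the permission names below are common Logging and Cloud Storage permissions chosen for illustration, not Coiled's exact list), a custom role definition suitable for `gcloud iam roles create --file` might look like:

```yaml
# Illustrative custom role: write logs, read/write Google Storage objects.
# Trim or extend includedPermissions to match your data-access needs.
title: coiled-data-access
description: Minimal role for submitting logs and accessing Google Storage
stage: GA
includedPermissions:
  - logging.logEntries.create
  - storage.buckets.get
  - storage.objects.get
  - storage.objects.list
  - storage.objects.create
  - storage.objects.delete
```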


GCP resource quotas#

Each Google Cloud resource type has predefined quotas: the maximum number of resources of that type you can create. You can view existing quotas and request increases from your Google Cloud console. If you have received error messages such as Quota <resource-type> exceeded, you may want to request an increase. In particular, the following resource types often have insufficient quotas:

  • Persistent disk SSD (GB) (see the Google Cloud documentation on Disk quotas). You may need to request an increase if you see the Quota 'SSD_TOTAL_GB' exceeded error message.

  • In-use external IP addresses (see the Google Cloud documentation on External IP addresses). You may need to request an increase if you see the Quota 'IN_USE_ADDRESSES' exceeded error message.

  • CPU (see the Google Cloud documentation on CPU quota). You may need to request an increase if you see the Quota 'CPUS' exceeded error message.
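You can also inspect current usage and limits from the command line (a sketch; the region name is an example, and the `--flatten`/`--format` flags shown are one common way to tabulate the repeated `quotas` field):

```shell
# Per-region quota usage and limits (e.g. CPUS, IN_USE_ADDRESSES, SSD_TOTAL_GB)
gcloud compute regions describe us-central1 \
    --flatten="quotas[]" \
    --format="table(quotas.metric,quotas.usage,quotas.limit)"

# Project-wide quotas
gcloud compute project-info describe \
    --flatten="quotas[]" \
    --format="table(quotas.metric,quotas.usage,quotas.limit)"
```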

Backend options#

There are several GCP-specific options you can specify (listed below) to customize Coiled's behavior:

  • region_name: the GCP region to create resources in

  • zone_name: the GCP zone to create resources in

  • preemptible: whether or not to use preemptible instances for cluster workers

  • firewall: ports and CIDR block for the firewall rules that Coiled creates, for example {"ports": [22, 8787, 8786], "cidr": ""}

You can specify backend options directly in Python:

import coiled

cluster = coiled.Cluster(
    backend_options={"region_name": "us-central1", "preemptible": False},
)

Or save them to your Coiled configuration file:

# ~/.config/dask/coiled.yaml

coiled:
  backend_options:
    region_name: us-central1

GPU support#

Coiled supports running computations with GPU-enabled machines if your account has access to GPUs. See the GPU best practices documentation for more information on using GPUs with GCP.
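As a sketch (the worker_gpu parameter name and the zone below are assumptions for illustration; check the GPU best practices documentation for the current API), a GPU cluster request might look like:

```python
# Hypothetical keyword arguments for coiled.Cluster when requesting GPUs.
# "worker_gpu" is an assumed parameter name; the zone is an example chosen
# because GPU availability varies by zone.
gpu_kwargs = {
    "worker_gpu": 1,  # GPUs to attach to each worker (assumed name)
    "backend_options": {
        "zone_name": "us-central1-a",  # pick a zone with GPU capacity
    },
}

# cluster = coiled.Cluster(n_workers=2, **gpu_kwargs)
```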

Coiled logs#

If you are running Coiled on your GCP account, cluster logs will be saved within your GCP account. Coiled will send logs to GCP Logging and GCP BigQuery (if BigQuery is enabled in the project).

We send logs to GCP Logging so that you can easily view logs with GCP Logs Explorer, and we use GCP Cloud Storage/GCP BigQuery to back the logs views we display on the Cluster Dashboard.


Coiled will only use BigQuery if BigQuery is enabled in your project and your service account has the necessary BigQuery permissions, including bigquery.datasets.create, bigquery.datasets.get, and bigquery.datasets.update.

| Log storage | Storage time |
| --- | --- |
| GCP Logging | 30 days |
| GCP BigQuery dataset (Coiled v2) | 10 days |

When you configure your backend to use GCP, Coiled creates a log bucket named coiled-logs in GCP Logging.


Firewalls#

When Coiled is configured to run in your own GCP account, you can customize the firewall ingress rules for resources that Coiled creates in your GCP account.

By default, Dask schedulers created by Coiled will be reachable via ports 8787 and 8786 from any source network. This is consistent with the default ingress rules that Coiled configures for its GCP firewalls:

| Protocol | Ports | Source |
| --- | --- | --- |
| TCP | 22, 8786, 8787 | 0.0.0.0/0 |
Ports 8787 and 8786 are used by the Dask dashboard and Dask protocol respectively. Port 22 optionally supports incoming SSH connections to the virtual machine.

Configuring firewall rules#

While allowing incoming connections on the default Dask ports from any source network is convenient, you might want to configure additional security measures by restricting incoming connections. This can be done with coiled.set_backend_options() or with the firewall key in backend_options.
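For instance (a sketch; the CIDR value below is an illustrative private range, not a recommendation), restricting ingress to the two Dask ports from a single network might look like:

```python
# Restrict the Dask dashboard (8787) and protocol (8786) ports to one
# CIDR block instead of 0.0.0.0/0. Dropping port 22 also disables SSH
# access to the virtual machines.
firewall = {"ports": [8787, 8786], "cidr": "10.2.0.0/16"}

# Applied account-wide:
# coiled.set_backend_options(firewall=firewall)
# ...or per cluster:
# cluster = coiled.Cluster(backend_options={"firewall": firewall})
```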