Logging#

Coiled stores instance, scheduler, and worker logs in your cloud provider account using Amazon CloudWatch on AWS or Google Cloud Logging on GCP (see the sections on AWS and GCP). While you can use any of your existing log management systems to access your logs, Coiled also offers a few ways to make this easier.

Note

This page covers cluster and instance logs. To learn more about understanding your Dask computations, see our documentation on Analytics and Performance Reports.

Coiled cloud#

Regardless of whether you are launching a Coiled cluster interactively or from a Python script, you can see your logs from the cluster dashboard page of your Coiled account at https://cloud.coiled.io/<account-name>/clusters:

Screenshot of the cluster dashboard page on Coiled cloud.

Cluster dashboard#

When you click on the name of a given cluster, you’ll be redirected to the cluster details page at https://cloud.coiled.io/<account-name>/clusters/<cluster_id>/details:

Screenshot of the cluster details page on Coiled cloud.

Cluster details#

Here you can see the current cluster state and download instance-specific logs for the scheduler or workers by clicking “download logs”.

Note

You can also pull the logs for the scheduler and each worker using coiled.cluster_logs().
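
For example, here is a minimal sketch; the cluster name "my-cluster" is hypothetical, and the exact signature of coiled.cluster_logs() may differ, so check the Coiled API reference:

import coiled

# Fetch the logs for the scheduler and each worker of a cluster;
# "my-cluster" is a hypothetical cluster name.
logs = coiled.cluster_logs("my-cluster")
print(logs)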

As you scroll down, you can see the logs for the cluster state history:

Screenshot of cluster state history.

Cluster state history#

Interactive session#

Within an interactive session, such as IPython or a Jupyter notebook, a dynamic widget is displayed when you first create the cluster:

Terminal dashboard displaying the Coiled cluster status overview, configuration, and worker states.

The widget has three panels: an overview of the Coiled cluster, its configuration, and the Dask worker states, with progress bars showing how many workers have reached a given state. You can also use the link at the top to view the cluster details page mentioned above.
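
For example, creating the cluster from a notebook cell is enough to render the widget (a minimal sketch; n_workers=4 is an arbitrary choice):

from coiled import Cluster

# Creating a cluster from IPython or a Jupyter notebook automatically
# displays the status widget below the cell; n_workers=4 is arbitrary.
cluster = Cluster(n_workers=4)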

Python script#

Coiled uses the Python standard logging module to log changes in cluster, scheduler, and worker state. The default level is WARNING; to see more detail, lower the level to INFO or DEBUG (DEBUG being the most verbose). See the Python logging docs for more on logging levels. Here is an example of how this can be configured from within a Python script:

import logging
from coiled import Cluster

# Configure the root logger to print INFO-level records to the console
logging.basicConfig(level=logging.INFO)

# Ensure Coiled's own logger emits INFO-level messages
logging.getLogger("coiled").setLevel(logging.INFO)

cluster = Cluster()
cluster.close()

The above snippet will print the logs to the console, but you can also choose to save logs to a file by changing the parameters passed to basicConfig() (see this tutorial on logging to a file).
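
For example, here is a minimal sketch using only the standard library; the filename "coiled.log" is an arbitrary choice:

import logging

# Write log records to a file instead of the console; "coiled.log"
# is an arbitrary filename.
logging.basicConfig(filename="coiled.log", level=logging.INFO)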

Next steps#

For more advanced options in debugging your Dask computations, see the Dask documentation on logging.