Coiled Client Release Notes#
These release notes cover updates to the Coiled Python package.
1.70.0#
Add --package-sync-conda-extras and --package-sync-strict options to coiled run and coiled batch run.
1.69.0#
Add coiled.batch.run() Python API for running Coiled Batch jobs, for example, coiled.batch.run("echo Hello") or coiled.batch.run("script.py", container="ubuntu:latest", ntasks=10).
For CLIs that allow you to specify environment variables (coiled run, coiled batch run, and coiled notebook), previously you had to specify KEY=VALUE. Now you can also specify KEY to read the value from the KEY local environment variable, or KEY=$OTHER_KEY to read the value from the OTHER_KEY local environment variable.
For coiled batch run, print the command that will be run for each task.
For coiled batch run, support %n syntax for --array to limit the number of parallel nodes (VMs) to use, e.g., coiled batch run --array 0-12:3%2 ... runs jobs with IDs 0, 3, 6, 9, and 12 on 2 VMs.
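The --array syntax above can be illustrated with a small standalone parser. This is a sketch of the documented semantics only; parse_array_spec is a hypothetical helper, not part of the Coiled package.

```python
def parse_array_spec(spec: str):
    """Parse an array spec like '0-12:3%2' into (task_ids, max_parallel).

    Supports 'start-end', an optional ':step', and an optional '%n'
    limit on parallel nodes. Illustrative sketch, not the Coiled code.
    """
    max_parallel = None
    if "%" in spec:
        spec, limit = spec.split("%")
        max_parallel = int(limit)
    step = 1
    if ":" in spec:
        spec, step_str = spec.split(":")
        step = int(step_str)
    if "-" in spec:
        start, end = (int(x) for x in spec.split("-"))
    else:
        start = end = int(spec)
    return list(range(start, end + 1, step)), max_parallel
```

For the example from the release note, "0-12:3%2" yields task IDs 0, 3, 6, 9, and 12 with at most 2 nodes running in parallel.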
1.68.1#
For coiled batch run --ntasks n, task IDs are now numbered from 0 to n-1 (rather than 1 to n).
1.68.0#
Add --secret-env option to coiled batch run for sensitive environment variables that you'd like us to store temporarily, only for the duration of your batch job.
Support = as the separator between --index-url or --extra-index-url and the URL when creating software environments from requirements.txt files.
Better error message when a workspace name is specified but a slug is expected.
Bug fix to the coiled cluster azure-logs CLI so that all pages of logs will be fetched.
1.67.0#
Timeout (default 1 minute) when creating a wheel from a local package for package sync.
Add arm kwarg to create_software_environment() and --arm flag to the coiled env create CLI (this can be used instead of the architecture kwarg or --architecture CLI option).
Skip forwarding of local Dask config if it is not JSON serializable.
Add support for the PIP_PYPI_URL environment variable as one of the ways to specify the index URL.
No longer open port 22 by default on the VM for coiled batch run.
Handle log lines in the coiled logs CLI that rich can't print; when there's a markup exception, we'll fall back to normal printing. If you want to avoid rich for all log lines, you can specify --no-color.
For coiled batch run, use a single VM node when there's a single task (rather than scheduler + worker nodes); add --tag option for tagging runs.
1.66.1#
Minor tweak to coiled batch run header option parsing.
1.66.0#
Handle another FileNotFoundError that can occur if the directory structure changes while running a scan for package sync.
Add help text for coiled batch run options; apply VM options to the scheduler node (as well as workers) if you've specified that task(s) should run on the scheduler.
1.65.0#
Enhancements to coiled batch run for jobs that involve coordination between scheduler and worker nodes: a --wait-for-ready-cluster flag so that tasks are only assigned when the entire cluster is ready; a --task-on-scheduler flag to assign a single task to the scheduler node; and environment variables such as COILED_BATCH_PROCESS_TYPE ("scheduler" or "worker"), COILED_BATCH_SCHEDULER_ADDRESS (scheduler node IP address), COILED_BATCH_LOCAL_ADDRESS (IP address of the node where the task is running), and COILED_BATCH_READY_WORKERS (a comma-delimited list of IP addresses).
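As a sketch of how a batch script might consume these variables: the environment variable names come from the release note above, but the dispatch helper itself is illustrative and not part of the Coiled API.

```python
import os

def batch_role():
    """Return (process_type, scheduler_address, ready_workers) from the
    Coiled Batch environment variables described above. Sketch only."""
    process_type = os.environ.get("COILED_BATCH_PROCESS_TYPE", "worker")
    scheduler_address = os.environ.get("COILED_BATCH_SCHEDULER_ADDRESS")
    # COILED_BATCH_READY_WORKERS is a comma-delimited list of IPs.
    ready_workers = [
        w for w in os.environ.get("COILED_BATCH_READY_WORKERS", "").split(",") if w
    ]
    return process_type, scheduler_address, ready_workers
```

A task script could branch on the returned process type to run coordinator logic on the scheduler node and worker logic elsewhere.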
1.64.1#
Handle FileNotFoundError that can occur if the directory structure changes while running a scan for package sync.
1.64.0#
Add --keep-existing-access to the coiled setup azure CLI.
Small change to support using package sync with customer-hosted storage accounts in Azure.
1.63.0#
For package sync, correctly parse extra-index-url specified under [tool.uv.pip] as a list rather than a string.
1.62.0#
Fix to a coiled setup azure bug that affected some Windows users because of the way the inline JSON was formatted.
Change to the coiled.google.credentials.CoiledShippedCredentials class so that forwarding of the OAuth2 token for Google Cloud data-access credentials is more reliable even when Dask workers have blocked event loops.
Add coiled logs as an alias for the existing coiled cluster logs CLI, and allow specifying the cluster as coiled logs <cluster> without --cluster.
Tweaks to the experimental coiled batch CLI.
1.61.0#
No changes.
1.60.0#
Small changes to the very experimental coiled batch run CLI; use at your own risk.
1.59.0#
Add a very experimental coiled batch run CLI; use at your own risk.
1.58.1#
Fix issue where starting a cluster with package sync from the / directory would cause the software environment build to fail.
1.58.0#
During coiled setup gcp, Coiled will attempt to temporarily adjust the iam.allowedPolicyMemberDomains organization policy so that the Google service account used for the Log Sink can be granted permission. This is required for newer projects, and temporarily disabling the constraint is the approach that Google recommends.
1.57.0#
Add coiled cluster address CLI for getting the hostname or IP address of a cluster.
Bug fix so that the countdown for "no client shutdown" is not triggered by mounting a bucket (i.e., the mount_bucket keyword argument or --mount-bucket CLI option).
The extra_kwargs keyword argument to @coiled.function now takes precedence over the default values used for @coiled.function clusters (e.g., extra_worker_on_scheduler).
1.56.0#
Shorter cluster creation error tracebacks for a few types of errors.
1.55.0#
Fix issue where the package_sync_only argument to Cluster() would be ignored if the deprecated package_sync argument was not also a list.
Add package_sync_conda_extras option to Cluster() for specifying names of conda packages not present in your local environment that you would like on your cluster.
1.54.0#
Bug fix for the coiled cluster azure-logs CLI that affected large clusters.
1.53.0#
Bug fix so that the 2-minute "no client" timeout won't immediately start counting down for adaptive clusters. It will now start counting down only after the user connects a Dask client (as is already the case for non-adaptive clusters).
1.52.0#
When using the mount_bucket keyword argument for clusters, the bucket is now mounted on all workers (as well as the scheduler).
1.51.0#
Minor internal changes to coiled setup aws.
1.50.0#
Correctly classify the (not yet public) C4A series on Google Cloud as ARM.
For coiled setup aws, minor fix to the scoping on iam:PassRole.
1.49.0#
Add --no-credential-forwarding option to coiled run to disable the default forwarding of local credentials. Use this if you instead want to (e.g.) use an AWS Instance Profile for code you're running.
1.48.0#
For coiled setup aws, reduce and scope the permissions in the ongoing role used by Coiled.
Bug fix to remove any extra whitespace around index URLs used for package sync.
1.47.0#
Minor change to avoid a deprecation warning from Dask (about the idempotent kwarg) when using Spark on Coiled.
1.46.0#
Change default cluster behavior to shut down 2 minutes (by default) after all clients have disconnected. You can control this by specifying no_client_timeout, e.g., Cluster(..., no_client_timeout=None) to disable, or Cluster(..., no_client_timeout="30 minutes") to keep the cluster running longer if you expect to re-connect to the cluster.
1.45.0#
Bug fix to enable forwarding an OAuth2 token from local Google Cloud credentials when the local credentials are a service account key.
Bug fix when printing mutagen sync errors, plus telemetry so Coiled is aware of these errors.
1.44.0#
Idle timeout can be specified using the distributed.scheduler.idle-timeout Dask config value, as well as the Cluster(idle_timeout=...) keyword argument. The kwarg takes precedence.
For coiled run and coiled notebook, interpret --disk-size <int> as GB (as the docs claimed was the case).
1.43.0#
By default, use port 443 for both Dask scheduler comms and the scheduler dashboard. Traffic is routed as appropriate, and both the scheduler and dashboard are secure by default. If you're using custom networking, either make sure your security group/firewall for the scheduler allows ingress to 443 from the client, or use Cluster(..., scheduler_port=8786) to continue to use port 8786 for scheduler comms (the previous default).
1.42.0#
Add more tracking around client cluster scaling calls.
1.41.0#
When specifying both the worker_cpu and worker_memory arguments, or the scheduler_cpu and scheduler_memory arguments, to Cluster, for AWS Coiled will now include some of the more recent r and c instance types (as well as future types).
Add unset_single_threading_variables kwarg to Cluster() for overriding the default Dask behavior of setting variables such as OMP_NUM_THREADS to 1.
Fix compatibility issue with prefect>2.19.9.
1.40.0#
For adaptive scaling, when there are requested workers that Coiled is unable to provision and more workers are still desired, adaptive will now re-request those workers. This is likely to help when the workers couldn't be provisioned because of cloud availability limits. The interval at which adaptive will retry is configured using the coiled.adaptive.update-plan-interval Dask config value.
1.39.0#
Add client telemetry so Coiled can track timing and success of various client-side activities, such as preparing to sync the software environment or forwarding local credentials to the cluster. No sensitive data is collected, and telemetry can be disabled by setting coiled.send-client-events (in Dask config) to False.
Fix link to Google Cloud quotas when using coiled setup gcp --quotas.
1.38.0#
When building software environments, we now use uv to install packages from PyPI by default. This behavior can be disabled by setting the package_sync_use_uv_installer option for coiled.Cluster() to False.
1.37.0#
Add support for mounting S3 and GCS buckets as volumes on cloud VMs. You specify buckets using the mount_bucket keyword argument for coiled.Cluster(), or the --mount-bucket CLI option for coiled run and coiled notebook. This feature is under active development; feedback welcome!
Default idle timeout for Coiled Functions is now 6 hours (previously: 24). Clusters for Functions still shut down much more quickly if a client is no longer attached.
Unpin backoff to avoid conflicts with any libraries that pin it.
Workaround to create a temporary directory (when needed) inside the current working directory if unable to create one in the default locations.
1.36.0#
Add extra_kwargs keyword argument to @coiled.function for passing any extra arbitrary keyword arguments to the underlying coiled.Cluster class.
With dask>=2024.7.1, using map with @coiled.function no longer submits tasks in batches, because submitting without batching is no longer slow.
Add coiled cluster azure-logs CLI command to pull logs from Azure. This is under active development; it's likely that there will be breaking changes to this CLI command in the future.
1.35.0#
Add --ignore-container-entrypoint flag to the coiled env create CLI for creating a software environment from a container with an entrypoint that you want Coiled to ignore.
Fixed incompatibility with importlib_metadata >=8.0.
1.34.3#
Fixed issue where toml configuration files would only be found in the current working directory.
Use a default value of 20 minutes for distributed.scheduler.no-workers-timeout, so that clusters with tasks that cannot be assigned a worker will not be stuck forever and will automatically shut down. This functionality relies on distributed>=2024.1.0.
Fixed issue where default logging was extremely minimal when not creating a cluster in a notebook.
The coiled cluster logs CLI will now pull multiple pages of logs if needed, and will show the label specified with --label.
1.34.2#
Shorter traceback for workspace permission error.
1.34.1#
Better handling of rate limits in cluster state checks.
1.34.0#
Add support for Prefect 3.
1.33.0#
Add coiled cluster list CLI command.
Tweak error message when Cluster(software=...) is given an invalid software environment name format.
1.32.0#
When using coiled run to start additional clusters (for example, when using @coiled.function in the script you're running via coiled run), only re-use the software environment if the VM architecture matches.
1.31.0#
By default, transmit and capture all Dask events to the control plane. Which events to send is determined by the coiled.analytics.events.allow Dask config.
1.30.0#
coiled run by default will no longer re-use a VM from different coiled run invocations. To run multiple coiled run commands using the same VM, specify the same name with --name <name> for the commands that you want to run on the same VM.
1.29.0#
Show progress when using coiled run --file to upload files from your local machine to the cloud VM.
When adaptive scaling tries to scale down but the Dask scheduler does not retire some of the workers because they have data that cannot be moved, don't forcibly stop the worker VMs.
Warn about log volume when any loggers are set to DEBUG in Dask config.
For CLI commands like coiled cluster ssh or coiled cluster stop that by default use the most recent cluster if no cluster is specified, this is now filtered to clusters that your user created, not the most recent cluster anyone made in your workspace.
Update the error message in coiled setup aws, if you have insufficient permissions, to include the permissions needed for setting up cross-account role assumption.
1.28.1#
Fix bug in coiled setup gcp --enable-gar that prevented adding a policy binding for Coiled to access private Google Artifact Registry repositories.
1.28.0#
Fixed issue where starting a cluster with package sync enabled on Linux would cause package pins to be overly strict.
1.27.2#
Automatically use private IP address for client <-> scheduler connection if you’ve configured Coiled to not assign public IP address to scheduler.
1.27.1#
Allow configuring the scheduler port using the coiled.scheduler-port Dask config setting.
1.27.0#
For coiled notebook --sync and coiled run --sync, by default edit the SSH config for SSH access (which is needed under the hood), rather than using the SSH agent. If you want to use the SSH agent instead, you can set this in your Dask config by running coiled config set coiled.use_ssh_agent True.
1.26.0#
For coiled notebook --sync and coiled run --sync, add --sync-ignore <path> option; also, always ignore .venv directories when syncing.
1.25.1#
When using coiled run --port ... to start a VM with port(s) exposed, the server address is now shown in the widget.
Error handling for creating the Coiled config file when the DASK_CONFIG path is a file instead of a directory.
1.25.0#
Allow explicitly setting scopes when forwarding Google credentials using coiled.credentials.google.send_application_default_credentials(). For example, when using Google Earth Engine, you'd use scopes=["https://www.googleapis.com/auth/earthengine"] (or some longer list of desired scopes).
Add --workspace option to the coiled env list and coiled env delete CLI.
1.24.0#
Add support for getting credentials from keyring for a private PyPI index.
Bug fix so that @coiled.function shows initial workers as 0 rather than -1 when n_workers isn't specified.
Bug fix so that calling coiled.credentials.google.send_application_default_credentials(cluster) won't affect any client(s) you've already gotten using cluster.get_client().
1.23.0#
Explicitly calling cluster.scale(n) now turns off adaptive scaling (if adaptive was being used).
Fix bug that sometimes caused package sync to not sync local code in the working directory.
1.22.0#
Forward long-lived Google credentials to your coiled run VM using the --forward-gcp-adc CLI flag.
1.21.0#
Minimum and maximum numbers of workers for adaptive scaling can be specified using the n_workers=[min, max] kwarg with coiled.Cluster and coiled.function. When you don't want adaptive scaling, use (e.g.) n_workers=10 as before to request exactly 10 workers.
Add allow_cross_zone kwarg to coiled.Cluster() to allow the cluster to have VMs in distinct zones. There's a cost for cross-zone traffic (usually pennies per GB), so this is a bad choice for shuffle-heavy workloads, but can be a good choice for large embarrassingly parallel workloads.
Bug fix so that scaling down many workers at once does not result in an error from a very long request URL.
More accurate cloud cost estimate in coiled notebook and coiled run widgets.
Adjusted retry logic for when Coiled client HTTP requests get a server error response.
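The two accepted forms of n_workers can be summarized with a small helper. This is a hypothetical illustration of the documented semantics, not Coiled's actual argument handling.

```python
def parse_n_workers(n_workers):
    """Interpret n_workers as described above: a [min, max] pair enables
    adaptive scaling between those bounds, while a plain int requests a
    fixed number of workers. Illustrative sketch only."""
    if isinstance(n_workers, (list, tuple)) and len(n_workers) == 2:
        minimum, maximum = n_workers
        return {"adaptive": True, "minimum": minimum, "maximum": maximum}
    return {"adaptive": False, "n": int(n_workers)}
```

So n_workers=[4, 20] means "adaptively scale between 4 and 20 workers," while n_workers=10 means "exactly 10 workers," matching the pre-existing behavior.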
1.20.0#
Minor refactoring for compatibility with pyright==1.1.362.
Add support for setting user-specified resource tags on notebooks with coiled notebook --tag key=value --tag another=value.
1.19.2#
Fix the Credentials block for Prefect so that it's compatible with pydantic>=2.
1.19.1#
Fixed issue where @coiled.function(gpu=True) would raise an InstanceTypeError.
1.19.0#
Re-raise any exception that happens inside code wrapped by coiled.performance_report as the original exception, rather than a generic Exception.
1.18.0#
Added --env for coiled notebook start. These env vars are sent securely to the notebook server after boot and are not stored on disk.
1.17.0#
Added allow_ingress_from argument to @coiled.function.
Added --allow-ingress-from argument to coiled notebook start.
For @coiled.function, only open port 22 to the client IP address (not 0.0.0.0/0).
Bug fix for running coiled setup azure in Windows Command Prompt (it doesn't accept single quotation marks around JSON strings).
1.16.0#
Added support to coiled run for string commands. For example, coiled run "python myscript.py".
When requesting a GPU cluster without specifying CPU or memory limits, we'll default to small instance types (1 to 8 vCPUs).
Fixed an issue where pytorch would be missing on GPU clusters for some conda environments that did not include other packages that relied on pytorch.
The Coiled client now explicitly sets PIP_REQUIRE_VIRTUALENV=false when calling pip wheel and pip config, to avoid issues when PIP_REQUIRE_VIRTUALENV=true is set in the local environment.
All GPU package sync clusters will now automatically include pynvml in the environment to enable GPU performance metrics.
1.15.0#
Added send_application_default_credentials(cluster) for sending Google Application Default Credentials to the cluster. These are longer-lived, revocable credentials, and can be automatically used by many libraries such as gcsfs. See Long-lived (revocable) Application Default Credentials for more details.
Package sync will now automatically fix common package issues that used to be warnings (e.g., using sklearn instead of scikit-learn).
Added --root to coiled run for executing commands as root. For example, you could use coiled run --interactive bash --root --keepalive 5m to interactively install software with apt-get.
Added CoiledShippedCredentials for refreshable Google Cloud credentials using the temporary OAuth2 token shipped by Coiled to the cluster. See Personal OAuth2 tokens for more details.
1.14.0#
Package sync will now automatically switch to the GPU-enabled torch conda package if you are using a GPU cluster.
Add --sync to coiled run for syncing files from the local working directory to /scratch/synced on the cloud VM.
Bug fix so that @coiled.function(gpu=True) will add a GPU to any additional workers (not just the first).
1.13.1#
Another (better) fix for how environment variables such as OMP_NUM_THREADS are unset for CLI jobs.
1.13.0#
When starting Spark, you can now specify memory limits relative to available VM memory. For example, cluster.get_spark(executor_memory_factor=0.9, worker_memory_factor=0.8).
1.12.4#
coiled run --file will now report an error before cluster creation if the file does not exist.
Fixed typo in coiled setup aws --quotas output.
1.12.3#
Fix bug for coiled run --env and --tag introduced in 1.12.2.
Fix a bug in how environment variables such as OMP_NUM_THREADS are unset for notebooks and CLI jobs. They were being set to ""; now they're unset before user code runs. Some libraries, such as torch, would give an error for "".
Fix common dependency conflict between aiobotocore and botocore.
1.12.2#
Support for setting shm_size for Docker (sometimes needed for training on GPUs). For example, Cluster(..., backend_options={"docker_shm_size": "1gb"}).
Fix to coiled run [...] --detach so that the VM will stay up while your code is running, and by default shut down after your code is no longer running. To keep the VM running for re-use, specify a keepalive value, for example --keepalive 10m.
1.12.1#
For the temporary AWS token shipped from client to cluster for data access, format the expiration datetime string to match the format expected by some non-Python AWS libraries.
For coiled run path/to/file.py --file /another/dir/to/upload/, preserve path structure for the implicitly specified file and for explicitly specified directories outside the current working directory.
1.12.0#
Dropped support for Python 3.7.
We're using the name "workspace" for what we used to call an "account", so throughout the client we're deprecating account= (in the Python API) and --account (in the CLI). You should use workspace= and --workspace instead (although account will continue to work for now).
Various improvements to make coiled setup azure more user-friendly.
Added kwarg for passing config when getting a SparkSession for Spark running on Coiled. For example, cluster.get_spark(spark_connect_config={"spark.scheduler.mode": "FAIR"}) to set the scheduling mode.
The home directory will now be abbreviated as ~ in the package sync widgets.
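The account-to-workspace deprecation described above can be sketched as a small compatibility shim. This is a hypothetical helper illustrating the documented behavior (account still works but workspace is preferred), not Coiled's actual implementation.

```python
import warnings

def resolve_workspace(workspace=None, account=None):
    """Prefer the new `workspace` argument; accept the deprecated
    `account` argument with a warning. Illustrative sketch only."""
    if account is not None:
        warnings.warn(
            "`account` is deprecated; use `workspace` instead",
            DeprecationWarning,
            stacklevel=2,
        )
    return workspace if workspace is not None else account
```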
1.11.0#
Allow user-specified custom subdomain for the hostname. For regular Coiled clusters, use dashboard_custom_subdomain='foo' to have the scheduler dashboard accessible at foo.<account slug>.dask.host. For coiled run, use --subdomain foo.
Fixed bug that prevented referencing a cluster by name with coiled cluster logs or coiled cluster ssh.
1.10.0#
For coiled run and coiled notebook, default to opening port 22 on the VM only to the public IP address from where the command is run. This can be controlled with --allow-ssh-from (run) and --allow-sync-from (notebook). Coiled relies on SSH in the implementation of both run and notebook, and uses secure, unique-to-each-VM key pairs for SSH authentication. For normal Coiled clusters, you can also explicitly specify a different SSH ingress CIDR using Cluster(allow_ssh_from=...).
1.9.0#
Added support for authenticating with Coiled using a Prefect Block.
Fixed bugs in the coiled setup aws CLI when setting up cross-account role assumption. The default region is now set as desired, and there's now a check to ensure that permissions have propagated before Coiled tries to use them.
1.8.0#
Added support in coiled run for uploading directories (as well as files) using --file <dir>.
The coiled setup aws CLI now uses cross-account role assumption for Coiled's access to your AWS account. For existing users, we recommend transitioning to role assumption instead of using AWS access keys. (You can check which method you're using under Cloud Provider Credentials.)
1.7.0#
Fixed a bug where a notebook created using coiled notebook start --sync would not be able to see updated code in the local working directory (package sync was creating a wheel of the directory, and that overrode the synced Python source files).
For coiled run --file foo/bar/file.txt, the directory structure will no longer be lost, so you can run (e.g.) coiled run cat foo/bar/file.txt --file foo/bar/file.txt (where previously you would have wanted coiled run cat file.txt --file foo/bar/file.txt).
Add --file argument to coiled prefect serve.
1.6.0#
When using map() with a @coiled.function-decorated function, you can now specify errors="skip" to map function calls that fail to None, rather than raising an exception when you try to get the result.
Fixed issue where package sync errors would get displayed multiple times.
Added the local Python environment to package sync and notebook widgets, so you can now tell which Python environment is being synced.
1.5.0#
For Google Cloud, if you specify CPU or memory but not both (e.g., Cluster(scheduler_cpu=4)), we'll now only match this on "balanced" instance types. This was already the behavior for AWS. If you want to include memory- and/or compute-optimized instances, you can specify both CPU and memory, or explicitly specify "*" for one of these (e.g., Cluster(scheduler_cpu=4, scheduler_memory="*")).
Add support for setting user-specified resource tags with coiled run --tag key=value --tag another=value and @coiled.function(..., tags={"key": "value", "another": "value"}).
Bug fix so that when a Coiled Function is running locally, e.g., @coiled.function(local=True), adaptive scaling will use a reasonable upper limit based on your local machine.
1.4.0#
Add option to save generated bash script when using Azure setup. This feature is experimental, so please contact us before attempting to use it.
1.3.14#
Bug fix so that use_scheduler_public_ip=False and use_dashboard_https=True can be used together.
1.3.10#
Bug fix so that @coiled.function used from within coiled run or coiled prefect serve will re-use the software environment.
1.3.9#
Bug fix for coiled notebook --sync that affects anyone not using their own Docker image.
Bug fix for package sync environments that will prevent wheels from being built out of site-packages directories.
1.3.5#
Fix issue with using Coiled clusters from VSCode notebooks.
1.3.0#
Add ability to use custom PyPI URLs with package sync. Just set the global.extra-index-url or global.index-url settings by running pip config set 'global.extra-index-url' "YOUR_CUSTOM_URL".
Deprecated the package_sync argument to Cluster, because it is always enabled by default. Use package_sync_only for restricting the list of packages. Specify container or software args to disable package sync.
1.2.0#
Add ability to include local code in a manually created software environment via the --include-local-code flag for coiled env create and the include_local_code argument for coiled.create_software_environment.
1.1.14#
Add ability to use editable packages with package sync from clusters or notebooks that start other clusters.
1.1.12#
Notify users sooner when they have not configured a cloud provider and try to create a cluster.
1.1.10#
By default, don't include burstable AWS instance types (e.g., t3) in the types you'll get if you specify desired cores and memory. You can still explicitly request these instance types if desired.
For @coiled.function, use "spot_with_fallback" as the default spot_policy.
Declare extra optional dependencies for using coiled notebook to simplify installation by allowing pip install coiled[notebook].
1.1.9#
Bug fix for package sync when starting Coiled cluster from inside Celery task.
1.1.7#
Bug fix for shipping refreshable temporary AWS credentials when we’re unable to determine expiration of local AWS session credentials.
1.1.6#
Change the default idle timeout for Coiled Notebooks from “24 hours” to “4 hours”.
1.1.5#
Fix path issues when using coiled setup gcp on Windows.
1.1.2#
Package sync will now use httpx with HTTP/2 for uploading wheels to S3 to fix a crash when uploading files larger than 2GB.
1.1.1#
coiled prefect serve is an easy way to run your Prefect flow on Coiled. See Prefect for more details.
1.1.0#
coiled run now uses SSH under the hood, instead of relying on the Dask client. This adds support for --interactive and --detach, and means that local and remote versions of Dask don't need to match (this matters if you're using --container). Note that fabric is now a client dependency.
1.0.0#
This has no changes from version 0.9.39. We’re just updating the major version to 1 as an overdue acknowledgement of stability.
0.9.35#
Reattaching to an existing cluster is now ~35% faster.
Improvements to how temporary AWS credentials are shipped from your local machine to the cloud. The upshot is that code running in the cloud will automatically pick up refreshed credentials as needed, rather than sometimes getting errors from trying to use expired credentials.
0.9.34#
Fix to coiled setup gcp for when there's an existing policy in the GCP project with a condition set.
0.9.33#
Fixed package sync KeyError when handling old conda packages that do not have the “constrains” key specified in their metadata.
0.9.32#
Package sync will now default to looking for conda packages on conda-forge if they were installed directly from an archive file (e.g., conda install /path/to/miniconda/pkgs/some_package.conda).
The coiled run CLI now captures what code ran (unless configured not to with coiled.analytics.computation.code.transmit in your Dask config).
0.9.15#
Package sync will no longer make a coiled_local_* wheel out of your working directory if that directory is a git repository that you already installed via pip.
0.9.14#
Coiled can now ship an OAuth2 token for Google data access if you have Google Application Default Credentials configured locally. See Personal OAuth2 tokens for more details.
0.9.10#
Package sync will now upload software environments to the same region as the cluster, which can improve performance.
0.9.8#
When you don't specify instance types, for AWS the second-choice instance type will now be m5.xlarge (older generation, non-burstable) rather than t3.xlarge (similar hardware to m5 but also burstable). The first choice we request from AWS remains m6i.xlarge. If you want to use burstable instance types, you can ask for them explicitly like this: coiled.Cluster(worker_vm_types=["t3.xlarge", "t2.xlarge"]).
The coiled run CLI and @coiled.function decorator will not set MKL_NUM_THREADS, OMP_NUM_THREADS, or OPENBLAS_NUM_THREADS to 1 by default (Dask sets these to 1 by default). Setting these to 1 makes sense when you have as many Dask worker threads as you have cores, but for coiled run and @coiled.function you often want to run a single task that has access to the full resources of the machine.
0.9.6#
Empty string environ values such as cluster = coiled.Cluster(environ={"OMP_NUM_THREADS": ""}) can now be used to disable default Dask values for MKL_NUM_THREADS, OMP_NUM_THREADS, or OPENBLAS_NUM_THREADS.
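The merge-then-filter behavior described above can be sketched as follows. The defaults dict reflects what the release note says Dask sets; the merging function itself is an illustration, not Coiled's implementation.

```python
# Dask's single-threading defaults, per the release note above.
DASK_THREAD_DEFAULTS = {
    "MKL_NUM_THREADS": "1",
    "OMP_NUM_THREADS": "1",
    "OPENBLAS_NUM_THREADS": "1",
}

def effective_environ(user_environ):
    """Sketch of the documented behavior: defaults apply unless the user
    supplies a value, and an empty string removes the variable entirely."""
    merged = {**DASK_THREAD_DEFAULTS, **user_environ}
    return {k: v for k, v in merged.items() if v != ""}
```

So environ={"OMP_NUM_THREADS": ""} leaves OMP_NUM_THREADS unset while the other two defaults still apply.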
0.9.5#
Fixed a bug that would create extra coiled_local_ wheels when package sync ran inside a virtualenv.
0.9.4#
Restore Python 3.7 support.
Renamed @coiled.run(...) to @coiled.function(...).
0.9.0#
Renamed the coiled.magic module to coiled.capture_environment.
When shipping local AWS credentials to the cluster (using a temporary STS token), the default region is set on the cluster to the default AWS region for local AWS sessions. If a different region is desired, you can explicitly set the region in your code.
0.8.14#
Increase the default time before adaptive shuts down workers from 1 minute to 2 minutes. This is implemented by switching the default wait_count in Cluster.adapt from 12 to 24, so that a worker must be suggested for shutdown during 24 consecutive 5-second intervals. Both the interval and wait_count can be controlled by users in Cluster.adapt. We expect this is a better experience for interactive uses of adaptive by avoiding scaledown while waiting for user input. If you want the old behavior, specify cluster.adapt(wait_count=12).
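As a worked example of the arithmetic behind this change:

```python
# A worker must be flagged for shutdown on `wait_count` consecutive
# checks, spaced `interval` seconds apart, before it is stopped.
interval_seconds = 5

old_delay = 12 * interval_seconds  # previous default: 1 minute
new_delay = 24 * interval_seconds  # new default: 2 minutes
```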
0.8.13#
Resolve an issue that would cause Cluster to wait on workers that could never arrive.
0.8.4#
Improvements to the package sync code that scans for all importable Python files; it now ignores more files that can be safely ignored.
0.8.3#
Fixed a bug that was ignoring the --account option on coiled login.
0.8.2#
Fixed a bug with coiled setup aws.
0.8.1#
coiled login lets you grant access with a single click from your Coiled web account; you no longer need to manually create and then copy and paste an API token. If you do wish to log in locally with an existing token, use coiled login --token <token>.
Resolved a crash on Windows with package sync when dask_labextension is present in the environment.
0.8.0#
If no cluster size is specified, adaptive scaling is now turned on, with a minimum of 4 workers and a maximum of 20. However, we recommend that users specify their cluster size and/or adaptive settings explicitly.
Package sync will now automatically create wheels from all importable Python packages in sys.path and your current working directory, even if you do not have a setup.py or pyproject.toml file.
Package sync will now issue warnings if the wheels being built are missing Python files that would have been importable locally.
Added a new @coiled.run decorator for running functions in the cloud. See the Serverless Functions documentation for more details.
0.7.9#
Fixed a bug in coiled env list where the links to the software environment were incorrect.
0.7.0#
Drop support for Python 3.7.
When the package_sync_ignore kwarg for coiled.Cluster() is set, package sync strict mode is no longer automatically enabled even if the client and cluster architectures match.
coiled notebook start now generates a unique name by default instead of determining the name from the current working directory.
If the user has not configured any log handler (and is not in iPython/Jupyter), Coiled now attaches a StreamHandler to the logger named "COILED" and sets its level to INFO. This allows us to show important information (e.g. the cluster name) to users who have not set up info-level logging for themselves. Users configuring their own logging are unaffected. To disable this, use coiled.Cluster(..., configure_logging=False).
Some less important logging info has been changed from INFO to DEBUG.
coiled cluster ssh --dask lets you attach directly to a shell inside the container running Dask (rather than the host machine).
0.6.10#
coiled notebook start will now block your terminal by default (like jupyter lab does); use --no-block to not block.
0.6.9#
Shorter tracebacks for some client errors; some internal (mostly async-related) functions are no longer included.
0.6.8#
Renamed coiled notebook up/coiled notebook down to coiled notebook start/coiled notebook stop.
0.6.7#
Fixed an issue where coiled package-sync scan would output data in the wrong columns when reporting that a wheel could not be created from local importable Python code.
0.6.5#
Add container keyword argument to coiled.Cluster() so that you can skip the step of explicitly creating a Coiled software environment when using a pre-made container image.
coiled.analytics.list_computations now takes cluster_id or scheduler_id. (This is a minor breaking change to the API since the previous cluster_id kwarg was in fact interpreted as scheduler id.)
Fixed an issue resolving software environments when the package name was different for conda and pypi.
coiled cluster ssh with no cluster specified now connects you to the most recent cluster.
0.6.3#
By default Coiled will now add a GPU to the scheduler if workers have GPUs, as this is the recommended best practice. You can explicitly set scheduler_gpu=False if you want to override this default.
Package sync will now set the CUDA version (used when installing packages from conda) when syncing to a cluster with a GPU.
0.6.1#
Add arm keyword argument to coiled.Cluster() so you don't need to explicitly specify instance types to make a cluster with ARM instances.
0.6.0#
create_software_environment and Package Sync now support clusters with ARM instances.
0.5.19#
region is now an optional kwarg on Cluster(...), making it more convenient to run a cluster in a different region from your default region. Including region in backend_options is still supported.
0.5.17#
There's a new CLI utility for setting Dask config related to Coiled: coiled config set. For instance, you could run coiled config set account foo to set your default account to foo, or run coiled config set distributed.diagnostics.computations.nframes 3 to have Coiled analytics collect up to three frames of code.
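Dotted keys like distributed.diagnostics.computations.nframes address nested Dask config values. A minimal illustration of how such a key maps onto nested mappings (plain Python in the style of dask.config, not the coiled CLI's actual implementation):

```python
# Illustrative only: how a dotted config key addresses a nested mapping,
# in the style of dask.config. Not the coiled CLI implementation.
def set_nested(config, dotted_key, value):
    """Set config['a']['b']['c'] = value for dotted_key 'a.b.c'."""
    *parents, leaf = dotted_key.split(".")
    node = config
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value
    return config

cfg = set_nested({}, "distributed.diagnostics.computations.nframes", 3)
print(cfg)
# {'distributed': {'diagnostics': {'computations': {'nframes': 3}}}}
```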
0.5.6#
create_software_environment can now set the CUDA version when installing packages from conda. See GPU for usage.
0.5.5#
Added coiled setup gcp --quotas for checking and setting Google Cloud quotas relevant for using Coiled.
0.5.4#
coiled setup now checks if you have authenticated your Coiled account, and if not, will run coiled login.
Fixed a bug in the link to package sync build logs.
0.4.1#
For container-based software environments, fixed a bug introduced in 0.4.0 related to ENTRYPOINT. By default the ENTRYPOINT set in the image will now be used (as it was in 0.3.x); you can set use_entrypoint if you want to override this default when creating a software environment from a container image.
0.4.0#
Software environment builds are now much faster. The new system is based on conda instead of building an entire Docker image and is only in effect if you are using coiled >= 0.4.0. You can continue to use the older system of creating a Docker image by pinning coiled<0.4.0. If you encounter issues please reach out to support@coiled.io.
0.3.0#
When no software argument is passed, package sync will be used instead of a coiled-runtime.
Minor improvements to coiled setup aws.
Updated Package Sync documentation includes more detailed usage guidelines and a clearer overview of features.
0.2.60#
use_best_zone is now on by default. It's now guaranteed that your scheduler and workers will all be in the same zone, so this option won't result in any interzone network traffic for your cluster.
Added coiled cluster better-logs (name and arguments subject to change) to pull logs from your cluster using your Coiled credentials.
The coiled package has fewer dependencies so you'll get faster installs and fewer potential conflicts.
- Package sync fixes:
Fixed Windows and .egg format packages
Fixed Python 3.7 compatibility
Use the anaconda defaults channel for Python versions instead of conda-forge. This provides a wider range of versions, though new versions arrive more slowly than on conda-forge.
Fixed a race condition that would sometimes cause hiccups creating a package sync environment
0.2.54#
When specifying both the worker_cpu and worker_memory, or the scheduler_cpu and scheduler_memory arguments to Cluster, Coiled will now include AWS instance types from c5, c6i, and r6i families if they match your request. This is useful if you want high-cpu or high-memory instances, where the ratio of cpu cores to memory is different than the usual "balanced" instance types.
When you specify only cpu or memory (and not both), we'll only match "balanced" instance types from t2, t3, m5 and m6i families on AWS. Previously we also included c5 instances as possibilities.
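For example, a request with 8 GiB of memory per core can now match the high-memory r6i family. A sketch of such a request; the Cluster call is illustrative and needs a Coiled account to actually run:

```python
# Illustrative request for high-memory workers. With both worker_cpu and
# worker_memory given, r6i-family instances (8 GiB per vCPU) can now match.
worker_cpu = 4
worker_memory_gib = 32

ratio = worker_memory_gib / worker_cpu
print(ratio)  # 8.0 GiB per core, the r6i family's memory-to-core ratio

# cluster = coiled.Cluster(worker_cpu=4, worker_memory="32 GiB")  # needs an account
```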
0.2.49#
Added use_best_zone argument to Cluster. When you're using this option, the cloud provider will pick the best availability zone (inside the region you've selected for your account). For spot or for harder-to-get instance types, this makes it much more likely that you'll get all the workers you've requested. Your workers will all be in the same zone (to avoid cross-zone traffic costs), but one caveat is that it's possible your scheduler will be in a different zone than your workers. This currently defaults to False but soon we plan to have this default to True.
0.2.38#
Added coiled cluster logs for pulling cluster logs (see API).
Package sync now works for Windows.
Improved documentation for known package sync limitations (see Package Sync Limitations).
0.2.31#
Added shutdown() method on Cluster, useful for stopping a cluster when shutdown_on_close=False.
Added allow_ssh and allow_ingress_from kwargs on Cluster to make it easier to configure the cluster firewall (see How to SSH into your cluster).
Added coiled cluster ssh <cluster name or id> for easily opening an SSH connection to the scheduler. This uses a keypair unique to each cluster (see How to SSH into your cluster).
0.2.29#
- A number of package sync-related fixes, including:
Fix detection of legacy egg-info metadata.
Improvements to detection of active package when multiple versions are installed.
Fix ignoring of packages
Fix detection of egg-linked packages
0.2.28#
Dask configuration (dask.config) from your local client environment will be collected when you start a cluster and applied to the cluster. If you don't want local dask config sent and applied to your cluster, there's a kwarg you can use to disable this: coiled.Cluster(send_dask_config=False).
package_sync_strict, aimed at production use of package sync, enforces an identical environment on the cluster, where non-strict mode allows slight differences in package versions between client and cluster. Strict mode works best when your client is running on Linux x86 so that the exact same package versions are available locally and on the cluster.
Bugfix for package sync sometimes using the wrong channel for a package.
0.2.13#
Removed all Coiled hosted backend logic from coiled.set_backend_options(); you must now provide credentials if you want to set your backend options with this command.
Removed the following parameters from coiled.set_backend_options(): use_coiled_defaults, customer_hosted, create_vpc.
Removed acr as a registry type since this registry is not supported.
0.2.11#
We've added support for custom tagging on your clusters (tags on AWS, labels on GCP). This makes it easier to use your cloud provider's billing interface to separately track the cloud costs of different teams or workloads. See Tags for more information.
You can specify a larger disk size using the worker_disk_size keyword argument when creating a cluster. For example, to start a cluster in which each worker has a 200GB disk (EBS on AWS, Persistent Disk on GCP), you'd call
coiled.Cluster(worker_disk_size=200)
0.2.5#
(Coiled v2) When creating container software environments, we no longer default to overriding the ENTRYPOINT set on the container image. If you're using an image where Coiled should override the entrypoint, then you should set the use_entrypoint=False kwarg when creating your software environment. For example, to create a software environment from a RAPIDS image (which uses an entrypoint to start a Jupyter server), you'd call
coiled.create_software_environment(
name="my-rapids-nightly",
container="rapidsai/rapidsai-nightly:cuda11.5-runtime-ubuntu20.04-py3.9",
use_entrypoint=False,
)
(Coiled v2) Fixed issue where creating a Cluster with both cpu/memory and vm_types arguments would prioritize the cpu/memory argument over vm_types. If trying to use both, the client will now raise an exception.
0.2.2#
Released May 23rd, 2022
(Coiled v2)
backend_options now lets you specify multiple CIDR blocks to open for ingress to your scheduler using the ingress keyword; see Configure Firewalls for more information.
0.2.1#
Released May 17th, 2022
Relaxes the version constraint on click in the coiled package's dependencies.
0.2.0#
Released May 5th, 2022
This version switches to using v2 clusters (see Manage Hardware).
0.0.78#
Released April 28, 2022
(Coiled v2) You can now use the worker_class argument when creating a Cluster to change the class of the workers created by Coiled.
(Coiled v2) You can now ask for AWS Spot instances when creating a cluster.
(Coiled v2) Various improvements to the Cluster widget.
The coiled package now supports Python 3.10. Note that Python 3.10 is not recommended if you are using the coiled-runtime package, which includes a Dask version (2022.1.0) that does not support Python 3.10.
The CLI command coiled env create, used to create software environments, now accepts an --account option to specify the account to use for the creation of that software environment.
0.0.72#
Released March 29, 2022
No user-facing changes
0.0.70#
Released March 22, 2022
Added a deprecation warning for cluster configurations; this feature will soon be deprecated.
0.0.69#
Released March 17, 2022
No user-facing changes included in this release.
0.0.68#
Released March 9, 2022
Sometimes fetching account credentials would fail if the server responded with a brief error code. The client will now retry fetching Coiled credentials for your user if the server responds with an error code.
The command coiled.list_instance_types() will now accept exact values or a range of values for cores, memory, and GPUs. You can specify a range by passing a list of two values, for example: coiled.list_instance_types(cores=[2, 8]).
When fetching instance types with coiled.list_instance_types() you can now specify memory values as you would when creating clusters. For example: coiled.list_instance_types(memory="8GiB").
0.0.67#
Released February 25, 2022
This release is the same as 0.0.66; the new version was released to address some versioning issues that the team found.
0.0.66#
Released February 23, 2022
When creating a Cluster, if you specify an account with the keyword argument account= that is not valid, the request will fail earlier. The error message will also contain the account name that you specified.
Updated the error message that coiled.delete_software_environment() returns if the software environment doesn't exist. The error message will now contain the name of the software environment and the account.
0.0.65#
Released February 11, 2022
Fix misleading error message warning about not getting workers when workers don't connect to the scheduler once wait_for_workers completes.
0.0.64#
Released February 10, 2022
This commit was stale and removed
0.0.63#
Released February 9, 2022
Clusters created with coiled.Cluster will now wait for 30% of the requested workers before returning the prompt back to the user.
The coiled.Cluster() constructor accepts a wait_for_workers keyword argument that allows you to increase/decrease the number of workers that need to be created before returning the prompt. Additionally, the option to wait for workers can be toggled off.
Improved validation of instance types when creating a Cluster.
Added a warning message informing users to run coiled.get_notifications(level="ERROR") when no workers have connected to the scheduler after 10 minutes.
If a Cluster can't get any workers due to availability issues or any other reason, the coiled.Cluster() constructor will now return the last error message from when Coiled tried to create the worker (you need to have wait_for_workers enabled).
0.0.62#
Released January 26, 2022
The command coiled.list_instance_types now returns a list of all available instance types that your cloud provider allows.
You can now specify a minimum amount of memory, cores, and GPUs when using coiled.list_instance_types().
0.0.61#
Released January 12, 2022
Fixed issue with setting loop when using a Dask version higher than 2021.11.2
0.0.60#
Released December 15, 2021
set_backend_options no longer accepts arguments related to Azure backends.
coiled.Cluster now accepts a use_scheduler_public_ip kwarg to configure the scheduler address the Coiled client connects to.
0.0.59#
Released December 13, 2021
Pin Dask.distributed to a version prior to 2021.12.0 since this introduced an incompatibility with coiled.
0.0.58#
Released December 03, 2021
Fix a bug that prevented users’ AWS credentials from being sent to clusters.
0.0.57#
Released December 01, 2021
Add support for managing long lived API access tokens via the Coiled client.
The Coiled client is tested and supported for Python versions 3.7, 3.8, and 3.9. The Coiled client raises an exception if you attempt to install it in an environment with a Python version below 3.7 or on version 3.10.
Removed functionality associated with Coiled Notebooks and Coiled Jobs since they have been deprecated.
0.0.56#
Released November 22, 2021
Users can specify during cluster creation whether to use the public address or the private address of the scheduler to connect to the cluster.
The Python client will raise an AccountFormatError if the account is not a combination of lowercase letters, numbers, or hyphens.
0.0.55#
Released November 11, 2021
Fixed an issue where using the command coiled login --token in the terminal would show an error message saying that you have run out of credits.
Updated the connection timeout, which should mitigate the timeout error that was sometimes occurring when launching clusters.
You can now customize the firewall/security group that Coiled uses by adding a firewall dictionary and passing it to the backend_options keyword argument of the coiled.Cluster constructor.
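A hypothetical sketch of such a firewall dictionary; the exact keys ("ports", "cidr") are an assumption based on Coiled docs of this era, so check the current documentation before relying on them:

```python
# Hypothetical sketch: the "ports" and "cidr" keys are assumptions from
# Coiled docs of this era, not a verified current API.
backend_options = {
    "firewall": {
        "ports": [22, 8786, 8787],  # ssh, scheduler, dashboard
        "cidr": "10.0.0.0/16",      # restrict ingress to this block
    }
}

# cluster = coiled.Cluster(backend_options=backend_options)  # needs an account
print(backend_options["firewall"]["cidr"])  # 10.0.0.0/16
```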
0.0.54#
Released October 17, 2021
You can now specify a list of instance types with scheduler_vm_types/worker_vm_types when creating a cluster using the coiled.Cluster() constructor.
You can now select a GPU type by using the keyword argument gpu_type from the coiled.Cluster() constructor.
Added a new command coiled.list_instance_types() to the Coiled client which returns a list of allowed instance types that you can use while creating your cluster.
Added a new command coiled.list_gpu_types() to the Coiled client which returns a list of allowed GPU types that you can use while creating your cluster.
You can now specify enable_public_http, enable_public_ssh, and disable_public_ingress when using coiled.set_backend_options() to have more control over the security group that Coiled creates on AWS.
You can now use the cluster's private IP address when interacting with your cluster by using backend_options={"disable_public_ingress": True} when creating a cluster with the coiled.Cluster() constructor or when setting your backend with coiled.set_backend_options().
You can now remove port 22 from the AWS security group that Coiled creates in your account by setting the enable_public_ssh flag to False, either in backend_options or when setting your backend with coiled.set_backend_options().
0.0.53#
Released October 13, 2021
Environment variables sent to the cluster with the environ= keyword argument are now converted to strings.
Added a depagination method so our list commands (for example coiled.list_cluster_configurations()) will now return all of the items instead of only the last 50.
0.0.52#
Released September 16, 2021
coiled.set_backend_options() no longer supports the deprecated ECS backend.
0.0.51#
Released September 1, 2021
Coiled clusters now support adaptive scaling. To enable it, create a cluster, then run cluster.adapt(maximum=max_number_of_workers).
Removed an unused region parameter from coiled.Cluster(). Cloud provider regions can be set using backend_options=.
coiled.create_notebook() now takes an optional account= parameter like the rest of the API. If there is a conflict between the account specified via the name and the account specified via the account parameter, an error is raised.
0.0.50#
Released August 24, 2021
Another aiobotocore-related fix.
0.0.49#
Released August 20, 2021
Hotfix to support aiobotocore==1.4.0.
0.0.48#
Released August 17, 2021
Hotfix to relax the dependency on typing_extensions in order to conflict less with third-party packages.
0.0.47#
Released August 13, 2021
coiled.set_backend_options() has changed several parameter names, and it is now possible to specify a gcp zone. A VPC will now be created if credentials are provided.
'vm_aws' is now the default backend for coiled.set_backend_options() in preparation for the deprecation of the 'ecs' backend.
0.0.46#
Released August 2, 2021.
Hotfix to better-specify typing-extensions dependency.
0.0.45#
Released July 28, 2021.
coiled.set_backend_options() now supports specifying a Google Artifact Registry for storing software environments.
Cluster protocols (currently either tls or wss) can now be configured using the dask configuration system under coiled.protocol.
Cluster scheduler and worker options can now be configured using the dask configuration system under coiled.scheduler-options and coiled.worker-options.
0.0.44#
Released July 15, 2021.
Users with customer-hosted accounts on Google Cloud Platform can now provide a region (gcp_region_name) to coiled.set_backend_options().
Users can now specify a protocol when creating a Coiled cluster. By default, clusters communicate over TLS ("tls"), but in some restricted environments it can be useful to direct traffic through the Coiled web application over websockets ("wss").
The command line interface for creating a software environment (conda env create) now accepts an optional --conda-env-name parameter to specify the name of the conda environment into which packages will be installed (defaults to coiled).
0.0.43#
Released June 29, 2021.
Hotfix to remove aiostream dependency
0.0.42#
Released June 29, 2021.
coiled.set_backend_options() now supports configuring your Coiled account to run in your own Google Cloud Platform account.
0.0.41#
Released June 9, 2021.
New function coiled.set_backend_options() which allows users to set the options for an account (e.g., cloud provider, region, docker registry) from the Python client. Previously this was only available using the Coiled web application.
Fixed a bug in coiled.performance_report() that was preventing performance data from being captured.
Fixed an issue where an error building software environments could result in hanging client sessions.
coiled.Cluster(), coiled.start_job(), coiled.create_software_environment(), and coiled.create_notebook() can now take an optional environ dictionary as an argument, allowing users to pass in environment variables to clusters, jobs, software environments, and notebooks. These environment variables are not encrypted, and so should not be used to store credentials or other sensitive information.
coiled.list_core_usage() now shows additional information about how many credits your account has used for the current program period.
coiled.Cluster() no longer raises a warning if no AWS credentials can be found, since a given cluster may not want or need to use them.
0.0.40#
Released May 18, 2021.
New functions coiled.performance_report() and coiled.list_performance_reports(). coiled.performance_report() is a context manager which captures cluster computation as a dask performance report, uploads it to Coiled, and hosts it online for later viewing.
New function coiled.get_notifications() returns notifications from resource creation steps in your chosen cloud provider. This can be useful in debugging when resources do not launch as intended.
coiled.create_software_environment() now has an optional argument force_rebuild, defaulting to False, which forces a rebuild of the software environment, even if one matching the given specification already exists. There is a new corresponding flag --force-rebuild in the coiled env create command line command.
New functions coiled.cluster_logs() and coiled.job_logs() return logs from Coiled clusters and Coiled jobs, respectively. Cloud.logs() has been renamed to Cloud.cluster_logs() to better distinguish it from Cloud.job_logs().
New function coiled.get_software_info() returns detailed information about a Coiled software environment specification.
coiled.info() has been renamed to coiled.diagnostics(), and now always returns JSON-formatted diagnostic information.
New function coiled.list_user_information() provides information about the currently logged-in user.
New function cloud.health_check() checks the user's connection with the Coiled Cloud application.
coiled login --server <url-for-your-coiled-deployment> now works if there is a trailing slash in the URL.
coiled login --account <team_slug> sets the user's specified account as a config value.
Previously, some coiled functions accepted account as an optional parameter, and others did not. Now the entire API consistently allows users to specify their account with an account= keyword argument. The priority order for choosing an account to make API requests is:
1. Accounts specified via a resource name (where applicable), e.g. name = <account-name>/<software-environment-name>
2. Accounts specified via the account= keyword argument
3. Accounts specified in your Coiled configuration file (i.e. ~/.config/dask/coiled.yaml)
4. The default account associated with your username (as determined by the token you use to log in)
Most of the resource creation functions in the coiled API (e.g., coiled.Cluster() or coiled.create_software_environment()) can take a lot of optional arguments. The order of these arguments in their function invocations is not important, and so they have been turned into keyword-only arguments.
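The account priority order described in this release can be sketched as a small resolver. This is illustrative only, not Coiled's actual implementation; the function and parameter names here are hypothetical:

```python
# Illustrative resolver for the documented account priority order.
# Not Coiled's actual code; all names here are hypothetical.
def resolve_account(resource_name=None, account_kwarg=None,
                    config_account=None, token_default=None):
    """Pick an account using the documented priority order."""
    # 1. Account embedded in a resource name, e.g. "myteam/my-env"
    if resource_name and "/" in resource_name:
        return resource_name.split("/", 1)[0]
    # 2. Explicit account= keyword argument
    if account_kwarg:
        return account_kwarg
    # 3. Account from ~/.config/dask/coiled.yaml
    if config_account:
        return config_account
    # 4. Default account tied to the login token
    return token_default

print(resolve_account("myteam/my-env", account_kwarg="other"))  # myteam
print(resolve_account(account_kwarg="other", config_account="cfg"))  # other
```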
0.0.39#
Released on May 3, 2021.
Following dask/distributed, we have dropped support for Python 3.6.
The arguments for coiled.Cluster() are now keyword-only.
coiled is now more fully type annotated, allowing for better type checking and editor integration.
coiled.Cloud.logs() now has account as an optional second parameter instead of a required first parameter to be more consistent with the rest of the API.
Fixed a bug where updating the software environment in a cluster configuration did not work.
Add a --private flag to the command line interface for coiled env create.
Fixed a bug where the rich console output from coiled did not work well with the Spyder editor.
Fixed a bug where coiled.Cloud.close() did not properly clean up threads.
0.0.38#
Released on March 25, 2021.
Improve connection error when creating a coiled.Cluster where the local and remote versions of distributed use different protocol versions.
Return the name of newly started jobs for use in other API calls.
0.0.37#
Released on March 2, 2021.
Add core usage count interface
Make startup error more generic and hopefully less confusing
Filter clusters by descending order in coiled.list_clusters()
Add messages to commands and status bar to cluster creation
Don’t use coiled default if software environment doesn’t exist
Handle case when trying to create a cluster with a non-existent software environment
Set minimum click version
Several documentation updates
0.0.36#
Released on February 5, 2021.
Add backend options docs
Fix CLI command install for python < 3.8
Add color to coiled login output
Fix bug with coiled.Cluster(account=...)
De-couple container registry from backends options
0.0.35#
Released on January 29, 2021.
Flatten json object if error doesn't have "message"
Enable all Django middleware to run async
Remove redundant test with flaky input mocking
Use util handle_api_exception to handle exceptions
0.0.34#
Released on January 26, 2021.
Update AWS IAM docs
Add --retry/--no-retry option to coiled login
Update default conda env to coiled instead of base
Add worker_memory < "16 GiB" to GPU example
Fix small issues in docs and add note for users in teams
Do not add python via conda if container in software spec
Use new Status enum in distributed
0.0.33#
Released on January 15, 2021.
Update post_build to run as a POSIX shell
Fix errors due to software environment / account name capitalization mismatches
Automatically use local Python version when creating a pip-only software environment
Improved support for custom Docker registries
Several documentation updates
0.0.32#
Released on December 22, 2020.
Add boto3 dependency
0.0.31#
Released on December 22, 2020.
Add coiled.backend-options config value
Allow selecting which AWS credentials are used
Don't initialize with account when listing cluster configurations
Add support for using custom Docker registries
Add coiled.cluster_cost_estimate
Several documentation updates
0.0.30#
Released on November 30, 2020.
Update API to support generalized backend options
Enable coiled.inspect and coiled.install inside Jupyter
0.0.29#
Released on November 24, 2020.
Add informative error message when AWS GPU capacity is low
Fix bug in software environment creation which caused conda packages to be uninstalled
Add notebook creation functionality and documentation
Generalize backend options
Add support for AWS Fargate spot instances
0.0.28#
Released on November 9, 2020.
Expose private field in list/create/update
More docs for running in users' AWS accounts
Add Dask-SQL example
Use examples account instead of coiled-examples
Add list of permissions for users' AWS accounts
Add example to software environment usage section
Update conda_env_name description
Set default TOC level for sphinx theme
0.0.27#
Released on October 9, 2020.
Fix AWS credentials error when running in Coiled notebooks
0.0.26#
Released on October 8, 2020.
Handle AWS STS session credentials
Fix coiled depending on older aiobotocore
Only use proxied dashboard address in Jobs
Improve invalid fargate resources error message
Mention team accounts
Support AWS credentials to launch resources on other AWS accounts
Update FAQ with a note on notebooks and Azure support
Add GPU docs
Add jupyterlab example
Add community page
Add tabbed code snippets to doc landing page
Ensure job configuration description and software envs are updated
0.0.25#
Released on September 22, 2020.
Handle redirecting from beta.coiled.io to cloud.coiled.io
Add Prefect example
Update dashboards to go through our proxy
Add descriptions to notebooks
Update cluster documentation
Add Optuna example
0.0.24#
Released on September 16, 2020.
Support overriding cluster configuration settings in coiled.Cluster
Don’t require region on cluster creation
Add links to OSS licenses
Add ability to upload files
Add access token for private repos
0.0.23#
Released on September 4, 2020.
Fixed bug where specifying name in a conda spec would cause clusters to not be launched
Open external links in a separate browser tab in the docs
Explicitly set the number of worker threads to the number of CPUs requested if not otherwise specified
Improvements to Coiled login behavior
Update to using coiled/default as our default base image for software environments
Several documentation updates
0.0.22#
Released on August 27, 2020.
Add AWS multi-region support
Log informative message when rebuilding a software environment Docker image
Remove link to Getting Started guide from coiled login output
Update distributed version pinning
Add support for running non-Dask code through Coiled Jobs
Several documentation updates
0.0.21#
Add logs to web UI
Verify worker count during cluster creation
Raise more informative error when a solved conda spec is not available
Improve docker caching when building environments
0.0.20#
Allow ‘target’ conda env in creating software environment (#664)
Start EC2 instances in the right subnets (#689)
0.0.19#
Added support for installing pip packages with coiled install
Support Python 3.8 on Windows with explicit ProactorEventLoop
Updated default coiled.Cluster configuration to use the current Python version
Updated dependencies to include more flexible version checking in distributed
Don’t scale clusters that we’re re-connecting to
Added support for using custom worker and scheduler classes
0.0.18#
Released August 8, 2020.
Add --token option to coiled login
Add post_build= option to coiled.create_software_environment
Add back support for Python 3.6
Remove extra newline from websocket output
Remove coiled upload from public API
Add coiled env CLI command group
Several documentation updates
0.0.17#
Released July 31, 2020.
Move documentation page to docs.coiled.io
Added --version flag to the coiled CLI
Raise an informative error when using an outdated version of the coiled Python API
Several documentation updates
Added coiled.Cluster.get_logs method
Added top-level coiled.config attribute
Use fully qualified coiled.Cluster name in the cluster interactive IPython repr
0.0.16#
Released July 27, 2020.
Added getting started video to docs.
Added support for GPU-enabled workers.
Added new documentation page on configuring JupyterLab.
Added support for specifying pip, conda, and/or container inputs when creating software environments.
Remove account argument from coiled.delete_software_environment.
Added cost and feedback FAQs.
0.0.15#
Released July 22, 2020.
Removed “cloud” namespace in configuration values.
Several documentation updates.
Added new security and privacy page to the docs.
Added coiled upload command for creating a Coiled software environment from a local conda environment.
Added tests for command line tools.
0.0.14#
Released July 17, 2020.
0.0.13#
Released July 16, 2020.
Update “Getting Started” documentation page.
Update coiled.create_software_environment to use the name provided by conda= input, if provided.
Send AWS credentials when making a Cluster object.
0.0.12#
Released July 14, 2020.
Switch to using full coiled Python namespace and rename CoiledCluster to coiled.Cluster
Raise informative error when attempting to create a cluster with a non-existent cluster configuration
Bump supported aiobotocore version to aiobotocore>=1.0.7
Add coiled install command to create conda software environments locally
Repeated calls to Cloud.create_cluster_configuration will now update an existing configuration
0.0.11#
Released July 9, 2020.
Don’t shut down clusters if we didn’t create them
Slim down the outputs of list_software_environments and list_cluster_configurations
0.0.10#
Released July 8, 2020.
Use websockets to create clusters due to long-running requests
Avoid excess endlines when printing out status in the CLI
Allow calling coiled env create repeatedly on the same environment
0.0.9#
Released July 7, 2020.
Change default to coiled/default
Add coiled login CLI command
Use account namespaces everywhere, remove account= keyword
Allow the use of public environments and configurations
0.0.8#
Released on July 1, 2020.
Update to use new API endpoint scheme
Adds conda env create command line interface
0.0.7#
Released on June 29, 2020.
Adds Cloud.create_software_environment, Cloud.delete_software_environment, and Cloud.list_software_environments methods
Adds Cloud.create_cluster_configuration, Cloud.delete_cluster_configuration, and Cloud.list_cluster_configurations methods
Update Cloud object to use a token rather than a password
Changed name of package from coiled_cloud to coiled
0.0.6#
Released on May 26, 2020.
Includes requirements.txt in MANIFEST.in
0.0.5#
Released on May 26, 2020.
Includes versioneer in MANIFEST.in
0.0.4#
Released on May 26, 2020.
Adds LICENSE to project
0.0.3#
Released on May 21, 2020.
Deprecations#
Renamed Cluster to CoiledCluster