Coiled Client Release Notes
These release notes are related to updates to the Coiled Python package.
1.128.2
- Support coiled mpi run --upload <path> ... to upload additional file(s) required by what you're running, in addition to scripts explicitly referenced in your command (which are uploaded automatically). For instance, if you're in a directory with main.py and sub.py and main.py imports from sub.py, you'd want to run coiled mpi run --upload sub.py python main.py to upload both files.
1.128.1
- Improvements to coiled mpi CLI. Adds --exclude-head, --head-vm-type, and --cluster-timeout, and makes --idle-timeout (10 minute default) function correctly.
1.128.0
- Add (under active development) coiled mpi CLI for creating and using multi-node GPU clusters with fast interconnect (i.e., cluster placement and EFA, with NCCL/UCX support). When setting up AWS with a Coiled workspace, include the permissions necessary for the EFA security group and for using Placement Groups. EFA and Placement Groups are not default features, and the additional permissions are only required if you use these features.
- Add support for use_efa, use_worker_efa, and use_worker_placement_group backend options on AWS. For example, you can specify coiled.Cluster(..., backend_options={"use_efa": True, "use_placement_group": True}).
- For coiled.batch.run(), add API support for using strings or buffers instead of paths to on-disk files: host_setup_script_content allows passing a host setup script as a string, command_as_script=True means that command will be interpreted as the content of a script rather than a single command, and files to upload (via filestore) can be passed as buffers_to_upload=[{"relative_path": "hello.txt", "buffer": io.BytesIO(b"hello\n")}].
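A minimal sketch combining these new arguments (the script body and file contents here are hypothetical):

```python
import io

import coiled

# Pass the command as script content rather than a path on disk,
# and upload an in-memory buffer alongside it.
coiled.batch.run(
    "cat hello.txt",
    command_as_script=True,  # interpret the string above as a script body
    buffers_to_upload=[
        {"relative_path": "hello.txt", "buffer": io.BytesIO(b"hello\n")}
    ],
)
```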
1.127.0
- Added support for passing individual files to coiled batch run --upload instead of just directories.
- Added --run-on-host flag to coiled batch run to run the task script directly on the host VM(s), rather than inside the Docker container. You're responsible for installing any required software; this could be done using --host-setup-script setup.sh. The host VM runs Ubuntu, and sudo apt install ... is supported for installing packages.
- Added --filestore argument to coiled run to pass the names of filestores (new or existing) to use.
- Added filestores_to_attach keyword argument to Cluster() for attaching filestores during cluster creation.
- Fixed how ca-certificates is updated in uv containers for coiled run, so that interactive use cases don't block waiting to confirm the update.
- When downloading from filestores, we now use HTTP streaming and retry when needed.
- Calling coiled curl with --json-output now outputs valid JSON.
1.126.0
- Add coiled file list command.
- Bug fix to coiled file download, as well as support for --filestore <name> and --filter partial/path/or/filename.
- Bug fix to coiled quickstart sometimes failing to parse the JSON that tracked what steps the user has already completed.
1.125.0
- Add --private CLI argument for coiled notebook start, to prevent other workspace members from accessing a notebook.
- Client now warns instead of crashing when a conda package has no channel metadata, which typically only happens with pixi editable packages.
1.124.0
- When uploading to an existing filestore (e.g., coiled batch run --input-filestore existing-fs ...), don't re-upload unmodified files which are already in the filestore.
- Added support for pixi auth login credentials when using package sync.
1.123.0
- Fixed issue where mamba auth tokens specified via --bearer were not always being picked up. Now, both --token and --bearer are supported.
1.122.0
- Support for specifying the "cloud env run" image using the coiled.cloud-env-image dask config. It's very unlikely you want to do anything with this unless we've suggested it to you.
1.121.0
- When coiled batch is used with a custom container, it currently still needs a dask container on the VM. The container will now be pulled from GitHub Container Registry rather than Docker Hub, and is also configurable using the --dask-container option (or coiled.batch.dask-container dask config).
1.120.0
- For coiled batch run, add support for transferring files between the local machine and cloud VMs. This makes use of cloud storage in your cloud account (currently just AWS is supported, GCP and Azure to follow soon), and requires additional permissions. See the --sync, --upload, --download, and --wait options for more details about use.
- Permissions delegated to Coiled when you run coiled setup aws now include those needed to access coiled-data-* buckets in your AWS account.
- For coiled batch run, allow scheduler sidecar containers to be specified using --scheduler-sidecar-spec.
1.119.0
- Add support for specifying sidecars (extra docker containers) to run on the scheduler and/or workers.
- Add coiled.analytics.client-spans.transmit dask config. By default, @coiled.function will send "span" data about each invocation. This allows you to disable sending spans, which can be especially useful when too much data is being sent.
1.118.4
- Fixed bug so that when send_application_default_credentials() is used to send Google Application Default Credentials to your cluster, ADC credentials will also be added to any workers that are added to the cluster later.
1.118.3
- Bug fix to command parsing for coiled run and coiled batch run so that quotes around strings won't be lost. For example, command -p "an input string" will now correctly be executed as written, instead of command -p an input string. This fixes a bug introduced in 1.116.0.
1.118.2
- Better error message for coiled batch run without a command specified.
1.118.1
- For coiled run with a uv container and uv ... or uvx ..., automatically update the ca-certificates package to make it easier for anything that requires updated CA certificates for validating SSL connections.
1.118.0
- Add cluster_timeout keyword argument to coiled.Cluster(). This will cause running clusters to shut down after the specified timeout, even if the cluster is active.
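A one-line sketch, assuming the timeout accepts a duration string like other Coiled timeout options:

```python
import coiled

# Shut the cluster down two hours after creation, even if it's still active
# (the duration-string format is an assumption here).
cluster = coiled.Cluster(n_workers=4, cluster_timeout="2 hours")
```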
1.117.1
- Package sync will now deprioritize certain packages that you have not explicitly installed in your environment (i.e., packages that are dependencies of packages you directly installed), which should eliminate some common conda build problems.
- coiled batch run now uses compression to send batch job request data, which is especially helpful when the payload contains a large number of values to map over.
- Removed setuptools dependency and all usages of the deprecated pkg_resources API.
1.117.0
- Add --package-sync-ignore option to coiled batch run.
- Fixed bug where the pause_on_exit kwarg to coiled.Cluster() was being ignored.
1.116.0
- coiled run allows escaped environment variables in the command, which will be unescaped when the command is run on a VM. For example, you could run coiled run echo \$COILED_CLUSTER_HOSTNAME and this will print the value of that environment variable set on the VM (something like cluster-xyz.dask.host).
- Support DASK_COILED__CLIENT_PUBLIC_IP environment variable (and coiled.client_public_ip dask config) for explicitly specifying the client public IP address in client use cases that require this; when specified, we won't use third-party services to determine the public IP address where the coiled client is running.
1.115.0
- Add scheduler_disk_config/worker_disk_config to coiled.Cluster(), and --disk-config to coiled notebook start. For example, you can run coiled notebook start --disk-config "{'VolumeType': 'io2', 'Iops': 3000, 'VolumeSize': 123}" to start a notebook VM with an io2 EBS volume on AWS.
- For coiled notebook start, only open the port for SSH by default if --sync is used.
- Tweak to coiled batch run --help text.
1.114.0
- Better handling for unexpected errors during the package sync scan. For example, this now avoids crashing when the user doesn't have permissions to run conda on their local machine and the client tries to read conda config.
1.113.0
- coiled run --container now supports containers that don't have dask or distributed installed. For example, you can run coiled run --container astral/uv:debian-slim -- uvx pycowsay hello from uv to use uv with a pre-built uv image. (Note that bash is still a requirement in the container, so, e.g., astral/uv:latest would not currently work.)
1.112.1
- Fixed bug where the auth token would not be inserted into a conda channel URL if the URL contained /t//.
- Improved warning log messages related to auth tokens in conda channel URLs.
1.112.0
- Add dask config settings to configure where to look for credentials used for conda channel authentication. For example, to disable keyring as a source, you can set coiled.package_sync.conda.cred_sources.keyring to False. Other sources are netrc, conda, and mamba.
- When starting a cluster, the client now checks the control-plane for any dask config overrides set for the Coiled user, workspace, or organization. For example, this allows us to disable specific sources of conda authentication for specific organizations when desired by the organization. (This isn't currently user-configurable; talk to us if you have organization settings that you want applied to all users in this way.)
1.111.4
- Bug fix so that use_best_zone defaults to False when a zone is explicitly specified, as in coiled.Cluster(..., backend_options={"region_name": "us-east5", "zone_name": "us-east5-a"}).
1.111.3
- Support filtering by GPU type in coiled.list_instance_types(). For example, coiled.list_instance_types(gpus="l4", cores=8) will give you a list of VM types for your cloud provider with an L4 GPU and 8 vCPUs.
1.111.2
- Change coiled.list_clusters() and cloud.list_clusters() to return only the first 2000 clusters by default; you can use the max_pages keyword argument to return more (or fewer), with 100 clusters per page.
1.111.1
- coiled.batch.run() now supports a map_over_task_var_dicts keyword argument for mapping over multiple inputs to each task. For example, map_over_task_var_dicts=[{"FOO": 1, "BAR": 2}, {"FOO": 3, "BAR": 4}] (so the first task gets FOO=1 BAR=2 and the second gets FOO=3 BAR=4 set as environment variables).
1.111.0
Happy eleventy-first minor version update!
- coiled batch run --map-over-values lets you specify a list of values such that for each value, a task will be run with that value as the input. For example, coiled batch run --map-over-values "first,second,third" my_script.py will run three tasks, with the COILED_BATCH_TASK_INPUT environment variable set to "first" for the first task, and so on. You can also specify a file as the source of values, specify the delimiter for splitting values, and specify the name of the environment variable used to expose the input to the task. See coiled batch run --help for more details.
- The coiled.batch.run() Python API also adds support for mapping tasks over a list of values, such as map_over_values=["one", "two", "three"] (see the sketch after this list).
- coiled batch run now interprets \$FOO as $FOO when running your command, so you can reference environment variables set for the task or job in your command. This allows you to use coiled batch run --map-over-file inputs.txt my_command --x \$COILED_BATCH_TASK_INPUT to run my_command --x <value> with the values being mapped over.
- For coiled batch run and coiled.batch.run(), default to a maximum concurrency of 1000 VMs. You can specify a lower limit with --max-workers (or the max_workers kwarg), or use -1 for no limit.
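A minimal sketch of the Python API mapping form (my_script.py is a hypothetical script that reads the COILED_BATCH_TASK_INPUT environment variable):

```python
import coiled

# Runs three tasks; each task sees one of the values in the
# COILED_BATCH_TASK_INPUT environment variable.
coiled.batch.run(
    "python my_script.py",
    map_over_values=["one", "two", "three"],
)
```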
1.110.0
- When the client needs to get the client public IP address (e.g., when using allow_ssh_from="me"), it will now try checkip.amazonaws.com before trying api.ipify.org.
1.109.0
- Bug fix to coiled.batch_run which (in 1.108.0) failed when certain optional arguments were not provided.
1.108.0
- Add --env-file and --secret-env-file options to coiled batch run for sending environment variables specified in a .env file.
1.107.0
- Add timeout kwarg to coiled.batch.wait_for_job_done().
- Use certifi for SSL CA certificates for the HTTP request to get the client IP address when needed.
- Minor refactoring of log CLI code.
1.106.0
- Bug fix to coiled batch log --task 0 so it correctly filters to task 0.
1.105.0
- Use certifi for SSL CA certificates when the client connects to the Coiled control-plane. This enables systems with older CA certificates to connect instead of getting ClientConnectorCertificateError.
1.104.0
- Container can be specified using the coiled.container Dask config value. (When using coiled batch run --container foo:latest -- script.py, the Dask config value is automatically set so that if script.py makes a Coiled cluster, it will use the foo:latest container by default for that cluster.)
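For instance, a short sketch of setting this config value from Python (the image name is hypothetical):

```python
import dask

# Coiled clusters created in this process will default to this container image.
dask.config.set({"coiled.container": "foo:latest"})
```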
1.103.1
- Minor update to the mapping of packages from PyPI to conda.
1.103.0
- Add coiled batch logs CLI for getting logs specifically from batch job tasks. You can specify (e.g.) --task 3 to get logs for task 3. Note that this will only work for batch jobs run after this feature was released.
1.102.1
- Fix so that coiled.get_cluster_from_node() will find the cluster in the correct workspace when using a non-default workspace.
1.102.0
- Add --job-timeout to coiled batch run to automatically stop the job after some duration. (For timeouts on individual tasks, you can use timeout <N> your-command in place of your-command.)
1.101.1
- Added client-side rate limit on requests to get cluster state.
- Added a workaround for an issue where some versions of conda store the incorrect channel URLs in their local metadata for certain packages, which caused package sync builds to fail. This time the fix is specifically for platform-specific packages that are reported as on the noarch version of a channel.
1.101.0
- Fixed some issues with package sync and conda-auth support.
- The Coiled client now ignores PyPI index URLs with domains that start with localhost or end with .local.
1.100.0
- Added Cluster(ignore_container_entrypoint=True, ...) flag to ignore an entrypoint on a container that you want to use as your software environment.
- Small bugfixes for package sync with private conda channels.
1.99.0
- Removed importlib-metadata dependency, since it is part of the standard library in Python 3.8+.
- Tweak to client telemetry to improve debugging for client-cluster version mismatches.
1.98.0
- Add coiled.get_cluster_from_node() to make it easier for code running via coiled batch run (or another API for submitting batch jobs) to get the Cluster object for the cluster it's running on.
- When using coiled run to run a script that itself makes a Coiled cluster with a different CPU architecture (x86 vs ARM), by default we can now rebuild the original software environment for the different architecture, rather than re-scanning and creating a software environment from scratch for the nested cluster. This matches the existing behavior when CPU architectures did match (i.e., both x86 or both ARM).
1.97.0
- Added support for private conda channels when using create_software_environment. These will be added automatically just like with package sync environments.
- Add --scheduler-vm-type to coiled batch run to allow a non-default scheduler node VM type that's distinct from the worker node VM type.
1.96.1
- Fixed issue with package sync when using mamba auth for authentication with a username and password.
1.96.0
- For coiled batch run, support automatically uploading ".py", ".sh", ".yml", ".yaml", ".txt", ".csv", ".tsv", and ".json" files referenced in your script(s) to cloud VMs. For example, if you run coiled batch run script.sh and script.sh has python foo.py data.csv, then script.sh, foo.py, and data.csv will automatically be uploaded and copied to the VMs running your code. This isn't intended for uploading large files, so files larger than 32 kB are skipped.
1.95.0
- When using coiled.Cluster(..., mount_bucket="..."), the timeout for mounting is now 30 seconds by default and configurable via coiled.mount-bucket.timeout in your Dask config.
- For coiled batch run, avoid extra worker VMs by accounting for tasks running on the scheduler when determining the number of worker VMs to provision.
- Fixes an issue where updates to forwarded data credentials were scheduled concurrently.
1.94.2
- Fixes an encoding error on Windows when running coiled quickstart.
1.94.1
- Add cluster_id kwarg to coiled.delete_cluster().
- Support coiled batch run --container [ref] --ignore-container-entrypoint to allow ignoring the entrypoint on a user-specified container for batch jobs. For example, you can use this to run coiled batch run --container amazon/aws-cli --ignore-container-entrypoint -- aws sts get-caller-identity.
1.94.0
- Removed package sync support for Python eggs, which are no longer widely used. If you are still building eggs, please switch to building wheels instead.
- Added workaround for issue where some versions of conda store the incorrect channel URLs in their local metadata for certain packages, which caused package sync builds to fail.
1.93.2
- Allow running the coiled CLI via python -m coiled instead of coiled.
1.93.1
- Better error message when specifying CPU or memory that doesn't match any recommended instance types but does match non-recommended types.
1.93.0
- Cloud VMs for Coiled functions will not be reused if the function's client is closed or closing. This avoids errors when submitting work to an already closed client.
- Improve package sync error message when the local environment has conflicts.
- Use a 4 vCPU minimum for the default GPU VM when CPU/memory aren't specified by the user. AWS already uses g4dn.xlarge as the default GPU instance, but for Google Cloud this will change the default from n1-standard-1 to n1-standard-4.
1.92.1
- Bug fix to correctly handle conda channel URLs that end with /.
1.92.0
- Support Cluster(n_workers=[min, initial, max]) to specify an initial number of workers different from the minimum of the adaptive scaling range. As before, you can use [min, max] to start a cluster that adaptively scales and will have min as the initial number of workers.
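A quick sketch of both forms:

```python
import coiled

# Adaptive cluster scaling between 4 and 20 workers, starting with 10.
cluster = coiled.Cluster(n_workers=[4, 10, 20])

# As before: scale between 4 and 20 workers, starting at the minimum (4).
cluster = coiled.Cluster(n_workers=[4, 20])
```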
1.91.0
- Improved package sync reliability when replacing pip packages that do not have Linux wheels with conda equivalents.
- Fixed package sync handling of working directories with nonalphanumeric characters.
1.90.5
- Minor tweaks to coiled hello.
1.90.4
- Improve reliability of the mount_bucket option to Cluster().
1.90.3
- Minor tweaks to coiled hello.
1.90.2
- Reduce coiled batch run --n-tasks <n> job submission time for very large values of <n>.
- Small bugfix to avoid a package sync error when pyproject.toml has an unexpected format.
1.90.1
- Fix to how coiled hello starts ipython on Windows.
1.90.0
- Minor tweaks to coiled hello.
1.89.0
- Fix bug in cluster.wait_for_workers() and client.wait_for_workers() that caused them to keep waiting after workers were ready.
- Handle intermittent connection problem for the cluster state endpoint while waiting for a new cluster to be ready.
1.88.0
- Add support for conda channels that are not hosted on anaconda.org (including private ones that work with mamba auth login).
1.87.0
- Changes to visual output from coiled batch status.
1.86.0
- Add coiled quickstart as an alias to coiled hello.
1.85.0
- Adjust how we set thread-related environment variables (OMP_NUM_THREADS, etc.) when using coiled run, coiled notebook, @coiled.function, or Cluster(unset_single_threading_variables=True, ...). The new approach should work better for everyone using dask>=2023.9.1.
- When creating a Coiled cluster from something run via coiled run or coiled batch run, only re-use the software environment if GPU requirements match.
1.84.0
- Add support for specifying a host setup script to run on each VM during the setup process. For Dask clusters, use Cluster(..., host_setup_script="local path, or text of script"); for coiled run and coiled notebook start, use --host-setup-script <local path>.
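A minimal sketch of the cluster form, passing the script text inline (the script contents are hypothetical; a local file path also works):

```python
import coiled

# Install an extra OS package on each VM before the cluster software starts.
cluster = coiled.Cluster(
    n_workers=2,
    host_setup_script="#!/bin/bash\nsudo apt-get update && sudo apt-get install -y htop",
)
```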
1.83.0
- Add --spot-policy to coiled batch run, as well as coiled.batch.run(..., spot_policy="spot") for the batch Python API.
- Show a note to the user when using Coiled-hosted with coiled batch run.
- Add support for specifying tags with the coiled.tags configuration option.
- Various tweaks to coiled hello.
1.82.0
- Add coiled batch list CLI for listing Coiled Batch jobs in a workspace. Previously this was done using coiled batch status with no cluster provided; from now on that will instead display the status of the most recent job in a workspace.
- Tweaks to coiled batch run and coiled hello.
- Use shorter URLs for the cluster details page.
1.81.0
- Add coiled batch wait CLI for monitoring progress of Coiled Batch jobs.
- Small changes to some information and error messages.
1.80.0
- Add experimental coiled hello CLI command.
1.79.2
- Bug fix to coiled.batch.run() so that it will default to using just a single VM when appropriate.
1.79.1
- Bug fix for better handling of bash and python scripts submitted as batch jobs.
- Add coiled.aws-sts-expiration-duration-if-unknown config to set the expiration duration on the forwarded AWS STS token when we can't determine the actual expiration.
1.79.0
- For batch jobs, default to allowing the workers to span availability zones. This improves availability for large jobs or harder-to-get VM types. If your job involves a large quantity of intra-cluster traffic, you may want to disable this (with --no-cross-zone or allow_cross_zone=False depending on the API) to avoid the cost of cross-zone traffic.
1.78.0
- Fix cluster and client wait_for_workers() method to only count ready workers.
- Add --docker-shm-size option to coiled run for specifying a non-default size. For example, coiled run --docker-shm-size "3gb" ....
- Add --allow-ssh-from to coiled batch run, for example, coiled batch run --allow-ssh-from me ... or coiled batch run --allow-ssh-from 10.0.0.0/8 ....
1.77.1
- Add coiled.get_dask_client_from_batch_node() to make it easier for code running via coiled batch run (or another API for submitting batch jobs) to get a Dask client for the cluster it's running on.
1.76.1
- For coiled batch run, accept CPU and/or memory ranges. For example, coiled batch run --cpu 2-4 --memory 8GB-16GB.
- For coiled batch status <id>, color the task state and add support for --sort (for example, --sort state,exit_code,array_task_id).
1.76.0
- For batch jobs, ignore dask worker TTL and event loop health (dask runs in the background but isn't used).
- Tweak how coiled batch status shows task duration for long tasks (hours and minutes rather than total seconds).
1.75.0
- For batch jobs, add --no-task-on-scheduler and task_on_scheduler=False to explicitly prevent running a task on the scheduler VM when there's only one task to run.
1.74.0
- Add --host-setup-script to coiled batch run to support running a setup script on the host VM(s) before the batch job is run.
1.73.0
- Drop support for Python 3.8.
- Bug fix to handle whitespace around Coiled API tokens.
- Add coiled.batch.status() and coiled.batch.list_jobs() Python API methods for accessing Coiled Batch job status.
1.72.3
- Fix issue where sometimes local files would be left open after the package sync scan.
1.72.2
- Bug fix to coiled logs so that it will show an error message when no instances match the specified filters, rather than showing logs for all instances in this case.
- For CLI commands that accept a cluster name (like coiled logs foo), this will now match on the most recently created cluster with this name.
- For coiled batch run, add --name to specify the cluster name.
1.72.1
- Fix bug for coiled batch so that the Instance Profile (on AWS VMs) will be used for permissions when not forwarding local AWS credentials.
1.72.0
- When using a public image on Docker Hub, if the image only supports ARM and you haven't explicitly specified that you don't want ARM VMs, we'll automatically switch to using ARM VMs.
- Add coiled.create_package_sync_software_env() to the Python API.
1.71.2
- Bug fix so that scaling down many workers at once does not result in an error from a very long request URL.
1.71.1
- Fix bug in 1.71.0 that caused an exception in coiled.batch.run() if you didn't specify a custom logger.
- Add support for using Azure ephemeral OS disks using the backend_options={"scheduler_ephemeral_os_disk": True, "worker_ephemeral_os_disk": True} cluster keyword argument. Note that only some Azure VM types support this.
1.71.0
- For the coiled batch run CLI, fixed bug where the --gpu argument was ignored for workers.
- For coiled.batch.run(), accept a string for the vm_type kwarg (as well as a list of strings for multiple VM types).
1.70.0
- Add --package-sync-conda-extras and --package-sync-strict options to coiled run and coiled batch run.
1.69.0
- Add coiled.batch.run() Python API for running Coiled Batch Jobs, for example, coiled.batch.run("echo Hello") or coiled.batch.run("script.py", container="ubuntu:latest", ntasks=10).
- For CLIs that allow you to specify environment variables (coiled run, coiled batch run, and coiled notebook), previously you had to specify KEY=VALUE. Now you can also specify KEY to read the value from the KEY local environment variable, or KEY=$OTHER_KEY to read the value from the OTHER_KEY local environment variable.
- For coiled batch run, print the command that will be run for each task.
- For coiled batch run, support %n syntax for --array to limit the number of parallel nodes (VMs) to use, e.g., coiled batch run --array 0-12:3%2 ... runs jobs with IDs 0, 3, 6, 9, and 12 on 2 VMs.
1.68.1
- For coiled batch run --ntasks n, number task IDs from 0 to n-1 (rather than 1 to n).
1.68.0
- Add --secret-env option to coiled batch run for sensitive environment variables that you'd like us to store temporarily only for the duration of your batch job.
- Support = as the separator between --index-url or --extra-index-url and the URL when creating software environments from requirements.txt files.
- Better error message when a workspace name is specified but a slug is expected.
- Bug fix to coiled cluster azure-logs CLI so that all pages of logs will be fetched.
1.67.0
- Timeout (default 1 minute) when creating a wheel from a local package for package sync.
- Add arm kwarg to create_software_environment() and --arm flag to coiled env create CLI (this can be used instead of the architecture kwarg or --architecture CLI option).
- Skip forwarding of local dask config if it is not JSON serializable.
- Add support for the PIP_PYPI_URL environment variable as one of the ways to specify the index URL.
- No longer open port 22 by default on the VM for coiled batch run.
- Handle log lines in coiled logs CLI that rich can't print; when there's a markup exception, we'll use normal printing. If you want to avoid rich for all log lines, you can specify --no-color.
- For coiled batch run, use a single VM node when there's a single task (rather than scheduler + worker nodes); add --tag option for tagging runs.
1.66.1
- Minor tweak to coiled batch run header option parsing.
1.66.0
- Handle another FileNotFoundError that can occur if the directory structure changes while running the scan for package sync.
- Add help text for coiled batch run options; apply VM options to the scheduler node (as well as workers) if you've specified that task(s) should run on the scheduler.
1.65.0
- Enhancements to coiled batch run for jobs that involve coordination between scheduler and worker nodes: a --wait-for-ready-cluster flag so that tasks are only assigned when the entire cluster is ready; a --task-on-scheduler flag to assign a single task to the scheduler node; and environment variables such as COILED_BATCH_PROCESS_TYPE ("scheduler" or "worker"), COILED_BATCH_SCHEDULER_ADDRESS (scheduler node IP address), COILED_BATCH_LOCAL_ADDRESS (IP address of the node where the task is running), and COILED_BATCH_READY_WORKERS (a comma-delimited list of IP addresses).
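A small sketch of how a batch task script might branch on these environment variables (the coordination logic itself is up to your code):

```python
import os

# Set by Coiled on each batch node; see the variable list above.
role = os.environ["COILED_BATCH_PROCESS_TYPE"]  # "scheduler" or "worker"
scheduler_ip = os.environ["COILED_BATCH_SCHEDULER_ADDRESS"]
ready_workers = os.environ["COILED_BATCH_READY_WORKERS"].split(",")

if role == "scheduler":
    print(f"coordinating {len(ready_workers)} ready workers")
else:
    print(f"worker reporting to scheduler at {scheduler_ip}")
```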
1.64.1
- Handle FileNotFoundError that can occur if the directory structure changes while running the scan for package sync.
1.64.0
- Add --keep-existing-access to coiled setup azure CLI.
- Small change to support using package sync with customer-hosted storage accounts in Azure.
1.63.0
- For package sync, correctly parse extra-index-url specified under [tool.uv.pip] as a list rather than a string.
1.62.0
- Fix to coiled setup azure bug that affected some Windows users because of the way the inline JSON was formatted.
- Change to coiled.google.credentials.CoiledShippedCredentials class so that forwarding of the OAuth2 token for Google Cloud data-access credentials is more reliable even when Dask workers have blocked event loops.
- Add coiled logs as an alias for the existing coiled cluster logs CLI, and allow specifying the cluster as coiled logs <cluster> without --cluster.
- Tweaks to the experimental coiled batch CLI.
1.61.0
No changes.
1.60.0
- Small changes to the very experimental coiled batch run CLI; use at your own risk.
1.59.0
- Add a very experimental coiled batch run CLI; use at your own risk.
1.58.1
- Fix issue where starting a cluster with package sync from the / directory would cause the software environment build to fail.
1.58.0
- During coiled setup gcp, Coiled will attempt to temporarily adjust the iam.allowedPolicyMemberDomains organization policy so that the Google service account used for the Log Sink can be granted permission. This is required for newer projects, and temporarily disabling the constraint is the approach that Google recommends.
1.57.0
- Add coiled cluster address CLI for getting the hostname or IP address of a cluster.
- Bug fix so that the countdown for "no client shutdown" is not triggered by mounting a bucket (i.e., the mount_bucket keyword argument or --mount-bucket CLI option).
- The extra_kwargs keyword argument to @coiled.function now takes precedence over the default values used for @coiled.function clusters (e.g., extra_worker_on_scheduler).
1.56.0
- Shorter cluster creation error tracebacks for a few types of errors.
1.55.0
- Fix issue where the package_sync_only argument to Cluster() would be ignored if the deprecated package_sync argument was not also a list.
- Add package_sync_conda_extras option to Cluster() for specifying names of conda packages not present in your local environment that you would like on your cluster.
1.54.0
- Bug fix for coiled cluster azure-logs CLI that affected large clusters.
1.53.0
- Bug fix so that the 2 minute "no client" timeout won't start immediately counting down for adaptive clusters. It will now start counting down only after the user connects a Dask client (as is already the case for non-adaptive clusters).
1.52.0
- When using the mount_bucket keyword argument for clusters, the bucket is now mounted on all workers (as well as the scheduler).
1.51.0
- Minor internal changes to coiled setup aws.
1.50.0
- Correctly classify the (not yet public) C4A series on Google Cloud as ARM.
- For coiled setup aws, minor fix to the scoping on iam:PassRole.
1.49.0
- Add --no-credential-forwarding option to coiled run to disable the default forwarding of local credentials. Use this if you instead want to (e.g.) use an AWS Instance Profile for the code you're running.
1.48.0
- For coiled setup aws, reduce and scope the permissions in the ongoing role used by Coiled.
- Bug fix to remove any extra whitespace around index URLs used for package sync.
1.47.0
- Minor change to avoid a deprecation warning from Dask (about the idempotent kwarg) when using Spark on Coiled.
1.46.0
- Change default cluster behavior to shut down 2 minutes (by default) after all clients have disconnected. You can control this by specifying no_client_timeout, e.g., Cluster(..., no_client_timeout=None) to disable, or Cluster(..., no_client_timeout="30 minutes") to keep the cluster running longer if you expect to re-connect to the cluster.
1.45.0
- Bug fix to enable forwarding the OAuth2 token from local Google Cloud credentials when the local credentials are a service account key.
- Bug fix when printing mutagen sync errors, plus telemetry so Coiled is aware of these errors.
1.44.0
- Idle timeout can be specified using the distributed.scheduler.idle-timeout Dask config value, as well as the Cluster(idle_timeout=...) keyword argument. The kwarg takes precedence (see the sketch after this list).
- For coiled run and coiled notebook, interpret --disk-size <int> as GB (as the docs claimed was the case).
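A quick sketch of the two equivalent ways to set the idle timeout:

```python
import coiled
import dask

# Via Dask config...
dask.config.set({"distributed.scheduler.idle-timeout": "30 minutes"})

# ...or via the keyword argument, which takes precedence if both are set.
cluster = coiled.Cluster(idle_timeout="30 minutes")
```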
1.43.0
- By default, use port 443 for both Dask scheduler comms and the scheduler dashboard. Traffic is routed as appropriate, and both the scheduler and dashboard are secure by default. If you're using custom networking, either make sure your security group/firewall for the scheduler allows ingress to 443 from the client, or use Cluster(..., scheduler_port=8786) to continue to use port 8786 for scheduler comms (the previous default).
1.42.0
- Add more tracking around client cluster scaling calls.
1.41.0
- When specifying both the worker_cpu and worker_memory, or the scheduler_cpu and scheduler_memory, arguments to Cluster, for AWS Coiled will now include some of the more recent r and c instance types (as well as future types).
- Add unset_single_threading_variables kwarg to Cluster() for overriding the default Dask behavior of setting variables such as OMP_NUM_THREADS to 1.
- Fix compatibility issue with prefect>2.19.9.
1.40.0
- For adaptive scaling, when there are requested workers that Coiled is unable to provision and more workers are still desired, adaptive will now re-request those workers. This is likely to help when the workers couldn't be provisioned because of cloud availability limits. The interval at which adaptive will retry is configured using the coiled.adaptive.update-plan-interval Dask config value.
1.39.0
- Add client telemetry so Coiled can track timing and success of various client-side activities, such as preparing to sync the software environment or forwarding local credentials to the cluster. No sensitive data is collected, and telemetry can be disabled by setting coiled.send-client-events (in Dask config) to False.
- Fix link to Google Cloud quotas when using coiled setup gcp --quotas.
1.38.0
- When building software environments, we now use uv to install packages from PyPI by default. This behavior can be disabled by setting the package_sync_use_uv_installer option for coiled.Cluster() to False.
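A one-line sketch, assuming the option is passed as a keyword argument to the cluster constructor:

```python
import coiled

# Fall back to pip rather than uv for PyPI installs during package sync
# (the keyword-argument form is an assumption here).
cluster = coiled.Cluster(package_sync_use_uv_installer=False)
```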
1.37.0
- Add support for mounting S3 and GCS buckets as volumes on cloud VMs. You specify buckets using the mount_bucket keyword argument for coiled.Cluster(), or the --mount-bucket CLI option for coiled run and coiled notebook (see the sketch after this list). This feature is under active development, feedback welcome!
- Default idle timeout for Coiled Functions is now 6 hours (previously: 24). Clusters for Functions still shut down much quicker if a client is no longer attached.
- Unpin backoff to avoid conflicts with any libraries that pin it.
- Workaround to create a temporary directory (when needed) inside the current working directory if unable to create one in the default locations.
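A minimal sketch of the cluster form (the bucket name and URI style are hypothetical):

```python
import coiled

# Mount a bucket as a volume on the cluster VMs.
cluster = coiled.Cluster(
    n_workers=2,
    mount_bucket="s3://my-example-bucket",
)
```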
1.36.0
- Add extra_kwargs keyword argument to @coiled.function for passing any extra arbitrary keyword arguments to the underlying coiled.Cluster class.
- With dask>=2024.7.1, using map with @coiled.function no longer submits tasks in batches, because submitting without batching is no longer slow.
- Add coiled cluster azure-logs CLI command to pull logs from Azure. This is under active development; it's likely that there will be breaking changes to this CLI command in the future.
1.35.0
- Add --ignore-container-entrypoint flag to coiled env create CLI for creating a software environment from a container with an entrypoint that you want Coiled to ignore.
- Fixed incompatibility with importlib_metadata >= 8.0.
1.34.3
- Fixed issue where toml configuration files would only be found in the current working directory.
- Use a default value of 20 minutes for distributed.scheduler.no-workers-timeout, so that clusters with tasks that cannot be assigned a worker will not be stuck forever and will automatically shut down. This functionality relies on distributed>=2024.1.0.
- Fixed issue where default logging was extremely minimal when not creating a cluster in a notebook.
- coiled cluster logs CLI will now pull multiple pages of logs if needed, and will show the label specified with --label.
1.34.2
- Shorter traceback for workspace permission error.
1.34.1
- Better handling of rate limits in cluster state checks.
1.34.0
- Add support for Prefect 3.
1.33.0
- Add coiled cluster list CLI command.
- Tweak error message when Cluster(software=...) is an invalid software environment name format.
1.32.0
- When using coiled run to start additional clusters (for example, when using @coiled.function in the script you're running via coiled run), only re-use the software environment if the VM architecture matches.
1.31.0
- By default, transmit and capture all Dask events to the control-plane. Which events to send is determined by the coiled.analytics.events.allow Dask config.
1.30.0
- coiled run by default will no longer re-use a VM from different coiled run invocations. To run multiple coiled run commands using the same VM, specify the same name with --name <name> for the commands that you want to run on the same VM.
1.29.0
- Show progress when using coiled run --file to upload files from the local machine to the cloud VM.
- When adaptive scaling tries to scale down but the Dask scheduler does not retire some of the workers because they have data that cannot be moved, don't forcibly stop the worker VMs.
- Warn about log volume when any loggers are set to DEBUG in Dask config.
- For CLI commands like coiled cluster ssh or coiled cluster stop that by default use the most recent cluster if no cluster is specified, this is now filtered to clusters that your user created, not the most recent cluster anyone made in your workspace.
- Update the error message in coiled setup aws if you have insufficient permissions, to include the permissions needed for setting up cross-account role assumption.
1.28.1
- Fix bug in coiled setup gcp --enable-gar that prevented adding the policy binding for Coiled to access private Google Artifact Registry repositories.
1.28.0
- Fixed issue where starting a cluster with package sync enabled on Linux would cause package pins to be overly strict.
1.27.2
- Automatically use the private IP address for the client <-> scheduler connection if you've configured Coiled to not assign a public IP address to the scheduler.
1.27.1
- Allow configuring the scheduler port using the coiled.scheduler-port dask config setting.
1.27.0
- For coiled notebook --sync and coiled run --sync, by default edit SSH config for SSH access (which is needed under the hood), rather than using SSH agent. If you want to use SSH agent instead, you can set this in your Dask config by running coiled config set coiled.use_ssh_agent True.
1.26.0
- For coiled notebook --sync and coiled run --sync, add --sync-ignore <path> option; also, always ignore .venv directories when syncing.
1.25.1
- When using coiled run --port ... to start a VM with port(s) exposed, the server address is now shown in the widget.
- Error handling for creating the Coiled config file when the DASK_CONFIG path is a file instead of a directory.
1.25.0
- Allow explicitly setting scopes when forwarding Google credentials with coiled.credentials.google.send_application_default_credentials(). For example, when using Google Earth Engine, you'd use scopes=["https://www.googleapis.com/auth/earthengine"] (or some longer list of desired scopes). See the sketch after this list.
- Add --workspace option to coiled env list and coiled env delete CLI.
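A short sketch of the scopes usage described above:

```python
import coiled
from coiled.credentials.google import send_application_default_credentials

cluster = coiled.Cluster(n_workers=2)

# Forward Application Default Credentials with an explicit Earth Engine scope.
send_application_default_credentials(
    cluster,
    scopes=["https://www.googleapis.com/auth/earthengine"],
)
```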
1.24.0
- Add support for getting credentials from keyring for a private PyPI index.
- Bug fix so that @coiled.function shows initial workers as 0 rather than -1 when n_workers isn't specified.
- Bug fix so that calling coiled.credentials.google.send_application_default_credentials(cluster) won't affect any client(s) you've already gotten using cluster.get_client().
1.23.0
- Explicitly calling cluster.scale(n) now turns off adaptive scaling (if adaptive was being used).
- Fix bug that sometimes caused package sync to not sync local code in the working directory.
1.22.0
- Forward long-lived Google credentials to your coiled run VM using the --forward-gcp-adc CLI flag.
1.21.0
- Minimum and maximum number of workers for adaptive scaling can be specified using the n_workers=[min, max] kwarg with coiled.Cluster and coiled.function. When you don't want adaptive, use (e.g.) n_workers=10 as before to request exactly 10 workers.
- Add allow_cross_zone kwarg to coiled.Cluster() to allow the cluster to have VMs in distinct zones. There's a cost for cross-zone traffic (usually pennies per GB), so this is a bad choice for shuffle-heavy workloads, but can be a good choice for large embarrassingly parallel workloads.
- Bug fix so that scaling down many workers at once does not result in an error from a very long request URL.
- More accurate cloud cost estimate in coiled notebook and coiled run widgets.
- Adjusted retry logic for when Coiled client HTTP requests get a server error response.
1.20.0
- Minor refactoring for compatibility with pyright==1.1.362.
- Add support for setting user-specified resource tags on notebooks with coiled notebook --tag key=value --tag another=value.
1.19.2
- Fix the Credentials block for Prefect so that it's compatible with pydantic>=2.
1.19.1
- Fixed issue where @coiled.function(gpu=True) would raise an InstanceTypeError.
1.19.0
- Re-raise any exception that happens inside code wrapped by coiled.performance_report as the original exception, rather than a generic Exception.
1.18.0
- Added --env for coiled notebook start. These env vars are sent securely to the notebook server after boot and not stored on disk.
1.17.0
- Added allow_ingress_from argument to @coiled.function.
- Added --allow-ingress-from argument to coiled notebook start.
- For @coiled.function, only open port 22 to the client IP address (not 0.0.0.0/0).
- Bug fix for running coiled setup azure in Windows Command Prompt (it doesn't accept single quotation marks around JSON strings).
1.16.0
- Added support to coiled run for string commands. For example, coiled run "python myscript.py".
- When requesting a GPU cluster without specifying CPU or memory limits, we'll default to small instance types (1 to 8 vCPUs).
- Fixed an issue where pytorch would be missing on GPU clusters for some conda environments that did not include other packages that relied on pytorch.
- The coiled client now explicitly sets PIP_REQUIRE_VIRTUALENV=false when calling pip wheel and pip config to avoid issues when PIP_REQUIRE_VIRTUALENV=true is set in the local environment.
- All GPU package sync clusters will now automatically include pynvml in the environment to enable GPU performance metrics.
1.15.0
- Added send_application_default_credentials(cluster) for sending Google Application Default Credentials to the cluster. These are longer-lived, revocable credentials, and can be automatically used by many libraries such as gcsfs. See Long-Lived (Revocable) Application Default Credentials for more details.
- Package sync will now automatically fix common package issues that used to be warnings (e.g., using sklearn instead of scikit-learn).
- Added --root to coiled run for executing commands as root. For example, you could use coiled run --interactive bash --root --keepalive 5m to interactively install software with apt-get.
- Added CoiledShippedCredentials for refreshable Google Cloud credentials using the temporary OAuth2 token shipped by Coiled to the cluster. See Personal OAuth2 Tokens for more details.
1.14.0
- Package sync will now automatically switch to the GPU-enabled torch conda package if you are using a GPU cluster.
- Add --sync to coiled run for syncing files from the local working directory to /scratch/synced on the cloud VM.
- Bug fix so that @coiled.function(gpu=True) will add a GPU to any additional workers (not just the first).
1.13.1
- Another (better) fix for how environment variables such as OMP_NUM_THREADS are unset for CLI jobs.
1.13.0
- When starting Spark, you can now specify memory limits relative to available VM memory. For example, cluster.get_spark(executor_memory_factor=0.9, worker_memory_factor=0.8).
1.12.4
- coiled run --file will now report an error before cluster creation if the file does not exist.
- Fixed typo in coiled setup aws --quotas output.
1.12.3
- Fix bug for coiled run --env and --tag introduced in 1.12.2.
- Fix a bug in how environment variables such as OMP_NUM_THREADS are unset for notebooks and CLI jobs. They were being set to ""; now they're unset before user code runs. Some libraries, such as torch, would give an error for "".
- Fix common dependency conflict between aiobotocore and botocore.
1.12.2
- Support for setting shm_size for Docker (sometimes needed for training on GPUs). For example, Cluster(..., backend_options={"docker_shm_size": "1gb"}).
- Fix to coiled run [...] --detach so that the VM will stay up while your code is running, and by default shut down after your code is no longer running. To keep the VM running for re-use, specify a keepalive value, for example --keepalive 10m.
1.12.1
- For the temporary AWS token shipped from client to cluster for data access, format the expiration datetime string to match the format expected by some non-Python AWS libraries.
- For coiled run path/to/file.py --file /another/dir/to/upload/, preserve the path structure for the implicitly specified file and for explicitly specified directories outside the current working directory.
1.12.0
- Dropped support for Python 3.7.
- We're using the name "workspace" for what we used to call an "account", so throughout the client we're deprecating account= (in the Python API) and --account (in the CLI). You should use workspace= and --workspace instead (although account will continue to work for now).
- Various improvements to make coiled setup azure more user-friendly.
- Added kwarg for passing config when getting a SparkSession for Spark running on Coiled. For example, cluster.get_spark(spark_connect_config={"spark.scheduler.mode": "FAIR"}) to set the scheduling mode.
- Home directory will now be abbreviated as ~ in the package sync widgets.
1.11.0
- Allow user-specified custom subdomain for the hostname. For regular Coiled clusters, use dashboard_custom_subdomain='foo' to have the scheduler dashboard accessible at foo.<account slug>.dask.host. For coiled run, use --subdomain foo.
- Fixed bug that prevented referencing a cluster by name with coiled cluster logs or coiled cluster ssh.
1.10.0
- For coiled run and coiled notebook, default to opening port 22 on the VM only to the public IP address from where the command is run. This can be controlled with --allow-ssh-from (run) and --allow-sync-from (notebook). Coiled relies on SSH in the implementation of both run and notebook, and uses secure, unique-to-each-VM key pairs for SSH authentication. For normal Coiled clusters, you can also explicitly specify a different SSH ingress CIDR using Cluster(allow_ssh_from=...).
1.9.0
- Added support for authenticating with Coiled using a Prefect Block.
- Fixed bugs in coiled setup aws CLI when setting up cross-account role assumption. The default region is now set as desired, and there's now a check to ensure that permissions have propagated before Coiled tries to use them.
1.8.0
- Added support in coiled run for uploading directories (as well as files) using --file <dir>.
- coiled setup aws CLI now uses cross-account role assumption for Coiled's access to your AWS account. For existing users, we recommend transitioning to role assumption instead of using AWS access keys. (You can check which method you're using under Cloud Provider Credentials.)
1.7.0
- Fixed a bug where the notebook created using coiled notebook start --sync would not be able to see updated code in the local working directory (package sync was creating a wheel of the directory and that overrode the synced python source files).
- For coiled run --file foo/bar/file.txt, the directory structure will no longer be lost, so you can run (e.g.) coiled run cat foo/bar/file.txt --file foo/bar/file.txt (where previously you would have wanted coiled run cat file.txt --file foo/bar/file.txt).
- Add --file argument to coiled prefect serve.
1.6.0
- When using map() with a @coiled.function-decorated function, you can now specify errors="skip" to map function calls that fail to None rather than raising an exception when you try to get the result (see the sketch after this list).
- Fixed issue where package sync errors would get displayed multiple times.
- Added local Python environment to package sync and notebook widgets, so you can now tell which Python environment is being synced.
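A small sketch of errors="skip" with a Coiled Function (the failing division is just for illustration, and the exact return ordering is an assumption):

```python
import coiled

@coiled.function()
def invert(x):
    return 1 / x  # fails for x == 0

# With errors="skip", failed calls map to None instead of raising
# when you collect the results.
results = list(invert.map([0, 1, 2], errors="skip"))
print(results)  # e.g. [None, 1.0, 0.5]
```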
1.5.0
- For Google Cloud, if you specify cpu or memory but not both (e.g., Cluster(scheduler_cpu=4)), we'll now only match this on "balanced" instance types. This was already the behavior for AWS. If you want to include memory- and/or compute-optimized instances, you can specify both cpu and memory, or explicitly specify "*" for one of these (e.g., Cluster(scheduler_cpu=4, scheduler_memory="*")).
- Add support for setting user-specified resource tags with coiled run --tag key=value --tag another=value and @coiled.function(..., tags={"key": "value", "another": "value"}).
- Bug fix so that when a Coiled Function is running locally, e.g., @coiled.function(local=True), adaptive scaling will use a reasonable upper limit based on your local machine.
1.4.0
- Add option to save the generated bash script when using Azure setup. This feature is experimental, so please contact us before attempting to use it.
1.3.14
- Bug fix so that use_scheduler_public_ip=False and use_dashboard_https=True can be used together.
1.3.10
- Bug fix so that @coiled.function used from within coiled run or coiled prefect serve will re-use the software environment.
1.3.9
- Bug fix for coiled notebook --sync that affects anyone not using their own Docker image.
- Bug fix for package sync environments that will prevent wheels from being built out of site-packages directories.
1.3.5
- Fix issue with using Coiled clusters from VSCode notebooks.
1.3.0
- Add ability to use custom PyPI URLs with package sync. Just set the global.extra-index-url or global.index-url settings by running pip config set 'global.extra-index-url' "YOUR_CUSTOM_URL".
- Deprecated package_sync argument to Cluster, because it is always enabled by default. Use package_sync_only for restricting the list of packages. Specify container or software args to disable package sync.
1.2.0
- Add ability to include local code in a manually created software environment via the --include-local-code flag for coiled env create and the include_local_code argument for coiled.create_software_environment.
1.1.14
- Add ability to use editable packages with package sync from clusters or notebooks that start other clusters.
1.1.12
- Notify users sooner when they have not configured a cloud provider and try to create a cluster.
1.1.10
- By default, don't include burstable AWS instance types (e.g., t3) in the types you'll get if you specify desired cores and memory. You can still explicitly request these instance types if desired.
- For @coiled.function, use "spot_with_fallback" as the default spot_policy.
- Declare extra optional dependencies for using coiled notebook to simplify installation by allowing pip install coiled[notebook].
1.1.9
- Bug fix for package sync when starting a Coiled cluster from inside a Celery task.
1.1.7
- Bug fix for shipping refreshable temporary AWS credentials when we're unable to determine the expiration of local AWS session credentials.
1.1.6
- Change the default idle timeout for Coiled Notebooks from "24 hours" to "4 hours".
1.1.5
- Fix path issues when using coiled setup gcp on Windows.
1.1.2
- Package sync will now use httpx with HTTP/2 for uploading wheels to S3, to fix a crash when uploading files larger than 2GB.
1.1.1
- coiled prefect serve is an easy way to run your Prefect flow on Coiled. See Prefect + Coiled for more details.
1.1.0
- coiled run now uses SSH under the hood, instead of relying on the dask client. This adds support for --interactive and --detach, and means that local and remote versions of dask don't need to match (this matters if you're using --container). Note that fabric is now a client dependency.
1.0.0
- This has no changes from version 0.9.39. We're just updating the major version to 1 as an overdue acknowledgement of stability.
0.9.35
- Reattaching to an existing cluster is now ~35% faster.
- Improvements to how temporary AWS credentials are shipped from your local machine to the cloud. The upshot is that code running in the cloud will automatically pick up refreshed credentials as needed, rather than sometimes getting errors from trying to use expired credentials.
0.9.34
- Fix to coiled setup gcp for when there's an existing policy in the GCP project with a condition set.
0.9.33
- Fixed package sync KeyError when handling old conda packages that do not have the "constrains" key specified in their metadata.
0.9.32
- Package sync will now default to looking for conda packages on conda-forge if they were installed directly from an archive file (e.g., conda install /path/to/miniconda/pkgs/some_package.conda).
- coiled run CLI now captures what code ran (unless configured not to with coiled.analytics.computation.code.transmit in your dask config).
0.9.15
- Package sync will no longer make a coiled_local_* wheel out of your working directory if that directory is a git repository that you already installed via pip.
0.9.14
- Coiled can now ship an OAuth2 token for Google data access if you have Google Application Default Credentials configured locally. See Personal OAuth2 Tokens for more details.
0.9.10
- Package sync will now upload software environments to the same region as the cluster, which can improve performance.
0.9.8
- When you don't specify instance types, for AWS the second-choice instance type will now be m5.xlarge (older generation, non-burstable) rather than t3.xlarge (similar hardware to m5 but also burstable). The first choice we request from AWS remains m6i.xlarge. If you want to use burstable instance types, you can ask for them explicitly like this: coiled.Cluster(worker_vm_types=["t3.xlarge", "t2.xlarge"]).
- coiled run CLI and the @coiled.function decorator will not set MKL_NUM_THREADS, OMP_NUM_THREADS, or OPENBLAS_NUM_THREADS to 1 by default (Dask sets these to 1 by default). Setting these to 1 makes sense when you have as many dask worker threads as you have cores, but for coiled run and @coiled.function you often want to run a single task that has access to the full resources of the machine.
0.9.6
- Empty string environ values such as cluster = coiled.Cluster(environ={"OMP_NUM_THREADS": ""}) can now be used to disable default Dask values for MKL_NUM_THREADS, OMP_NUM_THREADS, or OPENBLAS_NUM_THREADS.
0.9.5
- Fixed a bug that would create extra coiled_local_ wheels when package sync ran inside a virtualenv.
0.9.4
- Restore Python 3.7 support.
- Renamed @coiled.run(...) to @coiled.function(...).
0.9.0
- Renamed coiled.magic module to coiled.capture_environment.
- When shipping local AWS credentials to the cluster (using a temporary STS token), the default region is set on the cluster to the default AWS region for local AWS sessions. If a different region is desired, you can explicitly set the region in your code.
0.8.14
- Increase the default time before adaptive shuts down workers from 1 minute to 2 minutes. This is implemented by switching the default wait_count in Cluster.adapt from 12 to 24, so that a worker must be suggested for shutdown during 24 consecutive 5-second intervals. Both the interval and wait_count can be controlled by users in Cluster.adapt. We expect this is a better experience for interactive uses of adaptive by avoiding scaledown while waiting for user input. If you want the old behavior, specify cluster.adapt(wait_count=12).
0.8.13
- Resolve an issue that would cause Cluster to wait on workers that could never arrive.
0.8.4
- Improvements to the package sync code that scans for all importable Python files; it now ignores more files that can be safely ignored.
0.8.3
- Fixed a bug that was ignoring the --account option on coiled login.
0.8.2
- Fixed a bug with coiled setup aws.
0.8.1
- coiled login lets you grant access with a single click from your Coiled web account; you no longer need to manually create then copy and paste an API token. If you do wish to log in locally with an existing token, use coiled login --token <token>.
- Resolved a crash on Windows with package sync when dask_labextension is present in the environment.
0.8.0
- If no cluster size is specified, adaptive scaling is now turned on, with a minimum of 4 workers and a maximum of 20. However, we recommend that users specify their cluster size and/or adaptive settings explicitly.
- Package sync will now automatically create wheels from all importable Python packages in sys.path and your current working directory, even if you do not have a setup.py or pyproject.toml file.
- Package sync will now issue warnings if the wheels being built are missing Python files that would have been importable locally.
- Added a new @coiled.run decorator for running functions in the cloud. See the Serverless Functions documentation for more details.
0.7.9
- Fixed a bug in coiled env list where the links to the software environment were incorrect.
0.7.0
- Drop support for Python 3.7.
- When the package_sync_ignore kwarg for coiled.Cluster() is set, package sync strict mode is no longer automatically enabled, even if the client and cluster architectures match.
- coiled notebook start now generates a unique name by default instead of determining the name from the current working directory.
- If the user has not configured any log handler (and is not in iPython/Jupyter), Coiled now attaches a StreamHandler to the logger named "COILED" and sets its level to INFO. This allows us to show important information (e.g. the cluster name) to users who have not set up info-level logging for themselves. Users configuring their own logging are unaffected. To disable this, use coiled.Cluster(..., configure_logging=False).
- Some less important logging info has been changed from INFO to DEBUG.
- coiled cluster ssh --dask lets you attach directly to a shell inside the container running Dask (rather than the host machine).
0.6.10
- coiled notebook start will now block your terminal by default (like jupyter lab does); use --no-block to not block.
0.6.9
- Shorter tracebacks for some client errors; some internal (mostly async-related) functions are no longer included.
0.6.8
- Renamed coiled notebook up/coiled notebook down to coiled notebook start/coiled notebook stop.
0.6.7
- Fixed an issue where coiled package-sync scan would output data in the wrong columns when reporting that a wheel could not be created from local importable Python code.
0.6.5#
Add container keyword argument to coiled.Cluster() so that you can skip the step of explicitly creating a Coiled software environment when using a pre-made container image (see the sketch after this list).
coiled.analytics.list_computations now takes cluster_id or scheduler_id. (This is a minor breaking change to the API, since the previous cluster_id kwarg was in fact interpreted as a scheduler id.)
Fixed an issue resolving software environments when the package name was different for conda and PyPI.
coiled cluster ssh with no cluster specified now connects you to the most recent cluster.
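A minimal sketch of the container keyword; the image name and worker count are illustrative:

import coiled

# Start a cluster directly from a pre-made container image, skipping
# explicit software environment creation.
cluster = coiled.Cluster(container="daskdev/dask:latest", n_workers=4)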
0.6.3#
By default, Coiled will now add a GPU to the scheduler if workers have GPUs, as this is the recommended best practice. You can explicitly set scheduler_gpu=False if you want to override this default.
Package sync will now set the CUDA version (used when installing packages from conda) when syncing to a cluster with GPUs.
0.6.1#
Add arm keyword argument to coiled.Cluster() so you don't need to explicitly specify instance types to make a cluster with ARM instances.
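A minimal sketch; the worker count is illustrative:

import coiled

# Request ARM instances without naming specific instance types.
cluster = coiled.Cluster(arm=True, n_workers=4)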
0.6.0#
create_software_environment and Package Sync now support clusters with ARM instances.
0.5.19#
region is now an optional kwarg on Cluster(...), making it more convenient to run a cluster in a different region from your default region. Including region in backend_options is still supported.
0.5.17#
There's a new CLI utility for setting Dask config related to Coiled: coiled config set. For instance, you could run coiled config set account foo to set your default account to foo, or run coiled config set distributed.diagnostics.computations.nframes 3 to have Coiled analytics collect up to three frames of code.
0.5.6#
create_software_environment can now set CUDA version when installing packages from conda. See GPU for usage.
0.5.5#
Added coiled setup gcp --quotas for checking and setting Google Cloud quotas relevant for using Coiled.
0.5.4#
coiled setup now checks if you have authenticated your Coiled account, and if not, will run coiled login.
Fixed a bug in the link to package sync build logs.
0.4.1#
For container-based software environments, fixed a bug introduced in 0.4.0 related to ENTRYPOINT. By default, the ENTRYPOINT set in the image will now be used (as it was in 0.3.x); you can set use_entrypoint if you want to override this default when creating a software environment from a container image.
0.4.0#
Software environment builds are now much faster. The new system is based on conda instead of building an entire Docker image, and is only in effect if you are using coiled >= 0.4.0. You can continue to use the older system of creating a Docker image by pinning coiled<0.4.0. If you encounter issues, please reach out to support@coiled.io.
0.3.0#
When no software argument is passed, package sync will be used instead of a coiled-runtime software environment.
Minor improvements to coiled setup aws.
Updated Package Sync documentation includes more detailed usage guidelines and a clearer overview of features.
0.2.60#
use_best_zone is now on by default. It's now guaranteed that your scheduler and workers will all be in the same zone, so this option won't result in any interzone network traffic for your cluster.
Added coiled cluster better-logs (name and arguments subject to change) to pull logs from your cluster; it uses your Coiled credentials.
The coiled package has fewer dependencies, so you'll get faster installs and fewer potential conflicts.
Package sync fixes:
- Fixed Windows and .egg format packages
- Fixed Python 3.7 compatibility
- Use the Anaconda defaults channel for Python versions instead of conda-forge; this provides a wider range of versions, though new versions arrive more slowly than on conda-forge
- Fixed a race condition that would sometimes cause hiccups creating a package sync environment
0.2.54#
When specifying both worker_cpu and worker_memory, or both scheduler_cpu and scheduler_memory, as arguments to Cluster, Coiled will now include AWS instance types from the c5, c6i, and r6i families if they match your request. This is useful if you want high-cpu or high-memory instances, where the ratio of CPU cores to memory differs from the usual "balanced" instance types.
When you specify only cpu or memory (and not both), we'll only match "balanced" instance types from the t2, t3, m5, and m6i families on AWS. Previously we also included c5 instances as possibilities.
0.2.49#
Added use_best_zone argument to Cluster (see the sketch below). When you're using this option, the cloud provider will pick the best availability zone (inside the region you've selected for your account). For spot or harder-to-get instance types, this makes it much more likely that you'll get all the workers you've requested. Your workers will all be in the same zone (to avoid cross-zone traffic costs), but one caveat is that your scheduler may end up in a different zone than your workers. This currently defaults to False, but we soon plan to have it default to True.
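A minimal sketch, assuming use_best_zone is passed directly as a Cluster keyword as described above; the worker count is illustrative:

import coiled

# Let the cloud provider pick the best availability zone within your region.
cluster = coiled.Cluster(use_best_zone=True, n_workers=10)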
0.2.38#
Added coiled cluster logs for pulling cluster logs (see API).
Package sync now works for Windows.
Improved documentation for known package sync limitations (see Package Sync Limitations).
0.2.31#
Added shutdown() method on Cluster, useful for stopping a cluster when shutdown_on_close=False (see the sketch after this list).
Added allow_ssh and allow_ingress_from kwargs on Cluster to make it easier to configure the cluster firewall (see How to SSH into your cluster).
Added coiled cluster ssh <cluster name or id> for easily opening an SSH connection to the scheduler. This uses a keypair unique to each cluster (see How to SSH into your cluster).
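A minimal sketch of stopping a cluster explicitly when it isn't shut down on close:

import coiled

cluster = coiled.Cluster(n_workers=4, shutdown_on_close=False)
# ... do work, possibly disconnecting and reconnecting from another session ...
cluster.shutdown()  # stop the cluster explicitly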
0.2.29#
A number of package sync-related fixes, including:
- Fix detection of legacy egg-info metadata
- Improvements to detection of the active package when multiple versions are installed
- Fix ignoring of packages
- Fix detection of egg-linked packages
0.2.28#
Dask configuration (dask.config) from your local client environment will be collected when you start a cluster and applied to the cluster. If you don't want local Dask config sent and applied to your cluster, there's a kwarg you can use to disable this:
coiled.Cluster(send_dask_config=False)
package_sync_strict, aimed at production use of package sync, enforces an identical environment on the cluster, whereas non-strict mode allows slight differences in package versions between client and cluster. Strict mode works best when your client is running on Linux x86, so that the exact same package versions are available locally and on the cluster (see the sketch below).
Bugfix for package sync sometimes using the wrong channel for a package.
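A minimal sketch of strict mode, assuming package_sync_strict is a coiled.Cluster() keyword like the package_sync_ignore kwarg mentioned elsewhere in these notes:

import coiled

# Enforce an identical environment on the cluster; best used from a
# Linux x86 client, as noted above.
cluster = coiled.Cluster(package_sync_strict=True)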
0.2.13#
Removed all Coiled-hosted backend logic from coiled.set_backend_options(); you must now provide credentials if you want to set your backend options with this command.
Removed the following parameters from coiled.set_backend_options(): use_coiled_defaults, customer_hosted, create_vpc.
Removed acr as a registry type, since this registry is not supported.
0.2.11#
We've added support for custom tagging on your clusters (tags on AWS, labels on GCP). This makes it easier to use your cloud provider's billing interface to separately track the cloud costs of different teams or workloads. See Tags for more information.
You can specify a larger disk size using the worker_disk_size keyword argument when creating a cluster. For example, to start a cluster in which each worker has a 200 GB disk (EBS on AWS, Persistent Disk on GCP), you'd call
coiled.Cluster(worker_disk_size=200)
0.2.5#
(Coiled v2) When creating container software environments, we no longer default to overriding the ENTRYPOINT set on the container image. If you're using an image where Coiled should override the entrypoint, you should set the use_entrypoint=False kwarg when creating your software environment. For example, to create a software environment from a RAPIDS image (which uses its entrypoint to start a Jupyter server), you'd call
coiled.create_software_environment(
name="my-rapids-nightly",
container="rapidsai/rapidsai-nightly:cuda11.5-runtime-ubuntu20.04-py3.9",
use_entrypoint=False,
)
(Coiled v2) Fixed issue where creating a Cluster with both cpu/memory and vm_types arguments would prioritize the cpu/memory argument over vm_types. If trying to use both, the client will now raise an exception.
0.2.2#
Released May 23rd, 2022
(Coiled v2) backend_options now lets you specify multiple CIDR blocks to open for ingress to your scheduler using the ingress keyword; see Configure Firewalls for more information.
0.2.1#
Released May 17th, 2022
Relaxes the version constraint on click in the coiled package's dependencies.
0.2.0#
Released May 5th, 2022
This version switches to using v2 clusters (see Manage Hardware).
0.0.78#
Released April 28, 2022
(Coiled v2) You can now use the worker_class argument when creating a Cluster to change the class of the workers created by Coiled.
(Coiled v2) You can now ask for AWS Spot instances when creating a cluster.
(Coiled v2) Various improvements to the Cluster widget.
The coiled package now supports Python 3.10. Note that Python 3.10 is not recommended if you are using the coiled-runtime package, which includes a Dask version (2022.1.0) that does not support Python 3.10.
The CLI command coiled env create, used to create software environments, now accepts an --account option to specify the account to use for the creation of that software environment.
0.0.72#
Released March 29, 2022
No user-facing changes
0.0.70#
Released March 22, 2022
Added a deprecation warning for cluster configurations; this feature will soon be deprecated.
0.0.69#
Released March 17, 2022
No user-facing changes included in this release.
0.0.68#
Released March 9, 2022
Fetching account credentials would sometimes fail if the server briefly responded with an error code. The client will now retry fetching Coiled credentials for your user if the server responds with an error code.
The command coiled.list_instance_types() will now accept exact values or a range of values for cores, memory, and gpus. You can specify a range by passing a list of two values, for example: coiled.list_instance_types(cores=[2, 8]).
When fetching instance types with coiled.list_instance_types(), you can now specify memory values as you would when creating clusters. For example: coiled.list_instance_types(memory="8GiB").
0.0.67#
Released February 25, 2022
This release is the same as 0.0.66; the new version was released to address some versioning issues that the team found.
0.0.66#
Released February 23, 2022
When creating a Cluster, if you specify an account with the keyword argument account= that is not valid, the request will fail earlier. The error message will also contain the account name that you specified.
Updated the error message that coiled.delete_software_environment() returns if the software environment doesn't exist. The error message will now contain the name of the software environment and the account.
0.0.65#
Released February 11, 2022
Fix misleading error message warning about not getting workers when workers don't connect to the scheduler once wait_for_workers completes.
0.0.64#
Released February 10, 2022
This commit was stale and removed
0.0.63#
Released February 9, 2022
Clusters created with coiled.Cluster will now wait for 30% of the requested workers before returning the prompt back to the user.
The coiled.Cluster() constructor accepts a wait_for_workers keyword argument that allows you to increase or decrease the number of workers that need to be created before returning the prompt. Waiting for workers can also be toggled off entirely (see the sketch after this list).
Improved validation for instance types when creating a Cluster.
Added a warning message informing users to run coiled.get_notifications(level="ERROR") when no workers have connected to the scheduler after 10 minutes.
If a Cluster can't get any workers due to availability issues or any other reason, the coiled.Cluster() constructor will now return the last error message from when Coiled tried to create the workers (you need to have wait_for_workers enabled).
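A minimal sketch of these options, assuming wait_for_workers accepts a fraction of the requested workers or False; the values shown are illustrative:

import coiled

# Return once half of the requested workers have arrived.
cluster = coiled.Cluster(n_workers=10, wait_for_workers=0.5)

# Or don't wait for workers at all:
cluster = coiled.Cluster(n_workers=10, wait_for_workers=False)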
0.0.62#
Released January 26, 2022
The command coiled.list_instance_types now returns a list of all available instance types that your cloud provider allows.
You can now specify minimum values for memory, cores, and gpus when using coiled.list_instance_types().
0.0.61#
Released January 12, 2022
Fixed an issue with setting the loop when using a Dask version newer than 2021.11.2.
0.0.60#
Released December 15, 2021
set_backend_options no longer accepts arguments related to Azure backends.
coiled.Cluster now accepts a use_scheduler_public_ip kwarg to configure which scheduler address the Coiled client connects to.
0.0.59#
Released December 13, 2021
Pin dask.distributed to a version prior to 2021.12.0, since that release introduced an incompatibility with coiled.
0.0.58#
Released December 03, 2021
Fix a bug that prevented users’ AWS credentials from being sent to clusters.
0.0.57#
Released December 01, 2021
Add support for managing long-lived API access tokens via the Coiled client.
The Coiled client is tested and supported on Python versions 3.7, 3.8, and 3.9. It raises an exception if you attempt to install it in an environment with a Python version below 3.7, or with Python 3.10.
Removed functionality associated with Coiled Notebooks and Coiled Jobs since they have been deprecated.
0.0.56#
Released November 22, 2021
Users can specify during cluster creation whether to use the public address or the private address of the scheduler to connect to the cluster.
The Python client will raise an AccountFormatError if the account is not a combination of lowercase letters, numbers, or hyphens.
0.0.55#
Released November 11, 2021
Fixed an issue where using coiled login --token in the terminal would show an error message saying that you have run out of credits.
Updated the connection timeout, which should mitigate the timeout error that was sometimes occurring when launching clusters.
You can now customize the firewall/security group that Coiled uses by adding a firewall dictionary to the backend_options keyword argument of the coiled.Cluster constructor.
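A minimal sketch; the keys inside the firewall dictionary ("ports" and "cidr") are assumptions for illustration, not an authoritative schema:

import coiled

cluster = coiled.Cluster(
    backend_options={"firewall": {"ports": [22, 8786, 8787], "cidr": "0.0.0.0/0"}}
)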
0.0.54#
Released October 17, 2021
You can now specify a list of instance types with scheduler_vm_types/worker_vm_types when creating a cluster using the coiled.Cluster() constructor (see the sketch after this list).
You can now select a GPU type by using the gpu_type keyword argument of the coiled.Cluster() constructor.
Added a new command coiled.list_instance_types() to the Coiled client, which returns a list of allowed instance types that you can use while creating your cluster.
Added a new command coiled.list_gpu_types() to the Coiled client, which returns a list of allowed GPU types that you can use while creating your cluster.
You can now specify enable_public_http, enable_public_ssh, and disable_public_ingress when using coiled.set_backend_options() to have more control over the security group that Coiled creates on AWS.
You can now use the cluster's private IP address when interacting with your cluster by using backend_options={"disable_public_ingress": True}, either when creating a cluster with the coiled.Cluster() constructor or when setting your backend with coiled.set_backend_options().
You can now remove port 22 from the AWS security group that Coiled creates in your account by setting the enable_public_ssh flag to False, either in backend_options or when setting your backend with coiled.set_backend_options().
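A minimal sketch of the new instance type arguments; the AWS instance type names are illustrative:

import coiled

cluster = coiled.Cluster(
    scheduler_vm_types=["t3.medium"],
    worker_vm_types=["m5.xlarge", "m5.2xlarge"],
)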
0.0.53#
Released October 13, 2021
Environment variables sent to the cluster with the environ= keyword argument are now converted to strings (see the sketch below).
Added a depagination method, so our list commands (for example, coiled.list_cluster_configurations()) will now return all of the items instead of only the last 50.
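A minimal sketch; the variable names are illustrative, and the non-string values shown are what now get converted to strings:

import coiled

cluster = coiled.Cluster(environ={"MY_FLAG": True, "N_THREADS": 4})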
0.0.52#
Released September 16, 2021
coiled.set_backend_options() no longer supports the deprecated ECS backend.
0.0.51#
Released September 1, 2021
Coiled clusters now support adaptive scaling. To enable it, create a cluster, then run cluster.adapt(maximum=max_number_of_workers).
Removed an unused region parameter from coiled.Cluster(). Cloud provider regions can be set using backend_options=.
coiled.create_notebook() now takes an optional account= parameter like the rest of the API. If there is a conflict between the account specified via the name and the account specified via the account parameter, an error is raised.
0.0.50#
Released August 24, 2021
Another aiobotocore-related fix.
0.0.49#
Released August 20, 2021
Hotfix to support aiobotocore==1.4.0.
0.0.48#
Released August 17, 2021
Hotfix to relax the dependency on typing_extensions in order to conflict less with third-party packages.
0.0.47#
Released August 13, 2021
coiled.set_backend_options() has changed several parameter names, and it is now possible to specify a GCP zone. A VPC will now be created if credentials are provided.
'vm_aws' is now the default backend for coiled.set_backend_options(), in preparation for the deprecation of the 'ecs' backend.
0.0.46#
Released August 2, 2021.
Hotfix to better specify the typing-extensions dependency.
0.0.45#
Released July 28, 2021.
coiled.set_backend_options() now supports specifying a Google Artifact Registry for storing software environments.
Cluster protocols (currently either tls or wss) can now be configured using the Dask configuration system under coiled.protocol (see the sketch after this list).
Cluster scheduler and worker options can now be configured using the Dask configuration system under coiled.scheduler-options and coiled.worker-options.
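A minimal sketch of setting these options through the Dask configuration system; the values shown are illustrative:

import dask

dask.config.set({"coiled.protocol": "wss"})
dask.config.set({"coiled.worker-options": {"nthreads": 4}})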
0.0.44#
Released July 15, 2021.
Users with customer-hosted accounts on Google Cloud Platform can now provide a region (gcp_region_name) to coiled.set_backend_options().
Users can now specify a protocol when creating a Coiled cluster. By default, clusters communicate over TLS ("tls"), but in some restricted environments it can be useful to direct traffic through the Coiled web application over websockets ("wss").
The command line interface for creating a software environment (coiled env create) now accepts an optional --conda-env-name parameter to specify the name of the conda environment into which packages will be installed (defaults to coiled).
0.0.43#
Released June 29, 2021.
Hotfix to remove aiostream dependency
0.0.42#
Released June 29, 2021.
coiled.set_backend_options() now supports configuring your Coiled account to run in your own Google Cloud Platform account.
0.0.41#
Released June 9, 2021.
New function coiled.set_backend_options(), which allows users to set the options for an account (e.g., cloud provider, region, docker registry) from the Python client. Previously this was only available using the Coiled web application.
Fixed a bug in coiled.performance_report() that was preventing performance data from being captured.
Fixed an issue where an error building software environments could result in hanging client sessions.
coiled.Cluster(), coiled.start_job(), coiled.create_software_environment(), and coiled.create_notebook() can now take an optional environ dictionary as an argument, allowing users to pass in environment variables to clusters, jobs, software environments, and notebooks. These environment variables are not encrypted, and so should not be used to store credentials or other sensitive information.
coiled.list_core_usage() now shows additional information about how many credits your account has used for the current program period.
coiled.Cluster() no longer raises a warning if no AWS credentials can be found, since a given cluster may not want or need to use them.
0.0.40#
Released May 18, 2021.
New functions coiled.performance_report() and coiled.list_performance_reports(). coiled.performance_report() is a context manager which captures cluster computation as a Dask performance report, uploads it to Coiled, and hosts it online for later viewing (see the sketch after this list).
New function coiled.get_notifications() returns notifications from resource creation steps in your chosen cloud provider. This can be useful in debugging when resources do not launch as intended.
coiled.create_software_environment() now has an optional argument force_rebuild, defaulting to False, which forces a rebuild of the software environment even if one matching the given specification already exists. There is a new corresponding flag --force-rebuild in the coiled env create command line command.
New functions coiled.cluster_logs() and coiled.job_logs() return logs from Coiled clusters and Coiled jobs, respectively. Cloud.logs() has been renamed to Cloud.cluster_logs() to better distinguish it from Cloud.job_logs().
New function coiled.get_software_info() returns detailed information about a Coiled software environment specification.
coiled.info() has been renamed to coiled.diagnostics(), and now always returns JSON-formatted diagnostic information.
New function coiled.list_user_information() provides information about the currently logged-in user.
New function cloud.health_check() checks the user's connection with the Coiled Cloud application.
coiled login --server <url-for-your-coiled-deployment> now works if there is a trailing slash in the URL.
coiled login --account <team_slug> sets the user's specified account as a config value.
Previously, some coiled functions accepted account as an optional parameter, and others did not. Now the entire API consistently allows users to specify their account with an account= keyword argument. The priority order for choosing an account to make API requests is:
1. Accounts specified via a resource name (where applicable), e.g. name = <account-name>/<software-environment-name>
2. Accounts specified via the account= keyword argument
3. Accounts specified in your Coiled configuration file (i.e. ~/.config/dask/coiled.yaml)
4. The default account associated with your username (as determined by the token you use to log in)
Most of the resource creation functions in the coiled API (e.g., coiled.Cluster() or coiled.create_software_environment()) can take a lot of optional arguments. The order of these arguments in their function invocations is not important, and so they have been turned into keyword-only arguments.
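A minimal sketch of the performance report context manager, assuming a Coiled cluster is already connected; the filename argument is an assumption for illustration:

import coiled
import dask

with coiled.performance_report(filename="my-report"):
    dask.delayed(sum)([1, 2, 3]).compute()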
0.0.39#
Released on May 3, 2021.
Following dask/distributed, we have dropped support for Python 3.6
The arguments for coiled.Cluster() are now keyword-only.
coiled is now more fully type annotated, allowing for better type checking and editor integration.
coiled.Cloud.logs() now has account as an optional second parameter instead of a required first parameter, to be more consistent with the rest of the API.
Fixed a bug where updating the software environment in a cluster configuration did not work.
Add a --private flag to the command line interface for coiled env create.
Fixed a bug where the rich console output from coiled did not work well with the Spyder editor.
Fixed a bug where coiled.Cloud.close() did not properly clean up threads.
0.0.38#
Released on March 25, 2021.
Improve connection error when creating a coiled.Cluster where the local and remote versions of distributed use different protocol versions.
Return the name of newly started jobs for use in other API calls.
0.0.37#
Released on March 2, 2021.
Add core usage count interface
Make startup error more generic and hopefully less confusing
Filter clusters in descending order in coiled.list_clusters()
Add messages to commands and status bar during cluster creation
Don't use the coiled default if the software environment doesn't exist
Handle case when trying to create a cluster with a non-existent software environment
Set minimum click version
Several documentation updates
0.0.36#
Released on February 5, 2021.
Add backend options docs
Fix CLI command install for python < 3.8
Add color to coiled login output
Fix bug with coiled.Cluster(account=...)
De-couple container registry from backend options
0.0.35#
Released on January 29, 2021.
Flatten JSON object if error doesn't have "message"
Enable all Django middleware to run async
Remove redundant test with flaky input mocking
Use util handle_api_exception to handle exceptions
0.0.34#
Released on January 26, 2021.
Update AWS IAM docs
Add --retry/--no-retry option to coiled login
Update default conda env to coiled instead of base
Add worker_memory < "16 GiB" to GPU example
Fix small issues in docs and add note for users in teams
Do not add Python via conda if container is in software spec
Use new Status enum in distributed
0.0.33#
Released on January 15, 2021.
Update post_build to run as POSIX shell
Fix errors due to software environment / account name capitalization mismatches
Automatically use local Python version when creating a pip-only software environment
Improved support for custom Docker registries
Several documentation updates
0.0.32#
Released on December 22, 2020.
Add boto3 dependency
0.0.31#
Released on December 22, 2020.
Add coiled.backend-options config value
Allow selecting which AWS credentials are used
Don't initialize with account when listing cluster configurations
Add support for using custom Docker registries
Add coiled.cluster_cost_estimate
Several documentation updates
0.0.30#
Released on November 30, 2020.
Update API to support generalized backend options
Enable coiled.inspect and coiled.install inside Jupyter
0.0.29#
Released on November 24, 2020.
Add informative error message when AWS GPU capacity is low
Fix bug in software environment creation which caused conda packages to be uninstalled
Add notebook creation functionality and documentation
Generalize backend options
Add support for AWS Fargate spot instances
0.0.28#
Released on November 9, 2020.
Expose private field in list/create/update
More docs for running in users' AWS accounts
Add Dask-SQL example
Use examples account instead of coiled-examples
Add list of permissions for users' AWS accounts
Add example to software environment usage section
Update conda_env_name description
Set default TOC level for Sphinx theme
0.0.27#
Released on October 9, 2020.
Fix AWS credentials error when running in Coiled notebooks
0.0.26#
Released on October 8, 2020.
Handle AWS STS session credentials
Fix coiled depending on older aiobotocore
Only use proxied dashboard address in Jobs
Improve invalid fargate resources error message
Mention team accounts
Support AWS credentials to launch resources on other AWS accounts
Update FAQ with a note on notebooks and Azure support
Add GPU docs
Add jupyterlab example
Add community page
Add tabbed code snippets to doc landing page
Ensure job configuration description and software envs are updated
0.0.25#
Released on September 22, 2020.
Handle redirecting from beta.coiled.io to cloud.coiled.io
Add Prefect example
Update dashboards to go through our proxy
Add descriptions to notebooks
Update cluster documentation
Add Optuna example
0.0.24#
Released on September 16, 2020.
Support overriding cluster configuration settings in coiled.Cluster
Don't require region on cluster creation
Add links to OSS licenses
Add ability to upload files
Add access token for private repos
0.0.23#
Released on September 4, 2020.
Fixed bug where specifying name in a conda spec would cause clusters to not be launched
Open external links in a separate browser tab in the docs
Explicitly set the number of worker threads to the number of CPUs requested if not otherwise specified
Improvements to Coiled login behavior
Update to using coiled/default as our default base image for software environments
Several documentation updates
0.0.22#
Released on August 27, 2020.
Add AWS multi-region support
Log informative message when rebuilding a software environment Docker image
Remove link to Getting Started guide from coiled login output
Update distributed version pinning
Add support for running non-Dask code through Coiled Jobs
Several documentation updates
0.0.21#
Add logs to web UI
Verify worker count during cluster creation
Raise a more informative error when a solved conda spec is not available
Improve docker caching when building environments
0.0.20#
Allow ‘target’ conda env in creating software environment (#664)
Start EC2 instances in the right subnets (#689)
0.0.19#
Added support for installing pip packages with coiled install
Support Python 3.8 on Windows with explicit ProactorEventLoop
Updated default coiled.Cluster configuration to use the current Python version
Updated dependencies to include more flexible version checking in distributed
Don't scale clusters that we're re-connecting to
Added support for using custom worker and scheduler classes
0.0.18#
Released August 8, 2020.
Add --token option to coiled login
Add post_build= option to coiled.create_software_environment
Add back support for Python 3.6
Remove extra newline from websocket output
Remove coiled upload from public API
Add coiled env CLI command group
Several documentation updates
0.0.17#
Released July 31, 2020.
Move documentation page to docs.coiled.io
Added --version flag to coiled CLI
Raise an informative error when using an outdated version of the coiled Python API
Several documentation updates
Added coiled.Cluster.get_logs method
Added top-level coiled.config attribute
Use fully qualified coiled.Cluster name in the cluster's interactive IPython repr
0.0.16#
Released July 27, 2020.
Added getting started video to docs.
Added support for GPU-enabled workers.
Added new documentation page on configuring JupyterLab.
Added support for specifying pip, conda, and/or container inputs when creating software environments.
Remove account argument from coiled.delete_software_environment.
Added cost and feedback FAQs.
0.0.15#
Released July 22, 2020.
Removed “cloud” namespace in configuration values.
Several documentation updates.
Added new security and privacy page to the docs.
Added coiled upload command for creating a Coiled software environment from a local conda environment.
Added tests for command line tools.
0.0.14#
Released July 17, 2020.
0.0.13#
Released July 16, 2020.
Update “Getting Started” documentation page.
Update coiled.create_software_environment to use the name provided by the conda= input, if provided.
Send AWS credentials when making a Cluster object.
0.0.12#
Released July 14, 2020.
Switch to using the full coiled Python namespace and rename CoiledCluster to coiled.Cluster
Raise an informative error when attempting to create a cluster with a non-existent cluster configuration
Bump supported aiobotocore version to aiobotocore>=1.0.7
Add coiled install command to create conda software environments locally
Repeated calls to Cloud.create_cluster_configuration will now update an existing configuration
0.0.11#
Released July 9, 2020.
Don’t shut down clusters if we didn’t create them
Slim down the outputs of list_software_environments and list_cluster_configurations
0.0.10#
Released July 8, 2020.
Use websockets to create clusters due to long-running requests
Avoid excess endlines when printing out status in the CLI
Allow calling coiled env create repeatedly on the same environment
0.0.9#
Released July 7, 2020.
Change default to coiled/default
Add coiled login CLI command
Use account namespaces everywhere, remove account= keyword
Allow the use of public environments and configurations
0.0.8#
Released on July 1, 2020.
Update to use new API endpoint scheme
Adds conda env create command line interface
0.0.7#
Released on June 29, 2020.
Adds Cloud.create_software_environment, Cloud.delete_software_environment, and Cloud.list_software_environments methods
Adds Cloud.create_cluster_configuration, Cloud.delete_cluster_configuration, and Cloud.list_cluster_configurations methods
Update Cloud object to use a token rather than a password
Changed name of package from coiled_cloud to coiled
0.0.6#
Released on May 26, 2020.
Includes requirements.txt in MANIFEST.in
0.0.5#
Released on May 26, 2020.
Includes versioneer in MANIFEST.in
0.0.4#
Released on May 26, 2020.
Adds LICENSE to project
0.0.3#
Released on May 21, 2020.
Deprecations#
Renamed Cluster to CoiledCluster