Definitions:
A workflow is described by a JSON config file and contains information for the workflow's steps, step dependencies, GCE/GCP/GCS credentials/configuration, and file resources. The config has the following fields (note: all workflow and step field names are case-insensitive, but we suggest upper camel case):
Field Name | Type | Description |
---|---|---|
Name | string | The name of the workflow. Must be between 1 and 20 characters and match the regex [a-z]([-a-z0-9]*[a-z0-9])? |
Project | string | The GCE- and GCS-API-enabled GCP project in which to run the workflow. If no project is given and Daisy is running on a GCE instance, that instance's project will be used. |
Zone | string | The GCE zone in which to run the workflow. If no zone is given and Daisy is running on a GCE instance, that instance's zone will be used. |
OAuthPath | string | A local path to JSON credentials for your Project. These credentials should have full GCE permission and read/write permission to GCSPath. If credentials are not provided here, Daisy will look for locally cached user credentials such as those generated by gcloud init. |
GCSPath | string | Daisy will use this location as scratch space and for logging/output results. If no GCSPath is given, Daisy will create a bucket in the project to use; subsequent runs will reuse this bucket. |
DefaultTimeout | string | The default timeout to use for all steps with no specified timeout, defaults to 10m. |
Sources | map[string]string | A map of destination paths to local and GCS source paths. These sources will be uploaded to a subdirectory in GCSPath. The sources are referenced by their key name within the workflow config. See Sources below for more information. |
Vars | map[string]string | A map of key value pairs. Vars are referenced by “${key}” within the workflow config. Caution should be taken to avoid conflicts with autovars. |
Steps | map[string]Step | A map of step names to Steps. See Steps below for more information. |
Dependencies | map[string]list(string) | A map of step names to a list of step names. This defines the dependencies for a step. Example: a step “foo” has dependencies on steps “bar” and “baz”; the map would include “foo”: [“bar”, “baz”]. |
Example workflow config:
{
"Name": "my-wf",
"Project": "my-project",
"Zone": "us-central1-f",
"OAuthPath": "path/to/my/creds.json",
"GCSPath": "gs://my-bucket/some/path/",
"Sources": {
"foo": "local/path/to/file1",
"bar": "gs://gcs/path/to/file2"
},
"Vars": {
"step1": "step1 name",
"step2": "step2 name"
},
"Steps": {
"${step1}": ...,
"${step2}": ...,
"step3 name": ...
},
"Dependencies": {
"${step2}": ["${step1}"],
"step3-name": ["${step2}"]
}
}
Daisy will upload any workflow sources to the sources directory in GCS prior to running the workflow. The Sources field in a workflow JSON file is a map of 'destination' to 'source' file. Sources can be a local or GCS file or directory. Directories will be recursively copied into destination. The GCS path for the sources directory is available via the autovar ${SOURCESPATH}.
In this example, the local file ./path/to/startup.sh will be copied to startup.sh in the sources directory. Similarly, the GCS file gs://my-bucket/some/path/install.py will be copied to install.py. The contents of paths referencing directories, like ./path/to/drivers_folder and gs://my-bucket/my-files, will be recursively copied to the directories drivers and files in GCS, respectively.
"Sources": {
"startup.sh": "./path/to/startup.sh",
"install.py": "gs://my-bucket/some/path/install.py",
"drivers": "./path/to/drivers_folder",
"files": "gs://my-bucket/my-files"
}
The Steps field is a named set of executable steps. It is a map of a step's name to the step's type and configuration. Step names must begin with a letter and contain only letters, numbers, and hyphens. For each individual step, you set one step type and the type's associated fields. You may optionally set a step timeout using Timeout. Timeout uses Golang's time.Duration string format and defaults to "10m" (10 minutes). As with workflow fields, step field names are case-insensitive, but we suggest upper camel case.
This example has steps named "step1" and "step2". "step1" has a type of "<STEP 1 TYPE>" and a timeout of 2 hours. "step2" has a type of "<STEP 2 TYPE>" and uses the default timeout of 10 minutes.
"Steps": {
"step1": {
"<STEP 1 TYPE>": {
...
},
"Timeout": "2h"
},
"step2": {
"<STEP 2 TYPE>": {
...
}
}
}
Attaches a GCE disk to an instance. See https://cloud.google.com/compute/docs/reference/latest/instances/attachDisk for the JSON representation; Daisy uses the same representation with a few modifications:
Field Name | Type | Description |
---|---|---|
Source | string | The name of the disk to attach, either disk partial URLs or workflow-internal disk names are valid. |
Added fields:
Field Name | Type | Description |
---|---|---|
Instance | string | The name of the instance to attach this disk to, either instance partial URLs or workflow-internal instance names are valid. |
Example: the first is an example of attaching a disk referenced by its Daisy name to an instance also referenced by its Daisy name. This requires that both are created as part of the current workflow. The second is an example of using partial URLs for both the disk and the instance.
"step-name": {
"AttachDisks": [
{
"Source": "my-disk",
"Instance": "my-instance"
},
{
"Source": "projects/MYPROJECT/zones/MYZONE/disks/my-disk",
"Instance": "projects/MYPROJECT/zones/MYZONE/instances/my-instance"
}
]
}
Detaches a GCE disk from an instance. See https://cloud.google.com/compute/docs/reference/latest/instances/detachDisk for the JSON representation; Daisy uses the same representation with a few modifications:
Field Name | Type | Description |
---|---|---|
DeviceName | string | The name of the disk to detach, either disk partial URLs or workflow-internal disk names are valid. |
Added fields:
Field Name | Type | Description |
---|---|---|
Instance | string | The name of the instance to detach this disk from, either instance partial URLs or workflow-internal instance names are valid. |
Example: the first step attaches a disk and the second detaches it.
"attach-step": {
"AttachDisks": [
{
"Source": "my-disk",
"Instance": "my-instance"
}
]
},
"detach-step": {
"DetachDisks": [
{
"DeviceName": "my-disk",
"Instance": "my-instance"
}
]
}
Creates GCE disks. A list of GCE Disk resources. See https://cloud.google.com/compute/docs/reference/latest/disks for the Disk JSON representation. Daisy uses the same representation with a few modifications:
Field Name | Type | Description of Modification |
---|---|---|
Name | string | If RealName is unset, the literal disk name will have a generated suffix for the running instance of the workflow. |
SourceImage | string | Either image partial URLs or workflow-internal image names are valid. |
Type | string | Optional. Defaults to “pd-standard”. Either disk type partial URLs or disk type names are valid. |
Added fields:
Field Name | Type | Description |
---|---|---|
Project | string | Optional. Defaults to workflow’s Project. The GCP project in which to create the disk. |
Zone | string | Optional. Defaults to workflow’s Zone. The GCE zone in which to create the disk. |
NoCleanup | bool | Optional. Defaults to false. Set this to true if you do not want Daisy to automatically delete this disk when the workflow terminates. |
RealName | string | Optional. If set, Daisy will use this as the resource name instead of generating a name. Be advised: this circumvents Daisy's efforts to prevent resource name collisions. |
Example: the first is a standard PD disk created from a source image, the second is a blank PD SSD.
"step-name": {
"CreateDisks": [
{
"Name": "disk1",
"SourceImage": "projects/debian-cloud/global/images/family/debian-9"
},
{
"Name": "disk2",
"SizeGb": "200",
"Type": "pd-ssd"
}
]
}
Resizes GCE disks. A list of GCE ResizeDisk resources. See https://cloud.google.com/compute/docs/reference/latest/disks/resize for the ResizeDisk JSON representation. Daisy uses the same representation with a few modifications:
Field Name | Type | Description of Modification |
---|---|---|
Name | string | If RealName is unset, the literal disk name will have a generated suffix for the running instance of the workflow. |
Added fields:
Field Name | Type | Description |
---|---|---|
Project | string | Optional. Defaults to workflow’s Project. The GCP project in which to create the disk. |
Zone | string | Optional. Defaults to workflow’s Zone. The GCE zone in which to create the disk. |
NoCleanup | bool | Optional. Defaults to false. Set this to true if you do not want Daisy to automatically delete this disk when the workflow terminates. |
RealName | string | Optional. If set, Daisy will use this as the resource name instead of generating a name. Be advised: this circumvents Daisy's efforts to prevent resource name collisions. |
Example: resizes a previously created disk named "disk1".
"resizes": {
"ResizeDisks": [
{
"Name": "disk1",
"SizeGb": "128"
}
]
}
Creates GCE forwarding rules. A list of GCE ForwardingRule resources. See https://cloud.google.com/compute/docs/reference/latest/forwardingRules for the ForwardingRule JSON representation. Daisy uses the same representation.
Example: a ForwardingRule is created to forward TCP traffic on port 80 to the instance inst-1, which was added to the TargetInstance target-instance-1.
"step-name": {
"CreateForwardingRules": [
{
"Name": "forwarding-rule-1",
"Target": "target-instance-1",
"PortRange": "80",
"IPProtocol": "TCP"
}
]
},
"dependency-step": {
"CreateTargetInstances": [
{
"Name": "target-instance-1",
"Instance": "inst-1"
}
]
}
Creates GCE images. A list of GCE Image resources. See https://cloud.google.com/compute/docs/reference/latest/images for the Image JSON representation. Daisy uses the same representation with a few modifications:
Field Name | Type | Description of Modification |
---|---|---|
Name | string | If RealName is unset, the literal image name will have a generated suffix for the running instance of the workflow. |
RawDisk.Source | string | Either a GCS path or a key from Sources is valid. |
SourceDisk | string | Either disk partial URLs or workflow-internal disk names are valid. |
SourceImage | string | Either image partial URLs or workflow-internal image names are valid. |
RawDisk.Source, SourceDisk, and SourceImage all set the image's source. For this reason, they are mutually exclusive; only one should be present in a CreateImages step.
Added fields:
Field Name | Type | Description |
---|---|---|
Project | string | Optional. Defaults to the workflow Project. The GCP project in which to create this image. |
GuestOsFeatures | []string | Optional. Along with the GCE JSON API’s more complex object structure, Daisy allows the use of a simple list. |
NoCleanup | bool | Optional. Defaults to false. Set this to true if you do not want Daisy to automatically delete this image when the workflow terminates. |
RealName | string | Optional. If set, Daisy will use this as the resource name instead of generating a name. Be advised: this circumvents Daisy's efforts to prevent resource name collisions. |
This CreateImages example creates an image from a source disk.
"step-name": {
"CreateImages": [
{
"Name": "image1",
"SourceDisk": "disk2",
"GuestOsFeatures": ["Feature1", "Feature2"]
}
]
}
This CreateImages example creates three images. image1 is created from a source in the workflow's Sources and will not be cleaned up by Daisy. image2 is created from a GCS path and will use my-image2 as the resource name. Lastly, image3 is created from a disk from the workflow and will be created in a different project from the workflow's specified Project.
"step-name": {
"CreateImages": [
{
"Name": "image1",
"RawDisk": {
"Source": "my-source"
},
"NoCleanup": true
},
{
"Name": "image2",
"RawDisk": {
"Source": "gs://my-bucket/image.tar.gz"
},
"RealName": "my-image2"
},
{
"Name": "image3",
"SourceDisk": "my-disk",
"Project": "my-other-project"
}
]
}
Creates GCE machine images. A list of GCE Machine Image resources. See https://cloud.google.com/compute/docs/reference/rest/beta/machineImages for the Machine Image JSON representation. Daisy uses the same representation with a few modifications:
Field Name | Type | Description of Modification |
---|---|---|
Name | string | If RealName is unset, the literal machine image name will have a generated suffix for the running instance of the workflow. |
Added fields:
Field Name | Type | Description |
---|---|---|
Project | string | Optional. Defaults to the workflow Project. The GCP project in which to create this machine image. |
NoCleanup | bool | Optional. Defaults to false. Set this to true if you do not want Daisy to automatically delete this machine image when the workflow terminates. |
RealName | string | Optional. If set, Daisy will use this as the resource name instead of generating a name. Be advised: this circumvents Daisy's efforts to prevent resource name collisions. |
This CreateMachineImages example creates a machine image from a source instance.
"step-name": {
"CreateMachineImages": [
{
"Name": "image1",
"SourceInstance": "instance1",
}
]
}
Creates GCE instances. A list of GCE Instance resources. See https://cloud.google.com/compute/docs/reference/latest/instances for the Instance JSON representation. Daisy uses the same representation with a few modifications:
Field Name | Type | Description of Modification |
---|---|---|
Name | string | If RealName is unset, the literal instance name will have a generated suffix for the running instance of the workflow. |
Disks[].Boot | bool | Now unused. First disk automatically has boot = true. All others are set to false. |
Disks[].InitializeParams.DiskType | string | Optional. Will prepend “projects/PROJECT/zones/ZONE/diskTypes/” as needed. This allows user to provide “pd-ssd” or “pd-standard” as the DiskType. |
Disks[].InitializeParams.SourceImage | string | Either image partial URLs or workflow-internal image names are valid. |
Disks[].Mode | string | Now Optional. Now defaults to “READ_WRITE”. |
Disks[].Source | string | Either disk partial URLs or workflow-internal disk names are valid. |
MachineType | string | Now Optional. Now defaults to “n1-standard-1”. Either machine type partial URLs or machine type names are valid. |
Metadata | map[string]string | Optional. Instead of the GCE JSON API’s more complex object structure, Daisy uses a simple key-value map. Daisy will provide metadata keys daisy-logs-path , daisy-outs-path , and daisy-sources-path . |
NetworkInterfaces[] | list | Now Optional. Now defaults to [{"network": "global/networks/default", "accessConfigs": [{"type": "ONE_TO_ONE_NAT"}]}] . |
NetworkInterfaces[].Network | string | Either network partial URLs or workflow-internal network names are valid. |
NetworkInterfaces[].AccessConfigs[] | list | Now Optional. Now defaults to [{"type": "ONE_TO_ONE_NAT"}] . |
Added fields:
Field Name | Type | Description |
---|---|---|
Scopes | list(string) | Optional. Defaults to ["https://www.googleapis.com/auth/devstorage.read_only"] . Only used if serviceAccounts is not used. Sets default service account scopes by setting serviceAccounts to [{"email": "default", "scopes": <value of Scopes>}] . For example, if you wanted to give the default service account read-write access to GCS (see https://cloud.google.com/storage/docs/authentication#oauth-scopes), you’d use ["https://www.googleapis.com/auth/devstorage.read_write"] . |
StartupScript | string | Optional. A source file from Sources. If provided, metadata will be set for startup-script-url and windows-startup-script-url . |
Project | string | Optional. Defaults to workflow's Project. The GCP project in which to create the instance. |
Zone | string | Optional. Defaults to workflow's Zone. The GCE zone in which to create the instance. |
NoCleanup | bool | Optional. Defaults to false. Set this to true if you do not want Daisy to automatically delete this instance when the workflow terminates. |
RealName | string | Optional. If set, Daisy will use this as the resource name instead of generating a name. Be advised: this circumvents Daisy's efforts to prevent resource name collisions. |
This CreateInstances step example creates an instance with two attached disks, with machine type n1-standard-4, and with metadata “key” = “value”. The instance will have default scopes and will be attached to the default network.
"step-name": {
"CreateInstances": [
{
"Name": "instance1",
"Disks": [
{"Source": "disk1"},
{"Source": "zones/foo/disks/disk2", "Mode": "READ_ONLY"}
],
"MachineType": "n1-standard-4",
"Metadata": {"key": "value"}
}
]
}
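The StartupScript and Scopes fields described above can be combined with the standard instance fields. As a sketch (the names instance2, disk1, and startup.sh are illustrative, and startup.sh is assumed to be a key in this workflow's Sources), such a step might look like:

```json
"step-name": {
  "CreateInstances": [
    {
      "Name": "instance2",
      "Disks": [{"Source": "disk1"}],
      "StartupScript": "startup.sh",
      "Scopes": ["https://www.googleapis.com/auth/devstorage.read_write"]
    }
  ]
}
```

Because StartupScript is set, Daisy will populate the startup-script-url and windows-startup-script-url metadata keys to point at the uploaded source.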
Creates GCE target instances. A list of GCE TargetInstance resources. See https://cloud.google.com/compute/docs/reference/latest/targetInstances for the TargetInstance JSON representation. Daisy uses the same representation.
Example: a TargetInstance is created and bound to inst-1.
"step-name": {
"CreateTargetInstances": [
{
"Name": "target-instance-1",
"Instance": "inst-1"
}
]
}
Creates GCE networks. A list of GCE Network resources. See https://cloud.google.com/compute/docs/reference/latest/networks for the Network JSON representation. Daisy uses the same representation with a few modifications:
Field Name | Type | Description of Modification |
---|---|---|
Name | string | If RealName is unset, the literal network name will have a generated suffix for the running instance of the workflow. |
Added fields:
Field Name | Type | Description |
---|---|---|
Project | string | Optional. Defaults to the workflow Project. The GCP project in which to create this network. |
NoCleanup | bool | Optional. Defaults to false. Set this to true if you do not want Daisy to automatically delete this network when the workflow terminates. |
RealName | string | Optional. If set, Daisy will use this as the resource name instead of generating a name. Be advised: this circumvents Daisy's efforts to prevent resource name collisions. |
This CreateNetworks example creates a network in the project my-other-project with the real name my-network1. The network will not be automatically cleaned up.
"step-name": {
"CreateNetworks": [
{
"Name": "network1",
"Project": "my-other-project",
"RealName": "my-network1",
"NoCleanup": true
}
]
}
Creates GCE subnetworks. A list of GCE Subnetwork resources. See https://cloud.google.com/compute/docs/reference/latest/subnetworks for the Subnetwork JSON representation. Daisy uses the same representation.
This CreateSubnetworks example creates a subnetwork inside a Daisy-created network.
"create-network": {
"CreateNetworks": [
{
"name": "network_1",
"AutoCreateSubnetworks": true
},
]
},
"create-subnetwork": {
"CreateSubNetworks": [
{
"name": "subnetwork_1",
"network": "network_1",
"ipCidrRange": "10.64.1.0/24"
},
]
},
Creates GCE firewall rules. A list of GCE Firewall resources. See https://cloud.google.com/compute/docs/reference/latest/firewalls for the Firewall JSON representation. Daisy uses the same representation.
This CreateFirewallRules example creates firewall rules for a Daisy-created network.
"create-network": {
"CreateNetworks": [
{
"name": "network_1",
"AutoCreateSubnetworks": true
},
]
},
"create-firewall-rules": {
"CreateFirewallRules": [
{
"name": "allow-icm-ssh",
"network": "network_1",
"allowed": [
{
"IPProtocol": "icmp"
},
{
"IPProtocol": "tcp",
"Ports": ["22"]
}
]
},
]
}
Copies GCS files from Source to Destination. Each copy has the following fields:
Field Name | Type | Description |
---|---|---|
Source | string | Source path. |
Destination | string | Destination path. |
ACLRules | list(ACLRule) | Optional. List of ACLRules to apply to the object. |
An ACLRule has two fields:
Field Name | Type | Description |
---|---|---|
Entity | string | The entity holding the permission, for example a user email or the special value "allUsers". |
Role | string | The access permission to grant: "READER", "WRITER", or "OWNER". |
This CopyGCSObjects step example copies image.tar.gz from the Daisy OUTSPATH to gs://project/my-image.tar.gz and gives the special user "allUsers" read permission.
"step-name": {
"CopyGCSObjects": [
{
"Source": "${OUTSPATH}/image.tar.gz",
"Destination": "gs://project/my-image.tar.gz",
"AclRules": [{"Entity": "allUsers", "Role": "READER"}]
}
]
}
Deletes GCE resources (disks, images, instances, networks). Instances are deleted before all other resources.
Field Name | Type | Description |
---|---|---|
Disks | list(string) | Optional, but at least one of these fields must be used. The list of disks to delete. Values can be 1) Names of disks created in this workflow or 2) the partial URL of an existing GCE disk. |
Images | list(string) | Optional, but at least one of these fields must be used. The list of images to delete. Values can be 1) Names of images created in this workflow or 2) the partial URL of an existing GCE image. |
Instances | list(string) | Optional, but at least one of these fields must be used. The list of VM instances to delete. Values can be 1) Names of VMs created in this workflow or 2) the partial URL of an existing GCE VM. |
Networks | list(string) | Optional, but at least one of these fields must be used. The list of networks to delete. Values can be 1) Names of networks created in this workflow or 2) the partial URL of an existing GCE network. |
GCSPaths | list(string) | Optional, but at least one of these fields must be used. A list of GCS paths to delete. |
This DeleteResources step example deletes an image, an instance, two disks, a network, a GCS object and a GCS ‘folder’ (recursive object delete).
"step-name": {
"DeleteResources": {
"Disks":["disk1", "disk2"],
"Images":["image1"],
"Instances":["instance1"],
"Networks":["network1"],
"GCSPaths":["gs://some/object", "gs://my-bucket/delete-every-object-under-here/"]
}
}
Starts GCE instances that are stopped.
Field Name | Type | Description |
---|---|---|
Instances | list(string) | Optional, but at least one of these fields must be used. The list of VM instances to start. Values can be 1) Names of VMs created in this workflow or 2) the partial URL of an existing GCE VM. |
This StartInstances step example starts an instance in the project.
"step-name": {
"StartInstances": {
"Instances":["instance1"],
}
}
Stops GCE instances without deleting them.
Field Name | Type | Description |
---|---|---|
Instances | list(string) | Optional, but at least one of these fields must be used. The list of VM instances to stop. Values can be 1) Names of VMs created in this workflow or 2) the partial URL of an existing GCE VM. |
This StopInstances step example stops an instance in the project.
"step-name": {
"StopInstances": {
"Instances":["instance1"],
}
}
Includes another Daisy workflow JSON file into this workflow. The included workflow’s steps will run as if they were part of the parent workflow, but follow the IncludeWorkflow step’s dependency map (all steps from an included workflow also depend on the IncludeWorkflow step’s dependencies).
Included workflows have access to all of their parent workflow's resources and vice versa. For example, the disk disk1 created in a previous step will be available to the included workflow, and the instance instance1 created in the included workflow will be available to the parent. The included workflow's Sources are similarly merged with the parent workflow and share the same scratch directory. However, the included workflow will not have access to the parent workflow's variables; all variable substitutions will come from the Vars field in the IncludeWorkflow step or from the included workflow's JSON file.
For more information on using IncludeWorkflow, see Reusing Workflow Files.
IncludeWorkflow step type fields:
Field Name | Type | Description |
---|---|---|
Path | string | The local path to the Daisy workflow file to include. |
Vars | map[string]string | Optional. Key-value pairs of variables to send to the included workflow. |
This IncludeWorkflow step example uses a local workflow file and passes a var, “foo”, to the included workflow.
"step-name": {
"IncludeWorkflow": {
"Path": "./some_subworkflow.wf.json",
"Vars": {
"foo": "bar"
}
}
}
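The included file is itself an ordinary Daisy workflow. A minimal sketch of some_subworkflow.wf.json for this example (the step contents are elided, and the Vars declaration is illustrative) might be:

```json
{
  "Name": "some-subworkflow",
  "Vars": {
    "foo": {"Required": true, "Description": "a var the parent must pass in"}
  },
  "Steps": {
    "sub-step": ...
  }
}
```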
Runs a Daisy workflow as a step. The subworkflow will have some fields overwritten. For example, the subworkflow may specify a GCP Project "foo", but the parent workflow is working in Project "bar". The subworkflow's Project will be overwritten so that the subworkflow is also running in "bar", the same as the parent. Other workflow-level fields, such as Zone and GCSPath, are modified by the parent in the same way.
The SubWorkflow step type works similarly to the IncludeWorkflow step type, except that resources (disks, instances, and images) are not shared between the parent workflow and the subworkflow.
SubWorkflow step type fields:
Field Name | Type | Description |
---|---|---|
Path | string | The local path to the Daisy workflow file to run as a subworkflow. |
Vars | map[string]string | Optional. Key-value pairs of variables to send to the subworkflow. Analogous to calling the subworkflow via the commandline with the -variables foo=bar,baz=gaz flag. |
This SubWorkflow step example uses a local workflow file and passes a var, “foo”, to the subworkflow.
"step-name": {
"SubWorkflow": {
"Path": "./some_subworkflow.wf.json",
"Vars": {
"foo": "bar"
}
}
}
Suspends a compute instance.
Suspend step type fields:
Field Name | Type | Description |
---|---|---|
Project | string | Optional. Project in which to suspend the instance. |
Zone | string | Optional. Zone in which to suspend the instance. |
Instance | string | The instance to suspend. |
This step example suspends instance foo.
"step-name": {
"Suspend": {
"Instance": "foo"
}
}
Resumes a compute instance.
Resume step type fields:
Field Name | Type | Description |
---|---|---|
Project | string | Optional. Project in which to resume the instance. |
Zone | string | Optional. Zone in which to resume the instance. |
Instance | string | The instance to resume. |
This step example resumes instance foo.
"step-name": {
"Resume": {
"Instance": "foo"
}
}
Waits for a signal from GCE VM instances. This step will fail if its Timeout is reached or if a failure signal is received. The wait configuration for each VM has the following fields:
Field Name | Type | Description |
---|---|---|
Name | string | The Name or partial URL of the VM. |
Interval | string (Golang’s time.Duration format) | The signal polling interval. |
Stopped | bool | (DEPRECATED: use Status) Use the VM stopping as the signal. |
SerialOutput | SerialOutput (see below) | Parse the serial port output for a signal. |
GuestAttribute | GuestAttribute (see below) | Parse guest attributes for a signal. |
Status | []string | Wait for one of the given strings in the instance status field. |
SerialOutput:
Field Name | Type | Description |
---|---|---|
Port | int64 | The serial port number to listen to. GCE VMs have serial ports 1-4. |
FailureMatch | string or []string | Optional, but this or SuccessMatch must be provided. A string, or array of strings, whose appearance in the serial output indicates failure. |
SuccessMatch | string | Optional, but this or FailureMatch must be provided. A string whose appearance in the serial output indicates the VM performed its task successfully. |
StatusMatch | string | Optional. An informational status line to print out. |
If any serial line matches FailureMatch, SuccessMatch, or StatusMatch, the line from the match onward will be logged. This example step waits for VM "foo" to stop and for a signal from VM "bar":
"step-name": {
"WaitForInstancesSignal": [
{
"Name": "foo",
"Status": ["STOPPED", "TERMINATED"]
},
{
"Name": "bar",
"SerialOutput": {
"Port": 1,
"SuccessMatch": "DaisySuccess:",
"FailureMatch": "DaisyFailure:",
"StatusMatch": "DaisyStatus:"
}
}
]
}
To output to the serial port from a startup script (launched using the StartupScript field of the CreateInstances step type), it is sufficient to write output to standard out: on Unix systems this might be using echo or print, on Windows Write-Host or Write-Console.
GuestAttribute:
Field Name | Type | Description |
---|---|---|
Namespace | string | Optional. The namespace of the key to watch for. Defaults to "daisy". |
KeyName | string | Optional. The key name to watch for. Defaults to "DaisyResult". |
SuccessValue | string | Optional. An expected value to be matched. |
If the key specified by Namespace and KeyName is found, the value will be compared to SuccessValue to determine success or failure of the step. If SuccessValue is not set, any value will be considered a success. This example step waits for VM "foo" to emit a guest attribute with the default key and any value, and for VM "bar" to emit key "CustomKey" with value "Success":
"step-name": {
"WaitForInstancesSignal": [
{
"Name": "foo",
"GuestAttribute": { }
},
{
"Name": "bar",
"GuestAttribute": {
"KeyName": "CustomKey",
"SuccessValue": "Success"
}
}
]
}
Setting guest attributes can be done using system utilities such as curl or from any scripting or programming language. See the public docs for more details.
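As a sketch of what that looks like from inside a VM: with the default WaitForInstancesSignal settings above (namespace "daisy", key "DaisyResult"), signaling amounts to a single HTTP PUT against the metadata server. The URL construction is shown below; the curl invocation itself only works on a GCE VM, so it is left as a comment.

```shell
# Guest-attribute location matching the WaitForInstancesSignal defaults.
NAMESPACE="daisy"
KEY="DaisyResult"
URL="http://metadata.google.internal/computeMetadata/v1/instance/guest-attributes/${NAMESPACE}/${KEY}"
echo "$URL"

# From inside the VM, write the value the step is waiting for:
#   curl -X PUT --data "Success" -H "Metadata-Flavor: Google" "$URL"
```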
Updates instance metadata. This step can update the value of an existing key or add new keys; it will not remove metadata keys.
Field Name | Type | Description |
---|---|---|
Instance | string | The Name or partial URL of the VM. |
Metadata | map[string]string | Simple key-value map of the keys and values to update. |
This UpdateInstancesMetadata step example updates an instance's metadata in the project.
"step-name": {
"UpdateInstancesMetadata": [
{
"Instance": "instance1",
"Metadata": {
"foo" : "bar",
"foobar": "barfoo"
}
}
]
}
Waits for available quotas. Given a list of quotas, wait until all of them are simultaneously available, then return.
Field Name | Type | Description |
---|---|---|
Interval (Optional) | string | The interval at which to poll for quotas (default is 5 seconds). |
Quotas | []AvailableQuota | List of quotas to query for. |
AvailableQuota: a representation of a desired quantity of available quota.
Field Name | Type | Description |
---|---|---|
Metric | string | The name of the quota metric, e.g. "N2_CPUS". |
Region | string | The region whose quota to check. |
Units | float64 | The number of units of the metric that must be available. |
"step-name": {
"WaitForAvailableQuotas": {
"Interval": "1s",
"Quotas": [
{
"Metric" : "N2_CPUS",
"Region": "us-central1-a",
"Units": 42.5,
}
]
}
}
The Dependencies map describes the order in which workflow steps will run. Steps without any dependencies will run immediately, otherwise a step will only run once its dependencies have completed successfully.
In this example, step1 will run immediately as it has no dependencies, step2 and step3 will run after step1 completes, and step4 will run after step2 and step3 complete.
{
"Steps": {
"step1": {
...
},
"step2": {
...
},
"step3": {
...
},
"step4": {
...
}
},
"Dependencies": {
"step2": ["step1"],
"step3": ["step1"],
"step4": ["step2", "step3"]
}
}
Vars are a user-provided set of key-value pairs. Vars are used in string substitutions in the rest of the workflow config using the syntax ${key}. Vars can be hardcoded into the workflow config or passed via the commandline. Vars passed via the commandline will override Vars hardcoded into the Daisy config.
Vars can be either a simple key:value pairing or an object with the following fields:
Field Name | Type | Description |
---|---|---|
Value | string | The default value for the variable. |
Required | bool | Optional. Defaults to false. If true, the workflow will fail with an error if the variable is not set or is set to an empty string. |
Description | string | Optional. A description of the variable. |
A few restrictions on Vars:
In this example, var1 is an optional variable with an empty string as the default value, var2 is an optional variable with a default value provided, and var3 is a required variable with no default value. If var3 is not set or is set to an empty string, the workflow will fail with an error.
{
"Zone": "${var2}",
"Vars": {
"var1": "",
"var2": {"Value": "foo-zone", "Description": "default zone to run the workflow in"},
"var3": {"Required": true, "Description": "variable 3"}
}
}
When run, Zone will be set to "foo-zone". But if the user calls Daisy with daisy wf.json -variables var2=other-zone, then Zone will be set to "other-zone" instead of "foo-zone".
Autovars are used the same way as Vars, but are automatically populated by Daisy as a convenience. Here is the exhaustive list of autovars:
Autovar key | Description |
---|---|
ID | The autogenerated random ID for the current workflow run. |
NAME | The workflow’s Name field. |
FULLNAME | The workflow’s Name field in the format ${parent1}-${parent2}-…-${parentN}-${NAME}-${ID}. |
PROJECT | The workflow’s Project field. |
ZONE | The workflow’s Zone field. |
DATE | The date of the current workflow run in YYYYMMDD. |
DATETIME | The date and time of the current workflow run in YYYYMMDDhhmmss. |
TIMESTAMP | The Unix epoch of the current workflow run. |
WFDIR | The directory of the workflow file being run. |
CWD | The current working directory. |
GCSPATH | The workflow’s GCSPath field. |
SCRATCHPATH | The scratch subdirectory of GCSPath that the running workflow instance uses. |
SOURCESPATH | Equivalent to ${SCRATCHPATH}/sources. |
LOGSPATH | Equivalent to ${SCRATCHPATH}/logs. |
OUTSPATH | Equivalent to ${SCRATCHPATH}/outs. |
USERNAME | Username of the user running the workflow. |
Any files set in Sources can have their contents injected into a workflow by using the ${SOURCE:my_source} variable. This is useful for embedding a script or configuration into instance metadata.
{
"Sources": {
"my_source.sh": "./path/to/my/source.sh"
},
"Steps": {
"step1": {
"CreateInstances": [
{
"Name": "instance1",
...
"Metadata": {"my-script": "${SOURCE:my_source.sh}"}
}
]
}
}
}