Kanister Functions are written in Go and are compiled when building the controller. They are referenced by Blueprint phases. A Kanister Function implements the following Go interface:
// Func allows custom actions to be executed.
type Func interface {
    Name() string
    Exec(ctx context.Context, args ...string) (map[string]interface{}, error)
    RequiredArgs() []string
}
Kanister Functions are registered by the return value of Name(), which must be static.
Each phase in a Blueprint executes a Kanister Function. The Func field in a BlueprintPhase is used to look up a Kanister Function. After BlueprintPhase.Args are rendered, they are passed into the Kanister Function's Exec() method.
The RequiredArgs method returns the list of argument names that are required.
The Kanister controller ships with the following Kanister Functions out of the box, which provide integration with Kubernetes:
KubeExec is similar to running
kubectl exec -it --namespace <NAMESPACE> <POD> -c <CONTAINER> [CMD LIST...]
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
pod | Yes | string | name of the pod in which to execute |
container | No | string | (required if pod contains more than 1 container) name of the container in which to execute |
command | Yes | []string | command list to execute |
Example:
- func: KubeExec
name: examplePhase
args:
namespace: "{{ .Deployment.Namespace }}"
pod: "{{ index .Deployment.Pods 0 }}"
container: kanister-sidecar
command:
- sh
- -c
- |
echo "Example"
KubeExecAll is similar to running KubeExec on multiple containers on multiple pods (all specified containers on all pods) in parallel.
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
pods | Yes | []string | list of names of pods in which to execute |
containers | Yes | []string | list of names of the containers in which to execute |
command | Yes | []string | command list to execute |
Example:
- func: KubeExecAll
name: examplePhase
args:
namespace: "{{ .Deployment.Namespace }}"
pods:
- "{{ index .Deployment.Pods 0 }}"
- "{{ index .Deployment.Pods 1 }}"
containers:
- kanister-sidecar1
- kanister-sidecar2
command:
- sh
- -c
- |
echo "Example"
KubeTask spins up a new container and executes a command via a Pod. This allows you to run a new Pod from a Blueprint.
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
image | Yes | string | image to be used for executing the task |
command | Yes | []string | command list to execute |
podOverride | No | map[string]interface{} | specs to override default pod specs with |
Example:
- func: KubeTask
name: examplePhase
args:
namespace: "{{ .Deployment.Namespace }}"
image: busybox
podOverride:
containers:
- name: container
imagePullPolicy: IfNotPresent
command:
- sh
- -c
- |
echo "Example"
ScaleWorkload is used to scale up or scale down a Kubernetes workload. The function only returns after the desired replica state is achieved:
- When reducing the replica count, wait until all terminating pods complete.
- When increasing the replica count, wait until all pods are ready.
Currently the function supports Deployments and StatefulSets.
It is similar to running
kubectl scale deployment <DEPLOYMENT-NAME> --replicas=<NUMBER OF REPLICAS> --namespace <NAMESPACE>
This can be useful if the workload needs to be shut down before processing certain data operations. For example, it may be useful to use ScaleWorkload to stop a database process before restoring files.
Argument | Required | Type | Description |
---|---|---|---|
namespace | No | string | namespace in which to execute |
name | No | string | name of the workload to scale |
kind | No | string | deployment or statefulset |
replicas | Yes | int | The desired number of replicas |
Example of scaling down:
- func: ScaleWorkload
name: examplePhase
args:
namespace: "{{ .Deployment.Namespace }}"
kind: deployment
replicas: 0
Example of scaling up:
- func: ScaleWorkload
name: examplePhase
args:
namespace: "{{ .Deployment.Namespace }}"
kind: deployment
replicas: 1
PrepareData allows running a new Pod that will mount one or more PVCs and execute a command or script that manipulates the data on the PVCs.
The function can be useful when it is necessary to perform operations on the data volumes that are used by one or more application containers. The typical sequence is to stop the application using ScaleWorkload, perform the data manipulation using PrepareData, and then restart the application using ScaleWorkload.
Note
It is extremely important that, if PrepareData modifies the underlying data, the PVCs are not currently in use by an active application container (ensure this by using ScaleWorkload with replicas=0 first). For advanced use cases, concurrent access is possible, but the PV needs to have RWX mode enabled and the volume needs to use a clustered file system that supports concurrent access.
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
image | Yes | string | image to be used to run the command |
volumes | No | map[string]string | Mapping of pvcName to mountPath under which the volume will be available. |
command | Yes | []string | command list to execute |
serviceaccount | No | string | service account info |
podOverride | No | map[string]interface{} | specs to override default pod specs with |
Note
The volumes argument does not support subPath mounts, so the data manipulation logic needs to be aware of any subPath mounts that may have been used when mounting a PVC in the primary application container.
If the volumes argument is not specified, all volumes belonging to the protected object will be mounted at the predefined path /mnt/prepare_data/<pvcName>.
Example:
- func: ScaleWorkload
name: ShutdownApplication
args:
namespace: "{{ .Deployment.Namespace }}"
kind: deployment
replicas: 0
- func: PrepareData
name: ManipulateData
args:
namespace: "{{ .Deployment.Namespace }}"
image: busybox
volumes:
application-pvc-1: "/data"
application-pvc-2: "/restore-data"
command:
- sh
- -c
- |
cp /restore-data/file_to_replace.data /data/file.data
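The typical sequence described above ends by restarting the application. A final ScaleWorkload phase along the lines of the following sketch would bring the workload back up; the phase name StartupApplication is illustrative:
- func: ScaleWorkload
  name: StartupApplication
  args:
    namespace: "{{ .Deployment.Namespace }}"
    kind: deployment
    replicas: 1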
The BackupData function backs up data from a container into any object store supported by Kanister.
Note
It is important that the application includes a kanister-tools sidecar container. This sidecar is necessary to run the tools that capture the path on a volume and store it on the object store.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
pod | Yes | string | pod in which to execute |
container | Yes | string | container in which to execute |
includePath | Yes | string | path of the data to be backed up |
backupArtifactPrefix | Yes | string | path to store the backup on the object store |
encryptionKey | No | string | encryption key to be used for backups |
Outputs:
Output | Type | Description |
---|---|---|
backupTag | string | unique tag added to the backup |
backupID | string | unique snapshot id generated during backup |
Example:
actions:
backup:
type: Deployment
outputArtifacts:
backupInfo:
keyValue:
backupIdentifier: "{{ .Phases.BackupToObjectStore.Output.backupTag }}"
phases:
- func: BackupData
name: BackupToObjectStore
args:
namespace: "{{ .Deployment.Namespace }}"
pod: "{{ index .Deployment.Pods 0 }}"
container: kanister-tools
includePath: /mnt/data
backupArtifactPrefix: s3-bucket/path/artifactPrefix
The BackupDataAll function concurrently backs up data from one or more pods into any object store supported by Kanister.
Note
It is important that the application includes a kanister-tools sidecar container. This sidecar is necessary to run the tools that capture the path on a volume and store it on the object store.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
pods | No | string | pods in which to execute (by default runs on all the pods) |
container | Yes | string | container in which to execute |
includePath | Yes | string | path of the data to be backed up |
backupArtifactPrefix | Yes | string | path to store the backup on the object store; the pod name is appended to this path |
encryptionKey | No | string | encryption key to be used for backups |
Outputs:
Output | Type | Description |
---|---|---|
BackupAllInfo | string | info about backup tag and identifier required for restore |
Example:
actions:
backup:
type: Deployment
outputArtifacts:
params:
keyValue:
backupInfo: "{{ .Phases.backupToObjectStore.Output.BackupAllInfo }}"
phases:
- func: BackupDataAll
name: BackupToObjectStore
args:
namespace: "{{ .Deployment.Namespace }}"
container: kanister-tools
includePath: /mnt/data
backupArtifactPrefix: s3-bucket/path/artifactPrefix
The RestoreData function restores data backed up by the BackupData function. It creates a new Pod that mounts the PVCs referenced by the specified Pod and restores data to the specified path.
Note
It is extremely important that the PVCs are not currently in use by an active application container, as they are required to be mounted to the new Pod (ensure this by using ScaleWorkload with replicas=0 first). For advanced use cases, concurrent access is possible, but the PV needs to have RWX mode enabled and the volume needs to use a clustered file system that supports concurrent access.
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
image | Yes | string | image to be used for running restore |
backupArtifactPrefix | Yes | string | path to the backup on the object store |
backupIdentifier | No | string | (required if backupTag not provided) unique snapshot id generated during backup |
backupTag | No | string | (required if backupIdentifier not provided) unique tag added during the backup |
restorePath | No | string | path where data is restored |
pod | No | string | pod to which the volumes are attached |
volumes | No | map[string]string | Mapping of pvcName to mountPath under which the volume will be available |
encryptionKey | No | string | encryption key to be used during backups |
podOverride | No | map[string]interface{} | specs to override default pod specs with |
Note
The image argument requires the use of the kanisterio/kanister-tools image, since it includes the required tools to restore data from the object store.
Exactly one of the pod and volumes arguments must be specified.
Example:
Consider a scenario where you wish to restore the data backed up by the :ref:`backupdata` function. We will first scale down the application, restore the data, and then scale it back up. For this phase, we will use the backupInfo Artifact provided by the backup function.
- func: ScaleWorkload
  name: ShutdownApplication
  args:
    namespace: "{{ .Deployment.Namespace }}"
    name: "{{ .Deployment.Name }}"
    kind: Deployment
    replicas: 0
- func: RestoreData
  name: RestoreFromObjectStore
  args:
    namespace: "{{ .Deployment.Namespace }}"
    pod: "{{ index .Deployment.Pods 0 }}"
    image: kanisterio/kanister-tools:|version|
    backupArtifactPrefix: s3-bucket/path/artifactPrefix
    backupTag: "{{ .ArtifactsIn.backupInfo.KeyValue.backupIdentifier }}"
- func: ScaleWorkload
  name: StartupApplication
  args:
    namespace: "{{ .Deployment.Namespace }}"
    name: "{{ .Deployment.Name }}"
    kind: Deployment
    replicas: 1
The RestoreDataAll function concurrently restores data backed up by the :ref:`backupdataall` function on one or more pods. It runs a job Pod for each workload Pod concurrently; each job Pod mounts the respective PVCs and restores data to the specified path.
Note
It is extremely important that the PVCs are not currently in use by an active application container, as they are required to be mounted to the new Pods (ensure this by using ScaleWorkload with replicas=0 first). For advanced use cases, concurrent access is possible, but the PV needs to have RWX mode enabled and the volume needs to use a clustered file system that supports concurrent access.
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
image | Yes | string | image to be used for running restore |
backupArtifactPrefix | Yes | string | path to the backup on the object store |
restorePath | No | string | path where data is restored |
pods | No | string | pods to which the volumes are attached |
encryptionKey | No | string | encryption key to be used during backups |
backupInfo | Yes | string | snapshot info generated as output in BackupDataAll function |
podOverride | No | map[string]interface{} | specs to override default pod specs with |
Note
The image argument requires the use of the kanisterio/kanister-tools image, since it includes the required tools to restore data from the object store.
Example:
Consider a scenario where you wish to restore the data backed up by the :ref:`backupdataall` function. We will first scale down the application, restore the data, and then scale it back up. We will not specify pods in the args, so this function will restore data on all pods concurrently. For this phase, we will use the params Artifact provided by the BackupDataAll function.
- func: ScaleWorkload
  name: ShutdownApplication
  args:
    namespace: "{{ .Deployment.Namespace }}"
    name: "{{ .Deployment.Name }}"
    kind: Deployment
    replicas: 0
- func: RestoreDataAll
  name: RestoreFromObjectStore
  args:
    namespace: "{{ .Deployment.Namespace }}"
    image: kanisterio/kanister-tools:|version|
    backupArtifactPrefix: s3-bucket/path/artifactPrefix
    backupInfo: "{{ .ArtifactsIn.params.KeyValue.backupInfo }}"
- func: ScaleWorkload
  name: StartupApplication
  args:
    namespace: "{{ .Deployment.Namespace }}"
    name: "{{ .Deployment.Name }}"
    kind: Deployment
    replicas: 2
The CopyVolumeData function copies data from the specified volume (referenced by a Kubernetes PersistentVolumeClaim) into an object store. This data can be restored into a volume using the :ref:`restoredata` function.
Note
The PVC must not be in use (attached to a running Pod).
If data needs to be copied from a running workload without stopping it, use the :ref:`backupdata` function.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace the source PVC is in |
volume | Yes | string | name of the source PVC |
dataArtifactPrefix | Yes | string | path on the object store to store the data in |
encryptionKey | No | string | encryption key to be used during backups |
podOverride | No | map[string]interface{} | specs to override default pod specs with |
Outputs:
Output | Type | Description |
---|---|---|
backupID | string | unique snapshot id generated when data was copied |
backupRoot | string | parent directory location of the data copied from |
backupArtifactLocation | string | location in objectstore where data was copied |
backupTag | string | unique string to identify this data copy |
Example:
If the ActionSet Object is a PersistentVolumeClaim:
- func: CopyVolumeData
args:
namespace: "{{ .PVC.Namespace }}"
volume: "{{ .PVC.Name }}"
dataArtifactPrefix: s3-bucket-name/path
The DeleteData function deletes the snapshot data backed up by the BackupData function.
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
backupArtifactPrefix | Yes | string | path to the backup on the object store |
backupIdentifier | No | string | (required if backupTag not provided) unique snapshot id generated during backup |
backupTag | No | string | (required if backupIdentifier not provided) unique tag added during the backup |
encryptionKey | No | string | encryption key to be used during backups |
podOverride | No | map[string]interface{} | specs to override default pod specs with |
Example:
Consider a scenario where you wish to delete the data backed up by the
:ref:`backupdata` function.
For this phase, we will use the backupInfo Artifact provided by the backup function.
- func: DeleteData
name: DeleteFromObjectStore
args:
namespace: "{{ .Namespace.Name }}"
backupArtifactPrefix: s3-bucket/path/artifactPrefix
backupTag: "{{ .ArtifactsIn.backupInfo.KeyValue.backupIdentifier }}"
The DeleteDataAll function concurrently deletes the snapshot data backed up by the BackupDataAll function.
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
backupArtifactPrefix | Yes | string | path to the backup on the object store |
backupInfo | Yes | string | snapshot info generated as output in BackupDataAll function |
encryptionKey | No | string | encryption key to be used during backups |
reclaimSpace | No | bool | provides a way to specify if space should be reclaimed |
podOverride | No | map[string]interface{} | specs to override default pod specs with |
Example:
Consider a scenario where you wish to delete all the data backed up by the
:ref:`backupdataall` function.
For this phase, we will use the params Artifact provided by the backup function.
- func: DeleteDataAll
name: DeleteFromObjectStore
args:
namespace: "{{ .Namespace.Name }}"
backupArtifactPrefix: s3-bucket/path/artifactPrefix
backupInfo: "{{ .ArtifactsIn.params.KeyValue.backupInfo }}"
reclaimSpace: true
The LocationDelete function uses a new Pod to delete the specified artifact from an object store.
Argument | Required | Type | Description |
---|---|---|---|
artifact | Yes | string | artifact to be deleted from the object store |
Note
The Kubernetes job uses the kanisterio/kanister-tools image, since it includes all the tools required to delete the artifact from an object store.
Example:
- func: LocationDelete
name: LocationDeleteFromObjectStore
args:
artifact: s3://bucket/path/artifact
The CreateVolumeSnapshot function is used to create snapshots of one or more PVCs associated with an application. It takes an individual snapshot of each PVC, which can then be restored later. It generates an output that contains the snapshot info required for restoring the PVCs.
Note
Currently we only support PVC snapshots on AWS EBS. Support for more storage providers is coming soon!
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
pvcs | No | []string | list of names of PVCs to be backed up |
skipWait | No | bool | initiate but do not wait for the snapshot operation to complete |
When no PVCs are specified in the pvcs argument above, all PVCs in use by a Deployment or StatefulSet will be backed up.
Outputs:
Output | Type | Description |
---|---|---|
volumeSnapshotInfo | string | Snapshot info required while restoring the PVCs |
Example:
Consider a scenario where you wish to back up all PVCs of a deployment. The output of this phase is saved to an Artifact named backupInfo, shown below:
actions:
backup:
type: Deployment
outputArtifacts:
backupInfo:
keyValue:
manifest: "{{ .Phases.backupVolume.Output.volumeSnapshotInfo }}"
phases:
- func: CreateVolumeSnapshot
name: backupVolume
args:
namespace: "{{ .Deployment.Namespace }}"
The WaitForSnapshotCompletion function is used to wait for the completion of snapshot operations initiated using the :ref:`createvolumesnapshot` function.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
snapshots | Yes | string | snapshot info generated as output in CreateVolumeSnapshot function |
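Example:
Assuming the backupInfo Artifact produced by the CreateVolumeSnapshot example above, a phase using this function could look like the following sketch (the phase name is illustrative):
- func: WaitForSnapshotCompletion
  name: waitForSnapshotCompletion
  args:
    snapshots: "{{ .ArtifactsIn.backupInfo.KeyValue.manifest }}"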
The CreateVolumeFromSnapshot function is used to restore one or more PVCs of an application from the snapshots taken using the :ref:`createvolumesnapshot` function. It deletes old PVCs, if present, and creates new PVCs from the snapshots taken earlier.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
snapshots | Yes | string | snapshot info generated as output in CreateVolumeSnapshot function |
Example:
Consider a scenario where you wish to restore all PVCs of a deployment. We will first scale down the application, restore PVCs and then scale up. For this phase, we will make use of the backupInfo Artifact provided by the :ref:`createvolumesnapshot` function.
- func: ScaleWorkload
name: shutdownPod
args:
namespace: "{{ .Deployment.Namespace }}"
name: "{{ .Deployment.Name }}"
kind: Deployment
replicas: 0
- func: CreateVolumeFromSnapshot
name: restoreVolume
args:
namespace: "{{ .Deployment.Namespace }}"
snapshots: "{{ .ArtifactsIn.backupInfo.KeyValue.manifest }}"
- func: ScaleWorkload
name: bringupPod
args:
namespace: "{{ .Deployment.Namespace }}"
name: "{{ .Deployment.Name }}"
kind: Deployment
replicas: 1
The DeleteVolumeSnapshot function is used to delete snapshots of PVCs taken using the :ref:`createvolumesnapshot` function.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
snapshots | Yes | string | snapshot info generated as output in CreateVolumeSnapshot function |
Example:
- func: DeleteVolumeSnapshot
name: deleteVolumeSnapshot
args:
namespace: "{{ .Deployment.Namespace }}"
snapshots: "{{ .ArtifactsIn.backupInfo.KeyValue.manifest }}"
The BackupDataStats function gets statistics about the backed up data from the object store location.
Note
It is important that the application includes a kanister-tools sidecar container. This sidecar is necessary to run the tools that get the information from the object store.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
namespace | Yes | string | namespace in which to execute |
backupArtifactPrefix | Yes | string | path to the object store location |
backupID | Yes | string | unique snapshot id generated during backup |
mode | No | string | mode in which stats are expected |
encryptionKey | No | string | encryption key to be used for backups |
Outputs:
Output | Type | Description |
---|---|---|
mode | string | mode of the output stats |
fileCount | string | number of files in backup |
size | string | size of the files in the backup |
Example:
actions:
backupStats:
type: Deployment
outputArtifacts:
backupStats:
keyValue:
mode: "{{ .Phases.BackupDataStatsFromObjectStore.Output.mode }}"
fileCount: "{{ .Phases.BackupDataStatsFromObjectStore.Output.fileCount }}"
size: "{{ .Phases.BackupDataStatsFromObjectStore.Output.size }}"
phases:
- func: BackupDataStats
name: BackupDataStatsFromObjectStore
args:
namespace: "{{ .Deployment.Namespace }}"
backupArtifactPrefix: s3-bucket/path/artifactPrefix
mode: restore-size
backupID: "{{ .ArtifactsIn.snapshot.KeyValue.backupIdentifier }}"
The DescribeBackups function describes the backups at an object store location.
Note
It is important that the application includes a kanister-tools sidecar container. This sidecar is necessary to run the tools that get the information from the object store.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
backupArtifactPrefix | Yes | string | path to the object store location |
encryptionKey | No | string | encryption key to be used for backups |
Outputs:
Output | Type | Description |
---|---|---|
fileCount | string | number of files in backup object store location |
size | string | size of the files in the backup object store location |
passwordIncorrect | string | true if encryption key is incorrect |
repoDoesNotExist | string | true if object store location does not exist |
Example:
actions:
backupStats:
type: Deployment
outputArtifacts:
backupStats:
keyValue:
fileCount: "{{ .Phases.DescribeBackupsFromObjectStore.Output.fileCount }}"
size: "{{ .Phases.DescribeBackupsFromObjectStore.Output.size }}"
passwordIncorrect: "{{ .Phases.DescribeBackupsFromObjectStore.Output.passwordIncorrect }}"
repoDoesNotExist: "{{ .Phases.DescribeBackupsFromObjectStore.Output.repoDoesNotExist }}"
phases:
- func: DescribeBackups
name: DescribeBackupsFromObjectStore
args:
backupArtifactPrefix: s3-bucket/path/artifactPrefix
The CreateRDSSnapshot function creates an RDS snapshot of a running RDS instance.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
instanceID | Yes | string | ID of the RDS instance to create a snapshot of |
Outputs:
Output | Type | Description |
---|---|---|
snapshotID | string | ID of the RDS snapshot that has been created |
instanceID | string | ID of the RDS instance |
securityGroupID | []string | AWS Security Group IDs associated with the RDS instance |
Example:
actions:
backup:
type: Namespace
outputArtifacts:
backupInfo:
keyValue:
snapshotID: "{{ .Phases.createSnapshot.Output.snapshotID }}"
instanceID: "{{ .Phases.createSnapshot.Output.instanceID }}"
securityGroupID: "{{ .Phases.createSnapshot.Output.securityGroupID }}"
backupID: "{{ .Phases.exportSnapshot.Output.backupID }}"
configMapNames:
- dbconfig
phases:
- func: CreateRDSSnapshot
name: createSnapshot
args:
instanceID: '{{ index .ConfigMaps.dbconfig.Data "postgres.instanceid" }}'
The ExportRDSSnapshotToLocation function spins up a temporary RDS instance from the given snapshot, extracts a database dump, and uploads that dump to the configured object storage.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
instanceID | Yes | string | RDS db instance ID |
namespace | Yes | string | namespace in which to execute the Kanister tools pod for this function |
snapshotID | Yes | string | ID of the RDS snapshot |
dbEngine | Yes | string | one of the RDS db engines. Supported engine(s): PostgreSQL |
username | No | string | username of the RDS database instance |
password | No | string | password of the RDS database instance |
backupArtifactPrefix | No | string | path to store the backup on the object store |
databases | No | []string | list of databases to take backup of |
securityGroupID | No | []string | list of securityGroupIDs to be passed to the temporary RDS instance |
Note
- If the databases argument is not set, a backup of all the databases will be taken.
- If the securityGroupID argument is not set, ExportRDSSnapshotToLocation will find the Security Group IDs associated with the instance with the given instanceID and pass those.
- If the backupArtifactPrefix argument is not set, instanceID will be used as the backupArtifactPrefix.
Outputs:
Output | Type | Description |
---|---|---|
snapshotID | string | ID of the RDS snapshot that has been created |
instanceID | string | ID of the RDS instance |
backupID | string | unique backup id generated during storing data into object storage |
securityGroupID | []string | AWS Security Group IDs associated with the RDS instance |
Example:
actions:
backup:
type: Namespace
outputArtifacts:
backupInfo:
keyValue:
snapshotID: "{{ .Phases.createSnapshot.Output.snapshotID }}"
instanceID: "{{ .Phases.createSnapshot.Output.instanceID }}"
securityGroupID: "{{ .Phases.createSnapshot.Output.securityGroupID }}"
backupID: "{{ .Phases.exportSnapshot.Output.backupID }}"
configMapNames:
- dbconfig
phases:
- func: CreateRDSSnapshot
name: createSnapshot
args:
instanceID: '{{ index .ConfigMaps.dbconfig.Data "postgres.instanceid" }}'
- func: ExportRDSSnapshotToLocation
name: exportSnapshot
objects:
dbsecret:
kind: Secret
name: '{{ index .ConfigMaps.dbconfig.Data "postgres.secret" }}'
namespace: "{{ .Namespace.Name }}"
args:
namespace: "{{ .Namespace.Name }}"
instanceID: "{{ .Phases.createSnapshot.Output.instanceID }}"
securityGroupID: "{{ .Phases.createSnapshot.Output.securityGroupID }}"
username: '{{ index .Phases.exportSnapshot.Secrets.dbsecret.Data "username" | toString }}'
password: '{{ index .Phases.exportSnapshot.Secrets.dbsecret.Data "password" | toString }}'
dbEngine: "PostgreSQL"
databases: '{{ index .ConfigMaps.dbconfig.Data "postgres.databases" }}'
snapshotID: "{{ .Phases.createSnapshot.Output.snapshotID }}"
backupArtifactPrefix: test-postgresql-instance/postgres
The RestoreRDSSnapshot function restores an RDS DB instance either from an RDS snapshot or from a data dump stored in object storage (if snapshotID is not set).
Note
- If snapshotID is set, the function will restore the RDS instance from the RDS snapshot. Otherwise, backupID needs to be set to restore the RDS instance from the data dump.
- While restoring data from an RDS snapshot, if the RDS instance to restore to doesn't exist, it will be created. However, if the data is being restored from object storage (a data dump) and the RDS instance doesn't exist, a new RDS instance will not be created and the operation will result in an error.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
instanceID | Yes | string | RDS db instance ID |
snapshotID | No | string | ID of the RDS snapshot |
username | No | string | username of the RDS database instance |
password | No | string | password of the RDS database instance |
backupArtifactPrefix | No | string | path to store the backup on the object store |
backupID | No | string | unique backup id generated during storing data into object storage |
securityGroupID | No | []string | list of securityGroupID to be passed to temporary RDS instance |
namespace | No | string | namespace in which to execute. Required if snapshotID is nil |
dbEngine | No | string | one of the RDS db engines. Supported engines: PostgreSQL . Required if snapshotID is nil |
Note
- If snapshotID is not set, the restore will be done from the data dump. In that case, the backupID argument is required.
- If the securityGroupID argument is not set, RestoreRDSSnapshot will find the Security Group IDs associated with the instance with the given instanceID and pass those.
Outputs:
Output | Type | Description |
---|---|---|
endpoint | string | endpoint of the RDS instance |
Example:
restore:
inputArtifactNames:
- backupInfo
kind: Namespace
phases:
- func: RestoreRDSSnapshot
name: restoreSnapshots
objects:
dbsecret:
kind: Secret
name: '{{ index .ConfigMaps.dbconfig.Data "postgres.secret" }}'
namespace: "{{ .Namespace.Name }}"
args:
namespace: "{{ .Namespace.Name }}"
backupArtifactPrefix: test-postgresql-instance/postgres
instanceID: "{{ .ArtifactsIn.backupInfo.KeyValue.instanceID }}"
backupID: "{{ .ArtifactsIn.backupInfo.KeyValue.backupID }}"
securityGroupID: "{{ .ArtifactsIn.backupInfo.KeyValue.securityGroupID }}"
username: '{{ index .Phases.restoreSnapshots.Secrets.dbsecret.Data "username" | toString }}'
password: '{{ index .Phases.restoreSnapshots.Secrets.dbsecret.Data "password" | toString }}'
dbEngine: "PostgreSQL"
The DeleteRDSSnapshot function deletes the RDS snapshot specified by snapshotID.
Arguments:
Argument | Required | Type | Description |
---|---|---|---|
snapshotID | No | string | ID of the RDS snapshot |
Example:
actions:
delete:
kind: Namespace
inputArtifactNames:
- backupInfo
phases:
- func: DeleteRDSSnapshot
name: deleteSnapshot
args:
snapshotID: "{{ .ArtifactsIn.backupInfo.KeyValue.snapshotID }}"
Kanister can be extended by registering new Kanister Functions.
Kanister Functions are registered using a mechanism similar to database/sql drivers. To register new Kanister Functions, import a package with those new functions into the controller and recompile it.
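As a minimal sketch, a custom function implementing the interface shown above might look like the following. The echoFunc type, its "Echo" name, and the import path used for the Register helper are illustrative assumptions based on how the built-in functions register themselves from their package init() functions.
package echo

import (
    "context"

    kanister "github.com/kanisterio/kanister/pkg"
)

// echoFunc is a hypothetical custom Kanister Function.
type echoFunc struct{}

func init() {
    // Registering at import time makes the function available once this
    // package is imported into the controller and the controller is recompiled.
    _ = kanister.Register(&echoFunc{})
}

// Name returns the static name that Blueprint phases reference in their func field.
func (*echoFunc) Name() string { return "Echo" }

// RequiredArgs lists the argument names a phase must provide.
func (*echoFunc) RequiredArgs() []string { return []string{"message"} }

// Exec receives the rendered phase arguments and returns output values
// that artifacts or later phases can consume.
func (*echoFunc) Exec(ctx context.Context, args ...string) (map[string]interface{}, error) {
    return map[string]interface{}{"message": args}, nil
}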