
orka-deploy 0.2 changes

jkomoda 2022-08-18 10:38:37 -07:00 committed by tekton-robot
parent fa0a0a81ff
commit d49622ca8d
11 changed files with 670 additions and 0 deletions

View File

@ -0,0 +1,118 @@
# Run macOS Builds with Tekton and Orka by MacStadium
> **IMPORTANT:** This `Task` requires **Tekton Pipelines v0.16.0 or later** and an Orka environment running on **Orka 1.4.1 or later**.
This `Task`, along with `orka-init` and `orka-teardown`, allows you to utilize multiple macOS build agents in your [Orka environment](https://orkadocs.macstadium.com).
## `orka-deploy`
A `Task` that deploys a VM instance from a specified VM template. Usually, you would use the VM template created with `orka-init`.
## Platforms
The Task can be run on the `linux/amd64` platform.
## Prerequisites
* You need a Kubernetes cluster with Tekton Pipelines v0.16.0 or later configured.
* You need an Orka environment with the following components:
* Orka 1.4.1 or later.
* [An Orka service endpoint](https://orkadocs.macstadium.com/docs/endpoint-faqs#whats-the-orka-service-endpoint) (IP or custom domain). Usually, `http://10.221.188.20`, `http://10.221.188.100` or `https://<custom-domain>`.
* A dedicated Orka user with valid credentials (email & password). Create a new user or request one from your Orka administrator.
* An SSH-enabled base image and the respective SSH credentials (email & password OR SSH key). Use an [existing base image](https://orkadocs.macstadium.com/docs/existing-images-upload-management) or [create your own](https://orkadocs.macstadium.com/docs/creating-an-ssh-enabled-image).
* You need an active VPN connection between your Kubernetes cluster and Orka. Use a [VPN client](https://orkadocs.macstadium.com/docs/vpn-connect) for temporary access or create a [site-to-site VPN tunnel](https://orkadocs.macstadium.com/docs/aws-orka-connections) for permanent access.
See also: [Using Orka, At a Glance](https://orkadocs.macstadium.com/docs/quick-start-introduction)
See also: [GCP-MacStadium Site-to-Site VPN](https://docs.macstadium.com/docs/google-cloud-setup)
> **NOTE:** Beginning with Orka 2.1.0, net new Orka clusters are configured with the Orka service endpoint as `http://10.221.188.20`. Existing clusters will continue to use the service endpoint as initially configured, typically `http://10.221.188.100`.
## Installation
Before you can use this `Task` in Tekton pipelines, you need to install it and the Orka configuration in your Kubernetes cluster. See the `orka-init` documentation [here](https://github.com/tektoncd/catalog/blob/main/task/orka-init/0.2/README.md#installation) for more information on setting up the Orka API configuration.
```sh
kubectl apply --namespace=<namespace> -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/orka-deploy/0.2/orka-deploy.yaml
```
Omit `--namespace` if installing in the `default` namespace.
## Storing SSH credentials
The `orka-deploy` task looks for a Kubernetes secret that stores the SSH access credentials for your macOS base image. This secret is called `orka-ssh-creds` by default and is expected to have the keys `username` and `password`.
These defaults exist for convenience, and you can change them using the available [`Task` parameters](#configuring-secrets-and-config-maps).
You can use the following example configuration. Make sure to provide the correct credentials for your base image.
```yaml
# orka-ssh-creds.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: orka-ssh-creds
type: Opaque
stringData:
  username: admin
  password: admin
```
```sh
kubectl apply --namespace=<namespace> -f orka-ssh-creds.yaml
```
Omit `--namespace` if installing in the `default` namespace.
### Using an SSH key
If using an SSH key to connect to the VM instead of an SSH username and password, complete the following:
1. Copy the public key to the VM and commit the base image.
2. Store the username and private key in a Kubernetes secret:
```sh
kubectl create secret generic orka-ssh-key --from-file=id_rsa=/path/to/id_rsa --from-literal=username=<username>
```
See also: [use-ssh-key.yaml](https://github.com/tektoncd/catalog/blob/main/task/orka-deploy/0.2/samples/use-ssh-key.yaml) example
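For reference, a minimal sketch of the `orka-deploy` parameters involved when authenticating with an SSH key. It assumes the `orka-ssh-key` secret created above; the task name is illustrative, and the full pipeline is in the linked sample.
```yaml
# Sketch: orka-deploy parameters when authenticating with an SSH key.
# Assumes the orka-ssh-key secret created above; the task name is illustrative.
- name: hello-macos
  taskRef:
    name: orka-deploy
  params:
    - name: ssh-secret
      value: orka-ssh-key
    - name: ssh-username-key
      value: username
    - name: ssh-password-key
      value: id_rsa          # the key in the secret that holds the private SSH key
    - name: ssh-key
      value: "true"          # switch the task to key-based authentication
```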
## Workspaces
* **orka**: The contents of this workspace will be copied to the deployed VM. All files present in the workspace will be available to the build script run inside the VM. For example, you could clone a git repository containing Objective-C or Swift code in a previous pipeline step and run a build with Xcode command line tools in your build script. If the `copy-build` parameter is set to true, all build artifacts will be copied from the VM back to the workspace.
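For example, a pipeline could populate the workspace with the catalog's `git-clone` task before running `orka-deploy`. A minimal sketch, assuming the `git-clone` task is installed and using a hypothetical repository URL and workspace name:
```yaml
# Sketch: populate the shared workspace with git-clone, then build inside the Orka VM.
# Assumes the catalog git-clone task is installed; the repository URL is hypothetical.
tasks:
  - name: fetch-source
    taskRef:
      name: git-clone
    params:
      - name: url
        value: https://github.com/example/my-ios-app.git
    workspaces:
      - name: output
        workspace: shared-data
  - name: build
    runAfter:
      - fetch-source
    taskRef:
      name: orka-deploy
    params:
      - name: script
        value: |
          xcodebuild -list    # cloned sources are available inside the VM
    workspaces:
      - name: orka
        workspace: shared-data
```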
## Parameters
### Common parameters
| Parameter | Description | Default |
| --- | --- | ---: |
| `vm-metadata` | Inject custom metadata into the VM (on Intel nodes only). You need to provide the metadata in the following format: `[{ key: firstKey, value: firstValue }, { key: secondKey, value: secondValue }]`. Refer to the [`inject-vm-metadata`](samples/inject-vm-metadata.yaml) example. | --- |
| `system-serial` | Assign an owned macOS system serial number to the VM (on Intel nodes only). Refer to the [`inject-system-serial`](samples/inject-system-serial.yaml) example. | --- |
| `gpu-passthrough` | Enables or disables GPU passthrough for the VM (on Intel nodes only). Refer to the [`gpu-passthrough`](samples/gpu-passthrough.yaml) example. | false |
| `script` | The script to run inside of the VM. The script will be prepended with `#!/bin/sh` and `set -ex` if no shebang is present. You can set your shebang instead (e.g., to run a script with your preferred shell or a scripting language like Python or Ruby). | --- |
| `copy-build` | Specifies whether to copy build artifacts from the Orka VM back to the workspace. Disable when there is no need to copy build artifacts (e.g., when running tests or linting code). | true |
| `verbose` | Enables verbose logging for all connection activity to the VM. | false |
| `ssh-key` | Specifies whether the SSH credentials secret contains an [SSH key](#using-an-ssh-key), as opposed to a password. | false |
| `delete-vm` | Specifies whether to delete the VM after use when run in a pipeline. You can discard build agents that are no longer needed to free up resources. Set to false if you intend to clean up VMs after use manually. | true |
| `orka-tekton-runner-image` | The docker image used to run the task steps. | ghcr.io/macstadium/orka-tekton-runner:2022-06-29-ec3440a7@sha256:d7cfb75ea082a927e36c131aa96e96bfcacd23f62fdaf33f5b37320b86baf50e |
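The sketch below illustrates how these common parameters combine in a pipeline task: the custom shebang replaces the default `#!/bin/sh` header, and `copy-build` is disabled because nothing needs to come back from the VM. Values are illustrative, `orka-init` is assumed to have run first, and any interpreter named in the shebang must exist on the base image.
```yaml
# Sketch: overriding common parameters on an orka-deploy pipeline task.
# Values are illustrative; orka-init must run first so that the default
# token secret and VM config exist.
- name: lint
  runAfter:
    - setup
  taskRef:
    name: orka-deploy
  params:
    - name: script
      value: |
        #!/usr/bin/env python3
        print("running lint checks")   # custom shebang replaces the default #!/bin/sh
    - name: copy-build
      value: "false"                   # nothing to copy back for a lint run
    - name: verbose
      value: "true"
  workspaces:
    - name: orka
      workspace: shared-data
```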
### Configuring secrets and config maps
| Parameter | Description | Default |
| --- | --- | ---: |
| `ssh-secret` | The name of the secret holding your VM SSH credentials. | orka-ssh-creds |
| `ssh-username-key` | The name of the key in the VM SSH credentials secret for the username associated with the macOS VM. | username |
| `ssh-password-key` | The name of the key in the VM SSH credentials secret for the password associated with the macOS VM. If `ssh-key` is true, this parameter should specify the name of the key in the VM SSH credentials secret that holds the private SSH key. | password |
| `orka-token-secret` | The name of the secret holding the authentication token used to access the Orka API. Applicable to `orka-init` / `orka-deploy` / `orka-teardown`. | orka-token |
| `orka-token-secret-key` | The name of the key in the Orka token secret, which holds the authentication token. Applicable to `orka-init` / `orka-deploy` / `orka-teardown`. | token |
| `orka-vm-name-config` | The name of the config map, which stores the name of the generated VM configuration. Applicable to `orka-init` / `orka-deploy` / `orka-teardown`. | orka-vm-name |
| `orka-vm-name-config-key` | The name of the key in the VM name config map, which stores the name of the generated VM configuration. Applicable to `orka-init` / `orka-deploy` / `orka-teardown`. | vm-name |
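If you rename these secrets or config maps, point the task at them explicitly. A minimal sketch with illustrative `my-*` names, which must match the resources used by `orka-init`:
```yaml
# Sketch: pointing orka-deploy at non-default secret and config map names.
# The my-* names are illustrative and must match the resources used by orka-init.
params:
  - name: ssh-secret
    value: my-ssh-creds
  - name: orka-token-secret
    value: my-orka-token
  - name: orka-vm-name-config
    value: my-vm-name
```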
## Samples
[parallel-deploy.yaml](https://github.com/tektoncd/catalog/blob/main/task/orka-deploy/0.2/samples/parallel-deploy.yaml) is a sample `Pipeline` that uses the `orka-init`, `orka-deploy`, and `orka-teardown` tasks and performs the following operations:
1. Sets up an Orka job runner.
2. Deploys 2 VMs in parallel and executes a different script on each VM.
3. Cleans up the environment.

View File

@ -0,0 +1,191 @@
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: orka-deploy
  labels:
    app.kubernetes.io/version: "0.2"
  annotations:
    tekton.dev/categories: Deployment
    tekton.dev/pipelines.minVersion: "0.16.0"
    tekton.dev/tags: "orka, macstadium, deploy, build"
    tekton.dev/platforms: "linux/amd64"
    tekton.dev/displayName: "orka deploy"
spec:
  description: >-
    With this set of Tasks, you can use your Orka environment
    to run macOS builds and macOS-related testing from your Tekton pipelines.

    This Task deploys a VM instance from a specified VM template.
    Usually, you would use the VM template created with `orka-init`.
  params:
    - name: script
      type: string
      description: |
        The script to run inside of the VM. The script will be prepended with the following
        if no shebang is present:

        #!/bin/sh
        set -ex

        You can set your shebang instead (e.g., to run a script with your preferred shell or a scripting language like Python or Ruby).
    - name: copy-build
      type: string
      description: |
        Specifies whether to copy build artifacts from the Orka VM back to the workspace.
        Disable when there is no need to copy build artifacts (e.g., when running tests or linting code).
      default: "true"
    - name: verbose
      type: string
      description: Enables verbose logging for all connection activity to the VM.
      default: "false"
    - name: delete-vm
      type: string
      description: |
        Specifies whether to delete the VM after use when run in a pipeline.
        You can discard build agents that are no longer needed to free up resources.
        Set to false if you intend to clean up VMs after use manually.
      default: "true"
    - name: orka-tekton-runner-image
      type: string
      description: |
        The name of the docker image which runs the task step.
      default: ghcr.io/macstadium/orka-tekton-runner:2022-06-29-ec3440a7@sha256:d7cfb75ea082a927e36c131aa96e96bfcacd23f62fdaf33f5b37320b86baf50e
    - name: ssh-key
      type: string
      description: |
        Specifies whether the SSH credentials secret contains an SSH key, as opposed to a password.
      default: "false"
    - name: ssh-secret
      type: string
      description: The name of the secret holding your VM SSH credentials.
      default: orka-ssh-creds
    - name: ssh-username-key
      type: string
      description: |
        The name of the key in the VM SSH credentials secret for the username associated with the macOS VM.
      default: username
    - name: ssh-password-key
      type: string
      description: |
        The name of the key in the VM SSH credentials secret for the password
        associated with the macOS VM.
        If ssh-key is true, this parameter should specify the name of the key in
        the VM SSH credentials secret that holds the private SSH key.
      default: password
    - name: system-serial
      type: string
      description: "Assign an owned macOS system serial number to the VM (on Intel nodes only)."
      default: ""
    - name: vm-metadata
      type: string
      description: |
        "Inject custom metadata into the VM (on Intel nodes only). You need to provide the metadata in the following format:
        [
          { key: firstKey, value: firstValue },
          { key: secondKey, value: secondValue }
        ]"
      default: ""
    - name: gpu-passthrough
      type: string
      description: Enables or disables GPU passthrough for the VM (on Intel nodes only).
      default: "false"
    - name: orka-token-secret
      type: string
      description: |
        The name of the secret holding the authentication token used to access the Orka API.
      default: orka-token
    - name: orka-token-secret-key
      type: string
      description: |
        The name of the key in the Orka token secret, which holds the authentication token.
      default: token
    - name: orka-vm-name-config
      type: string
      description: |
        The name of the config map, which stores the name of the generated VM configuration.
      default: orka-vm-name
    - name: orka-vm-name-config-key
      type: string
      description: |
        The name of the key in the VM name config map, which stores the name of the generated VM configuration.
      default: vm-name
    - name: user-home
      type: string
      default: /tekton/home
      description: Absolute path to the user's home directory.
  stepTemplate:
    env:
      - name: HOME
        value: $(params.user-home)
    workingDir: /workspace
  steps:
    - name: copy-script
      image: $(params.orka-tekton-runner-image)
      script: |
        #!/bin/sh
        SCRIPT=$(mktemp)
        # Safeguard against having to escape quotes / vars in script
        cat > "$SCRIPT" << 'EOF'
        $(params.script)
        EOF
        copy-script "$SCRIPT"
    - name: build
      image: $(params.orka-tekton-runner-image)
      env:
        - name: ORKA_API
          valueFrom:
            configMapKeyRef:
              name: orka-tekton-config
              key: ORKA_API
        - name: VERBOSE
          value: $(params.verbose)
        - name: DELETE_VM
          value: $(params.delete-vm)
        - name: COPY_BUILD
          value: $(params.copy-build)
        - name: SSH_USERNAME
          valueFrom:
            secretKeyRef:
              name: $(params.ssh-secret)
              key: $(params.ssh-username-key)
        - name: SSH_PASSFILE
          value: /etc/$(params.ssh-secret)/$(params.ssh-password-key)
        - name: SSH_KEY
          value: $(params.ssh-key)
        - name: SYSTEM_SERIAL
          value: $(params.system-serial)
        - name: VM_METADATA
          value: $(params.vm-metadata)
        - name: GPU_PASSTHROUGH
          value: $(params.gpu-passthrough)
        - name: TOKEN
          valueFrom:
            secretKeyRef:
              name: $(params.orka-token-secret)
              key: $(params.orka-token-secret-key)
        - name: VM_NAME
          valueFrom:
            configMapKeyRef:
              name: $(params.orka-vm-name-config)
              key: $(params.orka-vm-name-config-key)
      volumeMounts:
        - name: ssh-creds
          readOnly: true
          mountPath: /etc/$(params.ssh-secret)
      script: |
        #!/bin/sh
        set -x
        orka-deploy
  volumes:
    - name: ssh-creds
      secret:
        secretName: $(params.ssh-secret)
        items:
          - key: $(params.ssh-password-key)
            path: $(params.ssh-password-key)
            mode: 256
  workspaces:
    - name: orka

View File

@ -0,0 +1,70 @@
# ###
# This example shows how to deploy multiple Orka VMs in a Pipeline.
# In this scenario, the VMs run in parallel and execute two different jobs.
#
# The orka-init and orka-teardown Tasks require a Kubernetes service account with
# permission to create / delete secrets and config maps. See the README for more
# information.
# ###
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: parallel-deploy
spec:
  workspaces:
    - name: shared-data
  tasks:
    - name: setup
      taskRef:
        name: orka-init
      params:
        - name: base-image
          value: 90GBigSurSSH.img
    - name: diskinfo
      runAfter:
        - setup
      retries: 1
      taskRef:
        name: orka-deploy
      params:
        - name: copy-build
          value: "false"
        - name: script
          value: |
            diskutil info /
      workspaces:
        - name: orka
          workspace: shared-data
    - name: ruby
      runAfter:
        - setup
      retries: 1
      taskRef:
        name: orka-deploy
      params:
        - name: copy-build
          value: "false"
        - name: script
          value: |
            #!/usr/bin/env ruby
            puts "Hello macOS"
      workspaces:
        - name: orka
          workspace: shared-data
  finally:
    - name: cleanup
      taskRef:
        name: orka-teardown
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: run-parallel-deploy
spec:
  serviceAccountName: orka-svc
  pipelineRef:
    name: parallel-deploy
  workspaces:
    - name: shared-data
      emptyDir: {}

View File

@ -0,0 +1,67 @@
# ###
# This example demonstrates how to use an SSH key to connect to the Orka VM.
# You will first need to copy the public key to the VM and commit or save an
# image using `orka image commit` or `orka image save` and specify that base image.
# in the TaskRun or Pipeline as a param.
#
# You must specify the Task param ssh-key="true" in order to use an SSH key.
#
# ###
# You can create a Kubernetes secret with the SSH credentials as follows:
# kubectl create secret generic orka-ssh-key --from-file=id_rsa=/path/to/id_rsa --from-literal=username=<username>
# ###
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: use-ssh-key
spec:
workspaces:
- name: shared-data
tasks:
- name: setup
taskRef:
name: orka-init
params:
- name: base-image
value: catalina-ssh-key-30G.img
- name: hello-macos
runAfter:
- setup
retries: 1
taskRef:
name: orka-deploy
params:
- name: ssh-secret
value: orka-ssh-key
- name: ssh-password-key
value: id_rsa
- name: ssh-key
value: "true"
- name: verbose
value: "true"
- name: copy-build
value: "false"
- name: script
value: |
#!/usr/bin/env ruby
puts "Hello macOS"
workspaces:
- name: orka
workspace: shared-data
finally:
- name: cleanup
taskRef:
name: orka-teardown
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: run-use-ssh-key
spec:
serviceAccountName: orka-svc
pipelineRef:
name: use-ssh-key
workspaces:
- name: shared-data
emptyDir: {}

View File

@ -0,0 +1,32 @@
---
headers:
  method: POST
  path: /token
response:
  status: 200
  output: '{"token": "abcd"}'
  content-type: text/json
---
headers:
  method: POST
  path: /resources/vm/create
response:
  status: 200
  output: '{"status": 200}'
  content-type: text/json
---
headers:
  method: POST
  path: /resources/vm/deploy
response:
  status: 200
  output: '{"ip": "localhost", "ssh_port": "22", "vm_id": "wxyz"}'
  content-type: text/json
---
headers:
  method: DELETE
  path: /resources/vm/delete
response:
  status: 200
  output: '{"message": "Successfully deleted VM"}'
  content-type: text/json

View File

@ -0,0 +1,45 @@
import os, sys, yaml


def str_presenter(dumper, data):
    scalar_style = "|" if len(data.splitlines()) > 1 else None
    return dumper.represent_scalar('tag:yaml.org,2002:str', data, style=scalar_style)


yaml.add_representer(str, str_presenter)

if __name__ == "__main__":
    # Load YAML files
    with open(os.path.join(sys.path[0], "..", "mocks", "orka.yaml"), "r", encoding="utf-8") as f:
        mocks = f.read()
    with open(sys.argv[1], "r", encoding="utf-8") as f:
        data = yaml.load(f.read(), Loader=yaml.FullLoader)
    go_rest_api = [
        {
            "name": "go-rest-api",
            "image": "quay.io/chmouel/go-rest-api-test",
            "env": [
                {
                    "name": "CONFIG",
                    "value": mocks
                }
            ]
        }
    ]
    # Modify Task YAML
    if data["metadata"]["name"] == "orka-deploy":
        # Load YAML files
        with open(os.path.join(sys.path[0], "sidecars.yaml"), "r", encoding="utf-8") as f:
            sidecars = yaml.load(f.read(), Loader=yaml.FullLoader)
        with open(os.path.join(sys.path[0], "volumes.yaml"), "r", encoding="utf-8") as f:
            volumes = yaml.load(f.read(), Loader=yaml.FullLoader)
        data["spec"]["volumes"] += volumes
        data["spec"]["sidecars"] = sidecars + go_rest_api
    else:
        data["spec"]["sidecars"] = go_rest_api
    # Dump YAML
    print(yaml.dump(data, default_flow_style=False))

View File

@ -0,0 +1,12 @@
---
- name: ssh-server
  image: panubo/sshd:1.3.0
  env:
    - name: "SSH_USERS"
      value: "admin:1000:1000"
    - name: "SSH_ENABLE_PASSWORD_AUTH"
      value: "true"
  volumeMounts:
    - name: startup-script
      mountPath: "/etc/entrypoint.d"
      readOnly: true

View File

@ -0,0 +1,8 @@
---
- name: startup-script
  configMap:
    name: ssh-scripts
    items:
      - key: "startup-script"
        path: "startup.sh"
        mode: 448

View File

@ -0,0 +1,16 @@
#!/bin/bash
# Modify orka-init
ORKA_INIT=$(mktemp /tmp/.mm.XXXXXX)
MOD_SCRIPT=${taskdir}/tests/mods/mod_task.py
cp task/orka-init/0.2/orka-init.yaml ${ORKA_INIT}
python3 ${MOD_SCRIPT} ${ORKA_INIT} > ${ORKA_INIT}.mod
# Add orka-init
${KUBECTL_CMD} -n "${tns}" apply -f ${ORKA_INIT}.mod
rm -f ${ORKA_INIT} ${ORKA_INIT}.mod
# Modify task
cp ${TMPF} ${TMPF}.read
python3 ${MOD_SCRIPT} ${TMPF}.read > ${TMPF}
rm -f ${TMPF}.read

View File

@ -0,0 +1,68 @@
# Secrets set and used for local test instance.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-scripts
data:
  startup-script: |
    #!/usr/bin/env bash
    set -e
    echo "admin:admin" | chpasswd
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: orka-tekton-config
data:
  ORKA_API: http://localhost:8080
---
apiVersion: v1
kind: Secret
metadata:
  name: orka-creds
type: Opaque
stringData:
  email: tekton-svc@macstadium.com
  password: p@ssw0rd
---
apiVersion: v1
kind: Secret
metadata:
  name: orka-ssh-creds
type: Opaque
stringData:
  username: admin
  password: admin
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orka-svc
  namespace: orka-deploy-0-2
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: orka-runner-deploy-0-2
rules:
  - apiGroups: [""]
    resources:
      - configmaps
      - secrets
    verbs:
      - create
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: orka-runner-deploy-0-2
subjects:
  - kind: ServiceAccount
    name: orka-svc
    namespace: orka-deploy-0-2
roleRef:
  kind: ClusterRole
  name: orka-runner-deploy-0-2
  apiGroup: rbac.authorization.k8s.io

View File

@ -0,0 +1,43 @@
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: orka-deploy-test
spec:
  workspaces:
    - name: shared-data
  tasks:
    - name: init
      taskRef:
        name: orka-init
      params:
        - name: base-image
          value: base-image.img
    - name: deploy
      runAfter:
        - init
      taskRef:
        name: orka-deploy
      params:
        - name: copy-build
          value: "false"
        - name: verbose
          value: "true"
        - name: script
          value: |
            echo "Hello from TektonCD test"
      workspaces:
        - name: orka
          workspace: shared-data
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: orka-deploy-test-run
spec:
  serviceAccountName: orka-svc
  pipelineRef:
    name: orka-deploy-test
  workspaces:
    - name: shared-data
      emptyDir: {}