Deploy to OpenShift with vCluster
Cosmonic Control uses Kubernetes custom resource definitions (CRDs) to represent the core primitives of wasmCloud and WebAssembly components.
Some users may be restricted from deploying custom resources in a larger Kubernetes environment. In these cases, it is possible to use vCluster to run an isolated virtual cluster on existing Kubernetes infrastructure, enabling the use of CRDs within the virtual cluster.
This page outlines deployment of Cosmonic Control to an isolated virtual cluster using vCluster on Red Hat OpenShift Local.
Prerequisites:
- helm
- kubectl
- Cosmonic Control credentials (contact us to request trial credentials)
- An OpenShift environment (you can use crc for local testing purposes)
- A Red Hat account
Set up a local OpenShift environment
You can use the crc CLI utility to run a virtualized OpenShift environment, if you don't have one already.
Download the crc installer and follow the installation instructions.
Configure your crc instance and make sure it's ready to launch with crc setup:
crc config set consent-telemetry no
crc config set preset openshift
crc setup
By default, crc creates an instance with 10752 MB of memory. It can be helpful to double the default when you start the instance:
crc start --memory 21504
You will be asked for a pull secret from Red Hat. You can find your pull secret by logging in to your Red Hat account and retrieving it from the Pull Secret section of the CRC page on the Red Hat Hybrid Cloud Console.
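If you prefer not to paste the pull secret interactively, crc can also read it from a file via its configuration (the path below is an example; use wherever you saved the download):

```shell
# Point crc at a downloaded pull secret file so 'crc start' doesn't prompt for it
crc config set pull-secret-file ~/Downloads/pull-secret.txt
```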
Once the crc instance is ready, you can begin the vCluster and Cosmonic Control deployment.
Install vCluster
Create a vcluster.yaml Helm values file in your working directory and use the contents below:
# For OpenShift specific changes, see:
# https://www.vcluster.com/docs/vcluster/deploy/environment/openshift?x0=2
controlPlane:
  ingress:
    enabled: true
    host: "vcluster.apps-crc.testing"
    pathType: "Prefix"
    annotations:
      route.openshift.io/termination: "passthrough"
    spec:
      ingressClassName: openshift-default
  # OpenShift
  statefulSet:
    security:
      podSecurityContext:
        fsGroup: 1000650000
      containerSecurityContext:
        runAsGroup: 1000650000
        runAsUser: 1000650000
        runAsNonRoot: true
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: RuntimeDefault
  distro:
    k8s:
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
# OpenShift
rbac:
  role:
    extraRules:
      - apiGroups: [""]
        resources: ["endpoints/restricted"]
        verbs: ["create"]
exportKubeConfig:
  context: "vcluster"
  server: "https://vcluster.apps-crc.testing"
integrations:
  metricsServer:
    enabled: false
  externalSecrets:
    enabled: false
  kubeVirt:
    enabled: false
  certManager:
    enabled: false
Use Helm to install vCluster:
helm upgrade --install demo vcluster \
--repo https://charts.loft.sh \
--values vcluster.yaml \
--namespace demo \
--repository-config='' \
--create-namespace
Wait for the vCluster pod to reach Ready state:
kubectl rollout status sts demo -n demo --timeout=90s
Create a route.yaml manifest in your working directory and use the contents below to add an OpenShift route resource for vCluster ingress:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: vcluster
spec:
  host: vcluster.apps-crc.testing
  port:
    targetPort: https
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: None
  to:
    kind: Service
    name: demo
kubectl apply -f route.yaml -n demo
This creates a vcluster.apps-crc.testing route that serves as the vCluster endpoint, allowing us to reach the virtual cluster from outside of Kubernetes.
Store the kubeconfig that you will use to communicate with the vCluster control plane:
kubectl get secret -n demo vc-demo -o 'jsonpath={.data.config}' | base64 -d > kubeconfig
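The secret stores the kubeconfig base64-encoded, which is why the command above pipes through base64 -d; the decoding step can be sanity-checked on its own with any sample value:

```shell
# Kubernetes Secret data is base64-encoded; base64 -d reverses that encoding
echo 'aGVsbG8=' | base64 -d   # prints: hello
```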
Install Cosmonic Control inside vCluster
Create a new cosmonic-system namespace:
KUBECONFIG=kubeconfig kubectl create ns cosmonic-system
The next step requires your credentials from Cosmonic. If you'd like to try Cosmonic Control, contact us to request trial credentials.
Create environment variables for your credentials from Cosmonic:
CC_USERNAME=username-value-here
CC_PASSWORD=password-value-here
The next step is to create pull secrets for pulling Cosmonic artifacts from quay.io.
For the cosmonic-system namespace:
KUBECONFIG=kubeconfig kubectl create secret docker-registry cosmonic-quay -n cosmonic-system \
--docker-username $CC_USERNAME \
--docker-password $CC_PASSWORD \
--docker-server quay.io
For the default namespace:
KUBECONFIG=kubeconfig kubectl create secret docker-registry cosmonic-quay -n default \
--docker-username $CC_USERNAME \
--docker-password $CC_PASSWORD \
--docker-server quay.io
Create a control-values.yaml file in your working directory and use the contents below:
image:
  repository: quay.io/cosmonic/infrastructure-operator
  tag: 0.1.7
imagePullSecrets:
  - name: cosmonic-quay
Install the Cosmonic operator:
KUBECONFIG=kubeconfig helm upgrade --install demo oci://quay.io/cosmonic/cosmonic-control \
--version 0.1.2 \
--values control-values.yaml \
--namespace cosmonic-system \
--username $CC_USERNAME \
--password $CC_PASSWORD
Create a cluster.yaml file in your working directory and use the contents below:
apiVersion: k8s.cosmonic.io/v1alpha1
kind: Cluster
metadata:
  name: demo
  namespace: default
spec:
  nexus:
    replicas: 3
    image: quay.io/cosmonic/nexus:0.1.7
    imagePullSecrets:
      - name: cosmonic-quay
  console:
    image: quay.io/cosmonic/console:0.1.7
    imagePullSecrets:
      - name: cosmonic-quay
    baseUrl: ""
    connectors:
      - id: "static"
        type: "static"
        name: "Demo Auth"
  operator:
    image: quay.io/cosmonic/runtime-operator:0.1.7
    imagePullSecrets:
      - name: cosmonic-quay
  observability:
    disable: true
    grafana: {}
    prometheus: {}
    surveyor: {}
Install the Cosmonic cluster in the default namespace inside vCluster:
KUBECONFIG=kubeconfig kubectl apply -f cluster.yaml
Wait for the operator to become ready:
KUBECONFIG=kubeconfig kubectl rollout status deployment operator-demo --timeout=120s
Create a hostgroup.yaml file in your working directory and use the contents below:
apiVersion: k8s.cosmonic.io/v1alpha1
kind: HostGroup
metadata:
  name: hostgroup
  namespace: default
spec:
  cluster:
    name: demo
  replicas: 1
  image: ghcr.io/wasmcloud/wasmcloud:1.8.0
  imagePullPolicy: Always
  env:
    - name: RUST_LOG
      value: debug
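The RUST_LOG=debug setting makes the wasmCloud hosts quite verbose; once things are working, a quieter level such as info is reasonable (a sketch of the same env entry with the level swapped):

```yaml
env:
  - name: RUST_LOG
    value: info
```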
Install the wasmCloud host group:
KUBECONFIG=kubeconfig kubectl apply -f hostgroup.yaml
Wait for the host group to become ready:
KUBECONFIG=kubeconfig kubectl rollout status deployment demo-hostgroup --timeout=120s
Deploy a wasmCloud application
Use Helm to deploy the "Hello World" application:
KUBECONFIG=kubeconfig helm install hello-world oci://ghcr.io/cosmonic-labs/charts/hello-world
Open a port-forward to the host group to access the application:
KUBECONFIG=kubeconfig kubectl port-forward deploy/demo-hostgroup 8080
In another tab or window, call the application through the port-forward:
curl localhost:8080
You should now be greeted with:
Hello from Cosmonic Control!
Your deployment is now running a WebAssembly application with wasmCloud.
To learn more about building your own WebAssembly applications, visit:
https://wasmcloud.com/docs/tour/hello-world/