In this tutorial, we show you how to install Kasten K10 for Azure Kubernetes Service (AKS) on Azure Stack using specific values that are usually defaulted on the Azure public cloud. First, we go through the steps to set up an AKS cluster; then we install Kasten K10, which is a very short and simple procedure. To conclude, we provide a short walkthrough of Kasten K10 and run a few tests to confirm everything was set up correctly.
Azure Stack and AKS
Azure Stack is a true hybrid cloud computing solution that is an extension of Azure, allowing organizations to provide Azure services from their own on-premises data centers. AKS is a fully managed Kubernetes container orchestration service available on the Microsoft Azure public cloud.
Note: Only use the Kubernetes Azure Stack Marketplace item to deploy clusters as a proof-of-concept. For supported Kubernetes clusters on Azure Stack, use the AKS engine.
Why You Need Kasten K10 When Using AKS on Azure Stack
Azure Stack does not provide backup and disaster recovery services out of the box. Thanks to the integration of Kasten K10 with AKS on Azure Stack, it is very easy to protect your Kubernetes cluster. Kasten K10 also allows you to migrate your applications to another Kubernetes cluster at regular intervals.
Prerequisites
You need an owner role on Azure Stack to assign a contributor role to an application service principal. Your az client needs to be connected to the Azure Stack instance, and you need the tenant ID and the resource manager endpoint of this Azure Stack instance. You’ll find your tenant ID using the directory switch button.
The ResourceManagerUrl in integrated systems is https://management.<region>.<fqdn>, which in my case is https://management.ppe5.stackpoc.com.
Now, set up a new cloud registration (which we name stackpoc) pointing to your Azure Stack and log in to your tenant.
az cloud register -n stackpoc --endpoint-resource-manager "https://management.ppe5.stackpoc.com"
az cloud set -n stackpoc
az cloud update --profile 2019-03-01-hybrid
az login -t 36ad9d54-ca17-4ea7-89b3-0e1638cf878e
The default web browser has been opened at
https://login.microsoftonline.com/36ad9d54-ca17-4ea7-89b3-0e1638cf878e/oauth2/authorize.
Please continue the login in the web browser. If no web browser is available or if the web browser
fails to open, use device code flow with `az login --use-device-code`.
You have logged in. Now let us find all the subscriptions to which you have access...
[
  {
    "cloudName": "stackpoc",
    "id": "14b81536-0522-4bac-b7c0-82c7d30012e3",
    "isDefault": true,
    "name": "Kasten",
    "state": "Enabled",
    "tenantId": "36ad9d54-ca17-4ea7-89b3-0e1638cf878e",
    "user": {
      "name": "michael@kasten.io",
      "type": "user"
    }
  }
]
Part 1: Setting up Your AKS Cluster on Azure Stack
- Create a service principal on the global Azure portal
- Give a contributor role to this service principal on Azure Stack
- Create the AKS cluster
- Create a bastion to connect to the Kubernetes master and retrieve the kubeconfig file
- Create a service principal on the global Azure portal
Log in to the global Azure portal and make sure you select the same tenant ID. Then, in Azure Active Directory, create an app registration.
Note the client ID and the secret for this application.
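If you prefer the CLI to the portal, the app registration can also be created with the az client pointed at the global Azure cloud. This is a minimal sketch; the application name sp-aks-stack is just an example, and the --skip-assignment flag may be deprecated on recent az versions.
## Create the service principal against the global Azure cloud (sketch)
az cloud set -n AzureCloud
az login -t 36ad9d54-ca17-4ea7-89b3-0e1638cf878e
az ad sp create-for-rbac --name sp-aks-stack --skip-assignment
## Note the appId (client ID) and password (client secret) from the output,
## then switch back to the Azure Stack cloud: az cloud set -n stackpoc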
- Give a contributor role to this service principal on Azure Stack
To allow this application to create machines, networks, and storage, we need to bind a contributor role to this application on the Azure Stack portal.
Go back to the Azure Stack portal: All services > Subscriptions > Access control and add a role assignment.
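The same assignment can also be done from the CLI against the Azure Stack cloud. A sketch, using the service principal's client ID and the subscription listed earlier:
## Grant Contributor on the Azure Stack subscription (sketch; run against the stackpoc cloud)
az cloud set -n stackpoc
az role assignment create \
  --assignee df9a685b-5c50-4300-b821-08b043dda47b \
  --role Contributor \
  --scope /subscriptions/14b81536-0522-4bac-b7c0-82c7d30012e3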
- Create the AKS cluster
You are now ready to create the AKS cluster: Create a resource > Compute > Kubernetes Cluster, and provide the information you gathered previously.
Create a new resource group or use an empty one.
Use your public key to connect to the Kubernetes control-plane and worker nodes. For this tutorial we use just one master, but for production you should choose three.
Check the values and launch the creation of the cluster.
- Create a bastion to connect to the Kubernetes master and retrieve the kubeconfig file
Choose the same resource group and the same network, and make sure you have proper rules for SSH access. The bastion should have a public IP.
Now you can retrieve the kubeconfig file from the master and bring it back to your local machine. To connect to the machines, use the portal to retrieve their respective IPs: the public IP for the bastion and the private IP for the master node.
ssh-add ~/.ssh/id_rsa
ssh -A michael@38.102.183.44
ssh -A azureuser@10.240.255.5
cat ~/.kube/config
Exit, and use the output to create a kubeconfig file on your laptop. Check that kubectl can connect to the Kubernetes cluster.
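Alternatively, instead of copy-pasting the output, you can pull the file through the bastion in one command (assuming an OpenSSH version that supports ProxyJump):
## Copy the kubeconfig from the master through the bastion (sketch)
scp -o ProxyJump=michael@38.102.183.44 azureuser@10.240.255.5:~/.kube/config ./kubeconfig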
export KUBECONFIG=kubeconfig
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-linuxpool-37225108-0 Ready agent 12d v1.15.5
k8s-linuxpool-37225108-1 Ready agent 12d v1.15.5
k8s-linuxpool-37225108-2 Ready agent 12d v1.15.5
k8s-master-37225108-0 Ready master 12d v1.15.5
You can now destroy your bastion to avoid exposure on the internet and only use kubectl.
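Since the bastion here is just a regular VM, something like the following removes it; the VM name bastion is an example, and the associated NIC and public IP may need to be deleted separately:
## Delete the bastion VM (example name; adjust to your own)
az vm delete -g michael-aks2 -n bastion --yes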
Part 2: Installing Kasten K10
Installing K10 is actually the simplest part of this tutorial.
Install the Helm client on your laptop, if it’s not already there.
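If Helm 3 isn’t installed yet, the official install script is the quickest route (a sketch; your OS package manager works just as well):
## Install the Helm 3 client and verify
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version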
You need to provide these options to the helm install command:
- Azure tenant: the Azure Stack tenant ID (you’ll find it in the global Azure portal > Azure Active Directory > Properties)
- Service principal client ID: the client ID of the app that was used to create the Kubernetes cluster (you’ll find it in the global Azure portal > Azure Active Directory > App registrations). You already used it during AKS creation.
- Service principal client secret: the client secret of the app that was used to create the Kubernetes cluster (you’ll find it in the global Azure portal > Azure Active Directory > App registrations > Certificates & secrets). You already used it during AKS creation.
And here are the options that you usually don’t need to provide when installing on the global Azure cloud:
- Azure Resource Group: name of the resource group that was created for the Kubernetes cluster
- Azure subscription ID: a valid subscription in your Azure Stack tenant (you can obtain the first subscription ID with az account list | jq '.[0].id')
- Azure Resource Manager endpoint: the resource manager endpoint for this Azure Stack instance (you can obtain it with az cloud show | jq '.endpoints.resourceManager')
- Active Directory endpoint: the Active Directory login endpoint (you can obtain it with az cloud show | jq '.endpoints.activeDirectory')
- Active Directory resource ID: the resource ID used to obtain AD tokens (you can obtain it with az cloud show | jq '.endpoints.activeDirectoryResourceId')
We also need to allow the dashboard to use the node network so that it can reach the instance metadata IP: --set services.dashboardbff.hostNetwork=true
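Before running the install, it can help to collect these values into shell variables and check them (a sketch; it assumes jq is installed and the az client is still pointed at your Azure Stack cloud):
## Collect the Azure Stack specific values (sketch)
AZURE_SUBSCRIPTION_ID=$(az account list | jq -r '.[0].id')
AZURE_RESOURCE_MGR_ENDPOINT=$(az cloud show | jq -r '.endpoints.resourceManager')
AZURE_AD_ENDPOINT=$(az cloud show | jq -r '.endpoints.activeDirectory')
AZURE_AD_RESOURCE_ID=$(az cloud show | jq -r '.endpoints.activeDirectoryResourceId')
echo $AZURE_SUBSCRIPTION_ID $AZURE_RESOURCE_MGR_ENDPOINT $AZURE_AD_ENDPOINT $AZURE_AD_RESOURCE_ID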
All in all, here is how I executed the install in my case:
helm repo add kasten https://charts.kasten.io/
kubectl create ns kasten-io
helm repo update
helm install k10 kasten/k10 --namespace=kasten-io \
--set secrets.azureTenantId=36ad9d54-ca17-4ea7-89b3-0e1638cf878e \
--set secrets.azureClientId=df9a685b-5c50-4300-b821-08b043dda47b \
--set secrets.azureClientSecret=ODaRYyeTd~67rQI.bd2uGw8k7Z-C8jJ7X- \
--set secrets.azureResourceGroup=michael-aks2 \
--set secrets.azureSubscriptionID=14b81536-0522-4bac-b7c0-82c7d30012e3 \
--set secrets.azureResourceMgrEndpoint=https://management.ppe5.stackpoc.com \
--set secrets.azureADEndpoint=https://login.microsoftonline.com \
--set secrets.azureADResourceID=https://management.stackpoc.com/71fb132f-bfbf-4e60-8cb6-923303347e19 \
--set services.dashboardbff.hostNetwork=true
Note: none of the secrets exposed in this command line are valid anymore, but we keep them to give a more realistic example.
Part 3: Time to Test
- Create a MySQL application and insert data
- Use the K10 dashboard to set up a policy and take a backup
- Clone the application
- Check the cloned application
- Create a MySQL application and insert data
It’s time to test. We’re going to create a small MySQL application, create a table, insert a few rows, and restore it in another namespace to check that the cloned application is up and running with all the data in the expected state.
## create the app
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
kubectl create ns mysql
helm install mysql-release bitnami/mysql --namespace mysql
## Check pvc and pod are starting as expected.
kubectl get po -n mysql
NAME READY STATUS RESTARTS AGE
mysql-release-0 1/1 Running 0 2m54s
kubectl get pvc -n mysql
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-mysql-release-0 Bound pvc-22c04c33-7ce6-42ff-b8c8-d8d5cc981de0 8Gi RWO default 3m1s
## Connect and create data
ROOT_PASSWORD=$(kubectl get secret --namespace mysql mysql-release -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
echo $ROOT_PASSWORD
YEQiaC6Wj6
kubectl run mysql-release-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r44 --namespace mysql --command -- bash
If you don't see a command prompt, try pressing enter.
I have no name!@mysql-release-client:/$ mysql -h mysql-release.mysql.svc.cluster.local -uroot -p my_database
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 91
Server version: 8.0.22 Source distribution
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database sales;
Query OK, 1 row affected (0.01 sec)
mysql> use sales
Database changed
mysql> CREATE TABLE leads (id INT NOT NULL, name VARCHAR(255), PRIMARY KEY ( id) );
Query OK, 0 rows affected (0.02 sec)
mysql> insert into leads values(1, 'value 1');
Query OK, 1 row affected (0.01 sec)
mysql> insert into leads values(2, 'value 2');
Query OK, 1 row affected (0.02 sec)
mysql> select * from leads;
+----+---------+
| id | name |
+----+---------+
| 1 | value 1 |
| 2 | value 2 |
+----+---------+
2 rows in set (0.00 sec)
mysql> exit
Bye
I have no name!@mysql-release-client:/$ exit
exit
pod "mysql-release-client" deleted
- Use the K10 Dashboard to set up a policy and take a backup
The dashboard access can be configured through multiple schemes, but the simplest to configure is port forwarding.
kubectl --namespace kasten-io port-forward service/gateway 8080:8000
Now open your browser to http://localhost:8080/k10/#/dashboard
Create a policy for the MySQL application and run a backup.
Click on the unmanaged applications link and find the mysql app. Create a yearly policy and run it once.
Go to the dashboard and check that the policy ran successfully.
On the Azure portal, you’ll find the snapshot that was created for this policy.
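If you prefer to manage the policy as code instead of through the dashboard, K10 policies are also plain Kubernetes custom resources. The sketch below is an assumption based on the K10 Policy CRD; the exact fields can vary between K10 versions, so compare it with the YAML the dashboard generates for your policy.
## Apply a yearly backup policy for the mysql namespace (sketch; verify the schema against your K10 version)
kubectl apply -f - <<EOF
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: mysql-backup
  namespace: kasten-io
spec:
  frequency: "@yearly"
  actions:
    - action: backup
  selector:
    matchExpressions:
      - key: k10.kasten.io/appNamespace
        operator: In
        values:
          - mysql
EOF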
All the specs of this application are also saved in K10’s catalog. We could have used an external object store to export the entire application (spec + data) and run a disaster recovery or a migration, but that is beyond the scope of this tutorial, even though it’s really easy to set up with K10.
- Clone the application
Let’s try to restore the application into another namespace and check that all the workloads and data are up and running.
Use a restore point and choose to restore into another namespace by creating a new one.
Leave all the other fields intact and click restore.
On the dashboard, follow the restore process and wait until it completes.
- Check the cloned application
kubectl get all -n mysql-restored
NAME READY STATUS RESTARTS AGE
pod/mysql-release-0 1/1 Running 0 4m14s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mysql-release ClusterIP 10.0.86.189 <none> 3306/TCP 4m53s
service/mysql-release-headless ClusterIP None <none> 3306/TCP 4m53s
NAME READY AGE
statefulset.apps/mysql-release 1/1 4m16s
Let’s see the content of the database:
ROOT_PASSWORD=$(kubectl get secret --namespace mysql-restored mysql-release -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
echo $ROOT_PASSWORD
YEQiaC6Wj6
kubectl run mysql-release-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r44 --namespace mysql-restored --command -- bash
If you don't see a command prompt, try pressing enter.
I have no name!@mysql-release-client:/$ mysql -h mysql-release.mysql.svc.cluster.local -uroot -p my_database
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 56925
Server version: 8.0.22 Source distribution
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use sales;
Database changed
mysql> select * from leads;
+----+---------+
| id | name |
+----+---------+
| 1 | value 1 |
| 2 | value 2 |
+----+---------+
2 rows in set (0.00 sec)
mysql> exit
Bye
I have no name!@mysql-release-client:/$ exit
exit
pod "mysql-release-client" deleted
We have retrieved and restored the data we created.
Conclusion
I hope we’ve shown why you need Kasten K10 when using AKS on Azure Stack. Azure Stack does not provide backup and disaster recovery services out of the box, but thanks to the integration of Kasten K10 with AKS on Azure Stack, it is very easy to protect your Kubernetes cluster. Kasten K10 also allows you to migrate your applications to another Kubernetes cluster at regular intervals.
Now, try it out for yourself.
We encourage you to give K10 a try for FREE, and let us know how we can help. We look forward to hearing from you!