As part of the application and cluster lifecycle, you may want to upgrade to the latest available version of Kubernetes. You can upgrade your Azure Kubernetes Service (AKS) cluster by using the Azure CLI, Azure PowerShell, or the Azure portal.
In this tutorial, part seven of seven, you learn how to:
- Identify current and available Kubernetes versions.
- Upgrade your Kubernetes nodes.
- Validate a successful upgrade.
Before you begin
In previous tutorials, you packaged an application into a container image, uploaded that image to Azure Container Registry (ACR), created an AKS cluster, and deployed the application to it. If you haven't done these steps and would like to follow along, start with Tutorial 1: Prepare an application for AKS.
- If you're using Azure CLI, this tutorial requires that you're running Azure CLI version 2.34.1 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.
- If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run Get-InstalledModule -Name Az to find the version. If you need to install or upgrade, see Install Azure PowerShell.
Get available cluster versions
Before you upgrade a cluster, use the az aks get-upgrades command to check which Kubernetes releases are available.
Azure CLI
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
In the following example output, the current version is 1.18.10, and the available versions are shown under upgrades.
Output
{
  "agentPoolProfiles": null,
  "controlPlaneProfile": {
    "kubernetesVersion": "1.18.10",
    ...
    "upgrades": [
      {
        "isPreview": null,
        "kubernetesVersion": "1.19.1"
      },
      {
        "isPreview": null,
        "kubernetesVersion": "1.19.3"
      }
    ]
  },
  ...
}
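If you only need the version strings, you can filter the same command's output with a JMESPath query instead of reading the full JSON. This is a sketch using the cluster names from this tutorial; the `sort -V | tail` step simply picks the newest version from the list.

```shell
# List only the available upgrade versions and print the newest one
az aks get-upgrades \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query "controlPlaneProfile.upgrades[].kubernetesVersion" \
  --output tsv | sort -V | tail -n 1
```

With the example output above, this would print 1.19.3.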
Upgrade a cluster
During an upgrade, AKS nodes are carefully cordoned and drained to minimize potential disruptions to running applications. The following steps are performed during this process:
- The Kubernetes scheduler prevents additional pods from being scheduled on a node that is to be upgraded.
- Running pods on the node are scheduled on other nodes in the cluster.
- A new node is created that runs the latest Kubernetes components.
- When the new node is ready and joined to the cluster, the Kubernetes scheduler begins to run pods on the new node.
- The old node is deleted, and the next node in the cluster begins the cordon and drain process.
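The cordon and drain steps above are the same operations you could run manually with kubectl; AKS automates them per node during the upgrade. The node name below is illustrative, matching the node pool naming used later in this tutorial.

```shell
# Illustrative node name from this tutorial's node pool
NODE=aks-nodepool1-96663640-vmss000001

# 1. Cordon: mark the node unschedulable so no new pods are placed on it
kubectl cordon "$NODE"

# 2. Drain: evict running pods so the scheduler moves them to other nodes
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# AKS then creates the replacement node, waits for it to join the cluster,
# and deletes the old node before moving on to the next one.
```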
Note
If no patch version is specified, the cluster is automatically upgraded to the specified minor version's latest GA patch. For example, setting --kubernetes-version to 1.21 results in the cluster upgrading to 1.21.9.

When upgrading by alias minor version, only a higher minor version is supported. For example, upgrading from 1.20.x to 1.20 doesn't trigger an upgrade to the latest GA 1.20 patch, but upgrading to 1.21 does trigger an upgrade to the latest GA 1.21 patch.
Use the az aks upgrade command to upgrade your AKS cluster.
Azure CLI
az aks upgrade \
--resource-group myResourceGroup \
--name myAKSCluster \
--kubernetes-version KUBERNETES_VERSION
Note
You can only upgrade one minor version at a time. For example, you can upgrade from 1.14.x to 1.15.x, but you cannot upgrade from 1.14.x to 1.16.x directly. To upgrade from 1.14.x to 1.16.x, you must first upgrade from 1.14.x to 1.15.x, then perform another upgrade from 1.15.x to 1.16.x.
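For example, moving from 1.14.x to 1.16.x takes two sequential upgrades. The patch versions below are illustrative; use az aks get-upgrades to find the versions actually available to your cluster. The --yes flag skips the confirmation prompt.

```shell
# Step 1: upgrade one minor version, 1.14.x -> 1.15.x (illustrative patch)
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version 1.15.11 --yes

# Step 2: upgrade the next minor version, 1.15.x -> 1.16.x (illustrative patch)
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version 1.16.13 --yes
```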
The following example output shows the result of upgrading to 1.19.1. Notice the kubernetesVersion now reports 1.19.1.
Output
{
  "agentPoolProfiles": [
    {
      "count": 3,
      "maxPods": 110,
      "name": "nodepool1",
      "osType": "Linux",
      "storageProfile": "ManagedDisks",
      "vmSize": "Standard_DS1_v2"
    }
  ],
  "dnsPrefix": "myAKSClust-myResourceGroup-19da35",
  "enableRbac": false,
  "fqdn": "myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io",
  "id": "/subscriptions/<Subscription ID>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
  "kubernetesVersion": "1.19.1",
  "location": "eastus",
  "name": "myAKSCluster",
  "type": "Microsoft.ContainerService/ManagedClusters"
}
View the upgrade events
When you upgrade your cluster, the following Kubernetes events may occur on the nodes:
- Surge: Create surge node.
- Drain: Pods are being evicted from the node. Each pod has a five-minute timeout to complete the eviction.
- Update: Update of a node has succeeded or failed.
- Delete: Delete a surge node.
Use the kubectl get events command to show events in the default namespace while running an upgrade.
Azure CLI
kubectl get events
The following example output shows some of the above events listed during an upgrade.
Output
...
default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
...
default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
...
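To follow the upgrade as it happens, you can sort the events by time, stream them, or narrow them to a single node. This is a sketch; the field selector and node name are illustrative.

```shell
# Sort events so the most recent appear last
kubectl get events --sort-by='.lastTimestamp'

# Stream new events as they arrive
kubectl get events --watch

# Show only events for a single node (illustrative node name)
kubectl get events --field-selector involvedObject.name=aks-nodepool1-96663640-vmss000001
```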
Validate an upgrade
Confirm that the upgrade was successful using the az aks show command.
Azure CLI
az aks show --resource-group myResourceGroup --name myAKSCluster --output table
The following example output shows that the AKS cluster is running KubernetesVersion 1.19.1:
Output
Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn
------------ ---------- --------------- ------------------- ------------------------ ------------------- ----------------------------------------------------------------
myAKSCluster eastus myResourceGroup 1.19.1 1.19.1 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
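You can also check the version non-interactively by querying just the kubernetesVersion field and comparing it against the version you targeted. A minimal sketch, assuming the cluster names from this tutorial; the target version is illustrative.

```shell
# Target version we upgraded to earlier in this tutorial (illustrative)
TARGET=1.19.1

# Query only the version field as plain text
CURRENT=$(az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query kubernetesVersion --output tsv)

# Compare the reported version against the target
if [ "$CURRENT" = "$TARGET" ]; then
  echo "Upgrade to $TARGET confirmed"
else
  echo "Cluster reports $CURRENT, expected $TARGET" >&2
fi
```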
Delete the cluster
As this tutorial is the last part of the series, you may want to delete your AKS cluster. The Kubernetes nodes run on Azure virtual machines and continue incurring charges even if you don’t use the cluster.
Use the az group delete command to remove the resource group, container service, and all related resources.
Azure CLI
az group delete --name myResourceGroup --yes --no-wait
Note
When you delete the cluster, the Azure Active Directory (AAD) service principal used by the AKS cluster isn't removed. For steps on how to remove the service principal, see AKS service principal considerations and deletion. If you used a managed identity, the identity is managed by the platform and doesn't require that you provision or rotate any secrets.