Deploying A Multi-Cluster Verrazzano On Oracle Container Engine for Kubernetes (OKE) Part 1


Author: Ali Mukadam




    Verrazzano Logo

    In a previous article, we took a brief look at Verrazzano and took it for a quick spin on Oracle Container Engine for Kubernetes (OKE). In this article, we are going to deploy a multi-cluster Verrazzano on OKE. To make things interesting, we will also do that using different OCI regions.

    First, a little digression into WebLogic and Kubernetes and then we’ll discuss Verrazzano.

    From WebLogic to Kubernetes to Verrazzano

    A few years ago, when I had to explain Kubernetes to folks internally, especially those with a WebLogic background, I made some (grossly simplified) analogies with WebLogic:

WebLogic and Kubernetes analogy

Explaining Kubernetes using familiar concepts greatly helped with understanding. In a WebLogic cluster, the Admin server handles administration, deployment and other less glamorous but nevertheless important tasks, whereas the managed servers are meant for deploying and running the applications and responding to requests. Of course, you could always run your applications on the single Admin server (somewhat like using taints and tolerations to run workloads on the master nodes), but this is not recommended.

The managed servers, on the other hand, can be scaled out and configured to run your applications. Together, the Admin and managed servers form a cluster. You can run your applications across the entire cluster or on specific managed servers. If your application is deployed to the cluster and a managed server in the cluster fails (JVM crash, host failure, reboot, etc.), the other managed servers in the cluster automatically take over the work. If the managed server where your singleton service is running fails, WebLogic has you covered as well with Automatic Service Migration; check this document for a more detailed read. Essentially, it's a bit like a ReplicaSet in Kubernetes. Kubernetes applications were initially stateless; with the addition of StatefulSets, you can now also run stateful applications across the entire cluster.

What if, for the purpose of high availability, you needed to run your Kubernetes applications in geographically distributed clusters? You could try your luck with kubefed, whose progress has been painfully slow and which is still in beta (this is not a criticism). Or you could deploy the same applications to different clusters, implement a kind of global health check, and then use an intelligent load balancer to switch traffic from one cluster to another. All of these approaches are error-prone, risky and come with several limitations.

    Enter Verrazzano multi-clustering.

    Verrazzano took the concept of Admin and managed servers in WebLogic and applied it to Kubernetes clusters:

Verrazzano multi-cluster

    Where you had a single Admin server for WebLogic, you now have a single Admin cluster based on Kubernetes for Verrazzano. Where your applications would be deployed on managed servers, your Verrazzano workloads will now be deployed on managed Kubernetes clusters, possibly closer to your users.

    Infrastructure Planning

In order to achieve this, the Verrazzano managed clusters (a Verrazzano cluster is a Kubernetes cluster administered and managed by the Verrazzano container platform) need to be able to communicate with the Verrazzano Admin cluster and vice-versa. In WebLogic, the managed servers are usually part of the same network (unless you are doing stretch clusters), so this is usually straightforward.

However, our aim here is to deploy the different Verrazzano clusters in different OCI regions, so we need to plan the networking and security carefully. Note that you can also use Verrazzano to manage clusters deployed in other clouds or on-premises, but the networking and security configurations would vary (VPN/FastConnect, etc.).

    Below is a map of OCI regions to help us pick a set of regions:

Map of OCI regions

    We will use our newly-minted Singapore region for the Admin cluster and then Mumbai, Tokyo and Sydney as managed clusters in a star architecture:

Verrazzano Clusters spread across OCI Asia Pacific regions

    Networking Infrastructure

Remote Peering with different regions

We need the clusters to communicate securely over the OCI backbone, so we need to set up a DRG in each region, attach it to the region's VCN, and use remote peering to connect them. Since the VCNs and the clusters will eventually be connected, we also need to ensure their respective IP address ranges (VCN, pod and service) do not overlap.
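For reference, here is the non-overlapping address plan used in the rest of this article, expressed as an optional Terraform locals block. This is purely illustrative: the hypothetical local.cidrs map is not consumed by the module calls later, which simply hard-code these values.

locals {
  # Per-region address plan: VCN, pod and service CIDRs must not overlap anywhere.
  cidrs = {
    admin = { vcn = "10.0.0.0/16", pods = "10.244.0.0/16", services = "10.96.0.0/16" }
    syd   = { vcn = "10.1.0.0/16", pods = "10.245.0.0/16", services = "10.97.0.0/16" }
    mum   = { vcn = "10.2.0.0/16", pods = "10.246.0.0/16", services = "10.98.0.0/16" }
    tok   = { vcn = "10.3.0.0/16", pods = "10.247.0.0/16", services = "10.99.0.0/16" }
  }
}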

    Creating the Verrazzano clusters

We are going to use the terraform-oci-oke module to create our clusters. We could create them individually by cloning the module four times and then changing the region parameters. However, you will be pleased to know that one of the things we recently improved in the 4.0 release of the module is reusability. We'll take advantage of this!

    Create a new terraform project and define your variables as follows:

    # Copyright 2017, 2021 Oracle Corporation and/or affiliates.  All rights reserved.
    
    # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl
    
    # OCI Provider parameters
    variable "api_fingerprint" {
    default     = ""
    description = "Fingerprint of the API private key to use with OCI API."
    type        = string
    }
    
    variable "api_private_key_path" {
    default     = ""
    description = "The path to the OCI API private key."
    type        = string
    }
    
    variable "verrazzano_regions" {
    # List of regions: https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm#ServiceAvailabilityAcrossRegions
    description = "A map Verrazzano regions."
    type        = map(string)
    }
    
    variable "tenancy_id" {
    description = "The tenancy id of the OCI Cloud Account in which to create the resources."
    type        = string
    }
    
    variable "user_id" {
    description = "The id of the user that terraform will use to create the resources."
    type        = string
    default     = ""
    }
    
    # General OCI parameters
    variable "compartment_id" {
    description = "The compartment id where to create all resources."
    type        = string
    }
    
    variable "label_prefix" {
    default     = "none"
    description = "A string that will be prepended to all resources."
    type        = string
    }
    

    In your terraform.tfvars, along with your identity parameters, define your regions:

verrazzano_regions = {
  home  = "your-tenancy-home-region" # replace with your tenancy's home region
  admin = "ap-singapore-1"
  syd   = "ap-sydney-1"
  mum   = "ap-mumbai-1"
  tok   = "ap-tokyo-1"
}
    

    In your provider.tf, define the providers for the different regions using aliases:

    provider "oci" {
    
    fingerprint      = var.api_fingerprint
    private_key_path = var.api_private_key_path
    region           = var.verrazzano_regions["admin"]
    tenancy_ocid     = var.tenancy_id
    user_ocid        = var.user_id
    alias            = "admin"
    }
    
    provider "oci" {
    fingerprint      = var.api_fingerprint
    private_key_path = var.api_private_key_path
    region           = var.verrazzano_regions["home"]
    tenancy_ocid     = var.tenancy_id
    user_ocid        = var.user_id
    alias            = "home"
    }
    
    provider "oci" {
    fingerprint      = var.api_fingerprint
    private_key_path = var.api_private_key_path
    region           = var.verrazzano_regions["syd"]
    tenancy_ocid     = var.tenancy_id
    user_ocid        = var.user_id
    alias            = "syd"
    }
    
    provider "oci" {
    fingerprint      = var.api_fingerprint
    private_key_path = var.api_private_key_path
    region           = var.verrazzano_regions["mum"]
    tenancy_ocid     = var.tenancy_id
    user_ocid        = var.user_id
    alias            = "mum"
    }
    
    provider "oci" {
    fingerprint      = var.api_fingerprint
    private_key_path = var.api_private_key_path
    region           = var.verrazzano_regions["tok"]
    tenancy_ocid     = var.tenancy_id
    user_ocid        = var.user_id
    alias            = "tok"
    }
    

Finally, in your main.tf, create the different clusters (note that some of the parameters here have the same values and you could use the defaults, but I wanted to show that it is also possible to configure these per region):

    module "vadmin" {
    
    source  = "oracle-terraform-modules/oke/oci"
    version = "4.0.1"
    
    home_region = var.verrazzano_regions["home"]
    region      = var.verrazzano_regions["admin"]
    
    tenancy_id = var.tenancy_id
    
    # general oci parameters
    compartment_id = var.compartment_id
    label_prefix   = "v8o"
    
    # ssh keys
    ssh_private_key_path = "~/.ssh/id_rsa"
    ssh_public_key_path  = "~/.ssh/id_rsa.pub"
    
    # networking
    create_drg                   = true
    internet_gateway_route_rules = []
    nat_gateway_route_rules = [
    {
      destination       = "10.1.0.0/16"
      destination_type  = "CIDR_BLOCK"
      network_entity_id = "drg"
      description       = "To Sydney"
    },
    {
      destination       = "10.2.0.0/16"
      destination_type  = "CIDR_BLOCK"
      network_entity_id = "drg"
      description       = "To Mumbai"
    },
    {
      destination       = "10.3.0.0/16"
      destination_type  = "CIDR_BLOCK"
      network_entity_id = "drg"
      description       = "To Tokyo"
    },
    ]
    
    vcn_cidrs     = ["10.0.0.0/16"]
    vcn_dns_label = "admin"
    vcn_name      = "admin"
    
    # bastion host
    create_bastion_host = true
    upgrade_bastion     = false
    
    # operator host
    create_operator                    = true
    enable_operator_instance_principal = true
    upgrade_operator                   = false
    
    # oke cluster options
    cluster_name                = "admin"
    control_plane_type          = "private"
    control_plane_allowed_cidrs = ["0.0.0.0/0"]
    kubernetes_version          = "v1.20.11"
    pods_cidr                   = "10.244.0.0/16"
    services_cidr               = "10.96.0.0/16"
    
    # node pools
    node_pools = {
    np1 = { shape = "VM.Standard.E4.Flex", ocpus = 2, memory = 32, node_pool_size = 2, boot_volume_size = 150, label = { app = "frontend", pool = "np1" } }
    }
    node_pool_name_prefix = "np-admin"
    
    # oke load balancers
    load_balancers          = "both"
    preferred_load_balancer = "public"
    public_lb_allowed_cidrs = ["0.0.0.0/0"]
    public_lb_allowed_ports = [80, 443]
    
    # freeform_tags
    freeform_tags = {
    vcn = {
      verrazzano = "admin"
    }
    bastion = {
      access     = "public",
      role       = "bastion",
      security   = "high"
      verrazzano = "admin"
    }
    operator = {
      access     = "restricted",
      role       = "operator",
      security   = "high"
      verrazzano = "admin"
    }
    }
    
    providers = {
    oci      = oci.admin
    oci.home = oci.home
    }
    }
    
    module "vsyd" {
    source  = "oracle-terraform-modules/oke/oci"
    version = "4.0.1"
    
    home_region = var.verrazzano_regions["home"]
    region      = var.verrazzano_regions["syd"]
    
    tenancy_id = var.tenancy_id
    
    # general oci parameters
    compartment_id = var.compartment_id
    label_prefix   = "v8o"
    
    # ssh keys
    ssh_private_key_path = "~/.ssh/id_rsa"
    ssh_public_key_path  = "~/.ssh/id_rsa.pub"
    
    # networking
    create_drg                   = true
    internet_gateway_route_rules = []
    nat_gateway_route_rules = [
    {
      destination       = "10.0.0.0/16"
      destination_type  = "CIDR_BLOCK"
      network_entity_id = "drg"
      description       = "To Admin"
    }
    ]
    
    vcn_cidrs     = ["10.1.0.0/16"]
    vcn_dns_label = "syd"
    vcn_name      = "syd"
    
    # bastion host
    create_bastion_host = false
    upgrade_bastion     = false
    
    # operator host
    create_operator                    = false
    enable_operator_instance_principal = true
    upgrade_operator                   = false
    
    # oke cluster options
    cluster_name                = "syd"
    control_plane_type          = "private"
    control_plane_allowed_cidrs = ["0.0.0.0/0"]
    kubernetes_version          = "v1.20.11"
    pods_cidr                   = "10.245.0.0/16"
    services_cidr               = "10.97.0.0/16"
    
    # node pools
    node_pools = {
    np1 = { shape = "VM.Standard.E4.Flex", ocpus = 2, memory = 32, node_pool_size = 2, boot_volume_size = 150 }
    }
    
    # oke load balancers
    load_balancers          = "both"
    preferred_load_balancer = "public"
    public_lb_allowed_cidrs = ["0.0.0.0/0"]
    public_lb_allowed_ports = [80, 443]
    
    # freeform_tags
    freeform_tags = {
    vcn = {
      verrazzano = "syd"
    }
    bastion = {
      access     = "public",
      role       = "bastion",
      security   = "high"
      verrazzano = "syd"
    }
    operator = {
      access     = "restricted",
      role       = "operator",
      security   = "high"
      verrazzano = "syd"
    }
    }
    
    providers = {
    oci      = oci.syd
    oci.home = oci.home
    }
    }
    
    module "vmum" {
    source  = "oracle-terraform-modules/oke/oci"
    version = "4.0.1"
    
    home_region = var.verrazzano_regions["home"]
    region      = var.verrazzano_regions["mum"]
    
    tenancy_id = var.tenancy_id
    
    # general oci parameters
    compartment_id = var.compartment_id
    label_prefix   = "v8o"
    
    # ssh keys
    ssh_private_key_path = "~/.ssh/id_rsa"
    ssh_public_key_path  = "~/.ssh/id_rsa.pub"
    
    # networking
    create_drg                   = true
    internet_gateway_route_rules = []
    nat_gateway_route_rules = [
    {
      destination       = "10.0.0.0/16"
      destination_type  = "CIDR_BLOCK"
      network_entity_id = "drg"
      description       = "To Admin"
    }
    ]
    
    vcn_cidrs     = ["10.2.0.0/16"]
    vcn_dns_label = "mum"
    vcn_name      = "mum"
    
    # bastion host
    create_bastion_host = false
    upgrade_bastion     = false
    
    # operator host
    create_operator                    = false
    enable_operator_instance_principal = true
    upgrade_operator                   = false
    
    # oke cluster options
    cluster_name                = "mum"
    control_plane_type          = "private"
    control_plane_allowed_cidrs = ["0.0.0.0/0"]
    kubernetes_version          = "v1.20.11"
    pods_cidr                   = "10.246.0.0/16"
    services_cidr               = "10.98.0.0/16"
    
    # node pools
    node_pools = {
    np1 = { shape = "VM.Standard.E4.Flex", ocpus = 2, memory = 32, node_pool_size = 2, boot_volume_size = 150 }
    }
    
    # oke load balancers
    load_balancers          = "both"
    preferred_load_balancer = "public"
    public_lb_allowed_cidrs = ["0.0.0.0/0"]
    public_lb_allowed_ports = [80, 443]
    
    # freeform_tags
    freeform_tags = {
    vcn = {
      verrazzano = "mum"
    }
    bastion = {
      access     = "public",
      role       = "bastion",
      security   = "high"
      verrazzano = "mum"
    }
    operator = {
      access     = "restricted",
      role       = "operator",
      security   = "high"
      verrazzano = "mum"
    }
    }
    
    providers = {
    oci      = oci.mum
    oci.home = oci.home
    }
    }
    
    module "vtok" {
    source  = "oracle-terraform-modules/oke/oci"
    version = "4.0.1"
    
    home_region = var.verrazzano_regions["home"]
    region      = var.verrazzano_regions["tok"]
    
    tenancy_id = var.tenancy_id
    
    # general oci parameters
    compartment_id = var.compartment_id
    label_prefix   = "v8o"
    
    # ssh keys
    ssh_private_key_path = "~/.ssh/id_rsa"
    ssh_public_key_path  = "~/.ssh/id_rsa.pub"
    
    # networking
    create_drg                   = true
    internet_gateway_route_rules = []
    nat_gateway_route_rules = [
    {
      destination       = "10.0.0.0/16"
      destination_type  = "CIDR_BLOCK"
      network_entity_id = "drg"
      description       = "To Admin"
    }
    ]
    
    vcn_cidrs     = ["10.3.0.0/16"]
    vcn_dns_label = "tok"
    vcn_name      = "tok"
    
    # bastion host
    create_bastion_host = false
    upgrade_bastion     = false
    
    # operator host
    create_operator                    = false
    enable_operator_instance_principal = true
    upgrade_operator                   = false
    
    # oke cluster options
    cluster_name                = "tok"
    control_plane_type          = "private"
    control_plane_allowed_cidrs = ["0.0.0.0/0"]
    kubernetes_version          = "v1.20.11"
    pods_cidr                   = "10.247.0.0/16"
    services_cidr               = "10.99.0.0/16"
    
    # node pools
    node_pools = {
    np1 = { shape = "VM.Standard.E4.Flex", ocpus = 2, memory = 32, node_pool_size = 2, boot_volume_size = 150 }
    }
    
    # oke load balancers
    load_balancers          = "both"
    preferred_load_balancer = "public"
    public_lb_allowed_cidrs = ["0.0.0.0/0"]
    public_lb_allowed_ports = [80, 443]
    
    # freeform_tags
    freeform_tags = {
    vcn = {
      verrazzano = "tok"
    }
    bastion = {
      access     = "public",
      role       = "bastion",
      security   = "high"
      verrazzano = "tok"
    }
    operator = {
      access     = "restricted",
      role       = "operator",
      security   = "high"
      verrazzano = "tok"
    }
    }
    
    providers = {
    oci      = oci.tok
    oci.home = oci.home
    }
    }
    

For convenience, let's also print out the ssh command to the operator host in each region:

    output "ssh_to_admin_operator" {
    
    description = "convenient command to ssh to the Admin operator host"
    value       = module.vadmin.ssh_to_operator
    }
    
    output "ssh_to_au_operator" {
    description = "convenient command to ssh to the Sydney operator host"
    value       = module.vsyd.ssh_to_operator
    }
    
    output "ssh_to_in_operator" {
    description = "convenient command to ssh to the Mumbai operator host"
    value       = module.vmum.ssh_to_operator
    }
    
    output "ssh_to_jp_operator" {
    description = "convenient command to ssh to the Tokyo operator host"
    value       = module.vtok.ssh_to_operator
    }
    

Run terraform init followed by terraform plan.
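From the root of the Terraform project:

terraform init
terraform plan

The plan should indicate the following: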

Plan: 292 to add, 0 to change, 0 to destroy.

Changes to Outputs:
    
    + ssh_to_admin_operator = (known after apply)  
    + ssh_to_au_operator    = "ssh -i ~/.ssh/id_rsa -J opc@ opc@"  
    + ssh_to_in_operator    = "ssh -i ~/.ssh/id_rsa -J opc@ opc@"  
    + ssh_to_jp_operator    = "ssh -i ~/.ssh/id_rsa -J opc@ opc@"
    

    Run terraform apply and relax, because soon after you should see the following:

Simultaneous creation of 4 OKE clusters in different regions

This means our four OKE clusters are being created simultaneously in four different OCI regions. In about 15 minutes, you'll have all four clusters created:

    Showing outputs after creating clusters

    The ssh convenience commands to the various operator hosts will also be printed.
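If you need any of them again later, you can re-print an individual output, for example:

terraform output ssh_to_admin_operator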

Next, navigate to the DRG in each managed cluster's region (Mumbai, Tokyo and Sydney), click on Remote Peering Attachment, and create a Remote Peering Connection (call it rpc_to_admin). In the Admin region (Singapore in our case), however, create 3 Remote Peering Connections, one per managed region:

3 RPCs in the Admin region

We now need to peer them. Click on rpc_to_syd. Open a new tab in your browser, access the OCI Console and change the region to Sydney. Then navigate to the DRG and the rpc_to_syd page. Copy the RPC's OCID (not the DRG's), switch back to the Admin tab and click on "Establish Connection":

Establishing RPC

Once you've provided the RPC OCID and the region as above, click the "Establish Connection" button to perform the peering. Repeat the same procedure for the Tokyo and Mumbai regions until all the managed cluster regions are peered with the Admin region. While the peering is in progress, the status shows as "Pending"; once it completes, it changes to "Peered":

RPCs in Pending state
RPCs in Peered state
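If you prefer scripting these steps, the same peering can be done with the OCI CLI. The sketch below is an assumption-laden example rather than a tested recipe: the OCIDs are placeholder shell variables, and you should verify the exact flags with oci network remote-peering-connection --help for your CLI version.

# In Sydney: create the RPC on the Sydney DRG (placeholder OCIDs).
oci network remote-peering-connection create \
  --compartment-id "$COMPARTMENT_OCID" \
  --drg-id "$SYD_DRG_OCID" \
  --display-name rpc_to_admin \
  --region ap-sydney-1

# In the Admin region: peer rpc_to_syd with the Sydney RPC created above.
oci network remote-peering-connection connect \
  --remote-peering-connection-id "$ADMIN_RPC_TO_SYD_OCID" \
  --peer-id "$SYD_RPC_OCID" \
  --peer-region-name ap-sydney-1 \
  --region ap-singapore-1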

    At this point, our VCNs are peered but there are three more things we need to do:

1. Configure routing tables so that the Verrazzano managed clusters can communicate with the Admin cluster and vice-versa
    2. Configure NSGs for the control plane CIDRs to accept requests from Admin VCN
    3. Merge the kubeconfigs

Actually, the routing rules have already been configured. “How,” you ask? Well, one of the features we recently added is the ability to configure and update routing tables. If you look in the Admin cluster module in your main.tf, you will find a parameter that is usually an empty list:

    nat_gateway_route_rules = []
    
    

    Instead, in our Admin module definition, we had already changed this to:

nat_gateway_route_rules = [
  {
    destination       = "10.1.0.0/16"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = "drg"
    description       = "To Sydney"
  },
  {
    destination       = "10.2.0.0/16"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = "drg"
    description       = "To Mumbai"
  },
  {
    destination       = "10.3.0.0/16"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = "drg"
    description       = "To Tokyo"
  },
]
    

    Similarly, in the managed cluster definitions, we had also set the routing rules to reach the Admin cluster in Singapore:

nat_gateway_route_rules = [
  {
    destination       = "10.0.0.0/16"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = "drg"
    description       = "To Admin"
  }
]
    

Note that you can also update these later. Let's say you add another managed region in Hyderabad (VCN CIDR: 10.4.0.0/16). In the routing rules for Admin, you will add one more entry to route traffic to Hyderabad:

nat_gateway_route_rules = [
  {
    destination       = "10.4.0.0/16"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = "drg"
    description       = "To Hyderabad"
  }
]
    

    After updating the custom rules, run terraform apply again and the routing rules in the Admin region will be updated.

    Navigate to the Network Visualizer page to check your connectivity and routing rules:

Network connectivity across regions

Next, in each managed region, add an ingress rule to the VCN's control plane NSG accepting TCP requests from source CIDR 10.0.0.0/16 (the Admin VCN) on destination port 6443. This allows the Admin cluster to communicate with each managed cluster's control plane.

Additional ingress security rule in each managed cluster's control plane NSG
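If you would rather script this than click through the console, the rule can be added with the OCI CLI along these lines. This is a sketch based on the standard oci network nsg rules add command; the NSG OCID is a placeholder and the JSON field names should be double-checked against your CLI version.

# Allow the Admin VCN (10.0.0.0/16) to reach the managed cluster's API server on port 6443.
oci network nsg rules add \
  --nsg-id "$CONTROL_PLANE_NSG_OCID" \
  --security-rules '[{
    "direction": "INGRESS",
    "protocol": "6",
    "source": "10.0.0.0/16",
    "sourceType": "CIDR_BLOCK",
    "isStateless": false,
    "tcpOptions": {"destinationPortRange": {"min": 6443, "max": 6443}},
    "description": "Admin cluster to managed control plane"
  }]'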

    Operational Convenience

Finally, for convenience, we want to be able to execute most of our operations from the Admin operator host. We first need to obtain the kubeconfig of each cluster and merge them on the Admin operator host. Today you have to do this step manually, but we will try to improve this in the future:

    1. Navigate to each managed cluster’s page and click on Access cluster.
2. Copy the second command, which allows you to get the kubeconfig for that cluster:
    oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.... --file $HOME/.kube/configsyd --region ap-sydney-1 --token-version 2.0.0  --kube-endpoint PRIVATE_ENDPOINT
    
    
    oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.... --file $HOME/.kube/configmum --region ap-mumbai-1 --token-version 2.0.0  --kube-endpoint PRIVATE_ENDPOINT
    
    oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.... --file $HOME/.kube/configtok --region ap-tokyo-1 --token-version 2.0.0  --kube-endpoint PRIVATE_ENDPOINT
    

Note that you also have to rename the file (via --file) so it won't overwrite the existing kubeconfig for the Admin region. In our example above, that would be configsyd, configmum, and configtok. Run the commands to fetch the managed clusters' respective kubeconfigs. You should end up with four kubeconfigs:

    $ ls -al .kube
    
    total 16  
    drwxrwxr-x. 2 opc opc   71 Nov 10 11:40 .  
    drwx------. 4 opc opc  159 Nov 10 11:15 ..  
    -rw--w----. 1 opc opc 2398 Nov 10 11:15 config  
    -rw-rw-r--. 1 opc opc 2364 Nov 10 11:40 configmum  
    -rw-rw-r--. 1 opc opc 2364 Nov 10 11:40 configsyd  
    -rw-rw-r--. 1 opc opc 2362 Nov 10 11:40 configtok
    

    We can check access to the clusters from the Admin operator host:

cd .kube

for cluster in config configsyd configmum configtok; do
  KUBECONFIG=$cluster kubectl get nodes
done
    

This will return the list of nodes in each cluster:

List of nodes in each cluster

For convenience, we also want to rename each cluster's context so we know which region we are dealing with; in this exercise, we want one context to equate to one Verrazzano cluster. Let's rename the kubeconfig files first (the commands follow the list below):

    • config -> admin
    • configmum -> mumbai
    • configsyd -> sydney
    • configtok -> tokyo
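Assuming the kubeconfigs are sitting in ~/.kube as shown above, the renaming is simply:

cd ~/.kube
mv config admin
mv configsyd sydney
mv configmum mumbai
mv configtok tokyo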

    Let’s rename their respective contexts:

for cluster in admin sydney mumbai tokyo; do
  current=$(KUBECONFIG=$cluster kubectl config current-context)
  KUBECONFIG=$cluster kubectl config rename-context $current $cluster
done
    

    We are now ready to merge:

    KUBECONFIG=./admin:./sydney:./mumbai:./tokyo kubectl config view --flatten > ./config
    
    

    Let’s get a list of the contexts:

    kubectl config get-contexts
    
    

This will return the following:

CURRENT   NAME     CLUSTER               AUTHINFO              NAMESPACE
*         admin    cluster-cillzxw34tq   user-cillzxw34tq
          mumbai   cluster-cuvo2ifxe2a   user-cuvo2ifxe2a
          sydney   cluster-cmgb37morjq   user-cmgb37morjq
          tokyo    cluster-coxskjynjra   user-coxskjynjra
    

This is all rather verbose. Instead, we will use kubectx (I'm a huge fan), which we could also have used to rename the contexts earlier. Install kubectx:

    wget https://github.com/ahmetb/kubectx/releases/download/v0.9.4/kubectx
    
    chmod +x kubectx  
    sudo mv kubectx /usr/local/bin
    

    Now when we run kubectx:

Using kubectx

The current context, i.e. the current Verrazzano cluster, is highlighted in yellow. We can also easily change contexts in order to perform Verrazzano installations and other operations, e.g. switching to the Sydney cluster:

Changing context to Sydney
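For example, switching the current context to the Sydney cluster is just:

kubectx sydney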

This concludes the setup of the OKE clusters, the network connectivity and routing, and a few operational conveniences for running multi-cluster Verrazzano in different regions. With this, I would like to thank my colleague and friend Shaun Levey for his ever-perceptive insights into the intricacies of OCI Networking.