
Learn to build microservices on Oracle Cloud Infrastructure

Setup: Creating your microservices environment

This lab will show you how to set up Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE) for creating and deploying a front-end Helidon application that accesses a back-end Oracle Autonomous Transaction Processing (ATP) database.

Step 1: Launch the Cloud Shell

Cloud Shell is a small virtual machine running a Bash shell, which you access through the OCI Console. Cloud Shell comes with a pre-authenticated CLI set to the OCI Console tenancy home page region, as well as up-to-date tools and utilities.

Log in to the OCI Console and click the Cloud Shell icon in the top-right corner of the Console.
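Once the Cloud Shell opens, you can confirm the pre-authenticated CLI is working with a couple of read-only checks; for example:

# Print the tenancy's Object Storage namespace (also needed later for OCIR)
oci os ns get

# List the regions available to the tenancy
oci iam region list --output table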


Step 2: Download workshop source code

1. To work with the application code, you need to clone a GitHub repository using the following git command. The workshop assumes this is done from your home directory.

cd ~ ; git clone https://github.com/oracle/microservices-datadriven.git --branch 1.0

You should now see microservices-datadriven in your home directory.

2. Change directory into the microservices-datadriven/grabdish directory:

cd microservices-datadriven/grabdish

Step 3: Create an OCI compartment and a Kubernetes Cluster in that compartment

Creating an OCI compartment

1. Open up the main menu in the top-left corner of the Console and select Identity > Compartments.


2. Click Create Compartment, specify the following parameters, and click Create Compartment:

  • Compartment name: msdataworkshop
  • Description: MS workshop compartment


3. Once the compartment is created, click the name of the compartment and then click Copy to copy the OCID.


4. Go back into your cloud shell and verify you are in the ~/microservices-datadriven/grabdish directory.

5. Run ./setCompartmentId.sh <COMPARTMENT_OCID> <REGION_ID>, where your <COMPARTMENT_OCID> and <REGION_ID> values are passed as arguments.

For example:

./setCompartmentId.sh ocid1.compartment.oc1..aaaaaaaaxbvaatfz6yourcomparmentidhere5dnzgcbivfwvsho77myfnqq us-ashburn-1

Step 4: Create Kubernetes Cluster

1) Click Deploy to Oracle Cloud.

If you aren't already signed in, when prompted, enter the tenancy and user credentials.

2) Review and accept the terms and conditions.

3) Select the region where you want to deploy the stack.

Select the msdataworkshop compartment for your stack and Kubernetes cluster.


4) Follow the on-screen prompts and instructions to create the stack. Select an availability domain for the Kubernetes cluster.

Do not create the Autonomous Transaction Processing database during this process.

We will create it a couple of steps later (Step 6). Go with the default cluster name of msdataworkshopcluster.


5) Click Create


6) After creating the stack, click Terraform Actions, and select Plan.


Wait for the job to complete, and review the plan.

To make any changes, return to the Stack Details page, click Edit Stack, and make the required changes. Then, run the Plan action again.

7) If no further changes are necessary, return to the Stack Details page, click Terraform Actions, and select Apply.


8) The Terraform job will take a few minutes to provision the cluster.

9) Navigate to Kubernetes Clusters in the Cloud Console.


10) After the provisioning process of the cluster completes, the Cluster Status for the msdataworkshopcluster should show Active.


11) Click the link for the cluster you've just created to see the detail page.


12) Click the Access Cluster button.


13) Click the link to copy the oci CLI command.


14) Return to the Cloud Shell, then paste and run the command to create the ~/.kube/config file needed to access the Kubernetes cluster.
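The copied command will look roughly like the following; the cluster OCID and region shown here are placeholders, and your values will differ:

# Example shape of the copied command
oci ce cluster create-kubeconfig --cluster-id <CLUSTER_OCID> --file $HOME/.kube/config --region us-ashburn-1 --token-version 2.0.0

# Verify that kubectl can reach the cluster
kubectl get nodes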


Step 5: Create OCI Vault Secrets for the ATP PDB users and FrontEnd microservice authentication

1. Open up the main menu in the top-left corner of the Console and select Security > Vault.


2. Click Create Vault, specify a name and click Create.


Click the link for the vault you just created.

COPY THE OCID FOR THE VAULT AND NOTE IT FOR LATER USE.

3. Click Master Encryption Key, click Create Key, enter a name, and click Create Key.


4. Click Secrets, click Create Secret, enter a name and description, select the encryption key created in the previous step, leave the default Plain-Text Secret Type Template, provide a DB password (in the Secret Contents field) for the database users you will create later, and click Create Secret.


COPY THE OCID OF THIS DB PASSWORD SECRET AND NOTE IT FOR LATER USE.
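If you want to sanity-check the secret from the Cloud Shell, a minimal sketch using the OCID you just noted:

# Retrieve the secret bundle and base64-decode its content
oci secrets secret-bundle get --secret-id <DB_PASSWORD_SECRET_OCID> --query 'data."secret-bundle-content".content' --raw-output | base64 -d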

5. Repeat the process to create a secret for the FrontEnd microservice authentication.


COPY THE OCID OF THIS FRONTEND MICROSERVICE AUTH PASSWORD SECRET AND NOTE IT FOR LATER USE.

6. Open up the main menu in the top-left corner of the Console and select Identity > Dynamic Groups.


7. Click Create Dynamic Group, specify a name, and add the following matching rule, providing your compartment OCID:

All {instance.compartment.id = 'ocid1.compartment.oc1..aaaaaaaaaaaputyourcompartmentidhere'}

and click Create.


8. Open up the main menu in the top-left corner of the Console and select Identity > Policies:


9. Click Create Policy, specify a name, and add the following policy statement, providing your compartment and vault OCIDs:

Allow dynamic-group yourdynamicgroupname to manage secret-family in compartment id ocid1.compartment.oc1..yourcompartmentid where target.vault.id = 'ocid1.vault.oc1.phx.yourvaultid'


and click Create.

Step 6: Create ATP databases

Run the createATPPDBs.sh script, providing the Vault Secret OCIDs (created and noted in Step 5): first the OCID for the DB users, followed by the OCID for the FrontEnd microservice user:

./createATPPDBs.sh <REPLACE WITH VAULT SECRET OCID FOR DB USER> <REPLACE WITH VAULT SECRET OCID FOR FRONTEND USER AUTH>

Notice the creation of the ORDERDB and INVENTORYDB PDBs and the Frontend Auth secret.


The OCIDs for the PDBs are stored and will be used later to create Kubernetes secrets that the microservices will use to access them.

Step 7: Create an OCI Registry and Auth key and login to it from Cloud Shell

You are now going to create an Oracle Cloud Infrastructure Registry and an Auth key. Oracle Cloud Infrastructure Registry is an Oracle-managed registry that enables you to simplify your development-to-production workflow by storing, sharing, and managing development artifacts such as Docker images.

1. Open up the main menu in the top-left corner of the console and go to Developer Services > Container Registry.


2. Take note of the namespace (for example, axkcsk2aiatb shown in the image below). Click Create Repository, specify the following details for your new repository, and click Create Repository.

  • Repository Name: <firstname.lastname>/msdataworkshop
  • Access: Public


Make sure that access is marked as Public.

Go to the Cloud Shell and run ./addOCIRInfo.sh with the namespace and repository name as arguments:

./addOCIRInfo.sh <namespace> <repository_name>

For example: ./addOCIRInfo.sh axkcsk2aiatb msdataworkshop.user1/msdataworkshop

3. You will now create the Auth token by going back to the User Settings page. Click the Profile icon in the top-right corner of the Console and select User Settings.


4. Click on Auth Tokens and select Generate Token.


5. In the description, type msdataworkshoptoken and click Generate Token.


6. Copy the token value.


7. Go to Cloud Shell and run ./dockerLogin.sh <USERNAME> "<AUTH_TOKEN>" where <USERNAME> and "<AUTH_TOKEN>" values are set as arguments.

<USERNAME> - is the username used to log in (typically your email address). If your username is federated from Oracle Identity Cloud Service, you need to add the oracleidentitycloudservice/ prefix to your username, for example oracleidentitycloudservice/firstname.lastname@something.com

"<AUTH_TOKEN>" - paste the generated token value and enclose the value in quotes.

For example: ./dockerLogin.sh user.foo@bar.com "8nO[BKNU5iwasdf2xeefU;yl"

8. Once successfully logged in to the Container Registry, you can list the existing Docker images. Since this is the first time logging in to the Registry, no images will be shown:

docker images

Step 8: Install GraalVM, Jaeger, and Frontend Loadbalancer

Go back into your Cloud Shell and verify you are in the ~/microservices-datadriven/grabdish directory.

Run the installGraalVMJaegerAndFrontendLB.sh script to install GraalVM, Jaeger, and the frontend load balancer.

./installGraalVMJaegerAndFrontendLB.sh

You may now proceed to the next lab.


Lab 1: Deploying and testing an application

This lab will show you how to build images, push them to the Oracle Cloud Infrastructure Registry, and deploy the microservices on your Kubernetes cluster.

These steps need to be executed from the Cloud Shell.

Step 1: Set values for workshop in the environment

1. Go back into your Cloud Shell and verify you are in the ~/microservices-datadriven/grabdish directory.

2. Run ./addAndSourcePropertiesInBashrc.sh to add the lab specific environment variables to the ~/.bashrc file:

./addAndSourcePropertiesInBashrc.sh
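A quick way to confirm the variables were added and sourced; MSDATAWORKSHOP_LOCATION is used throughout the rest of the labs:

# Re-source the file in the current shell and check a key variable
source ~/.bashrc
echo $MSDATAWORKSHOP_LOCATION   # should print the path to microservices-datadriven/grabdish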

Step 2: Build and push the Docker images

Run the build.sh script to build and push the microservices images into the repository:

1. cd $MSDATAWORKSHOP_LOCATION ; ./build.sh


In a few minutes, you should have successfully built and pushed all the images into the OCIR repository.


2. Go to the Console, click the main menu in the top-left corner and open Developer Services > Container Registry.


3. Mark all the images as public (Actions > Change to Public):


Step 3: Build, deploy, and access the FrontEnd UI microservice

1. Run ./setJaegerAddress.sh and verify a successful outcome.

It may be necessary to run this script multiple times if the Jaeger load balancer has not been provisioned yet.

./setJaegerAddress.sh

2. Source the .bashrc file with the following command:

source ~/.bashrc



3. Change directory into the frontend-helidon folder:

cd $MSDATAWORKSHOP_LOCATION/frontend-helidon

4. Run the build script, which will build the frontend-helidon application, store it in a Docker image, and push it to the Oracle Cloud Infrastructure Registry:

./build.sh


After a couple of minutes, the image should have been successfully pushed into the repository.


5. Run the deploy script from the same directory. This will create the deployment and pod for this image in the msdataworkshop namespace of the OKE cluster:

./deploy.sh


6. Once successfully created, check that the frontend pod is running:

kubectl get pods --all-namespaces
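Since the workshop resources are deployed into the msdataworkshop namespace, you can also narrow the listing; a quick sketch:

# List only the workshop pods; the -w variant watches until the frontend pod reports Running
kubectl get pods -n msdataworkshop
kubectl get pods -n msdataworkshop -w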


Alternatively, you can execute the pods shortcut command.


7. Check that the load balancer service is running, and write down the external IP address and port.

kubectl get services --all-namespaces
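If you prefer to script the lookup rather than read it from the table, something like the following works, assuming the load balancer service is named frontend in the msdataworkshop namespace:

# Extract just the external IP of the frontend load balancer (service name is an assumption)
kubectl get service frontend -n msdataworkshop -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

The port is listed in the PORT(S) column of the kubectl get services output.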


Alternatively, you can execute the services shortcut command.



8. You are ready to access the frontend page. Open a new browser tab and enter the external IP and port URL:

https://<EXTERNAL-IP>

Note that, for convenience, a self-signed certificate is used to secure this HTTPS address, so you will likely be prompted by the browser to allow access.

You will then be prompted to authenticate to access the FrontEnd microservice. The user is grabdish and the password is the one created and stored in a vault secret in the "Setup: Creating your microservices environment" lab, Step 5.
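You can also exercise the endpoint from the Cloud Shell; a minimal sketch, where -k accepts the self-signed certificate and <FRONTEND_PASSWORD> is the value you stored in the FrontEnd auth vault secret:

# Basic-auth request against the frontend load balancer (placeholders to fill in)
curl -k -u grabdish:<FRONTEND_PASSWORD> https://<EXTERNAL-IP>/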


You should then see the Front End home page. You've now deployed and accessed your first microservice of the lab!

Note that the links on the Front End will not work yet, as they access microservices that will be created and deployed in subsequent labs.


You may now proceed to the next lab.

Lab 2: Data-centric microservices walkthrough with Helidon MP

Step 1: Download the ATP wallet and upload it to Object Storage

1. Select Autonomous Transaction Processing from the side menu in the OCI Console.


2. Select the correct compartment on the left-hand side (if not already selected) and select the ORDERDB.

Click the DB Connection button.


3. Select Regional Wallet from the drop-down menu and click the Download Wallet button.


4. Provide a password and click the Download button to save the wallet zip file to your computer.


5. Select Object Storage from the side menu in the OCI Console.


6. Select the correct compartment on the left-hand side (if not already selected) and click the Create Bucket button.

Provide a name and click the Create button.


7. Select the bucket you've just created and in the bucket screen click the Upload button under Objects.


8. Provide the wallet zip you saved to your computer earlier and click the Upload button.


9. You should now see the wallet zip object you just uploaded in the list of Objects. Click the "…" menu to the far right of the object and select Create Pre-Authenticated Request.


10. Click the Create Pre-Authenticated Request button (default values are sufficient).


11. Copy the value of the Pre-Authenticated Request URL as it will be used in the next step.


Step 2: Create Secrets to Connect to the ATP Pluggable Databases (PDBs)

You will run a script that downloads the connection information (wallet, tnsnames.ora, etc.) and then creates Kubernetes secrets from that information; the secrets will be used to connect to the ATP instances provisioned earlier.

1. Go to the Cloud Shell and change directory into atp-secrets-setup:

cd $MSDATAWORKSHOP_LOCATION/atp-secrets-setup

2. Run createAll.sh, passing the Pre-Authenticated Request URL, and notice the output as the secrets are created:

./createAll.sh https://objectstorage.us-phoenix-1.oraclecloud.com/REPLACE_WITH_YOUR_PREAUTH_LINK/Wallet_ORDERDB.zip


3. Execute the msdataworkshop shortcut command and notice the secrets for the order and inventory databases and users.


Step 3: Verify and understand ATP connectivity via Helidon microservice deployment in OKE

You will verify the connectivity from the frontend Helidon microservice to the ATP admin microservice connecting to the ATP PDBs.

1. First, let's analyze the Kubernetes deployment YAML file, atpaqadmin-deployment.yaml:

cat $MSDATAWORKSHOP_LOCATION/atpaqadmin/atpaqadmin-deployment.yaml

The volumes are set up and credentials are brought in from each of the bindings (inventory and order). The credential files in the secret are base64-encoded twice, so they need to be decoded before the program can use them, which is what the initContainer takes care of. Once decoded, they are mounted for access from the helidonatp container. The container also has the DB connection information, such as the JDBC URL, DB credentials, and wallet, created in the previous step.
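In shell terms, the decoding the initContainer performs looks roughly like this; the secret name and key here are hypothetical stand-ins for the real binding secret:

# kubectl returns .data values base64-encoded once by Kubernetes convention; the stored
# wallet files are encoded a second time, so two decode passes recover the original file
kubectl get secret <binding-secret-name> -n msdataworkshop -o jsonpath='{.data.tnsnames\.ora}' | base64 -d | base64 -d > tnsnames.ora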

2. Let’s analyze the microprofile-config.properties file.

cat $MSDATAWORKSHOP_LOCATION/atpaqadmin/src/main/resources/META-INF/microprofile-config.properties

This file follows the MicroProfile Config standard. It also contains the definitions of the data sources that will be injected. You will be using the Universal Connection Pool, which takes the JDBC URL and DB credentials to connect and inject the data source. The file has default values, which will be overwritten with the values specific to our Kubernetes deployment.

3. Let’s also look at the microservice source file ATPAQAdminResource.java.

cat $MSDATAWORKSHOP_LOCATION/atpaqadmin/src/main/java/oracle/db/microservices/ATPAQAdminResource.java

Look for the injection portion. @Inject is used with @Named("orderpdb") and @Named("inventorypdb") to inject the two data sources defined in the microprofile-config.properties file.

4. Go into the ATP admin folder:

cd $MSDATAWORKSHOP_LOCATION/atpaqadmin

5. Set up the information necessary for ATP DB links and AQ propagation, and create the atpaqadmin deployment and service using the following command:

./deploy.sh


6. Once successfully deployed, verify the existence of the deployment and service using the following command. You should notice that the atpaqadmin pod is now up and running:

pods


7. Use the frontend LoadBalancer URL http://<external-IP>:8080 to open the frontend webpage. If you need the URL, execute the services shortcut command and note the EXTERNAL-IP of the msdataworkshop/frontend LoadBalancer.


8. Click the Datasources tab and then the Test Data Sources button.


The frontend is calling the atpaqadmin service and has successfully established connections to both databases orderpdb and inventorypdb.

9. Open the frontend microservice home page and click Setup (and Tear Down) Data and Messaging from the Labs pane.

Click the following buttons in order: Create Users, Create Inventory Table, Create Database Links, Setup Tables Queues and Propagation.


Setup Tables Queues and Propagation takes a couple of minutes to complete. While waiting, you can open the Cloud Shell and check the logs until all the messages have been received and confirmed.

10. (Optional) While waiting for Setup Tables Queues and Propagation to complete, open the Cloud Shell and check the logs using the following command:

logpod admin


You will see test messages going in both directions between the two ATP instances across the DB link.

If the process gets stuck, use Ctrl-C to exit.

11. (Optional) If it is necessary to restart, rerun the process, or clean up the database: if Setup Tables Queues and Propagation was executed, you need to run Unschedule Propagation first.

Afterwards, click Delete Users.


The next lab will show you how to deploy and run data-centric microservices highlighting use of different data types, data and transaction patterns, and various Helidon MP features. The lab will then show you metrics, health checks and probes, and tracing that have been enabled via Helidon annotations and configuration.

This lab assumes that you have already deployed the OKE cluster, ATP databases and the microservices from the setup lab and lab 1.

Step 4: Deploy GrabDish store services

1. After you have successfully set up the databases, you can now test the "GrabDish" Food Order application. You will interact with several different data types and observe event-driven communication, saga, event sourcing, and Command Query Responsibility Segregation (CQRS) patterns via the order and inventory services. Go ahead and deploy the related order, inventory, and supplier Helidon services. The Food Order application consists of the tables shown in the following ER diagram:

[ER diagram of the Food Order application tables]

The Food Order application consists of a mock Mobile App (Frontend Helidon microservice) that places and shows orders via REST calls to the order-helidon microservice. Managing inventory is done with calls to the supplier-helidon microservice.
When an order is placed, the order service inserts the order in JSON format and in the same local transaction sends an orderplaced message using AQ JMS. The inventory service dequeues this message, validates and adjusts inventory, and enqueues a message stating the inventory location for the item ordered or an inventorydoesnotexist status if there is insufficient inventory. This dequeue, database operation, and enqueue are done within the same local transaction. Finally, the order service dequeues the inventory status message for the order and returns the resultant order success or failure to the frontend service.

This is shown in the below architecture diagram.

[Architecture diagram of the GrabDish order flow]

2. Open the Cloud Shell and go to the order folder using the following command:

cd $MSDATAWORKSHOP_LOCATION/order-helidon


3. Deploy it.

./deploy.sh


4. Go ahead and execute the same steps for deploying the inventory Helidon service, using the following command:

cd $MSDATAWORKSHOP_LOCATION/inventory-helidon ; ./deploy.sh


Once the image has been deployed in a pod, you should see a confirmation message.


5. Use the same method to deploy the supplier Helidon service, using the following command:

cd $MSDATAWORKSHOP_LOCATION/supplier-helidon-se ; ./deploy.sh


6. You can check that all images have been successfully deployed in pods by executing the following command:

pods


7. The services are ready, and you can proceed to test the application mechanisms.

Step 5: Verify order and inventory activity of GrabDish store

1. Open the frontend microservice home page. If you need the URL, execute the services shortcut command and note the EXTERNAL-IP:PORT of the msdataworkshop/frontend LoadBalancer:

services


2. Click Transactional under Labs.


3. Check the inventory of a given item, such as sushi, by typing sushi in the food field and clicking Get Inventory. You should see an inventory count of 0.


4. (Optional) If for any reason you see a different count, click Remove Inventory to bring the count back to 0.

5. Let’s try to place an order for sushi by clicking Place Order.


6. To check the status of the order, click Show Order. You should see a failed order status.


This is expected, because the inventory count for sushi was 0.

7. Click Add Inventory to add sushi to the inventory. You should see the count increase by 1.


8. Go ahead and place another order by increasing the order ID by 1 (67) and then clicking Place Order. Next, click Show Order to check the order status.


The order should have been successfully placed, as shown by the order status of success.

You have successfully configured the databases with the necessary users, tables and message propagation across the two ATP instances. You may proceed to the next step.

Step 6: Verify metrics

1. Notice the @Timed and @Counted annotations on the placeOrder method of $MSDATAWORKSHOP_LOCATION/order-helidon/src/main/java/io/helidon/data/examples/OrderResource.java.


2. Click Tracing, Metrics, and Health.


3. Click Show Metrics and notice the long list of metrics (including those from the timed and counted placeOrder method) in Prometheus format.
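The same data is served over HTTP by the pod itself; a sketch, assuming the order service listens on port 8080 and the container image includes curl:

# Query the MicroProfile Metrics endpoint exposed by Helidon (port is an assumption)
kubectl exec -n msdataworkshop deploy/order-helidon -- curl -s http://localhost:8080/metrics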


Step 7: Verify Health

1. Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) provides health probes, which check a given container for its liveness (whether the pod is up or down) and readiness (whether the pod is ready to take requests). In this step you will see how the probes pick up the health that the Helidon microservice advertises. Click Tracing, Metrics, and Health and then click Show Health: Liveness.


2. Notice the health check class at $MSDATAWORKSHOP_LOCATION/order-helidon/src/main/java/io/helidon/data/examples/OrderServiceLivenessHealthCheck.java and how the liveness value is calculated.


3. Notice the liveness probe specified in $MSDATAWORKSHOP_LOCATION/order-helidon/order-helidon-deployment.yaml. The livenessProbe can be set up with different criteria, such as reading from a file or an HTTP GET request. In this example, the OKE health probe uses HTTP GET to check the /health/live and /health/ready addresses every 3 seconds to determine the liveness and readiness of the service.
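You can hit the same endpoints the probe uses by hand; a sketch, under the same assumption that the service listens on port 8080 and curl is available in the container:

# The probe's HTTP GET checks, run manually against the order service
kubectl exec -n msdataworkshop deploy/order-helidon -- curl -s http://localhost:8080/health/live
kubectl exec -n msdataworkshop deploy/order-helidon -- curl -s http://localhost:8080/health/ready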


4. In order to observe how OKE manages the pods, the microservice has been created with the ability to set its liveness to "false". Click Get Last Container Start Time and note the time the container started.


5. Click Set Liveness to False. This will cause the Helidon health check to report false for liveness, which will result in OKE restarting the pod/microservice.


Click Get Last Container Start Time. It will take a minute or two for the probe to notice the failed state and conduct the restart; as it does, you may see a connection refused exception.


Eventually you will see the container restart; note the new/later container start time, reflecting that the pod was restarted.


Step 8: Verify tracing

1. Notice the @Traced annotations on the placeOrder method of $MSDATAWORKSHOP_LOCATION/frontend-helidon/src/main/java/io/helidon/data/examples/FrontEndResource.java and the placeOrder method of $MSDATAWORKSHOP_LOCATION/order-helidon/src/main/java/io/helidon/data/examples/OrderResource.java. Also notice the additional calls to set tags, baggage, etc. in the OrderResource.placeOrder method.


2. Place an order if one was not already created successfully in Step 5 of this lab.

3. Click Show Tracing to open the Jaeger UI. If the Jaeger UI doesn't open for some reason, go to the Cloud Shell, run the services shortcut command, grab the EXTERNAL-IP associated with the jaeger-query service, and enter http://<EXTERNAL-IP> in your web browser. Select frontend.msdataworkshop from the Service dropdown menu and click Find Traces.


Select a trace with a large number of spans and drill down into the various spans and their associated information. In this case we see placeOrder order, saga, etc. information in logs, tags, and baggage.

If it has been more than an hour since the trace you are looking for was created, select an appropriate value for Lookback and click Find Traces.


Lab 3

Pre-requisite Step: Terminal setup on your laptop/desktop computer

If you are using a Windows-based laptop/desktop computer, you will need to install Git Bash (a terminal application) and use it to run the commands below.

Instructions for downloading and installing gitbash are available here.

Mac users can use the native Terminal application to run the commands below.

Step 1: Configure OCI-CLI

On your local machine's terminal, make sure the OCI CLI is installed by running:

oci -v

If not, follow the link below to install and set up the OCI CLI:

https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm
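On macOS or Linux, the documented quick install and one-time configuration amount to the following; the installer URL is the one published in that documentation:

# Download and run the OCI CLI installer, then create ~/.oci/config interactively
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
oci setup config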

Step 2: Generate OCIR token

Log in to the OCI Console.

Click your Profile -> User Settings. On the bottom left, click Auth Tokens, then click Generate Token.

Provide a description and then click Generate Token. This will generate a token; make sure to copy it and save it for future steps.

Step 3: Install kubectl and configure kube-config

Install kubectl using the command below:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client

Now, to set up kubeconfig, go to your OCI tenancy. On the left-hand side, click Developer Services and select Container Clusters (OKE).

Click the cluster created by Terraform earlier.

At the top, click Access Kubeconfig and run the commands specified.

Once done, verify you can access the OKE nodes by typing:

kubectl get nodes

You will see the details of the nodes running on the cluster.

Step 4: Push the images to OCIR

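A minimal sketch of tagging and pushing a locally built image to OCIR, assuming the iad.ocir.io region key, the token generated in Step 2, and a hypothetical local image named server:

# Log in to OCIR; the username is <tenancy-namespace>/<username>, and federated
# users include the oracleidentitycloudservice/ prefix
docker login iad.ocir.io -u '<your-tenancy-namespace>/<username>' -p '<ocir-token>'

# Tag the local image with the full OCIR path, then push it
docker tag server:latest iad.ocir.io/<your-tenancy-namespace>/<repo-name>/server:latest
docker push iad.ocir.io/<your-tenancy-namespace>/<repo-name>/server:latest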

Step 5: Update Kubernetes deployment files

Clone the GitHub repo containing the Kubernetes deployment files:

$ git clone https://github.com/KartikShrikantHegde/k8s.git
$ cd k8s

You should see 4 files. Update server-deployment.yaml.

In server-deployment.yaml, go to line 17 and update the image label, where:

<region-prefix-name> - e.g., iad.ocir.io (for the Ashburn region)

<your-tenancy-namespace> - the namespace shown in the tenancy details on your OCI Console

Now, let's create a secret for the cluster:

kubectl create secret docker-registry secret --docker-server=<region-prefix-name> --docker-username='<your-tenancy-namespace>/<username>' --docker-password='<ocir-token>' --docker-email='a@b.com'

<region-prefix-name> - e.g., iad.ocir.io (for the Ashburn region)

<your-tenancy-namespace>/<username> - for users federated through Oracle Identity Cloud Service, use <your-tenancy-namespace>/oracleidentitycloudservice/<username> (look for the namespace in the tenancy details on your OCI Console)

<ocir-token> - the OCIR token we created in Step 2

Finally, run the commands below one after another to apply the configuration to the cluster.

kubectl apply -f redis-deployment.yaml
kubectl apply -f redis-cluster-ip-service.yaml
kubectl apply -f server-deployment.yaml
kubectl apply -f server-lb-service.yaml

Once applied, wait about 5 minutes and then run:

$ kubectl get services

Copy the EXTERNAL-IP for the server-lb-service.

Run the command below, replacing <EXTERNAL-IP>, to issue a PUT request:

curl -H "Content-Type: application/json" -X PUT -d '{"hello":999}' http://<EXTERNAL-IP>:5000/testurl

You should receive an output similar to this:

{ "hello": 999, "last_updated": 1583375217 }

On making a GET request:

curl http://<EXTERNAL_IP>:5000/testurl

You receive a successful response:

{ "hello": "999", "last_updated": 1583375217 }

Latest content

Explore and discover our latest tutorials

Serverless functions

Serverless functions are part of an evolution in cloud computing that has helped free organizations from many of the constraints of managing infrastructure and resources. 

What is a blockchain?

In broad terms, a blockchain is an immutable transaction ledger, maintained within a distributed peer-to-peer (p2p) network of nodes. In essence, blockchains serve as a decentralized way to store information.

OCI CLI

The CLI is a small-footprint tool that you can use on its own or with the Console to complete Oracle Cloud Infrastructure tasks. The CLI provides the same core functionality as the Console, plus additional commands.