by Sebastian Daschner
Setting up a Kubernetes environment can be quite challenging, especially for beginners. Rather than concerning yourself with manually installing Kubernetes on cluster environments, you can go for a managed cloud option. Oracle Cloud Infrastructure provides such a managed Kubernetes offering: Oracle Container Engine for Kubernetes (OKE).
This article shows how to deploy an example Java Platform, Enterprise Edition (Java EE) application in a managed Oracle Cloud Kubernetes cluster.
In order to run enterprise applications in a Kubernetes cluster, they need to be packaged as Docker containers. We will use a Docker base image that already contains the application server, a Java installation, and the required operating system binaries.
The following shows the Dockerfile of our hello-cloud project:
FROM sdaschner/open-liberty:javaee8-jdk8-b2
COPY target/hello-cloud.war $DEPLOYMENT_DIR
We can distribute the created Docker image via the public Docker Hub, another Docker registry cloud service, or a private Docker registry.
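Building and distributing the image follows the usual Docker workflow. A minimal sketch, assuming the registry host docker.example.com used later in the deployment definition (a placeholder) and valid registry credentials:

```shell
# build the image from the project directory that contains the Dockerfile
docker build -t hello-cloud:1 .

# tag the image for the target registry (docker.example.com is a placeholder)
docker tag hello-cloud:1 docker.example.com/hello-cloud:1

# authenticate against the registry and push the image
docker login docker.example.com
docker push docker.example.com/hello-cloud:1
```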
Kubernetes runs Docker containers in the form of pods. A pod contains one or more containers and is usually created and managed by a Kubernetes deployment. A deployment provides the ability to scale and update pods without too much manual effort.
Our example Kubernetes deployment's YAML definition looks as follows:
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: hello-cloud
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-cloud
        version: v1
    spec:
      containers:
      - name: hello-cloud
        image: docker.example.com/hello-cloud:1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            port: 9080
            path: /
        readinessProbe:
          httpGet:
            port: 9080
            path: /hello-cloud/resources/health
      imagePullSecrets:
      - name: regsecret
      restartPolicy: Always
---
The liveness and readiness probe definitions tell Kubernetes when the container is up and running and when it is able to handle incoming traffic, respectively. The deployment will cause one pod to be created on a cluster node with the given specification.
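Before deploying, it can be helpful to check that both probe endpoints respond as expected, for example by running the container locally. A quick sketch, assuming the hello-cloud:1 image built earlier:

```shell
# run the container locally, mapping the Open Liberty HTTP port
docker run -d --name hello-cloud-test -p 9080:9080 hello-cloud:1

# the liveness endpoint should return HTTP 200 once the server is up
curl -i http://localhost:9080/

# the readiness endpoint that Kubernetes will poll
curl -i http://localhost:9080/hello-cloud/resources/health

# clean up the test container
docker rm -f hello-cloud-test
```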
In order to pull the image from our repository (here, docker.example.com), we usually have to provide a Kubernetes secret, which contains the Docker credentials. The secret regsecret was created in the same namespace for this purpose.
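Such a registry secret can be created with kubectl. A sketch with placeholder credentials:

```shell
# create a Docker registry secret named regsecret in the current namespace
# (server, username, password, and email are placeholders)
kubectl create secret docker-registry regsecret \
  --docker-server=docker.example.com \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>
```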
To access the created pod from inside or outside of the cluster, we require a Kubernetes service. The service balances the load to all instances of the running containers:
kind: Service
apiVersion: v1
metadata:
  name: hello-cloud
  labels:
    app: hello-cloud
spec:
  selector:
    app: hello-cloud
  ports:
  - port: 9080
    name: http
---
Kubernetes connects the service to the created pods by their labels and the defined selector. The app selector is a de facto standard for grouping logical applications.
Kubernetes has an internal DNS resolution that enables cluster-internal applications to access our hello-cloud application via hello-cloud:9080. This, by the way, is a big benefit, as it minimizes the URL configuration of applications that run inside the cluster. No matter which cluster or environment runs our workload, the host name hello-cloud will be resolved to the corresponding service.
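We can verify the cluster-internal resolution by issuing a request from a temporary pod. A sketch (the curl image used here is just one possible choice, and the resource path assumes the example application's JAX-RS endpoint):

```shell
# start a throwaway pod and call the service via cluster DNS
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl http://hello-cloud:9080/hello-cloud/resources/hello
```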
To access applications from outside of the cluster as well, we usually use Kubernetes ingress resources. The following creates an NGINX ingress, which automatically routes ingress traffic through the external IP address:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: hello-cloud
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /hello-cloud
        backend:
          serviceName: hello-cloud
          servicePort: 9080
---
In order to run our example application, we need a running Kubernetes cluster with an arbitrary number of nodes. Oracle Container Engine for Kubernetes provides a managed cluster that doesn't require us to set up the Kubernetes resources ourselves.
The documentation describes how to create a cluster with a desired network setup. We will use the recommended default options with two load balancer subnets, three worker subnets, RBAC authorization, and an additional NGINX ingress deployment. For more information, you can also have a look at my GitHub OKE repository.
The following screenshots show the creation of our cluster with a default cluster node pool, which manages the compute instances. We are creating a cluster called oke-cluster-1 with the recommended networking options.
Figure 1. Creating a cluster
The node pool, node-pool-1, is created with the worker subnets and will manage two nodes per subnet in the VM.Standard.1.2 shape. In total, our cluster will contain six nodes in three availability domains.
Figure 2. Node pool configuration
After that, our cluster and its nodes will be created.
Figure 3. The created node pool
The cluster detail page will guide us regarding how to connect to the newly created Kubernetes cluster. We can verify that our nodes have been created by using the kubectl command-line tool:
$> kubectl get nodes
NAME              STATUS    ROLES     AGE       VERSION
220.127.116.11    Ready     node      2d        v1.9.7
18.104.22.168     Ready     node      2d        v1.9.7
22.214.171.124    Ready     node      2d        v1.9.7
126.96.36.199     Ready     node      2d        v1.9.7
188.8.131.52      Ready     node      2d        v1.9.7
184.108.40.206    Ready     node      2d        v1.9.7
The cluster description page shows how to connect our local kubectl with the newly created Oracle Cloud cluster.
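With the OCI command-line interface installed, fetching the kubeconfig typically looks like the following sketch; the exact command is shown on the cluster detail page, and the cluster OCID below is a placeholder:

```shell
# download the kubeconfig for the newly created cluster
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..<unique-id> \
  --file $HOME/.kube/config

# verify connectivity to the cluster
kubectl get nodes
```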
Once we have confirmed that the cluster has been set up successfully, we can start using it by provisioning our workload. To do so, we send our Kubernetes YAML definitions to the cluster. In this example, we packaged the deployment, service, and ingress definitions into a single YAML file:
$> kubectl apply -f deployment/hello-cloud.yaml
service "hello-cloud" created
deployment.apps "hello-cloud" created
ingress.extensions "hello-cloud" created
We now can check that our service, deployment, and pod have been created successfully:
$> kubectl get services
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
hello-cloud   NodePort    10.96.27.211   <none>        9080:32133/TCP   1m
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP          1d

$> kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-cloud   1         1         1            1           1m

$> kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
hello-cloud-d6777c66-n24bw   1/1       Running   0          1m
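If we want to call the application before the ingress is in place, we can temporarily forward a local port to the pod. A sketch, using the pod name from the output above:

```shell
# forward local port 9080 to port 9080 of the hello-cloud pod
kubectl port-forward hello-cloud-d6777c66-n24bw 9080:9080 &

# call the application through the tunnel
curl http://localhost:9080/hello-cloud/resources/hello
```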
The NGINX ingress service is exposed as a load balancer, and we will use its IP address to access the cluster:
$> kubectl get services --namespace ingress-nginx
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
default-http-backend   ClusterIP      10.96.76.149    <none>           80/TCP                       1d
ingress-nginx          LoadBalancer   10.96.191.202   220.127.116.11   80:30979/TCP,443:32339/TCP   1d
Now we'll put it all together and access our hello-cloud example application via HTTPS, for example, by using curl:
$> curl -k https://18.104.22.168/hello-cloud/resources/hello
Hello from OKE!
This uses the NGINX ingress that is accessed by the external IP address and routes traffic to the hello-cloud service and, ultimately, to the container, which runs in the Kubernetes pod.
Sebastian Daschner is a self-employed Java consultant, author, and trainer who is enthusiastic about programming and Java (EE). He is the author of the book Architecting Modern Java EE Applications. Daschner is participating in the JCP—helping to form the future standards of Java EE by serving in the JAX-RS, JSON-P, and Config Expert Groups—and collaborating on various open source projects. For his contributions in the Java community and ecosystem he was recognized as a Java Champion, Oracle Developer Champion, and double 2016 JavaOne Rock Star. Besides Java, he is also a heavy user of Linux and container technologies such as Docker. He evangelizes computer science practices on his blog, through his newsletter, and on Twitter. When not working with Java, he also loves to travel the world—either by plane or motorbike.