Technical Articles

Tech Articles from your friends at Oracle and the developer community.

Deploying a Java EE Application in a Managed Oracle Cloud Kubernetes Environment

Setting up a Kubernetes environment can be quite challenging, especially for beginners. Rather than concerning yourself with manually installing Kubernetes on cluster environments, you can opt for a managed cloud offering. Oracle Cloud Infrastructure provides such a managed Kubernetes service.

This article shows how to deploy an example Java Platform, Enterprise Edition (Java EE) application in a managed Oracle Cloud Kubernetes cluster.

Docker Containers

In order to run enterprise applications in a Kubernetes cluster, they need to be packaged as Docker containers. We will use a Docker base image that already contains the application server, a Java installation, and the required operating system binaries.

The following shows the Dockerfile of our hello-cloud project:

FROM sdaschner/open-liberty:javaee8-jdk8-b2

COPY target/hello-cloud.war $DEPLOYMENT_DIR

We can distribute the created Docker image via the public Docker Hub, another Docker registry cloud service, or a private Docker registry.
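Building and publishing the image could look as follows; `<registry>` is a placeholder for whatever Docker Hub account or registry host we actually use:

```
$> docker build -t <registry>/hello-cloud:1 .
$> docker push <registry>/hello-cloud:1
```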

Kubernetes Deployments

Kubernetes runs Docker containers in the form of pods. A pod contains one or more containers and is usually created and managed by a Kubernetes deployment. A deployment provides the ability to scale and update pods without too much manual effort.

Our example Kubernetes deployment's YAML definition looks as follows:

kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: hello-cloud
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-cloud
        version: v1
    spec:
      containers:
      - name: hello-cloud
        # placeholder; the actual image name depends on the registry used
        image: <registry>/hello-cloud:1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            port: 9080
            path: /
        readinessProbe:
          httpGet:
            port: 9080
            path: /hello-cloud/resources/health
      imagePullSecrets:
      - name: regsecret
      restartPolicy: Always

The liveness and readiness probe definitions tell Kubernetes whether the container is up and running and whether it is able to handle incoming traffic, respectively. The deployment will cause one pod to be created on a cluster node with the given specification.

In order to pull the image from our Docker registry, we usually have to provide a Kubernetes secret, which contains the Docker credentials. The secret regsecret was created in the same namespace for this purpose.
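Such a secret can be created from the registry credentials with kubectl; the server, user name, and password values are placeholders for our actual registry account:

```
$> kubectl create secret docker-registry regsecret \
     --docker-server=<registry> \
     --docker-username=<user> \
     --docker-password=<password>
```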


To access the created pod from inside or outside of the cluster, we require a Kubernetes service. The service balances the load to all instances of the running containers:

kind: Service
apiVersion: v1
metadata:
  name: hello-cloud
  labels:
    app: hello-cloud
spec:
  selector:
    app: hello-cloud
  ports:
    - port: 9080
      name: http

Kubernetes connects the service to the created pods by their labels and the defined selector. The app selector is a de facto standard for grouping logical applications.

Kubernetes has an internal DNS resolution that enables cluster-internal applications to access our hello-cloud application via hello-cloud:9080. This, by the way, is a big benefit, since it minimizes the URL configuration of applications that run inside of the cluster. No matter which cluster or environment runs our workload, the host name hello-cloud will be resolved to the corresponding hello-cloud service.
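An application running in another pod in the same namespace could therefore reach our application like this; the /hello resource path is an assumption about what the JAX-RS application exposes:

```
$> curl http://hello-cloud:9080/hello-cloud/resources/hello
```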


To access applications from outside of the cluster as well, we usually use Kubernetes ingress resources. The following creates an NGINX ingress, which automatically routes ingress traffic through the external IP address:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: hello-cloud
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
        - path: /hello-cloud
          backend:
            serviceName: hello-cloud
            servicePort: 9080

Enter Oracle Container Engine for Kubernetes

In order to run our example application, we need a running Kubernetes cluster with an arbitrary number of nodes. Oracle Container Engine for Kubernetes provides a managed cluster that doesn't require us to set up the Kubernetes resources ourselves.

The documentation describes how to create a cluster with a desired network setup. We will use the recommended default options with two load balancer subnets, three worker subnets, RBAC authorization, and an additional NGINX ingress deployment. For more information, you can also have a look at my GitHub OKE repository.

The following screenshots show the creation of our cluster with a default cluster node pool, which manages the compute instances. We are creating a cluster called oke-cluster-1 with the recommended networking options.

Figure 1. Creating a cluster


The node pool, node-pool-1, is created with the worker subnets and will manage two nodes per subnet in VM.Standard.1.2 shape. In total, our cluster will contain six nodes in three availability domains.

Figure 2. Node pool configuration


After that, our cluster and its nodes will be created.

Figure 3. The created node pool


The cluster detail page will guide us regarding how to connect to the newly created Kubernetes cluster. We can verify that our nodes have been created by using the kubectl command-line tool:

$> kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
          Ready     node      2d        v1.9.7
          Ready     node      2d        v1.9.7
          Ready     node      2d        v1.9.7
          Ready     node      2d        v1.9.7
          Ready     node      2d        v1.9.7
          Ready     node      2d        v1.9.7


Once we have confirmed that the cluster has been set up successfully, we can start using it by provisioning our workload. To do this, we send our Kubernetes YAML definitions to the cluster. In this example, we packaged the deployment, service, and ingress definitions into a single YAML file:

$> kubectl apply -f deployment/hello-cloud.yaml
service "hello-cloud" created
deployment.apps "hello-cloud" created
ingress.extensions "hello-cloud" created
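The combined hello-cloud.yaml file simply concatenates the individual resource definitions shown earlier, separated by YAML document markers:

```yaml
kind: Service
apiVersion: v1
# ... service definition as shown earlier ...
---
kind: Deployment
apiVersion: apps/v1beta1
# ... deployment definition as shown earlier ...
---
kind: Ingress
apiVersion: extensions/v1beta1
# ... ingress definition as shown earlier ...
```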

We can now check that our service, deployment, and pod have been created successfully:

$> kubectl get services
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
hello-cloud   NodePort                 <none>        9080:32133/TCP   1m
kubernetes    ClusterIP                <none>        443/TCP          1d

$> kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-cloud   1         1         1            1           1m

$> kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
hello-cloud-d6777c66-n24bw  1/1     Running   0          1m

The NGINX ingress service is exposed as a load balancer and we will use its IP address to access the cluster:

$> kubectl get services --namespace ingress-nginx
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
default-http-backend   ClusterIP                   <none>        80/TCP                       1d
ingress-nginx          LoadBalancer                              80:30979/TCP,443:32339/TCP   1d

Now we'll put it all together and access our hello-cloud example application via HTTPS, for example by using curl:

$> curl -k
Hello from OKE!

This uses the NGINX ingress that is accessed by the external IP address and routes traffic to the hello-cloud service and ultimately to the container, which runs in the hello-cloud-d6777c66-n24bw pod.

About the Author

Sebastian Daschner is a self-employed Java consultant, author, and trainer who is enthusiastic about programming and Java (EE). He is the author of the book Architecting Modern Java EE Applications. Daschner is participating in the JCP—helping to form the future standards of Java EE by serving in the JAX-RS, JSON-P, and Config Expert Groups—and collaborating on various open source projects. For his contributions to the Java community and ecosystem, he was recognized as a Java Champion, Oracle Developer Champion, and double 2016 JavaOne Rock Star. Besides Java, he is also a heavy user of Linux and container technologies such as Docker. He evangelizes computer science practices on his blog, through his newsletter, and on Twitter. When not working with Java, he also loves to travel the world—either by plane or motorbike.
