by Max Shapiro Published April 30, 2019
Kubernetes and Cloud Foundry are both technologies that allow you to deploy and run applications on the cloud. In this blog, I talk about my experiences in using both technologies on IBM Cloud™ with the same code pattern, “Create a health records system with modern cloud technology and legacy mainframe code.”
In the code pattern, both architectures follow the same high-level structure: a front-end UI, a data service that exposes APIs, and a MongoDB database serving as the data lake.
Since both follow the same architecture model, there is no favorite and both get a point.
The IBM Cloud Kubernetes Service runs applications in containers, which meant I needed to containerize the code pattern; to do this, I used Docker. I had to find compatible containers for the different parts of the code pattern: two Node.js containers, one for the front-end UI and the other for the data service and APIs, plus a MongoDB container for the data lake database. Once all of the containers were configured and running successfully on my local machine, I pushed them to Docker Hub.
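The containerize-and-push workflow can be sketched with the Docker CLI. The image names and build directories below are hypothetical stand-ins, not the actual names from the code pattern:

```shell
# Build the two Node.js images; ./ui and ./api are hypothetical paths
docker build -t mydockeruser/health-records-ui ./ui
docker build -t mydockeruser/health-records-api ./api

# MongoDB can run from the official image as-is
docker pull mongo

# Push the custom images to Docker Hub so the cluster can pull them
docker login
docker push mydockeruser/health-records-ui
docker push mydockeruser/health-records-api
```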
With the containers ready to deploy to the cloud, I then needed to provision a Kubernetes cluster. Deploying this code pattern to IBM Cloud can be done with either a Lite or a Standard cluster. Once the cluster was provisioned, I applied YAML files that configured and deployed each container from Docker Hub. For the Standard cluster, additional YAML files were applied to configure the ingress for the two Node.js containers.
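Assuming the cluster already exists, deployment amounts to pointing kubectl at it and applying the YAML files. The cluster and file names in this sketch are illustrative, not the code pattern's real ones:

```shell
# Point kubectl at the IBM Cloud cluster (name is hypothetical)
ibmcloud ks cluster config --cluster health-records-cluster

# Deploy each container from its Docker Hub image
kubectl apply -f mongo.yaml
kubectl apply -f api.yaml
kubectl apply -f ui.yaml

# Standard cluster only: configure ingress for the two Node.js services
kubectl apply -f ingress.yaml

# Verify that the pods come up
kubectl get pods
```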
Once all of the containers were deployed, I was able to successfully run and interact with the code pattern on IBM Cloud.
Cloud Foundry on IBM Cloud includes an SDK for Node.js for running Node.js applications on Cloud Foundry. For this code pattern, two instances of the SDK needed to be provisioned: one for the front-end UI and the other for the data service and APIs. In addition, I provisioned an instance of Compose for MongoDB for the data lake database.
Once all three instances were running, I configured a manifest YAML file for the two Node.js parts of the code pattern and then pushed the applications to IBM Cloud, deploying the code from my local machine to the cloud.
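A minimal manifest for one of the Node.js apps might look like the sketch below; the app name, memory, and instance count are hypothetical examples, not the code pattern's real values:

```shell
# Write a minimal Cloud Foundry manifest (values are hypothetical)
cat > manifest.yml <<'EOF'
applications:
- name: health-records-ui
  memory: 256M
  instances: 1
EOF

# From the app's directory, push the code to IBM Cloud
ibmcloud cf push
```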
Once the code was deployed, I was able to successfully run and interact with the code pattern on IBM Cloud.
Deploying to Kubernetes used more steps and required me to have a Docker Hub account, but in the end all parts of the code pattern were running in the same cluster on IBM Cloud. This means that all parts of the code pattern can be accessed from the same base URL.
Even though there were fewer steps to deploy to Cloud Foundry, the different parts of the code pattern were deployed separately and therefore had to be accessed from different URLs.
Each Kubernetes container is built from a Docker image, so the correct language and version had to be specified, along with the install and initialization commands needed to build and run the container. By contrast, Cloud Foundry required none of this: the only thing that needed to be known was the language, which was specified when provisioning the application; everything else was detected automatically.
Unlike deploying to Cloud Foundry, deploying to Kubernetes on IBM Cloud has a free option. Using the free option means that you can't configure the ingress and therefore must access the code pattern through the IP address and ports. In addition, it uses HTTP rather than HTTPS.
Due to the simplicity in deploying, I would give the advantage here to Cloud Foundry.
With the code pattern running on Kubernetes on IBM Cloud, I could then focus on making updates. After these updates were running successfully on my local machine, I could deploy the updated code to the cloud. This process works similarly to the initial setup. First, the updated containers need to be pushed to Docker Hub. Next, the YAML files used to deploy the affected containers need to be deleted and reapplied. This means that if I updated code in only one of the Node.js containers, I would only need to redeploy that container.
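For example, if only the API container changed, the update might look like this sketch (the image and file names are hypothetical):

```shell
# Rebuild and push only the updated container
docker build -t mydockeruser/health-records-api ./api
docker push mydockeruser/health-records-api

# Delete and reapply only that container's deployment YAML
kubectl delete -f api.yaml
kubectl apply -f api.yaml
```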
With the code pattern running on Cloud Foundry on IBM Cloud, I could then focus on making updates. After these updates were running successfully on my local machine, I could deploy the updated code to the cloud. This process works similarly to the initial setup: the same manifest YAML file gets pushed to IBM Cloud to redeploy the code from the local machine to the cloud.
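On Cloud Foundry, the whole update collapses to a single command run from the app's directory, since the manifest is re-read on every push:

```shell
# Redeploy the updated code; Cloud Foundry restages the app automatically
ibmcloud cf push
```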
Similar to deploying, updating is much easier on Cloud Foundry than on Kubernetes: it requires only one command, whereas Kubernetes requires deleting and redeploying each container that needs updating. I also noticed that at times a container or service would not delete completely on Kubernetes, and I would have to delete it multiple times.
A key difference between the two is portability and versioning of applications to the cloud. Docker makes it easy to create different versions of a container, and Kubernetes makes it simple to deploy different versions of an application, or even multiple instances of a single version. To deploy to a different cluster, the same YAML files can be reused; if a different version of a container is used, only the image reference in the file needs to be changed to point to its location on Docker Hub. In fact, you don't even need the code on your local machine to deploy; you just need the necessary YAML files. Cloud Foundry, on the other hand, is more complicated. For each additional application you want to deploy, the necessary components need to be provisioned on IBM Cloud (or any other cloud), and the code must be on your local machine. For versioning, one option is to create a GitHub branch for each version and push the code from whichever branch you want to deploy.
Because of the difference in portability and versioning, I give the advantage here to Kubernetes.
Just because an application runs successfully locally does not mean it will also run successfully on the cloud; I learned this when deploying the code pattern. Debugging this code pattern locally, through the browser and through logs in the terminal, is quite similar to debugging Kubernetes on IBM Cloud. Browser debugging works the same way as it does locally, while the logs can be read by running kubectl logs <podname>, where <podname> is the pod hosting the container whose logs you want to read. Unfortunately, debugging sometimes requires pushing multiple updated containers to the cloud, which slows down the process.
The process of debugging on Cloud Foundry is similar to Kubernetes: you can use the browser to debug and also check the logs. The logs can be read by running ibmcloud cf logs <appname>, where <appname> is the name of the application whose logs you want to read.
I found the logs easier to follow on Kubernetes. In Cloud Foundry, the application's console output is wrapped inside CF's own log format, which can be confusing to understand at first.
Sometimes when I was debugging, I needed to redeploy the code pattern multiple times in order to find the bug. While this did slow down the debugging process for both Kubernetes and Cloud Foundry, it was more significant when using Kubernetes.
I prefer debugging on Kubernetes to debugging on Cloud Foundry, so Kubernetes gets the advantage.
Initially when working on this code pattern, I was using a small data set (under 1 MB) to send to the Node.js API responsible for populating the MongoDB database. When I decided to use a larger data set (about 6 MB), I started to run into issues. First, when running locally, I had to increase the body-parser limit in the Node.js application. After I fixed that and had the data successfully populating the database locally, I tried it on Kubernetes. Unfortunately, I got the following error when sending the larger data set:
<head><title>413 Request Entity Too Large</title></head>
<center><h1>413 Request Entity Too Large</h1></center>
I found that I needed to set the ingress.bluemix.net/client-max-body-size annotation in the ingress YAML file associated with the API service.
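The annotation can be added to the ingress YAML file or applied directly with kubectl; the ingress name and size value in this sketch are hypothetical examples:

```shell
# Raise the request-body limit on the ingress that fronts the API service
# (ingress name and size value are hypothetical)
kubectl annotate ingress api-ingress \
  ingress.bluemix.net/client-max-body-size="size=8m" --overwrite
```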
Fortunately, Cloud Foundry has auto-scaling implemented, so sending a larger data set to my API was not a concern and worked without issues.
Cloud Foundry’s auto-scaling feature works great for this code pattern, because scaling does not have to be worried about; it is handled automatically. With Kubernetes, a limit on the size of the data has to be set explicitly.
Cloud Foundry’s auto-scaling feature gives Cloud Foundry the advantage here.
The way I configured this code pattern for Kubernetes meant that all parts of the code pattern were running in the same Kubernetes cluster. This means that all parts of the code pattern can be accessed from the same base URL/IP address. If ingress files are used, you also have the ability to control what gets exposed and how. For example, with this code pattern, I exposed the UI on https://some-url and the APIs on https://api.some-url.
The way I configured this code pattern using Cloud Foundry meant that each part was running separately on the cloud. Unfortunately, with the way this code pattern is structured, it cannot run all together in one application. This means that the code pattern is exposed on different URLs for each part.
As I mentioned in the deploying section, unlike Cloud Foundry, Kubernetes on IBM Cloud has a free option. Using the free option means that you can’t configure the ingress and therefore have to access the code pattern through the IP address and ports, and over HTTP rather than HTTPS. Cloud Foundry automatically uses HTTPS.
Because the parts of the code pattern are scattered on Cloud Foundry on IBM Cloud, I noticed that running this code pattern was slower than running it on Kubernetes.
I am giving the advantage to Kubernetes here due to the speed at which the application was able to run on Kubernetes compared to Cloud Foundry.
The final tally has Kubernetes beating Cloud Foundry 4-3. Such a close score shows that both Kubernetes and Cloud Foundry have advantages over the other. When deciding which to use for your application, it is important to consider which aspects are the priority during the application’s lifecycle. For this application, if I had to choose one, I would go with Kubernetes: with the potential for a large amount of data to be handled by the application, the speed of the system and the user experience should be the priority.
Whether you use Kubernetes or Cloud Foundry ultimately comes down to your own preferences and priorities. If you’re still unsure, or want to see what other code patterns and tutorials we have, check out Cloud Foundry on IBM Developer or Kubernetes on IBM Developer. And again, check out the code pattern that inspired this blog post.