My first DevOps job interview: Part 1 of 3
This article is part 1 of a 3-part series about my job interview experience as a DevOps Engineer.
Introduction
In mid-March I had my first interview for a DevOps Engineer job for the time after my studies. During the interview process I was given a task to demonstrate my Kubernetes skills: create a Docker image for a NodeJS app and then deploy it to a Kubernetes cluster. I had about 8 hours to complete it. Below, I will go into detail about each of the subtasks I was given. I will not reproduce them word for word, but paraphrase what had to be done. Nevertheless, this article should give a feeling for how a practical task in a DevOps job interview is structured.
For information: I used a MacBook Pro with an M1 chip for the following tasks; why I mention this will become clear in a moment. The NodeJS app I was supposed to deploy was built on puppeteer, a library for controlling a headless Chrome or Chromium browser. To create the Dockerfile I followed the puppeteer documentation, which shows that Chrome can be installed directly via the apt package manager from the repository at http://dl.google.com/linux/chrome/deb/. After I found out that Google does not currently provide an arm64 build of Chrome, I switched to an amd64 Docker image of Node. However, this did not lead to the desired goal either. Finally, I downloaded and installed Chrome directly via wget. This led to a successfully built container image, but shortly after starting, the container threw Chrome errors and was not usable. After 2 1/2 hours I contacted the company to ask whether it was okay to use a simple NodeJS app instead, so I could at least solve the remaining tasks (there were still 5 of them ahead of me).
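For reference, the apt-based Chrome installation from the puppeteer documentation looks roughly like the snippet below. This is reconstructed from memory, so treat the exact packages and flags as an approximation rather than the original snippet from the docs:
FROM node:14-slim
# Add Google's signing key and the Chrome apt repository, then install Chrome.
# The repository only ships amd64 packages, which is exactly what caused the
# trouble on the M1 (arm64) machine described above.
RUN apt-get update \
    && apt-get install -y wget gnupg \
    && wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list \
    && apt-get update \
    && apt-get install -y google-chrome-stable --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*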
At the end of each of the three articles there is a link to the GitHub repository with all the files needed to deploy the following app to Kubernetes.
The rather unspectacular NodeJS app looks like this:
const express = require("express");
// Constants
const PORT = 3000;
const HOST = "0.0.0.0";
// App
const app = express();
app.get("/", (req, res) => {
res.send("Hello World");
});
app.get("/health", (req, res) => {
res.sendStatus(200);
});
app.listen(PORT, HOST, () => {
console.log(`Running on http://${HOST}:${PORT}`);
});
The Dockerfile
There was still a requirement to create a Dockerfile for the NodeJS app. The following Dockerfile is not exactly the one I used during the task; I extended it with some best practices afterwards. It is important to make sure that the user running the program in the container is not root. For this, the official node images provide the node user, which can be set by adding the line USER node just before the app is launched. Another security measure for the container, but also for the users of the image, is to pin a specific node image. This way you can ensure at any time that the dependencies you get are the ones you tested with. This is done by using the SHA digest of the respective image, e.g. FROM node:14@sha256:00e90d6cbb499653cd2c74a3770f4fa5982699145b113e422bdffe31a7905117 for an arm64 build of Node in version 14. The same applies to the apt-get update command before installing new dependencies in a container: since the package sources can change, it is otherwise not possible to guarantee that there are no problems when building new containers, so it is better to test directly against a fresh build of the base image. More best practices for Docker images can be found in Docker Security Best Practices from the Dockerfile or the Docker Security - OWASP Cheat Sheet Series.
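Since the complete file is in the GitHub repository linked at the end, here is only a minimal sketch that applies the points above to the example app. The working directory, the entry file name index.js and the use of npm ci are assumptions for illustration, not necessarily what the final repository uses:
# Pin the base image to a specific digest so every build uses the exact image
# that was tested (the digest below is the arm64 build of Node 14 mentioned above).
FROM node:14@sha256:00e90d6cbb499653cd2c74a3770f4fa5982699145b113e422bdffe31a7905117

WORKDIR /usr/src/app

# Install dependencies first to make better use of the Docker build cache.
COPY package*.json ./
RUN npm ci --only=production

# Copy the application code (index.js is assumed as the entry file).
COPY . .

# Do not run the app as root; the official node images ship a "node" user.
USER node

EXPOSE 3000
CMD ["node", "index.js"]
Building this image locally and running it with the container port 3000 published should then return the "Hello World" response from the app above.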
The next part will be about setting up a Kubernetes cluster and deploying the NodeJS app to it.
Thank you for reading,
Niklas
The code from this post can also be found on GitHub: niklasmtj/kubernetes-exercise. Additionally, I created an arm64 as well as an amd64 Docker image for niklasmtj/exercise-app:v1, so the example app should be usable on other devices as well.
The series: