Technical interviews are always daunting, which is exactly why preparing for them is so important. In this article, we’ll tackle the most common questions you may be asked during a DevOps interview. First, let’s define DevOps and outline its main benefits before moving on.
DevOps is a set of practices that combines software development and IT operations. Its main goals are to shorten the software development lifecycle, enable continuous delivery, and sustain high software quality.
By adopting DevOps, an organization becomes more performance-oriented (teams are more productive) and more collaborative (development and operations communicate better with each other). Code is released to production more often thanks to increased velocity, and the overall software development cycle becomes shorter. Because DevOps means planning for failure (such as bugs or a broken deployment that must be rolled back), new features can be released with little or no downtime. Defects that do slip through are detected earlier thanks to iterative testing before each release to production. With that in mind, let’s look at the questions you may be asked when interviewing for a DevOps Engineer position.
What AWS services would you use for DevOps? How do they work and what main features of those services can you name?
AWS provides many tools for deploying and managing apps in its cloud. For DevOps, two of the most popular services are CloudFormation and OpsWorks.
CloudFormation is used to create and deploy AWS resources from templates, in which you can describe dependencies and pass in parameters. A template is a text file that describes a stack (a collection of AWS resources deployed together as a group). CloudFormation reads the template and provisions the resources in the AWS cloud.
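As a minimal sketch of what such a template looks like (the bucket and parameter names here are hypothetical), a YAML template declaring a single S3 bucket might be:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Example stack with a single S3 bucket
Parameters:
  BucketNameParam:          # a parameter passed in at stack-creation time
    Type: String
    Description: Name of the bucket to create
Resources:
  ExampleBucket:            # one resource in the stack
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketNameParam
Outputs:
  BucketArn:                # exposed after the stack is created
    Value: !GetAtt ExampleBucket.Arn
```

Deploying this template (for example, via the CloudFormation console or CLI) creates the whole stack as one unit, and deleting the stack removes all of its resources together.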
OpsWorks is used for configuration management based on the Chef framework. With OpsWorks, you can automate server configuration, deployment, and management of EC2 instances in AWS as well as on-premises servers. The main features of OpsWorks Stacks are server support, scalable automation, dashboards, app support, and configuration as code. Let’s walk through these features below.
Server support means that with AWS OpsWorks Stacks, you can automate operational tasks on any server, whether in AWS or in your own data center.
With AWS OpsWorks Stacks, you get scalable automation: each new instance can read its configuration from OpsWorks and respond to system events the same way other instances do.
You can use dashboards in OpsWorks to see the status of all your stacks in AWS.
In OpsWorks, you can define configurations (such as app source code) as code and replicate the same configuration across multiple servers and environments.
In addition, OpsWorks supports virtually any kind of app, so it’s universal by nature.
What is Continuous Integration? What are the Continuous Integration benefits and best practices?
Continuous Integration and Continuous Delivery are two different DevOps concepts that are complementary.
Continuous Integration is the DevOps approach where developers merge their work into the main branch several times a day.
Continuous Delivery is the DevOps approach where software is built and released in short cycles, so that it can be deployed to production at any time.
Speaking of the best practices employed by the Continuous Integration approach, there are a few:
- Build automation
- Main code repo: a main branch in the code repository that holds all production-ready code and can be deployed to production at any time.
- Self-testing build: every build should test itself to ensure high quality.
- Daily commits to the baseline: developers commit changes to the baseline every day, so that no code waits long to be integrated with the main repo.
- Build every commit to the baseline: every time a commit is made to the baseline, a build is triggered to verify the changes integrate correctly.
- Fast build process
- Production-like environment testing: test in a pre-production/staging environment that is as close to the production environment as possible.
- Publish build results: publish build results to a common site so everyone can see them.
- Deployment automation: deployment is automated; the build process can even include a step that deploys the code to a test environment.
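Several of these practices can be sketched in a declarative Jenkinsfile; the stage names and shell scripts below are placeholders for a real project’s steps:

```groovy
// Hypothetical Jenkinsfile: triggered on every commit to the baseline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }          // build automation
        }
        stage('Self-test') {
            steps { sh './run-tests.sh' }      // self-testing build
        }
        stage('Deploy to staging') {
            steps { sh './deploy.sh staging' } // deployment automation
        }
    }
    post {
        always { junit 'reports/**/*.xml' }    // publish build results
    }
}
```

With the pipeline checked into the repo, every commit to the main branch produces a tested, staged build automatically.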
These are a few benefits that come with adopting the CI approach:
- The current build is continuously available for testing, demonstration, and release purposes
- Developers write modular code that works well with frequent code check-ins
- Easy to revert to a bug-free state of the code
- Drastic reduction in confusion on release day
- Integration issues are detected much earlier
- Automated testing
- All parties involved (including stakeholders) can see changes deployed to the pre-production environment
How to make Jenkins secure and why does it matter?
Jenkins is secured by setting up user authentication and authorization:
- Set up the Security Realm
- Integrate Jenkins with an LDAP server to authenticate users
- Set up authorization for users
In Jenkins, you can set up security with the following options:
- Using Jenkins’ own user database
- Using the LDAP plugin to integrate Jenkins with an LDAP server
- Setting up matrix-based security in Jenkins
Securing Jenkins is essential so that only users with the appropriate permissions can access it. This protects Jenkins and its users from unauthorized access and indirect attacks.
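As a sketch of the first and third options, the following Groovy script (run from the Jenkins script console or an `init.groovy.d` startup script) enables the built-in user database and matrix-based authorization; the user name and password are placeholders:

```groovy
import jenkins.model.Jenkins
import hudson.security.HudsonPrivateSecurityRealm
import hudson.security.GlobalMatrixAuthorizationStrategy

def jenkins = Jenkins.get()

// Security Realm: Jenkins' own user database (no LDAP in this sketch)
def realm = new HudsonPrivateSecurityRealm(false)   // false = no self sign-up
realm.createAccount('admin', 'change-me')
jenkins.setSecurityRealm(realm)

// Matrix-based authorization: grant full control to "admin" only
def strategy = new GlobalMatrixAuthorizationStrategy()
strategy.add(Jenkins.ADMINISTER, 'admin')
jenkins.setAuthorizationStrategy(strategy)

jenkins.save()
```

For LDAP-backed authentication you would instead configure an LDAP security realm (via the LDAP plugin) while keeping the same authorization strategy.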
What is Puppet? What are the Puppet architecture and its main use cases?
Puppet Enterprise is a software platform (for Windows and Unix-like systems) used to automate infrastructure operations. System configuration is described in Puppet’s own language or a Ruby DSL and distributed to target systems over REST API calls. Puppet Enterprise adds features that are not present in the free, open-source version of the software.
Puppet is based on a client-server architecture, where the client is called the Agent and the server is called the Master.
Puppet has the following architectural components:
- Configuration language: Puppet has its own language (written in manifest files) for configuring resources. Each resource is declared with three elements: a type, a title, and a list of attributes.
- Resource abstraction: you can configure resources in the same way across different platforms. Puppet uses Facter to pass information about the environment (IP address, OS, hostname, etc.) to the Puppet server.
- Transaction: the Agent sends facts (gathered by Facter) to the Master. The Master compiles and sends back a catalog to the Agent. The Agent applies any configuration changes to the system and reports the result back to the server.
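The type/title/attributes structure looks like this in a manifest; the package, file, and service names below assume a Debian-style system and are illustrative only:

```puppet
# Each block is a resource: type { 'title': attribute => value, ... }
package { 'nginx':
  ensure => installed,
}

file { '/etc/nginx/nginx.conf':
  ensure  => file,
  source  => 'puppet:///modules/nginx/nginx.conf',
  require => Package['nginx'],      # dependency: install the package first
}

service { 'nginx':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/nginx/nginx.conf'],  # restart on config change
}
```

Because the manifest declares desired state rather than steps, the Agent converges each node toward this configuration on every run.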
You can use Puppet for node and code management, reporting and visualization, provisioning automation (automating deployment and the creation of new servers and resources), orchestration (for large clusters of nodes), and configuration automation.
What’s the architecture of Kubernetes?
The architecture of Kubernetes consists of a Master node (the control plane) and Worker nodes.
The Master node is responsible for managing the cluster by performing the following functions: scheduling apps, maintaining the desired state of apps, scaling apps, and applying updates to apps.
Nodes are responsible for running apps. A node can be a virtual machine or a physical computer in the cluster. The kubelet agent runs on each node; it manages the node and communicates with the Master via the Kubernetes API. When you deploy an app on Kubernetes, you ask the Master to start app containers on the nodes.
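Maintaining the desired state is typically expressed declaratively; a minimal Deployment manifest (the app name, image, and replica count below are examples) might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                  # desired state: three running Pods
  selector:
    matchLabels:
      app: example-app
  template:                    # Pod template the nodes will run
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` asks the Master to schedule three Pods across the nodes and to keep that count even if a node or Pod fails.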
How to perform Test Automation in DevOps?
To perform test automation, first develop a test strategy and test cases. Once these are ready, the test suite is plugged into every build run, so that unit, integration, and functional tests execute against each build. A tool like Jenkins can automate this.
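As a minimal sketch of the kind of unit test a CI build would run on every commit, here is a hypothetical function and its tests using Python’s built-in unittest module (the function and its rules are invented for illustration):

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: reduce price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        # 25% off 100.0 should be 75.0
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        # Percentages over 100 are rejected
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

A CI step such as `python -m unittest discover` would run this suite on every build, so a failing test blocks the change before it reaches production.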
What’s the difference between a Container and a Virtual Machine?
A VM provides a full OS to run an app in a virtualized environment. A container uses the APIs of the host OS to provide a runtime environment to an app. A container is very lightweight but less secure than a VM.
What is Serverless Architecture?
Serverless architecture (also known as serverless computing, or FaaS, Function as a Service) is a software design pattern in which apps are hosted by a third-party service, eliminating the need for developers to manage the underlying servers and infrastructure.
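In the FaaS model, the developer supplies only a function and the provider invokes it per request. A sketch of an AWS Lambda-style handler in Python (the event shape and response format here follow the common API-proxy convention, but the function itself is hypothetical):

```python
import json


def handler(event, context):
    """Invoked by the platform per request; no server process to manage."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


# Local invocation for illustration (the platform normally supplies context)
print(handler({"name": "DevOps"}, None))
```

Scaling, patching, and capacity planning are handled by the provider; the developer pays only for actual invocations.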
What is a Docker Container? Can you lose data when a Docker Container exits?
A Docker Container packages an app together with its dependencies so that it can run independently on a Linux host or a virtual machine. Containers are very lightweight, so many of them can run simultaneously on a single server or VM. With a Docker Container, you can create an isolated system with restricted services and processes: a container has a private view of the OS, with its own process ID space, network interface, and file system. An app running in a container writes to the container’s file system, and that data persists for as long as the container exists, including across restarts. The data is lost only when you delete the container, unless you store it in a volume.
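To illustrate, this CLI sketch shows how a named volume survives container deletion (the volume, container, and image names are placeholders):

```shell
# Create a named volume; data written to it lives outside any container
docker volume create appdata

# Run a container with the volume mounted at /var/lib/app
docker run -d --name web -v appdata:/var/lib/app example/app:1.0

# Delete the container: its writable layer is gone...
docker rm -f web

# ...but a new container mounting the same volume sees the old data
docker run -d --name web2 -v appdata:/var/lib/app example/app:1.0
```

Data written anywhere else in the container’s file system disappears with `docker rm`, which is why stateful services should always use volumes or bind mounts.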
What is Ansible? When will you use Ansible?
Ansible is an excellent tool for automating large-scale, complex deployments. You use Ansible to deploy apps reliably, automate configuration management across environments, implement complex security and compliance policies, provision new systems and resources, and orchestrate complex deployments quickly and sensibly.
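Ansible work is described in playbooks; a minimal sketch (the `webservers` host group and the nginx package are examples, assuming a host inventory is already defined) looks like this:

```yaml
# playbook.yml - run with: ansible-playbook -i inventory playbook.yml
- name: Configure web servers
  hosts: webservers
  become: true                      # escalate privileges for package/service tasks
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because tasks are idempotent, re-running the playbook only changes hosts that have drifted from the declared state.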
This post is one in a series of Interview Questions for developers and software engineers. You may view other questions with answers here [link to the section on site with interview questions]