The DevOps mindset calls for efficient, automated workflows in almost every imaginable part of the software development lifecycle. These workflows are powered by tools that eliminate manual labor and take software quickly from development to production, implementing a “write once – run everywhere” paradigm. Assembling a DevOps toolset can be a challenge, as there is a huge selection of tools to choose from.
In this article we’ll share considerations for selecting the right tools, and provide an overview of 15 common tools, along with the reasons we use them today at StackPulse.
What is a DevOps Toolset?
A DevOps toolset helps teams simplify and automate processes like testing, deploying software components, managing infrastructure as code (IaC), monitoring production environments, and scaling services on demand. This automation improves the consistency, reliability, and efficiency of these processes. All DevOps tools should also provide visibility into the workflows they enable.
DevOps tools are designed to help teams:
- Perform and evaluate development processes
- Reduce or eliminate repetitive manual tasks
- Improve communication and collaboration
- Easily implement, verify, and update workflows
- Reduce or eliminate misconfigurations and human errors
Choosing The Right DevOps Tools
With so many DevOps tools available, it can be difficult to know which are right for you. The following tips will help you narrow down your list and ensure that you select the tools that can benefit you most.
Focus on your entire stack
Selecting tools tailored to specific technologies in your stack leads to tool overload and requires substantial work to maintain. Instead, select tools that can apply to most – if not all – of your environment.
For example, rather than choosing a tool that monitors a specific technological solution, look for one that covers many of your resources. This helps reduce your overall number of tools and makes management easier since teams need to become familiar with fewer interfaces or configurations. Tools that are customizable or extendable usually serve as a better “investment” as they can expand to meet your future needs.
Another reason not to select tools for specific tasks is to avoid redundancy. Having tools with overlapping capabilities can be a waste of resources and stands in the way of optimization, reuse and knowledge sharing. It is recommended to look at optimizing the pipeline “assets” as a whole, rather than choosing a marginally better tool for a specific task.
For example, if you adopt a build tool that is also capable of deployment processes, try to use it for those tasks as well. This isn’t always possible; sometimes tools are very good at only some of their capabilities. In these cases, try to keep your redundancy to a minimum and disable features that you are not using. This prevents team members from using similar features on multiple tools.
Choose simple licenses and pricing models
The simpler and more uniform your tool licensing is, the easier it is to operate your tool chain in compliance. Try to select tools with clear and similar licensing models if possible. This limits the number of customizations you need to make and the number of restrictions you must be mindful of.
Favor tools with APIs
APIs make it easier for teams to integrate, customize, and automate tooling. With API integrations, you can more easily perform unified monitoring, trigger events in other tools, and retrieve information from or run processes in your tools.
APIs are also valuable for ensuring that tools remain relevant for a longer period and are not made redundant by rapidly changing technologies. Then, when it is time to swap out a tool, you can easily do so since it is connected only through its API.
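As a minimal sketch of what scripting against tool APIs looks like, the snippet below composes a JSON payload that one tool could POST to another tool’s webhook to trigger an event. The endpoint concept and all field names here are hypothetical, not any specific product’s API:

```python
import json

# Hypothetical example: build a webhook payload that a monitoring tool
# could POST to an incident-management tool's API. The "event", "service",
# and "severity" fields are illustrative, not a real product's schema.
def build_alert_payload(service: str, severity: str, message: str) -> str:
    payload = {
        "event": "alert",
        "service": service,
        "severity": severity,
        "message": message,
    }
    return json.dumps(payload)

body = build_alert_payload("checkout", "critical", "error rate above 5%")
# An HTTP client (urllib.request, requests, etc.) would POST `body`
# to the receiving tool's webhook URL.
```

Because the integration point is just an HTTP API, the same script keeps working even if you later swap one of the tools behind it.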
Favor tools that store configurations in version control, supporting a GitOps lifecycle
Version control is essential to ensuring that DevOps implementations are consistent and easily recoverable. Part of this control includes tool configuration. Tools that enable you to easily expose and export configuration settings are easier to manage and automate. These tools also enable greater flexibility in terms of testing new configurations since you can easily roll back to previous versions.
Adopt tools at an adaptable pace
Adopting tools too quickly, or adopting highly complex tools can end up interfering with your productivity and harming your operations. It’s important that developers and DevOps teams understand all tools in their pipelines. However, rapid change or overly complex tooling makes this understanding difficult or impossible.
When adopting tools, a phased rollout enables team members to more easily adapt to new interfaces and workflows. Selecting tools with multiple interface options (API, CLI, GUI) makes onboarding and training easier as well. Additionally, plan time for members to experiment with tools. This can help them adapt more quickly and provide opportunities for optimization that can make up for any lost productivity later on.
Our DevOps Tools List: Top 15 DevOps Tools
Now that you understand how to begin filtering out tools, you can begin looking at your options. Below are 15 of the tools we use, have used, or have evaluated here at StackPulse.
Docker
Docker is an open source containerization platform that you can use to create, deploy, and run apps in containers. The main components of Docker are Docker Hub, Docker Engine, and Docker Desktop.
Docker Engine is the container runtime, available for Linux and Windows, and includes an integrated CLI. Docker Hub is a repository of container images that you can use to store your own images or access images authored by others. Docker Desktop bundles developer productivity tools with a local Docker / Kubernetes environment for developing and testing deployments.
We use Docker mainly in our development environments and in our CI/CD pipelines.
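As a minimal illustration, a Dockerfile for a hypothetical Python service might look like this (the base image tag, file names, and entry point are placeholders):

```dockerfile
# Build a small image for a hypothetical Python service
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define the startup command
COPY . .
CMD ["python", "app.py"]
```

Building with `docker build -t my-service .` and running with `docker run my-service` produces the same environment on a laptop and in a CI/CD pipeline.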
Kubernetes
Kubernetes is an open source container orchestration platform. You can use it to automate and manage the deployment of flexible cloud native applications at scale. Kubernetes enables you to operate self-healing clusters of containers in any environment.
We use Kubernetes in our testing and production environments as our main orchestrator. This ensures high availability of services and enables us to roll out deployments smoothly.
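For a sense of what this looks like in practice, here is a minimal Kubernetes Deployment manifest; the service name, image reference, and port are placeholders:

```yaml
# Minimal Deployment sketch: Kubernetes keeps three replicas of the
# container running, restarting or rescheduling them as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.0
          ports:
            - containerPort: 8080
```

Applying a change to this file (e.g. a new image tag) triggers a rolling update, which is what makes smooth deployments possible.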
Ansible
Ansible is an open source configuration management and automation platform that you can use for development, testing, and deployment. It enables you to manage configurations, provision resources, and roll out updates with declarative human-readable scripts.
In the past we used Ansible for complex repetitive configuration management tasks. As we moved to immutable infrastructure paradigms, the use of Ansible in our environments has been deprecated in favor of tools like HashiCorp Packer and Terraform.
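The declarative, human-readable style mentioned above looks like this in a minimal Ansible playbook (the `web` host group and the choice of nginx are illustrative):

```yaml
# Minimal playbook sketch: install and start nginx on hosts in the "web"
# group. Tasks are declarative: they describe the desired state, and
# Ansible makes changes only where the state does not already match.
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Running the same playbook twice is safe: tasks are idempotent, so a second run reports no changes.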
Git
Git is an open source distributed version control system that you can use to manage and collaborate on source code. It includes features for local branching, staging areas, and multiple workflows. With Git, you can maintain a shared codebase and enable teams to develop simultaneously with minimal conflicts.
A new approach called GitOps leverages Git repositories in a wider context, as a single “source of truth” for all organizational configurations and operational data. We are heavy adopters of GitOps ourselves – which might be expected, given that StackPulse helps teams build and deploy incident response and other operational patterns as part of a GitOps workflow.
GitHub
GitHub is a platform that can be considered an industry standard for version control hosting. At its core, it is a source code and package repository that includes features for CI/CD, code review, project management, team management, and security. GitHub includes a desktop GUI and integrates with a wide range of tools and platforms.
Like Bitbucket, GitHub has both cloud and enterprise deployment options. Historically, GitHub has been widely popular with the open source community, serving as the source code repository for many of the world’s open source projects.
In recent years, GitHub has been extended to host more than just source repositories. For example, it has become an increasingly popular way of hosting static websites.
GitLab
GitLab is a CI/CD platform that includes features for source code management, security, and automation. It includes ready-to-use pipelines and monitoring tools. GitLab can be used for repository management, code review, wiki creation, activity feeds, Agile project planning, and issue tracking.
While most of our team has been using GitHub for a long time, we hold GitLab in high regard and are very happy to see this offering develop into a complete platform.
Bitbucket
Bitbucket is a source code management platform that leverages Git for code management and includes features for continuous integration/continuous deployment (CI/CD), security, and code review. You can integrate it with Jira, Trello, Bamboo, and a variety of other applications.
Bitbucket can be consumed as a service (Bitbucket Cloud) or deployed on your own server.
Prometheus
Prometheus is an open source DevOps monitoring tool based on a time-series database. You can use it to collect, analyze, alert on, and visualize metric-based data for your resources and deployments. It includes a range of client libraries for customization and integrates with most common tools. Prometheus is also natively supported by Kubernetes. Learn more in our articles about Prometheus monitoring and Prometheus AlertManager.
For large-scale Prometheus environments, Thanos is the right project to learn and evaluate.
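Under the hood, Prometheus scrapes a plain-text `/metrics` endpoint in its exposition format. The sketch below renders a counter in that format by hand; the metric and label names are illustrative, and in real code you would use an official client library instead:

```python
# Sketch: render a counter in the Prometheus text exposition format,
# the plain-text format Prometheus scrapes from /metrics endpoints.
# Metric and label names here are illustrative.
def render_counter(name: str, labels: dict, value) -> str:
    # Labels are rendered as key="value" pairs inside curly braces
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return (
        f"# HELP {name} Total HTTP requests handled.\n"
        f"# TYPE {name} counter\n"
        f"{name}{{{label_str}}} {value}\n"
    )

metrics = render_counter("http_requests_total", {"method": "get", "code": "200"}, 1027)
print(metrics)
```

In practice, client libraries such as `prometheus_client` for Python generate this output for you and keep counters consistent across scrapes.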
ELK
ELK is a stack of open source products (Elasticsearch, Logstash, and Kibana, often extended with Beats) that you can use to store, query, and visualize document data. It enables you to aggregate data from a range of sources, search and analyze it, and report on your results in real time. With ELK you can monitor the performance and security of your deployments.
Maintaining Elasticsearch clusters at scale can be quite a complicated task, requiring a lot of knowledge and effort. Many vendors offer hosted and managed ELK, usually paired with various analytical plugins, to ease adoption.
Elasticsearch and Kibana are now the de facto standard user experience for storing, accessing, and analyzing document-based data. The platform is so popular that many cloud providers offer managed services powered by ELK.
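Searching that document data happens through Elasticsearch’s JSON query DSL. The sketch below builds a query for recent log lines containing a keyword; the index pattern and field names are illustrative:

```python
import json

# Sketch: an Elasticsearch query in the JSON query DSL, built as a
# Python dict. The "message" and "@timestamp" fields are illustrative
# (they match common Logstash/Beats conventions, but depend on your data).
query = {
    "query": {
        "bool": {
            # Match log lines mentioning "timeout"...
            "must": [{"match": {"message": "timeout"}}],
            # ...restricted to the last 15 minutes
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    "size": 20,
}
request_body = json.dumps(query)
# An HTTP client would POST `request_body` to /logs-*/_search on the cluster.
```

Kibana issues queries like this behind the scenes when you search or build a visualization.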
Grafana
Grafana is an open source analytics platform you can use to monitor your infrastructure and applications. It enables you to visualize and alert on time-series data from a single dashboard. You can integrate it with numerous data sources and customize reporting with community-supported plugins.
Grafana is the de facto open source standard for analyzing time-series data.
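Grafana also fits the GitOps approach discussed earlier: data sources can be provisioned from version-controlled files rather than clicked together in the UI. This sketch assumes a Prometheus server at a placeholder URL:

```yaml
# Grafana data source provisioning file
# (e.g. provisioning/datasources/prometheus.yaml)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.example.internal:9090
    isDefault: true
```

Keeping files like this in Git means a fresh Grafana instance comes up with the same data sources every time.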
Selenium
Selenium is an open source browser automation framework that you can use to test web applications. It includes an integrated development environment, a remote control framework, WebDriver, and Selenium Grid (which enables parallel testing).
While newer browser automation solutions (such as Cypress) are now competing for Selenium’s place in the market, it remains the most prominently used solution for end-to-end testing of modern web applications. Managed offerings based on Selenium, providing value-added features on top, also exist in the market.
JUnit
JUnit is an open source unit testing framework that you can use to create and run tests on your Java applications. It includes features for conditional testing, nested testing, and dependency injection. You can integrate it with a variety of other tools, including Jenkins, Git, and Maven.
What’s even more interesting is that JUnit has defined a de facto standard for storing and analyzing test results. Many automated testing frameworks for other modern programming languages, such as Go, Node.js, Rust, and more, can convert their test results into the JUnit format, which can later be consumed by CI/CD systems for analysis.
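To make the interchange format concrete, the sketch below emits a minimal JUnit-style XML report from Python; the suite, class, and test names are illustrative, and real reports carry more attributes (timings, stdout, etc.):

```python
import xml.etree.ElementTree as ET

# Sketch: emit a minimal JUnit-style XML report, the de facto format
# many CI/CD systems consume. Names and counts here are illustrative.
suite = ET.Element("testsuite", name="checkout-tests", tests="2", failures="1")

# A passing test case is an empty <testcase> element
ET.SubElement(suite, "testcase", classname="CartTest", name="test_add_item")

# A failing test case carries a nested <failure> element with a message
failing = ET.SubElement(suite, "testcase", classname="CartTest", name="test_checkout")
ET.SubElement(failing, "failure", message="expected 200, got 500")

report = ET.tostring(suite, encoding="unicode")
print(report)
```

A CI server pointed at files like this can chart failures over time regardless of which language or framework produced them.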
Terraform
Terraform is an open source IaC platform that you can use to provision, version, automate, and deploy infrastructure configurations. It enables you to create and apply human-readable, declarative, reusable templates which can also serve as infrastructure documentation. Terraform is compatible with on-premises and cloud environments and can be integrated with a variety of orchestration and monitoring tools.
During recent years Terraform has become an indispensable tool in our software development lifecycle. We are adopting a Terraform-first (or, more accurately, Terraform-unless) approach, striving to manage any infrastructure or IT resource as code.
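As a small sketch of the declarative HCL style, this declares a versioned S3 bucket; the bucket name is a placeholder, and provider configuration is omitted:

```hcl
# Sketch: a versioned S3 bucket declared in Terraform HCL.
# The bucket name is a placeholder; provider config is omitted.
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-build-artifacts"
}

resource "aws_s3_bucket_versioning" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

`terraform plan` previews exactly what would change before `terraform apply` touches any real infrastructure, and the file itself doubles as documentation.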
Pulumi
Pulumi is an open source platform that you can use to manage, deploy, and configure cloud infrastructure. It includes features for managing both infrastructure and policy as code and you can deploy it on public, private, and hybrid cloud platforms. You can use Pulumi with a wide variety of components, including containers, Kubernetes deployments, serverless functions, virtual machines, networks, and databases.
This tool strives to replace Terraform as the “go-to solution” for developing infrastructure as code. The basic notion (similar to that of the AWS Cloud Development Kit) is that infrastructure development can be done using standard (rather than dedicated) programming languages, and can benefit from the pipelines and tooling that already exist for these languages.
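As an illustrative sketch of that idea – not a standalone script, since it assumes a Pulumi project with the AWS provider configured – a Pulumi program is just ordinary Python:

```python
# Sketch of a Pulumi program in Python. This runs inside a Pulumi
# project ("pulumi up"), not as a standalone script; the resource
# name is a placeholder.
import pulumi
import pulumi_aws as aws

# Ordinary Python: loops, functions, and conditionals all work here,
# unlike in a dedicated configuration language.
bucket = aws.s3.Bucket("artifacts", versioning={"enabled": True})

pulumi.export("bucket_name", bucket.id)
```

Because this is plain Python, it can be unit-tested, linted, and packaged with the same pipelines used for application code.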
AWS Cloud Development Kit (AWS CDK)
AWS CDK is an open source framework that enables you to define and provision AWS resources. It includes default configurations, vetted by AWS, that you can apply with AWS CloudFormation for easy management. CDK also includes features that enable you to define and provision custom components.
Similar to Pulumi, but focused on the AWS ecosystem, this is an attempt to extend infrastructure as code to general-purpose programming languages. The tool is actually built on top of AWS’s previous generation of infrastructure-as-code (or, more accurately, infrastructure-as-configuration) solutions – namely AWS CloudFormation. It offers two-step infrastructure development, where code using the CDK generates CloudFormation templates that are later applied to the actual infrastructure.
Learn More About DevOps Toolsets
Read more in our series of guides about DevOps tools
DevOps Automation: How to Streamline Pipelines Without Over-Automating
Automation is one of the key elements of a DevOps pipeline. It enables teams to minimize repetitive tasks, standardize workflows, and increase productivity. Unfortunately, when trying to maximize the benefits of automation, you may end up over-automating your pipelines. Applying ineffective automation can be a nuisance at best and destructive at worst.
In this article you’ll learn what DevOps automation is, the benefits of it, how to avoid over-automation, and some DevOps automation best practices to apply instead.
DevOps Monitoring: The Abridged Guide
DevOps monitoring provides visibility into your operations and enables you to ensure that automation and pipelines flow smoothly. Without monitoring you can’t ensure that your services remain available, diagnose issues, or optimize performance.
In this article you’ll learn what DevOps monitoring is, what metrics are and why they’re important, the four golden signals of DevOps monitoring, what type of data you should be tracking, and what factors may affect your monitoring strategy.
Read more: DevOps Monitoring: The Abridged Guide