
In search of the devops ideal

There’s no one-size-fits-all solution for devops, but we can describe a development process and toolchain that flex to absorb all the complexity we require. Let’s see how these pieces fit together.


Devops takes the overhead view, encompassing the activity of both development and operations and choreographing the two to interact in the most effective ways. That is the conceptual ideal, but from a technical standpoint, can we describe the ideal devops setup?

The answer is no, because the demands of a two-person startup are radically different from those of a multinational embarking on a microservice project with hundreds of people involved in its care and feeding.

But we can describe an idealised flow of development that flexes to absorb increasing complexity as needed, and show how technologies like CI/CD, Docker, and cloud computing fit into this flow.

Development: The “inner loop”

Let’s suppose you are a developer. You naturally are making changes to software, and when you are satisfied with your changes, you commit them to version control. Version control is the hinge point between software development’s “inner loop” and devops’ “outer loop.”

The developer's commit may go onto a dev branch, a feature branch, or (in an informal setting) straight into main, but ideally there will be an automated run of the unit tests. This can happen in a variety of ways: via client-side pre-commit hooks, server-side hooks, or a CI job triggered on push. Whatever the mechanism, the upshot is that the code change will not be accepted into the branch unless the unit tests pass.
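As a concrete illustration, here is a minimal sketch of such a gate implemented as a GitHub Actions workflow. The Python and pytest toolchain and the branch names are assumptions; substitute your own.

```yaml
# .github/workflows/unit-tests.yml -- a minimal sketch. The Python and
# pytest setup is an assumption; substitute your own toolchain.
name: unit-tests
on:
  push:
    branches: [main, dev]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest   # branch protection can require this check to pass
```

Paired with a branch protection rule that requires the check, this ensures no change lands without a green test run.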

Automated testing

Now that the code change has been accepted, the next step falls under the “integration test” heading, and this is the essential step in continuous integration: the code is continually integrated into the larger system (whatever it may be) and deployed to a running environment for automated and manual testing.

The sky is the limit when it comes to automated testing. Everything is on the table, with all kinds of modern automated tools available to hammer your software, from Selenium-style automated UI testing to sophisticated load testing suites. Often, some kind of automated smoke testing ensures that software promoted to test is working nominally.
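For instance, an automated smoke test can be as simple as probing a health endpoint after each deployment. Below is a hypothetical job fragment in the same GitHub Actions style; the /health endpoint, the staging URL, and the upstream deploy job are assumptions.

```yaml
# A hypothetical smoke-test job, run after the deploy job in the same
# workflow. The staging URL and /health endpoint are assumptions.
smoke-test:
  runs-on: ubuntu-latest
  needs: deploy-to-test   # hypothetical upstream deployment job
  steps:
    - name: Verify the deployed service answers nominally
      run: |
        curl --fail --retry 5 --retry-delay 10 \
          https://staging.example.com/health
```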

The mantra of testing is: catch problems as early as possible.

Manual validation tests

Now that the software has cleared the unit and automated integration test hurdles, it may be ready for manual testing. Usually, manual testing occurs against a specific tag that captures a specific set of features and fixes. This is the organisation (be it one person or 20) getting serious about moving the changes to production.

This means that the software is deployed (preferably automatically) to a setting that mimics production, where people can interact with it manually and QA can do its best to break things. When problems are found, fixes can be merged in.
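One common way to wire this up is to let a release tag trigger the deployment. Here is a sketch in the same workflow style; the deploy script and the qa environment name are hypothetical.

```yaml
# A sketch of tag-driven promotion: pushing a release tag deploys that
# build to a production-like QA environment. deploy.sh is hypothetical.
name: deploy-to-qa
on:
  push:
    tags: ["v*"]          # e.g. v1.4.0-rc1

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: qa       # gates the job behind the qa environment
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh qa ${{ github.ref_name }}
```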

Once the good people in QA are satisfied, the promising package might be promoted to UAT, or user acceptance testing. This can happen in a number of ways, but the bottom line is that more people (business analysts and other stakeholders) get a crack at the running software. Once they sign off, the software is ready for production.

Monitoring changes to production

Depending on how big the changes to production are, more verifying activity takes place in that setting, but a new aspect also takes hold: monitoring. Monitoring and logging are essential to keeping tabs on how the overall system is performing. This is another area that has seen vast improvements in the cloud era, and a multitude of logging and monitoring tools are available.
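As a small taste, here is a sketch of a Prometheus alerting rule; the metric, job label, and thresholds are illustrative, not prescriptive.

```yaml
# A minimal Prometheus alerting rule. The http_requests_total metric
# and the 5% threshold are illustrative.
groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{job="api", status=~"5.."}[5m]))
            / sum(rate(http_requests_total{job="api"}[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "More than 5% of API requests are failing"
```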

In the realm of microservices, deploying to production is sometimes a more intricate affair, as elements of the microservice architecture may be deployed in phases, with network traffic being routed incrementally to the updated nodes to verify they are interacting as intended with other components.
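Here is a sketch of what that incremental routing can look like, assuming an Istio service mesh; the orders service and its v1/v2 subsets are hypothetical.

```yaml
# Weighted traffic shifting with an Istio VirtualService. The v1/v2
# subsets would be defined in a companion DestinationRule.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts: [orders]
  http:
    - route:
        - destination:
            host: orders
            subset: v1    # the current version
          weight: 90
        - destination:
            host: orders
            subset: v2    # the newly deployed version
          weight: 10      # start small, then ratchet up as it proves out
```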

Roles in devops

Another way to consider these processes is in terms of the roles that people play in them. By roles I mean the hats people wear, not necessarily their job titles. At the highest level, you can say there are three roles: people who modify code, people who verify that things are working properly, and people who manage running systems.

We might call them developers, testers, and admins.

As we zoom into more detail, of course, there is more diversity. For instance, a QA engineer testing new code changes is quite distinct from a business analyst verifying requirements on a UAT server, both of whom are distinct from a devops engineer configuring monitors to validate that production systems are running within the specified parameters. But we can say these activities fall under the broad heading of verifying things are working as they should.

Let’s use this perspective to talk about some of the specific tools these hat-wearers bring to bear in their work.

Containers: The development-operations hinge

Perhaps the most cross-cutting of technologies is containerisation, which in practice means Docker. This is because Docker allows for componentising a developer’s software into deployable chunks that define their runtime needs. That means developers can target the container, and the admins can use the container as the common denominator across systems.

The container-level deployable unit is sufficient to undergird even the most high-volume and demanding requirements.

Kubernetes has become the most popular container cluster management system, and while it is not a simple piece of technology, its capabilities are impressive: it can manage large clusters of interacting services across multiple regions.
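The basic unit of that management is the Deployment, which tells Kubernetes how many copies of a container to keep running. A minimal sketch, with placeholder image name and replica count:

```yaml
# A minimal Kubernetes Deployment. The image and replica count are
# placeholders; Kubernetes keeps three pods running and rolls out new
# images incrementally.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels: {app: orders}
  template:
    metadata:
      labels: {app: orders}
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.4.0
          ports:
            - containerPort: 8080
```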

Containers and development

Containerisation is not a no-brainer for the developer’s “inner loop,” however. That is because containers introduce extra complexity into the cycle of code, build, and test. Because of this, it is still more common for developers to use an “exploded” development environment, where they run their code against their local environment uncontainerised.

The unit test remains the developer’s front-line defence in code quality. Nevertheless, Docker can make some aspects simpler for the developer, such as by packaging a standardised version of a datastore like MongoDB or PostgreSQL, allowing it to be spun up with ease for developers to use when coding.

Containers and test data

By the same token, development environments can often benefit from using Docker and Docker Compose to spin up databases preloaded with test data.
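A sketch of what that looks like, using the official PostgreSQL image (the seed directory is an assumption):

```yaml
# docker-compose.yml -- a disposable dev database. The postgres image
# runs any .sql files placed in /docker-entrypoint-initdb.d on first
# startup, which makes seeding test data trivial.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devonly   # fine locally, never in production
    ports:
      - "5432:5432"
    volumes:
      - ./seed:/docker-entrypoint-initdb.d:ro
```

A single docker compose up -d then gives every developer an identical, preloaded database.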

So although Docker is key in unifying the devops landscape, and offers certain benefits to developers, there is some impedance mismatch when it comes down to actual coding tasks.

CI/CD pipeline tooling

Now let’s consider the tools that can be used to connect the various elements into a devops pipeline. There are many.

Run your own

The most ancient CI server is Jenkins, which remains very popular and capable. Its chief flaw is poor documentation: virtually anything can be done with Jenkins and its universe of plug-ins, but it is often a figure-it-out-on-your-own experience.

Jenkins is a server that you install and run yourself, often on a cloud VM. It then acts as the orchestrator, pulling from GitHub or another version control system, running builds and tests, interacting with a Docker registry, deploying to target environments, and so on. Newer solutions include many SaaS offerings. Let’s consider a few.

SaaS options

GitLab CI/CD and CircleCI are two newer continuous integration offerings that have gained mindshare, but they are far from the only competitors in a hot space that keeps drawing new entrants hoping to solve devops problems in a convenient and effective way. Shippable, now part of established vendor JFrog, is another option that is growing in popularity.
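To give a flavour of these hosted pipelines, here is a minimal sketch of a .gitlab-ci.yml; the build commands, deploy script, and Docker-capable runner are assumptions.

```yaml
# A minimal GitLab CI/CD pipeline sketch. Assumes a runner with Docker
# available; commands and script names are placeholders.
stages: [build, test, deploy]

build-image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

unit-tests:
  stage: test
  script:
    - make test

deploy-staging:
  stage: deploy
  environment: staging
  script:
    - ./scripts/deploy.sh staging $CI_COMMIT_SHORT_SHA
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # only deploy from main
```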

Testing tools

For testing, Selenium is the counterpart to Jenkins in that it is free open source software you install and configure yourself, as is Appium in the mobile arena. These stand in contrast to the hosted SaaS testing offerings from the various cloud providers. Like the CI/CD space, testing is a very active marketplace.

Infrastructure tooling

Infrastructure-as-code tools like AWS CloudFormation, Ansible, Chef, Puppet, and Terraform offer the ability to control the underlying systems used to host Docker and Kubernetes. A certain level of system complexity is required to merit these tools, but once that threshold is reached, they become absolutely essential to the devops process.
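As a small example of the genre, here is a sketch of an Ansible playbook that prepares a fleet of hosts to run containers; the docker_hosts inventory group and the Debian-style package name are assumptions.

```yaml
# An Ansible playbook sketch. The docker_hosts group and the docker.io
# package (Debian/Ubuntu) are assumptions.
- name: Prepare container hosts
  hosts: docker_hosts
  become: true
  tasks:
    - name: Install the Docker engine
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ensure Docker is running and enabled at boot
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```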

Automate all you can

In general, we can say that the responsibility of devops is to unite development and operations into a cohesive, collaborative system. The ideal is to automate as much as possible, and wherever human intervention is desirable, to enable repeatable, single-click execution of the necessary tasks.

Every project and organisation is a work in progress. Given the nature of software, the goalposts are always moving. However, a good understanding of the big picture and of the tools involved allows us to plan for that change and all its complexity.

