Key devops practices to improve application reliability

Experts identify five key practices for gaining a better understanding of application health and resolving performance and reliability issues before they have a business impact.


Having a customer mindset and business metrics guides teams on implementation strategy. “Understanding the effectiveness of your technology solutions on your day-to-day business becomes the more important metric at hand,” Blitzstein continues. “Fostering a culture and platform of observability allows you to build the context of all the relevant data needed to make the right decisions at the moment.”

Improve telemetry with monitoring and observability

If you’re already monitoring your applications, what do you gain by adding observability to the mix? What is the difference between monitoring and observability? I put these questions to two experts. Richard Whitehead, chief evangelist at Moogsoft, offers this explanation:

Monitoring relies on coarse, mostly structured data types—like event records and performance-monitoring system reports—to determine what is going on within your digital infrastructure, in many cases using intrusive checks. Observability relies on highly granular, low-level telemetry to make these determinations. Observability is the logical evolution of monitoring, driven by two shifts: applications rewritten as part of the migration to the cloud (which allows instrumentation to be added) and the rise of devops, where developers are motivated to make their code easier to operate.
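To make the distinction concrete, here is a minimal, hypothetical sketch (the function names and fields are illustrative, not from any particular tool): a monitoring-style check reduces a service to a coarse up/down signal, while observability-style instrumentation emits granular, structured telemetry from inside the code.

```python
import json
import time

def monitoring_check(status_code):
    """Coarse, intrusive-style monitoring check: reduces state to up/down."""
    return "UP" if status_code == 200 else "DOWN"

def emit_telemetry(operation, duration_ms, attributes):
    """Granular, observability-style telemetry emitted from within the app."""
    return json.dumps({
        "timestamp": time.time(),
        "operation": operation,
        "duration_ms": duration_ms,
        "attributes": attributes,
    })

# Monitoring answers "is it up?"; telemetry supports "why is it slow?"
print(monitoring_check(503))
print(emit_telemetry("checkout", 182.4, {"region": "ap-southeast-2"}))
```

The check tells you something is wrong; the telemetry record carries the context (operation, duration, attributes) needed to work out why.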

And Chris Farrell, observability strategist at Instana, an IBM Company, shed some additional light on the difference:

More than just getting data about an application, observability is about understanding how different pieces of information about your application system are connected, whether metrics from performance monitoring, distributed tracing of user requests, events in your infrastructure, or even code profilers. The better the observability platform is at understanding those relationships, the more effective any analysis from that information becomes, whether within the platform or downstream, when consumed by CI/CD tooling or an AIops platform.
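The "connected pieces of information" idea can be sketched in a few lines. In this toy example (the record shapes and event names are hypothetical), request metrics and infrastructure events are joined on a shared trace id, so a slow request can be linked to the infrastructure event that explains it:

```python
# Hypothetical telemetry records sharing a trace id across signal types.
metrics = [
    {"trace_id": "t1", "latency_ms": 950},
    {"trace_id": "t2", "latency_ms": 40},
]
infra_events = [
    {"trace_id": "t1", "event": "node_cpu_throttled"},
]

def correlate(metrics, events):
    """Join request metrics to infrastructure events on trace id."""
    by_trace = {e["trace_id"]: e["event"] for e in events}
    return {
        m["trace_id"]: {
            "latency_ms": m["latency_ms"],
            "related_event": by_trace.get(m["trace_id"]),
        }
        for m in metrics
    }

print(correlate(metrics, infra_events))
```

Here the 950 ms request is tied to a CPU-throttling event, while the fast request has no related event; an observability platform does this kind of joining across far richer data, but the principle is the same.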

In short, monitoring and observability share similar objectives but take different approaches. Here’s my take on when to increase application monitoring and when to invest in observability for an application or microservice.

Developing and modernising cloud-native applications and microservices through strong collaboration between agile devops teams and IT operations presents an opportunity to establish observability standards and engineer them in during development. Adding observability to legacy or monolithic applications may be impractical; in that case, monitoring may be the optimal approach to understanding what is going on in production.

Automate actions to respond to monitored and observed issues

Investing in observability, monitoring, or both will improve data collection and telemetry and lead to a better understanding of application performance. Then, by centralising that monitoring and observability data in an AIops platform, you can not only produce deeper operational insights faster but also automate responses.
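A minimal sketch of what "automating responses" can look like, assuming a simple rule engine (the threshold, condition name, and remediation are all hypothetical): a detection rule runs over centralised metrics, and each detected condition maps to an automated action.

```python
from typing import Optional

def detect_issue(metrics: dict) -> Optional[str]:
    """Toy rule an AIops platform might apply to centralised telemetry."""
    if metrics.get("error_rate", 0.0) > 0.05:
        return "high_error_rate"
    return None

# Map each detected condition to an automated remediation (hypothetical).
REMEDIATIONS = {
    "high_error_rate": lambda service: f"restarting {service}, paging on-call",
}

def respond(service: str, metrics: dict) -> str:
    issue = detect_issue(metrics)
    return REMEDIATIONS[issue](service) if issue else "no action"

print(respond("checkout-api", {"error_rate": 0.12}))
```

Real platforms replace the threshold rule with anomaly detection and the lambda with runbook automation, but the insight-to-action wiring is the essential pattern.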

Today’s IT operations teams have too much on their plate. Connecting insights to actions and leveraging automation is a critical capability for keeping up with the demand for more applications and increased reliability, says Marcus Rebelo, director of sales engineering for the Americas at Resolve.

“Collect, aggregate, and analyse a wide variety of data sources to produce valuable insights and help IT teams understand what’s really going on in complex, hybrid cloud environments,” Rebelo says. But that’s not enough.

“It is critical to tie those insights to automation to transform IT operations,” Rebelo adds. “Combining automation with observability and AIops is the key to maximising the insights’ value and handling the increasing complexity in IT environments today.”

Optimise monitoring and observability for value stream delivery

By connecting customer needs and business metrics on the one hand with monitoring, observability, AIops, and automation on the other, IT operations has an end-to-end strategy for ensuring a value stream’s operational reliability.

Bob Davis, chief marketing officer at Plutora, suggests that monitoring and observability are both required to support a portfolio of value streams. “Monitoring tools provide precise and deep information on a particular task, which can include watching for defects or triggers on usage or tracking the performance of something like an API, for example,” Davis says. “Observability tools look at everything and draw conclusions on what’s going on with the entire system or value stream.”

Therefore, observability tools have a special role in the value stream. “With the information provided by observability tools, developers can better understand the health of an organisation, boost efficiency, and improve an organisation’s value delivery,” Davis notes.

There are tools, practices, and many trade-offs, but in the end, improving application delivery and reliability will require aligning development and operations on objectives.

