What comes after Kubernetes?

Kubernetes solves only half the problem of modernising applications. The next stage will be filling the gap between Kubernetes and applications.


“Boring.” That’s one of the best compliments you can pay an infrastructure technology. No one wants to run their mission-critical applications on “spicy!” But boring? Boring is good.

Boring means that a technology has reached a certain level of ubiquity and trust, that it’s well-understood and easily managed. Kubernetes, in production at 78 per cent of enterprises, has arguably passed that point, having become widely recognised as standard cloud-enabling plumbing that “just works.”

Or, put another way, it has become “boring.”

Even as the Cloud Native Computing Foundation helps coordinate the development of a range of other projects to fill in any blanks left behind by Kubernetes at the infrastructure layer, the Kubernetes conversation has started to shift to what’s happening higher up the stack.

In April, developer advocate superstar Kelsey Hightower observed that Kubernetes only solves half the problem in modernising applications, if that:

There’s a ton of effort attempting to “modernise” applications at the infrastructure layer, but without equal investment at the application layer, think frameworks and application servers, we’re only solving half the problem.

What do we do about this?

Filling the gap between apps and infrastructure

“There’s a huge gap between the infrastructure and building a full application,” said Jonas Bonér, CTO and co-founder at Lightbend, in an interview. Bonér helped to start the open source project Akka, which is aimed at solving a complex problem set between the infrastructure and application, above Kubernetes on the stack. As Bonér put it: 

“It’s an exercise to the programmer to fill in this huge gap of what it actually means to provide SLAs to the business, all the things that are hard in distributed systems but needed for the application layer to make the most of Kubernetes and its ecosystem of tools.”

This is where an organisation needs things that sit between the app and the infrastructure and make it all work, Bonér continued. It’s not about replacing anything, but rather about adding more tools in the toolbox and extending the infrastructure model of isolation, and constraints imposed by the network, into the app itself — delivered in an intuitive, flexible, and powerful, yet simple, programming model.

As two Tesla engineers discussed at a conference last year, Tesla relies on “digital twin” capabilities that power its electric grid, made possible by the combination of Akka and Kubernetes. “The majority of our microservices run in Kubernetes, and the pairing of Akka and Kubernetes is really fantastic,” said Tesla engineer Colin Breck. He explained:

Kubernetes can handle coarse-grained failures and scaling, so that would be things like scaling pods up or down, running liveness probes, or restarting a failed pod with an exponential back-off. Then we use Akka for handling fine-grained failures like circuit breaking or retrying an individual request and modeling the state of individual entities like the fact that a battery is charging or discharging.
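
The division of labour Breck describes maps naturally onto code. Below is a minimal, hypothetical sketch (not Tesla’s code; the Battery name and its messages are invented for illustration) of modelling the state of an individual entity as an Akka Typed actor — the kind of fine-grained concern Akka handles while Kubernetes manages the pods around it:

import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors

object Battery {
  sealed trait Command
  case object StartCharging extends Command
  case object StartDischarging extends Command

  // A new battery entity begins in the charging state.
  def apply(): Behavior[Command] = charging

  // Each state is a behaviour; receiving a message switches behaviours,
  // so the actor itself acts as a "digital twin" of the physical device.
  private def charging: Behavior[Command] = Behaviors.receiveMessage[Command] {
    case StartDischarging => discharging
    case _                => Behaviors.same
  }

  private def discharging: Behavior[Command] = Behaviors.receiveMessage[Command] {
    case StartCharging => charging
    case _             => Behaviors.same
  }
}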

According to Bonér, there are three generally unsolved areas that are still evolving above Kubernetes on the cloud-native stack, giving rise to new abstractions offered by technologies like Akka: application layer composition, stateful use cases, and data-in-motion use cases.

Enabling declarative app layer composition

“People too often use old tools, habits, and patterns, often originating from the traditional (monolithic three-tier) designs that inhibit and constrain the cloud model delivered by Kubernetes,” Bonér noted.

"We need to extend the 'amazingly good' model of containers, service meshes, and orchestration all the way up to the application/business logic, so we can make the most out of it while maintaining end-to-end guarantees on behalf of the application," he said.

Serverless points the way by raising the level of abstraction and providing a declarative model in which as much boilerplate, infrastructure, and operations work as possible is removed and managed by the platform, leaving the developer with the essence: the business logic and its workflow.

Improving support for stateful use cases

The cloud ecosystem mainly tackles so-called 12-factor-style applications, i.e., stateless applications. Sometimes that might be all you need. But non-trivial apps are usually a mixture of stateless and stateful use cases.

“We need more and better tools to tackle state well,” Bonér said. “The value is nowadays often in the data, and it’s often in the stateful use cases that most of the business value lies — making sure you can access that data fast, while ensuring correctness, consistency, and availability.”

In the cloud, unless you have a really good model, and the tools supporting it, you’re forced back to the three-layer architecture of pushing everything down into the database every time, regardless of whether it is for communication, coordination, or long-term storage of state.

Importantly, you also need good state models to complement the stateless approach, giving you more options in the toolbox. Kubernetes’ own handling of stateful use cases today really extends only to its StatefulSets feature, but StatefulSets are designed for the people who implement infrastructure like databases, Bonér noted, not for application developers.

“So there’s still a huge gap here,” Bonér said. “That’s where Akka really comes in.”
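
To make that concrete, here is a rough sketch of what application-level state handling looks like with Akka Persistence: an event-sourced entity that persists events and rebuilds its in-memory state from them, rather than pushing every interaction straight into a database by hand. The Counter domain, names, and parameters are invented for illustration:

import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object Counter {
  sealed trait Command
  case object Increment extends Command

  final case class Incremented(delta: Int)
  final case class State(total: Int)

  def apply(entityId: String): Behavior[Command] =
    EventSourcedBehavior[Command, Incremented, State](
      persistenceId = PersistenceId.ofUniqueId(entityId),
      emptyState = State(0),
      // Commands decide which events to persist...
      commandHandler = (_, _) => Effect.persist(Incremented(1)),
      // ...and events deterministically update the in-memory state.
      eventHandler = (state, event) => State(state.total + event.delta)
    )
}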

Handling fast data or data-in-motion use cases

Arguably, the Kubernetes ecosystem doesn’t yet offer great support for streaming and event-based use cases. Service meshes like Istio are designed around a request-response model and “can get in the way,” Bonér said.

Streaming is also often stateful, with stages aggregating data in memory while needing to remain available, which ties into the discussion above. Work is underway in the Knative community to address this, but we are just getting started, Bonér suggested.
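
As a small illustration of what such a stateful streaming stage looks like, the Akka Streams sketch below (names are invented; assume a plain local ActorSystem) windows incoming events by count or time and aggregates each window in memory before emitting the result downstream:

import akka.actor.ActorSystem
import akka.stream.scaladsl.Source
import scala.concurrent.duration._

object FastDataSketch extends App {
  implicit val system: ActorSystem = ActorSystem("fast-data")

  // A stateful streaming stage: events are windowed and aggregated
  // in memory before the result is emitted downstream.
  Source(1 to 100)
    .groupedWithin(10, 1.second)
    .map(_.sum)
    .runForeach(windowTotal => println(s"window total: $windowTotal"))
}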

Much of the thrust of these new directions points toward the low-code / no-code / “decouple the front end from the back end” concepts that roll up under the serverless movement.

“Serverless gets us closer to addressing the problem of extending the model of Kubernetes into the application itself,” said Bonér. “That’s what it’s all about. Abstracting away as much as possible, and moving to a declarative model of configuration rather than coding, where you define what you should do and not how you do it.”

Application infrastructure that ‘just works’

As the cloud-native stack continues to evolve above the Kubernetes infrastructure level, it will be interesting to see how these concepts at the application layer play out to serve specific language developers.

While many of the toughest application architecture challenges have long been the domain of server-side Java development, we seem to be moving toward a Jamstack architecture where increasingly it’s the JavaScript developers who expect to access application infrastructure that just works, especially as endpoints multiply exponentially.

This isn’t to suggest that back-end infrastructure isn’t important. Rather, it’s to acknowledge, as Ian Massingham has argued, that front-end developers vastly outnumber back-end developers, and for good reason: There are far more applications that need to be built than there is infrastructure that needs to be created to host them.

Bridging the two worlds through open source projects like Akka becomes ever more important.

