ARN

An inside look at Microsoft’s booming cloud business

The ability to host Azure infrastructure on premises in a hybrid role is a key differentiator
  • John Dix (Network World)
  • 05 May, 2016 03:34

As Director of Program Management for Azure at Microsoft, Corey Sanders heads the compute team, which is responsible for the VM-based offerings on Windows and Linux, the new microservices platform, and container services, among other things. Sanders joined the Azure team about six years ago, before which he was a developer in the Windows Serviceability team. Network World Editor in Chief John Dix recently visited Sanders in his Redmond, WA, office to get a better sense of how Microsoft’s cloud business is taking shape.

Corey Sanders, Director of Program Management for Azure, Microsoft

Let’s start with a big picture view of Microsoft’s cloud efforts, what the company offers and how the pieces fit together.

We like to break down our overall cloud offerings into three big buckets. One is SaaS-based solutions like Office 365 and Dynamics CRM, and we think of these as being the fastest time to value for customers. They can sign up and immediately get the benefits the cloud offers, like massive scale and global coverage, without much change to anything they’re doing. That’s the top level.

Then there is the infrastructure level, which is a lot of the work I do. This offering is about virtual machines, application mobility, lift-and-shift type workloads, taking your data center and moving it to the cloud. And the third category is higher-level or advanced services that make it simpler to scale, simpler to get high availability, and to build next-generation applications that really take advantage of cloud agility.

We have competitors that offer just SaaS solutions and we have competitors that offer just infrastructure and platform solutions. We feel we’re uniquely positioned to offer all three.

Do you break out revenue for the segments?

We don’t break out revenue, but our annualized cloud revenue is $9.4 billion. I will say that revenue for the infrastructure and advanced services that we combine as Azure is up 140% year over year. We’re adding 120,000 Azure customer subscriptions per month.

Can you give us a thumbnail description of the physical plant that supports all of this?

We break down our global coverage into regions, with the goal of being close to customers with low latency and having scalability within those regions. We’ve announced 30 of these globally and 22 are live today. They are of varying architectures and sizes, and a lot of that is based on when they were built, keeping up with hardware innovation, physical build-out, cooling systems, and so on. Almost all regions have multiple data centers.

Is there a total data center count or total server count that would help put it in perspective?

We don’t share those numbers publicly, and frankly, even if we did, by the time we said it they would have already changed.

On the Azure front, is there a typical user profile emerging?

It’s a great question. One of the exciting parts about Azure is it’s a broad, open, diverse platform that includes everything from .NET microservices to partnerships and solutions with Linux providers. This wide variety of solutions has resulted in an amazing variety of customers.

We’ve seen a huge uptake in enterprises using both infrastructure and advanced services. Folks like GE Healthcare, Alaska Airlines and NBC are using the platform for both existing applications and next-generation apps. But we’re also seeing new customers and startups using open-source parts of our platform. Jet.com is a good example, an e-commerce site based in New Jersey that is deployed completely on Azure. It’s been pretty impressive to see the amount of growth from startups and ISVs. In fact, I think 40% of our revenue today is driven by startups and ISVs.

So, we’re seeing small companies do big things and we’re seeing big companies do some small things. It spans an amazing range of opportunity from customers and partners. I would be hard-pressed to give you a sense of either the size of the company or the size of the workload that companies are typically deploying.

How about use cases? Are there particular use cases that are more prevalent?

The first wave for many customers follows similar patterns. We see a lot of development and test in the cloud, and we see a lot of bursting workloads. We recently had Milliman [an actuarial and risk management services company] on stage at our Build conference showing how it can burst actuarial risk analysis workloads, which was a really exciting example of the flexibility and versatility the cloud brings.

So these use cases are getting really hot, and both take advantage of that immediate on/immediate off capability and having huge amounts of compute power to deploy quickly. But we’re also seeing an uptick in full data center migrations as enterprises big and small start looking at their next wave of capital expenditure. There is more excitement around, “Let’s just move everything.” We have a few customers that are in the process of doing just that, but none have actually made public statements yet.

I would imagine most enterprises would be afraid to put all their eggs in one basket.

We’re seeing a lot of interest in multi-cloud deployments and hybrid deployments where customers deploy both on-prem and in the public cloud and have bursting capabilities between the two. In fact, one of the things that differentiates us from other cloud providers is our focus on multi-cloud with our own Operations Management Suite (OMS) that allows monitoring across multiple clouds.

How about disaster recovery?

Disaster recovery and backup are two additional hybrid services we offer. Both are popular and growing quickly, and we’re seeing them coupled with solutions like StorSimple and Stretch DB. All of those share this concept of “keep your workload on-prem but use the cloud for some aspect of inexpensive storage to burst to when needed.” We’re seeing a lot of enterprise customers pick that up as an easy first step to the cloud.

Speaking of hybrid, you have introduced “Stack,” which will enable customers to replicate their Azure infrastructure on premises, but how do you see customers using that?

When we talk with customers about deploying in public Azure, frequently they say they’re excited about getting that versatility but there are aspects of their workloads, aspects of their infrastructure, they want to keep on-prem, whether for compliance reasons or proximity reasons, or whether they just want that sense of control.

Azure Stack enables all those scenarios without having to compromise on the API and the portal experience. Customers can get that same Azure experience, that same Azure agility, but deployed in an on-prem situation. The power of that hybrid story is second to none and we’re seeing a huge amount of excitement from customers. Stack is still in preview, but should be generally available by the end of the year.

So Azure in the cloud is fully compatible with Azure Stack on-premise?

Fully compatible. Obviously it’s built to run in a smaller form factor, but it’s effectively the exact same Azure components around API, management, PowerShell, VAS scripts, and so on.

If I have both and I want to keep most compute on-prem but burst to the cloud to accommodate order spikes, how hard will it be to achieve that?

The capability to burst will be greatly simplified by having a consistent experience between your on-prem deployment and the public Azure experience. Your application will still need to support that type of bursting capability. If it’s a scale-out website in the example you mentioned, and Thanksgiving is rolling around, that type of scenario will likely work very well between public Azure and private. You’ll pay for the private environment with the Azure Stack purchase, and on the public side you’ll pay per minute based on whatever you use.
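The on-prem-plus-burst arrangement described here can be sketched as a toy capacity planner. This is purely illustrative, not an Azure API; the function name and numbers are hypothetical.

```python
# Conceptual sketch (not an Azure API): deciding how much of a traffic
# spike to serve on-prem versus bursting to the public cloud.
# All names and figures here are hypothetical illustrations.

def plan_burst(expected_requests: int, onprem_capacity: int) -> dict:
    """Serve what fits on-prem; burst the overflow to the public cloud."""
    onprem = min(expected_requests, onprem_capacity)
    cloud = expected_requests - onprem
    return {"onprem": onprem, "cloud_burst": cloud}

# A Thanksgiving-style spike: on-prem handles its fixed capacity,
# the remainder is absorbed by pay-per-minute public capacity.
plan = plan_burst(expected_requests=50_000, onprem_capacity=20_000)
print(plan)  # {'onprem': 20000, 'cloud_burst': 30000}
```

The economics follow the same split: the on-prem portion is covered by the Azure Stack purchase, while the burst portion is metered per minute.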

How do you keep data concurrent?

It really depends on the application makeup. For things like e-commerce sites there is something we call “eventually consistent.” You can spread your data across multiple regions and it basically catches up. That’s a fine approach because, even if the data is off by a second or two, it’s not a huge problem as long as the transaction for the customer gets done.
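The “catches up” behavior can be illustrated with a minimal sketch of two replicas. This is a conceptual toy, not how Azure implements replication; the region names and `Replica` class are invented for illustration.

```python
# Conceptual sketch of "eventually consistent" replication across regions.
# A write commits in one region immediately; other regions catch up later.

class Replica:
    """A hypothetical per-region key/value store."""
    def __init__(self, name):
        self.name = name
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

primary = Replica("us-east")
secondary = Replica("eu-west")

# The customer's transaction commits in the primary region right away.
primary.apply("order-123", "paid")
assert secondary.data.get("order-123") is None  # briefly stale

# A moment later, replication catches the secondary up.
for key, value in primary.data.items():
    secondary.apply(key, value)
assert secondary.data["order-123"] == "paid"  # now consistent
```

The point of the pattern: the customer-facing write never waits on the far region, and the brief window of staleness is acceptable for workloads like e-commerce.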

But other models will be possible as customers take advantage of the platform, and this is where those advanced services become a valuable differentiator. If you’re doing everything with VMs and dealing with data the way you’ve always dealt with data, you’ll find that type of bursting a bit challenging. If you’re taking advantage of some of the more advanced services and building your app to be more cloud-friendly, using the services on both Azure Stack and public Azure, suddenly that bursting, that scaling across multiple regions, becomes much more feasible.

Does the company view Stack as a way to help migrate people a step at a time to a full cloud deployment?

When customers approach Azure, whether it be for Stack or public Azure, being able to offer a full range of choices for whatever you want to do, wherever you want to do it, is what we think is different from anyone else. Azure Stack is just one piece of that. It enables people to get that full cloud power without having to move everything into a public environment.

When VMware came out with its cloud story I assumed it would take off because it was a perfect complement to the company’s data center tools, but it doesn’t seem to have worked out that way. Why would your hybrid story be more successful?

We’re the only provider that has the ability to deploy fully managed, enterprise-grade resources on-prem while also having a full hyperscale cloud that spans up to 30 regions. That combination is unique and really what customers are looking for. They want that ability to grow in the cloud as their needs take them there, but also want the trust and enterprise focus they can get in an on-prem environment. I don’t think anyone else has both. Many other providers have one versus the other and I think that’s where we are really quite unique.

Azure supports both Windows and Linux virtual machines. Can you give us a sense of how usage is breaking down?

One in four virtual machines deployed by customers is Linux.

Has there been much demand for containers?

One of the exciting things with cloud in general is the rate of change and how quickly customers are picking things up: software-defined networking, machine learning, containers. Enterprises are using containers for development and testing and to move those environments to the cloud without much change, which is an amazing step forward for the entire DevOps movement.

The next generation of containers is changing the full app model into a microservices model, and we just launched a platform called Service Fabric as one example of that. The idea is to build simpler and more agile applications that consist of more subdivided components versus large monolithic application components.

Can you explain that in more depth?

It’s actually one of the things I own. Service Fabric is our take on a microservices platform. The goal here is to be able to take applications and subdivide them into smaller components, 15, 20, even 30 of these different components that each have their own unique lifecycle. They can each be updated individually, rolled back individually, and scaled individually.

The benefit is it increases your agility for application deployment without sacrificing availability, which is a really hard problem for a lot of apps today. We have monolithic apps -- front end, middle tier, back end -- it’s very hard to scale and have a lot of agility without risking your availability.

With microservices you can get all three. With Service Fabric we’ve got a lot of built-in capabilities that take advantage of that microservices awareness. You can update a single microservice. It will automatically roll back if it detects a problem, and none of the other microservices will be impacted. It’s fully aware of the health state of all of those microservices, and it even offers a stateful microservice so you can keep track of state for each one of those services. I think it’s going to be pretty exciting to see that next generation of apps deployed.
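The update-and-rollback behavior described here can be sketched in a few lines. This is a conceptual illustration, not the Service Fabric API; the service names, versions, and health flag are hypothetical.

```python
# Conceptual sketch of updating one microservice with automatic rollback
# on a failed health check, leaving the other services untouched.

def update_service(services, name, new_version, healthy_after_update):
    """Upgrade one microservice; roll back if it turns unhealthy."""
    previous = services[name]
    services[name] = new_version
    if not healthy_after_update:
        services[name] = previous  # automatic rollback; others unaffected
        return False
    return True

services = {"frontend": "v1", "cart": "v1", "billing": "v1"}

# A bad "cart" update rolls back without touching frontend or billing.
ok = update_service(services, "cart", "v2", healthy_after_update=False)
assert not ok
assert services == {"frontend": "v1", "cart": "v1", "billing": "v1"}

# A good update lands on just that one service.
update_service(services, "frontend", "v2", healthy_after_update=True)
assert services["frontend"] == "v2" and services["cart"] == "v1"
```

The design point is the blast radius: because each component has its own lifecycle, a failed upgrade is contained to that one service rather than taking down a monolith.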

Is it related to Azure Functions?

I typically call Azure Functions a serverless platform, a sort of micro-compute platform, while Service Fabric is the full application … it’s all of the pieces put together and it has knowledge of the health and interaction between all of them. Azure Functions is really geared toward executing a single operation. It’s in the same ballpark of application design, but they’re addressing different parts of the platform.

The key thing with Azure Functions is the way you program and manage it as a simple function. You’re writing a snippet of code, whether it be JavaScript or C# or whatever, that will execute in the portal experience, and you don’t worry about spinning up a VM. There is no concept of a server at all. All you’re doing is running that snippet of code and executing it based on some event. We demoed an IoT-based signal that launched a function. The developer didn’t care whether a server popped up or not. It doesn’t matter. It just ran the code and it was done. That’s the concept behind the serverless approach. You don’t have to manage a VM and you don’t have to pay for one. You’re paying for just the execution of that function.
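The model described here, a snippet of code bound to an event with no server the developer manages, can be sketched conceptually. This is not the Azure Functions programming model; the registry, decorator, and event names are invented for illustration.

```python
# Conceptual sketch of the serverless idea: register a snippet of code
# against an event, and it runs only when that event fires.

handlers = {}

def on_event(event_name):
    """Register a function to run when a named event fires."""
    def register(fn):
        handlers[event_name] = fn
        return fn
    return register

@on_event("iot-temperature-reading")
def handle_reading(payload):
    # You pay only for this execution; there is no VM to spin up or manage.
    return f"alert: {payload['sensor']} at {payload['temp']}C"

def fire(event_name, payload):
    """The platform's job: route an incoming event to its handler."""
    return handlers[event_name](payload)

print(fire("iot-temperature-reading", {"sensor": "boiler-3", "temp": 97}))
# alert: boiler-3 at 97C
```

Everything outside the handler body, routing, hosting, scaling, is the platform's concern in this model, which is why billing can be per execution rather than per VM.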

When you said cloud is encouraging adoption of new technologies, you mentioned software-defined networking. What are you seeing there?

Software-defined networking ends up being one of the more innovative aspects of the platform. The way in which we scale, and the way in which we expose that scale to customers, requires a huge amount of investment in our software-defined network. A good example is our load balancers, which were all hardware-based when we started; we ended up converting them to a full software solution. It was really the only way we were going to scale to the size and magnitude we need now.

We’ve also exposed the full software-defined network capabilities to customers. This comes in the form of customers being able to define their network security groups and routing rules, and enabling partners like Barracuda and F5 to deploy virtual appliances.

All of these are effective changes in the way customers can expect to work with the platform they’re deploying, and the rate of change they can now make is hugely improved. Things that used to require modifying and updating a physical device can now be changed by going into our portal, entering a few values and hitting submit, and you’ve changed all of your security rules. It makes it really agile.
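The agility claim boils down to security rules being plain data rather than device state. A minimal sketch, assuming a hypothetical rule format (this is not Azure's NSG implementation):

```python
# Conceptual sketch of a software-defined network security group: rules
# are data, so "updating the firewall" is just editing a list.

def allowed(rules, port):
    """First matching rule wins; default deny."""
    for rule in rules:
        if rule["port"] == port:
            return rule["action"] == "allow"
    return False

nsg = [
    {"port": 443, "action": "allow"},  # HTTPS in
    {"port": 22, "action": "deny"},    # no SSH from outside
]

assert allowed(nsg, 443) is True
assert allowed(nsg, 22) is False
assert allowed(nsg, 8080) is False  # no rule -> default deny

# Changing the security posture is one data change, not a device visit:
nsg.append({"port": 8080, "action": "allow"})
assert allowed(nsg, 8080) is True
```

Because the rule set is data behind an API, the same change can be made from a portal form, a script, or a template, which is what makes the iteration speed possible.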

It’s also an exciting recruiting opportunity because it drives a lot of interest. Five years ago software-defined networking and machine learning felt like science fiction, and now they are becoming practical science.

Is security still top of mind for customers considering Azure?

Network security, and customers’ control over it, is a big factor. Besides being able to expose some of the capabilities I mentioned, we have additional services like Azure Security Center that integrate with these tools and will alert customers when things aren’t configured properly.

The best example of this is if you have a virtual machine that’s not behind the Web Application Firewall (WAF) and you want all of your virtual machines to be behind the WAF, this will tell the system administrator there is a security concern. That’s important because the cloud lets customers be more agile, and they need security solutions that match that agility. So yes, it’s an important conversation with almost every customer.
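The WAF check described here is, at heart, a policy scan over inventory data. A minimal sketch with a hypothetical data model (not the Azure Security Center API):

```python
# Conceptual sketch of a configuration check: flag any VM that should be,
# but is not, behind the Web Application Firewall.

def waf_alerts(vms):
    """Return the names of VMs not behind the WAF."""
    return [vm["name"] for vm in vms if not vm["behind_waf"]]

fleet = [
    {"name": "web-1", "behind_waf": True},
    {"name": "web-2", "behind_waf": False},  # drifted out of policy
    {"name": "web-3", "behind_waf": True},
]

print(waf_alerts(fleet))  # ['web-2']
```

Running a check like this continuously, rather than at audit time, is what lets the security posture keep up with the agility the cloud gives the rest of the team.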

Closing up here, let’s talk a bit about competition. I presume Amazon is your biggest competitor, and for your team in particular, how do you square up?

One of the two areas where we show a lot of strength in the market is of course hybrid, as we’ve discussed. And that’s the full set of capabilities … the Azure Stack story, the disaster recovery story, the backup story. A lot of customers are excited about our comprehensive hyperscale public cloud complemented by that full hybrid story.

And the other area driving excitement is the openness and huge amount of choice we offer in Azure. One of the tenets of the platform is to offer customers the choice to deploy whatever they want … being able to deploy Linux solutions (we offer perhaps the best Red Hat support of any of the public clouds), being able to offer fantastic Cloud Foundry support with Pivotal, while also offering fantastic advanced services like Azure Functions and Service Fabric.

I think the combination of these services and partner solutions makes us attractive to customers who are looking to take advantage of the full power of the cloud.

Any closing thoughts?

I knew this movement was going to take off, you could see the excitement brewing with customers, but the rate at which it’s happening has surprised me. The deployment and growth we’re seeing in Azure and the cloud in general is mind-blowing. It’s really exciting. Daunting, but exciting.