Microsoft System Center 2012 review: Streamlined cloud service management

Reviews | May 14, 2012 | 12 min read | Data Center, Small and Medium Business, Windows

Pricing and the task of putting together components have been vastly simplified.

Microsoft released the final version of its System Center 2012 suite of components in April at a conference in Las Vegas. I’ve taken a hard look and used it in a variety of tests, and I find it a compelling product that has lost a lot of its licensing and complexity baggage. Let’s drill down.

System Center’s goal

The original idea behind System Center was to provide a product that delivered on three main tenets: to allow your infrastructure to become more productive and efficient, to make your applications and services more reliable and to bring the cloud into your on-premises systems.

With System Center 2012, Microsoft wants you to use what you already have in terms of hardware and software investment and simply wring more out of it in terms of efficiency, uptime and performance. It’s not just about Windows, either: Microsoft collects massive amounts of usage data from its Customer Experience Improvement Program, which shows that nearly 20% of customers are monitoring Linux with System Center.

System Center delivers tools and products that not only make the applications behind your business deliver predictable (and high) uptime, but also have excellent analytics for when things go wrong. With code that Microsoft acquired from AVIcode in 2010, System Center can now see deeply into applications. For example, an administrator can actually view the individual SQL calls that are taking the longest in an application — an issue that directly impacts the performance experience for end users.

Finally, and perhaps most importantly, Microsoft set out with System Center 2012 to bring the knowledge, experience and capabilities of cloud computing into the private data center so that you can use some of the same methodologies and features available in public cloud services right on premises — and even combine management of the two with a single tool set.

There are a lot of moving parts to System Center. Briefly running down the entire suite’s components, you have:

  • App Controller, a new tool that integrates with other System Center components to give a single view of on-premises applications as well as the applications deployed via the Windows Azure cloud platform.
  • Configuration Manager, basically the old Systems Management Server (SMS) product that has gotten much, much better and more flexible over time. There has not been a lot of work on this component in the 2012 edition, however.
  • Data Protection Manager, a backup and restore product that now supports continuous data protection.
  • Endpoint Protection, an enterprise-class anti-malware and protection engine that used to be sold under the Forefront brand name; there’s nothing really new in this edition.
  • Operations Manager, which now supports the discovery of routers, switches, network interfaces and ports (and monitoring them all, too) in addition to the other software- and application-based components that the product has historically found; it also monitors Web platform applications.
  • Orchestrator, a new tool that lets you graphically design scripts called runbooks that direct services and applications, and the workflows that go along with them.
  • Service Manager, a new user servicing and ticketing tool that helps organize support functions and provides a tracking mechanism for requested and assigned work.
  • Virtual Machine Manager, which manages virtual machines for their entire lifecycle, from installation of templates in a library to automated deployments to migrations and monitoring.

There are two key pieces in System Center 2012 that I see as the glue that makes it all work together: Virtual Machine Manager (VMM) and System Center Operations Manager (SCOM). That’s not to say the rest of the components haven’t been updated and freshened, but by far the most compelling additions in the System Center 2012 suite are VMM and SCOM and how they enable a better data center and private cloud environment.

Hands-on testing

I was able to test the release version of System Center 2012 in my lab and simulate the various tasks that, according to Microsoft, data center administrators interested in deploying private clouds would undertake. I also had the opportunity in January to test the release candidate edition of this software alongside Microsoft’s engineers and support people, with them pointing out specific areas in which they thought admins would find particular interest.

I broke my testing out into three pieces: requesting private cloud components, creating clouds from bare metal, and provisioning and monitoring services. This three-part test happened both times I tested — in January and again in April.

Requesting private cloud components

After the initial installation of the suite of components, which went as expected with no surprises (pleasant or otherwise), I prepared the system as an administrator:

  • I created a template for a service request, which let me define the types of resources that end users would be able to request from the system.
  • I then created a workflow for approving service requests, taking them from the submission stage and routing them over to another administrative account I set up. This other account represented a separate administrator who was responsible for, say, resource control.

Next, I launched App Controller to view the resources to which I had access as an end user or junior administrator. Then I began a simulation sequence by submitting a new service request using Service Manager. I chose a template that I had created earlier in the Service Catalog, and I customized it for a new request, acting as if I were the owner of an application that required some cloud-based services from the data center. (Picture this as essentially a private version of the sign-up experience that a customer new to Amazon Web Services, Windows Azure or another public cloud platform provider would go through to begin service there.)

The service request wizard is clear and simple to understand, but the administrator has to do some background work to correctly set up the templates that owners will use to request services. Therefore, an understanding of the concrete resources available — such as which hosts support virtualization, what levels of virtual machine capacity are required, what networking resources should be made available and in what quantities and configurations, and more — is a must for the data center administrative team.
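
To make that concrete, here is a minimal sketch, in Python rather than Service Manager’s actual schema, of the kind of information such a template has to pin down before owners can request against it. All of the field names and values are illustrative.

```python
# Hypothetical service-offering template. The fields are illustrative, not
# Service Manager's real schema; the point is that an administrator has to
# decide these concrete values before end users can request anything.
vm_request_template = {
    "name": "Standard virtual machine",
    "allowed_vm_sizes": [
        {"vcpus": 1, "ram_gb": 2},
        {"vcpus": 2, "ram_gb": 4},
        {"vcpus": 4, "ram_gb": 8},
    ],
    "allowed_os_images": ["Windows Server 2008 R2 (sysprepped)"],
    "host_groups": ["Lab Hyper-V hosts"],   # which hosts support virtualization
    "networks": ["Lab-VM-Network"],         # networking resources made available
    "approval_required": True,              # route requests to a second administrator
}
```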

Taking things a step further, I used the self-service portal functionality of VMM to submit a specific request for compute resources over the Web. I asked for a virtual machine with two virtual CPUs and 4GB of RAM, provisioned with a sysprepped copy of Windows Server 2008 R2. (Sysprep is a tool that removes the security identifiers from an installed image of Windows, making that image ready to be applied to other machines in bulk.)

Because I had set up an approval workflow within VMM, the request I submitted as an end user was directed to an administrator account I defined. I then logged on as an administrator, approved the request, allowed VMM to notify the user directly via email, and then watched as VMM kicked off the creation of the virtual machine, exactly to my specifications, as an automated job. I monitored the process of the virtual machine deployment from within the console of VMM.
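
The end-to-end flow I just described — submit, approve, provision, notify — reduces to a simple state machine. The sketch below models only that logic; the function and account names are my own shorthand, not VMM cmdlets or Service Manager objects.

```python
# Illustrative model of the request/approval/provisioning flow described above.
# None of these functions correspond to real System Center APIs.
def submit_request(owner, vcpus, ram_gb, image):
    return {"owner": owner, "vcpus": vcpus, "ram_gb": ram_gb,
            "image": image, "state": "pending_approval"}

def approve(request, approver):
    request["state"] = "approved"
    request["approved_by"] = approver
    return request

def provision(request):
    # In VMM this runs as an automated job; here it is just a state change.
    request["state"] = "deploying"
    print(f"Creating VM: {request['vcpus']} vCPUs, {request['ram_gb']}GB RAM, {request['image']}")
    request["state"] = "running"
    print(f"Emailing {request['owner']}: your virtual machine is ready")
    return request

req = submit_request("app-owner@contoso.local", vcpus=2, ram_gb=4,
                     image="Windows Server 2008 R2 (sysprepped)")
provision(approve(req, approver="resource-admin"))
```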

Creating clouds from bare metal

Next, I tested VMM’s capability, called the “fabric workspace,” to deploy a completely new virtualization stack, including host OS and VM guests, to bare metal hardware. This is a very useful capability that allows data center admins to unload Hyper-V hosts directly from the truck, unbox them, insert network cables — and then leave.

The software can start up the new host right from the VMM console using the network card’s preboot execution environment (PXE), and then it can deploy either a standard or custom version of Windows Server 2008 R2, install and enable the Hyper-V role, and then deploy guests as needed.
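
In outline, the sequence is: power on the new server, PXE-boot it into a deployment environment, lay down the host image, enable Hyper-V, register the host and then place guests. The following sketch simply writes that ordering down in Python; the step descriptions are mine, not actual VMM job names.

```python
# Schematic ordering of the bare-metal deployment described above.
# The step descriptions are placeholders, not real VMM tasks or APIs.
BARE_METAL_STEPS = [
    "Power on the new server (for example, through its management controller)",
    "PXE-boot the server against the deployment infrastructure VMM manages",
    "Apply the standard or customized Windows Server 2008 R2 host image",
    "Install and enable the Hyper-V role",
    "Register the host with VMM and place it in the target host group",
    "Deploy virtual machine guests from library templates as needed",
]

def deploy_bare_metal_host(hostname):
    for number, step in enumerate(BARE_METAL_STEPS, start=1):
        print(f"[{hostname}] step {number}: {step}")

deploy_bare_metal_host("hyperv-host-07")   # hostname is hypothetical
```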

The fabric workspace can be used to create new private cloud compute resources, or even to replace failing cluster node members. When used with failover clustering in Windows Server 2008 and the R2 version of the operating system, new clusters can be created and older clusters can be fixed almost magically once new hosts are brought online. By creating standard templates and definitions within VMM, deployment of new Windows Server-based hardware is completely standardized.

I saw this work in Microsoft’s lab, and apart from a couple of network glitches that mainly pertained to how we were accessing the lab environment, I was impressed with how automatic the process was. This capability isn’t supported for VMware hosts at this time, although it would be a nice addition for the future. For now, standardized deployment of new hardware is available only for Windows Server 2008-based machines.

Once the host is deployed, VMM takes over with a New Private Cloud Wizard. I created a new private cloud through simple screens designed to gather the appropriate compute resources, network infrastructure details, load balancers for the cloud (if any), available storage space, how many virtual machines the cloud could host, how much memory was available and which hypervisor to use.
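
Put another way, the wizard’s inputs amount to a capacity-and-placement definition for the cloud. Here is a hypothetical summary of those fields in Python; the names and values are illustrative, not VMM’s internal representation.

```python
# Illustrative summary of what the New Private Cloud Wizard collects.
private_cloud = {
    "name": "Lab-Cloud-01",
    "host_groups": ["Lab Hyper-V hosts"],    # compute resources backing the cloud
    "logical_networks": ["Lab-VM-Network"],  # network infrastructure made available
    "load_balancers": [],                    # none in my lab setup
    "storage": ["Lab SAN - Bronze tier"],    # available storage space
    "capacity": {
        "max_virtual_machines": 20,
        "max_memory_gb": 64,
    },
    "hypervisor": "Hyper-V",                 # which hypervisor the cloud uses
}
```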

After a bit of waiting, my new cloud was set up and ready to serve up an application. If I wanted, I could have delegated capabilities to other administrators while limiting the scope of their selection; in Microsoft’s lab, for instance, I delegated the creation of resources to another user role that I defined, and I limited the number of virtual machines that users in that role could create to seven.

Other settings can be controlled in this way as well, and this is useful for giving junior administrators or business unit IT liaisons the ability to fulfill some requests while ensuring the larger resource picture is known by the data center group that owns the compute power.
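
Conceptually, enforcing that kind of cap is nothing more than a quota check against the delegated role, along these lines (an illustrative snippet, not how VMM implements it):

```python
# Illustrative quota check for a delegated user role capped at seven VMs.
def can_create_vm(role):
    return role["vms_created"] < role["max_vms"]

delegated_role = {"name": "Business-unit VM creators", "max_vms": 7, "vms_created": 7}
print(can_create_vm(delegated_role))  # False: the eighth VM request would be refused
```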

Provisioning and monitoring services

Now that I’d configured the hardware and the cloud fabric, I was ready to test the process of deploying an application. Within VMM, I used a pre-created pattern — provided by Microsoft out of the box — that defines a three-tiered application: one tier each for the web front end, the application logic and the data used within the application.

The Service Template Designer feature opened the template, and I customized the names, workloads and other details of each of the tiers. I also dragged and dropped the different tiers around the screen and established relationships between them, much like playing with Microsoft Visio or a mind-mapping tool. It didn’t take long to understand how to arrange the tiers. I then used this template to deploy an application on the cloud fabric I just created.
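
The template is essentially a small dependency graph: each tier declares its workload, how many instances it runs and which tier it talks to. A hypothetical sketch of that structure (my own field names, not VMM’s template schema):

```python
# Illustrative structure of a three-tier service template and the
# relationships between its tiers.
service_template = {
    "name": "Three-tier line-of-business app",
    "tiers": {
        "web":  {"workload": "IIS web front end",       "instances": 2, "talks_to": "app"},
        "app":  {"workload": "application middle tier", "instances": 2, "talks_to": "data"},
        "data": {"workload": "SQL Server database",     "instances": 1, "talks_to": None},
    },
}

# Walk the relationships from the web tier down to the data tier.
tier = "web"
while tier:
    print(tier, "->", service_template["tiers"][tier]["workload"])
    tier = service_template["tiers"][tier]["talks_to"]
```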

Finally, I simulated the failure of a website running on my lab implementation. Using System Center Orchestrator, I was able to create a runbook to specify what the normal operation of the website should look like and then what to do if an error was found.

In this case, I simply stopped a website within Microsoft’s Internet Information Services (IIS) console. System Center Operations Manager detected the website error after a few seconds and repaired it automatically, and then Orchestrator took over and altered the runbook so that future errors could automatically be remediated without intervention.
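
The logic at work is the classic monitor-detect-remediate loop. The sketch below reproduces that control flow in Python with simulated stand-ins for the health check and the restart action; it is not an Orchestrator runbook (those are built graphically), just the pattern the runbook encodes.

```python
import time

# Simulated monitor/detect/remediate loop. In the lab, SCOM did the detection
# and the remediation restarted the stopped IIS website; here both sides are
# stand-ins so the control flow is runnable on its own.
site_state = {"Default Web Site": "stopped"}   # simulate the failure I injected

def website_is_healthy(site):
    return site_state[site] == "running"

def restart_website(site):
    site_state[site] = "running"

def remediation_loop(site, check_interval_seconds=5, max_checks=3):
    for _ in range(max_checks):
        if website_is_healthy(site):
            print(f"{site}: healthy")
        else:
            print(f"{site}: error detected, restarting")
            restart_website(site)
        time.sleep(check_interval_seconds)

remediation_loop("Default Web Site", check_interval_seconds=0)
```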

Clearly this is a simple example, and not all errors can be fixed automatically. But with proper data center processes, runbooks can be updated and stored over time, so Orchestrator becomes aware of more error conditions it can remediate and uptime generally improves the longer a system runs.

Pricing and component availability

Both the price and complexity of System Center have been reduced in the 2012 version. To begin with, System Center is now available in just two SKUs: Standard and Datacenter. The rule of thumb for which product to choose centers around virtualization: If you do any sort of serious virtualizing of servers or workloads, you want the Datacenter edition. It allows you to manage an unlimited number of virtual machines for one flat fee ($3,600 per license, before volume discounts, at the time of this writing).

The Standard edition is less than half the cost, at $1,300 per license. But with it, you can only manage two instances — the host and a virtual machine guest — making it significantly less capable than the Datacenter edition but a good bet for smaller enterprises.
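
A quick back-of-the-envelope comparison, using only the list prices quoted above and ignoring volume discounts and the finer points of Microsoft’s licensing terms, shows why virtualization density is the deciding factor:

```python
# Rough per-instance cost at the quoted list prices. This deliberately ignores
# volume discounts and licensing fine print; it only illustrates why heavy
# virtualization favors the Datacenter edition.
DATACENTER_LICENSE = 3600   # flat fee, unlimited virtual machines
STANDARD_LICENSE = 1300     # covers two instances: the host plus one guest

for vms_per_host in (1, 4, 10, 25):
    per_instance = DATACENTER_LICENSE / (vms_per_host + 1)   # guests plus the host
    print(f"{vms_per_host:>2} VMs on a host: about ${per_instance:,.0f} per managed instance "
          f"with Datacenter (Standard works out to ${STANDARD_LICENSE / 2:,.0f})")
```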

This is an enormous improvement over past editions of the product. One fee unlocks all of the components, and you don’t have to piece together a custom suite of products.

The last word

If you’re putting together a private cloud, or if integrating a lot of compute resources spread around into a single fabric is in your plans, you’d be hard-pressed to find a more well-integrated product. My biggest complaint about System Center’s past versions is the complexity of the pricing and the offering itself. But now that you can buy a license and get it all for a reasonable price, it’s hard not to like it.

The product could better support VMware and other hypervisors, and I would like to see more of a unified interface for all of the administrative tools; it can get clunky to move among the individual products’ admin applications. But for the price, System Center 2012 is the state of the art.

Jonathan Hassell runs 82 Ventures, a consulting firm based out of Charlotte, N.C. He’s also an editor with Apress Media LLC. Reach him at jhassell@gmail.com.