Australia and New Zealand leading the software-defined data centre revolution

The software-defined data centre is the future of agile, automated, scalable business

A/NZ leading the way. Source: Emma Jane Hogbin Westby (Flickr)

Australia and New Zealand's track record of early technology adoption, combined with our regular use as a test bed for global organisations trialling new solutions and systems, puts us in a perfect position to demonstrate what such revolutionary technologies can bring to businesses.

One of the biggest opportunities lying before us in the current market is to use virtualisation to drive competitive business advantage and cost savings. Software-defined infrastructure is already a reality in many of our businesses, and a significant step further – the software-defined data centre – is the future of agile, automated, scalable business, and we’re leading the charge.

Virtualisation itself is nothing new to us. In the A/NZ region we've been abstracting business applications from hardware for some time now, whereas other countries in Asia Pacific, such as Korea, are still early in their adoption cycle.

It all started with compute, where multiple virtual machines, each with its own dedicated virtual hardware, operating system and applications, are abstracted from the underlying server hardware. This makes it possible to consolidate previously isolated workloads onto a centralised pool of resources (think CPU and memory), enabling compelling business benefits such as reduced hardware complexity, management overhead and power consumption/cooling.
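To make the consolidation idea concrete, here is a minimal sketch of packing virtual machine demands onto a shared pool of identical hosts; the host capacity and VM sizes are illustrative assumptions, not figures from this article:

```python
# Illustrative only: first-fit placement of VM demands onto a pool of
# identical hosts. Host capacity and VM sizes are made-up example figures.
from dataclasses import dataclass

@dataclass
class Host:
    cpu_free: int   # vCPUs still available on this physical server
    mem_free: int   # GB of RAM still available on this physical server

HOST_CPU, HOST_MEM = 32, 256   # assumed capacity of one commodity server
vms = [(4, 16), (2, 8), (8, 64), (2, 4), (4, 32), (1, 2)]   # (vCPU, GB) per workload

hosts = []
for cpu, mem in vms:
    # place each VM on the first host with room, or add another host to the pool
    target = next((h for h in hosts if h.cpu_free >= cpu and h.mem_free >= mem), None)
    if target is None:
        target = Host(HOST_CPU, HOST_MEM)
        hosts.append(target)
    target.cpu_free -= cpu
    target.mem_free -= mem

print(f"{len(vms)} workloads consolidated onto {len(hosts)} pooled host(s) "
      f"instead of {len(vms)} dedicated servers")
```

With these example figures, six workloads that would otherwise occupy six dedicated servers fit comfortably onto a single pooled host.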

With so many benefits, it's not surprising to see the same approach applied to the other pillars of the data centre, network and storage, creating what's been termed the SDDC (software-defined data centre). But what impact does virtualisation ultimately have on internal IT?

In the Asia-Pacific region we are seeing widespread adoption of the SDDC, where hardware has become a commodity: it no longer provides differentiating capabilities of its own, and is simply built with minimal component redundancy so it can be replaced with minimal impact. Enterprises are standardising on converged infrastructures and using automation tools to programmatically provision virtual infrastructure based on application development requirements. A whole new operational model has emerged, with DevOps now starting to challenge traditional IT governance frameworks such as ITIL (the Information Technology Infrastructure Library).
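As a rough sketch of what programmatically provisioning virtual infrastructure looks like in practice, the snippet below reconciles a desired-state specification against the current inventory; the names, sizes and print-statement stand-ins for real API calls are all hypothetical:

```python
# Sketch of declarative provisioning: reconcile a desired-state spec against
# the current inventory, creating what is missing and removing what is stale.
# All names and specs here are hypothetical placeholders.

desired = {
    "web-01": {"cpu": 4, "mem_gb": 16},
    "web-02": {"cpu": 4, "mem_gb": 16},
    "db-01":  {"cpu": 8, "mem_gb": 64},
}

current = {
    "web-01": {"cpu": 4, "mem_gb": 16},
    "old-batch": {"cpu": 2, "mem_gb": 8},
}

def reconcile(desired, current):
    """Return the create/delete actions needed to reach the desired state."""
    to_create = {name: spec for name, spec in desired.items() if name not in current}
    to_delete = [name for name in current if name not in desired]
    return to_create, to_delete

to_create, to_delete = reconcile(desired, current)
for name, spec in to_create.items():
    print(f"create VM {name} with {spec}")   # in practice: a call to the platform's API
for name in to_delete:
    print(f"delete VM {name}")               # in practice: a call to the platform's API
```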

At the heart of all this is the ability to cluster compute, network and storage hardware into a generic pool of resources which can then be manipulated by software. The SDDC enables us to provision networks with micro-segmentation, securing critical business information via policy. We can also provision all-flash storage using RESTful APIs just in time for an application to process business data and deliver a result, then tear the whole virtual infrastructure down again with minimal effort.
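A minimal sketch of that just-in-time lifecycle, assuming a hypothetical storage array endpoint, API token and payload format (not any specific vendor's REST API):

```python
# Hypothetical example: create an all-flash volume over a REST API, let the
# application use it, then tear it down. Endpoint, token and payload are
# illustrative assumptions, not a real vendor API.
import requests

ARRAY = "https://array.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <api-token>", "Content-Type": "application/json"}

# provision a volume just in time for the workload
resp = requests.post(f"{ARRAY}/volumes",
                     json={"name": "analytics-scratch", "size_gb": 500},
                     headers=HEADERS, timeout=30)
resp.raise_for_status()
volume = resp.json()

# ... the application attaches the volume, processes its data, delivers a result ...

# tear the infrastructure down again once the result has been delivered
requests.delete(f"{ARRAY}/volumes/{volume['name']}", headers=HEADERS, timeout=30).raise_for_status()
```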

This abstraction of software from hardware means we no longer depend on the physical limitations of the underlying hardware. Let's look at storage as an example.

Software-defined storage doesn't rely on physical disk RAID technology; instead, data availability is provided in software via parity or replication. If a physical drive fails, it doesn't matter: the data on that drive also exists elsewhere in the storage system, and it's simply a matter of controlling how many copies of the data the system should maintain. Physical hardware component failure therefore has very little impact on the rest of the infrastructure, and that impact can be reduced further by utilising NAND flash, which removes mechanical moving parts while increasing overall application responsiveness.
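A toy model of that idea, with the replica count, drive names and object used purely as illustrative assumptions: each write lands on two distinct drives, so losing the drive that holds one copy leaves the data readable from the other.

```python
# Toy model of replication in software-defined storage: each object is written
# to REPLICAS distinct drives, so any single drive failure leaves data readable.
REPLICAS = 2                                     # assumed "copies to maintain" policy
drives = {f"drive-{i}": {} for i in range(4)}    # four hypothetical physical drives

def write(obj_id, data):
    # spread REPLICAS copies across distinct drives (simple deterministic layout)
    names = sorted(drives)
    start = hash(obj_id) % len(names)
    for k in range(REPLICAS):
        drives[names[(start + k) % len(names)]][obj_id] = data

def read(obj_id):
    # any surviving copy will do
    for contents in drives.values():
        if obj_id in contents:
            return contents[obj_id]
    raise KeyError(obj_id)

write("orders-2016", b"...business data...")
failed = sorted(drives)[hash("orders-2016") % len(drives)]   # drive holding the first copy
drives[failed].clear()                                       # simulate that drive failing
print(read("orders-2016"))                                   # still served from its replica
```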

The by-product of this shift to software is that commodity hardware can be manufactured with fewer moving parts and cheaper components, simplifying overall system management while availability is built further up the stack in the software/application layer. This architecture also enables businesses to purchase only the hardware they need today and grow their investment incrementally over time, instead of making a large up-front purchase sized to last three to five years. Because of these benefits, we have watched internal IT teams become far less concerned with operationally managing hardware compatibility, firmware updates and maintenance windows for their infrastructure.
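To make the incremental-purchase point concrete, here is a back-of-the-envelope comparison in which every figure (cost per terabyte, starting capacity, growth rate) is an assumed example, not data from this article:

```python
# Back-of-the-envelope "pay as you grow" illustration. Every figure below is
# an assumed example value, not real pricing or capacity data.
cost_per_tb   = 1000     # assumed cost per usable TB of commodity capacity
need_year1_tb = 100      # capacity actually required today
growth        = 0.30     # assumed annual data growth
years         = 5

# Sized up front: commit on day one to what year 5 is forecast to need
upfront_outlay = need_year1_tb * (1 + growth) ** (years - 1) * cost_per_tb

# Pay as you grow: spend only on this year's requirement and defer the rest
year1_outlay = need_year1_tb * cost_per_tb

print(f"day-one spend when sized for 5 years:   ${upfront_outlay:,.0f}")
print(f"day-one spend when grown incrementally: ${year1_outlay:,.0f}")
print("remaining capacity is bought later, and only if the forecast growth materialises")
```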

As cloud-native applications emerge as the next application development standard, hybrid data centre infrastructure approaches are beginning to evolve. While marketing would have you believe that everyone is doing this now, the reality today is that data (and the underlying infrastructure needed to manage it) will continue to remain on-premises, close to internal business consumers. This will require ongoing support for internal enterprise application architectures.

Virtualisation software becomes a gateway to automation and leads to the adoption of hybrid private/public cloud architectures built on commodity, pay-as-you-grow infrastructure hardware, ultimately providing CAPEX and OPEX benefits to most organisations.

Craig Waters is Virtualisation Architect APJ, Pure Storage

