
Werner Vogels - CTO, Amazon.com
In surpassing 30,000 attendees - up from 19,000 the previous year - AWS re:Invent 2016 continues to capture the imagination of the partner, customer and developer communities.
Yet despite the bumper crowds, it was intelligence exhibited by machines that stole the show in Las Vegas.
Artificial intelligence, to be precise - heralded as the next great disrupter in cloud, and the weapon of choice for vendors fighting for increased market share.
While nothing is certain in life but death and taxes - well, perhaps for some - when it comes to public cloud, the dominance of Amazon Web Services is both predictable and undeniable.
Yet the battle for control of the skies has been taken up a notch, with the tech giant enhancing services across its broad portfolio, including new cloud-native database offerings designed to lure large enterprise accounts.
At the conference, AWS announced a number of new AI services that embed deep learning (DL), as well as enhanced GPU support such as elastic GPUs, which can be attached to any EC2 instance and come in varying sizes to match the GPU capacity required.
“These new AI offerings place AWS at the forefront of providing developers with advanced AI services for embedding in custom applications and benefiting from the latest DL technology,” Ovum research analyst, Michael Azoff, said.
“Overall the message from re:Invent was that large enterprise accounts are moving to Amazon’s cloud.”
Power of AI services
During re:Invent 2016, AWS announced several new AI services powered by DL neural network technology.
“AWS made a switch to the MXNet DL library, an open source project that it now contributes to, having hired a number of engineers from Carnegie Mellon University involved in the project,” Azoff explained.

“This move away from its home-grown DSSTNE library to MXNet, a computation and memory-efficient DL library that targets heterogeneous devices from mobile to distributed GPU clusters, applies to both internal use within AWS and its AI service offerings.
“AWS will further support MXNet with new tools for developers. It will also continue to support a range of leading DL libraries including TensorFlow, Theano, Caffe, and Torch.”
Azoff said AWS offers GPU support for DL with one click, and has created P2, an EC2 instance type designed for DL workloads.
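As a rough illustration of how such GPU capacity might be provisioned programmatically, the sketch below builds request parameters for the EC2 RunInstances API in the style of boto3. The AMI ID, instance size, and elastic GPU type are illustrative assumptions, and the live SDK call is shown only in comments:

```python
# Hedged sketch: building EC2 RunInstances parameters for a DL workload.
# The AMI ID and size choices below are illustrative assumptions.

def p2_launch_params(ami_id: str, instance_type: str = "p2.xlarge", count: int = 1) -> dict:
    """Parameters for launching a GPU-backed P2 instance via EC2 RunInstances."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

def with_elastic_gpu(params: dict, gpu_type: str = "eg1.medium") -> dict:
    """Add an elastic GPU specification to a general-purpose instance request."""
    out = dict(params)
    out["ElasticGpuSpecification"] = [{"Type": gpu_type}]
    return out

# To launch for real (requires boto3 and AWS credentials):
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.run_instances(**p2_launch_params("ami-xxxxxxxx"))
```

Keeping the parameters in a plain dict like this makes it easy to vary the instance size - or attach an elastic GPU to a non-GPU instance - without touching the call site.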
Furthermore, AWS is also supporting FPGAs, which are seen as a middle ground between running software algorithms with hardware acceleration on CPUs and GPUs, and hardwiring AI systems into fixed circuitry.
“FPGAs provide flexibility in hardwiring algorithms,” Azoff added. “This accelerates their running, but designing and testing these systems is not as rapid as when working with pure software algorithms.”
Delving deeper, AWS announced three new AI services, with more planned for next year: Amazon Rekognition, Amazon Polly, and Amazon Lex.
“The services are powered by DL for real-time and batch analysis and are designed to be easy to use and low cost,” Azoff said.
Specifically, Amazon Rekognition offers image recognition and analysis, such as facial analysis and categorisation, while Amazon Polly converts text to life-like speech (delivered as an MP3 audio stream) with a choice of 47 voices across 24 languages.
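To make the two services concrete, the sketch below assembles the request parameters for Polly's SynthesizeSpeech and Rekognition's DetectLabels operations as exposed through boto3; the voice and label-count defaults are illustrative assumptions, and the live calls appear only in comments:

```python
# Hedged sketch: request parameters for Amazon Polly (text-to-speech) and
# Amazon Rekognition (image analysis). Defaults are illustrative assumptions.

def polly_params(text: str, voice: str = "Joanna") -> dict:
    """Parameters for Polly's SynthesizeSpeech, which returns an MP3 audio stream."""
    return {"Text": text, "OutputFormat": "mp3", "VoiceId": voice}

def rekognition_params(image_bytes: bytes, max_labels: int = 10) -> dict:
    """Parameters for Rekognition's DetectLabels over raw image bytes."""
    return {"Image": {"Bytes": image_bytes}, "MaxLabels": max_labels}

# Live calls (require boto3 and AWS credentials):
#   import boto3
#   audio = boto3.client("polly").synthesize_speech(**polly_params("Hello"))
#   labels = boto3.client("rekognition").detect_labels(**rekognition_params(img))
```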
Completing the set, Amazon Lex is the conversation engine that also powers Amazon Alexa. For Amazon, Lex represents a third generation of conversational technology: the first was machine-oriented, the second control-oriented, and the third is intent-oriented.
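An intent-oriented exchange with Lex boils down to sending a free-form utterance and getting back a resolved intent. The sketch below builds parameters for the Lex runtime PostText operation; the bot name, alias, and user ID are hypothetical placeholders, and the live call is shown only in comments:

```python
# Hedged sketch: parameters for the Amazon Lex runtime PostText operation.
# The bot name, alias, and user ID used below are hypothetical placeholders.

def lex_post_text_params(bot_name: str, bot_alias: str, user_id: str, text: str) -> dict:
    """Parameters for sending one user utterance to a deployed Lex bot."""
    return {
        "botName": bot_name,
        "botAlias": bot_alias,
        "userId": user_id,
        "inputText": text,
    }

# A live exchange (requires boto3, AWS credentials, and a deployed bot):
#   import boto3
#   lex = boto3.client("lex-runtime")
#   reply = lex.post_text(**lex_post_text_params("BookTrip", "prod", "user-1",
#                                                "I want to book a hotel"))
#   # reply carries the resolved intent, slot values, and a response message
```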
“Amazon’s internal use of AI technology, on which it has over a thousand developers working, includes applications such as discovery and search recommendations, fulfilment and logistics, and Alexa and Echo, and it uses AI/machine learning to enhance many other features and products,” Azoff added.

“These internal AI capabilities are what Amazon has exposed to its customers as external services.”
Enterprise cloud
During the conference, Peter Weis, VP and CIO of Matson, a multibillion-dollar US-based public shipping company, explained why the enterprise went all in with AWS.
“Matson had been with another major cloud provider and had tried to work with it, but faced a number of obstacles,” Azoff explained.
“Foremost there was a culture gap, because the provider did not have cloud-native in its DNA; a large part of its business was still legacy/monolith-based, and some of its divisions saw cloud as cannibalising their business. This resulted in cultural and technology friction.”
As explained by Azoff, Matson had made a commitment to move to cloud-native technologies because it saw this as the future, and its systems had already made the transition to a modular architecture.
A key factor in moving to AWS and successfully adopting its cloud-native technologies was therefore Matson’s readiness and suitability to make the transition to micro-services and containers.
Not least of the advantages was being able to hire grade-A developers, who were unlikely to want to keep building monolithic systems.
“Amazon’s relationships with the vendors it is targeting for the enterprise market, such as Oracle, Salesforce, and SAP, are complex because these very same vendors are also partners,” Azoff said.
“To stay ahead of the other cloud providers, Amazon builds its own digital hardware and also runs its own global private network to connect its data centres.”
Azoff said relying on existing equipment providers and the internet “simply does not scale well enough” for the huge capacity needs and low-latency requirements of AWS.
“At the scale Amazon AWS requires, building and owning its own equipment reduces costs as well as speeding up maintenance and updates,” he added.