AWS 'standing at the edge of a new technical era'


Tapping further into genAI.

Swami Sivasubramanian (Amazon Web Services)


Credit: Amazon Web Services

AWS has revealed a raft of new AI services and functionality after stating it was “standing at the edge of another technological era”, describing the powerful relationship between humans and technology as “unfolding right before us”.

So said AWS vice president of data and AI Swami Sivasubramanian during his re:Invent keynote, which delved into the new data and artificial intelligence (AI) offerings coming to market.

“Generative AI is augmenting human productivity in many unexpected ways but also fuelling human intelligence and creativity.”

Some of the new AI services and functionality include the Anthropic Claude 2.1 model, which has a 200K-token context window and is claimed to have improved accuracy over long documents, and Meta's 70-billion-parameter Llama 2 model, or Llama 2 70B.

Next were updates to Titan, such as Amazon Titan Multimodal Embeddings, which converts images and short text into “embeddings” — numerical representations that help the model understand semantic meanings and relationships in data — and stores them in a customer’s vector database.
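To illustrate the concept only (a toy Python sketch with hand-made vectors, not Titan itself — real embeddings come from a trained model and have hundreds or thousands of dimensions), an embedding is simply a vector of numbers, and semantically similar inputs map to nearby vectors:

```python
import math

# Hypothetical hand-made "embeddings" for illustration; a real model
# such as Titan produces these vectors automatically from text or images.
EMBEDDINGS = {
    "red summer dress": [0.9, 0.8, 0.1],
    "crimson sundress": [0.85, 0.75, 0.15],
    "laptop charger":   [0.1, 0.2, 0.95],
}

def cosine_similarity(a, b):
    """Higher values mean the two vectors (and hence the inputs they
    represent) are semantically closer."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

With vectors like these stored in a vector database, "red summer dress" scores much closer to "crimson sundress" than to "laptop charger", even though the phrases share no words — which is the point of semantic embeddings.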

Amazon Titan Image Generator was also announced in preview. Aimed at users in the advertising, ecommerce, and media and entertainment industries, it creates new images and enhances existing ones using natural language prompts, enabling rapid ideation and iteration on large volumes of images at low cost.

AWS’ Generative AI Innovation Center – a program that pairs customers with AWS science and strategy experts experienced in AI/machine learning (ML) and generative AI – is also expanding to include a custom model program for Anthropic Claude. From the first quarter of next year, users can collaborate with experts to fine-tune Anthropic Claude models with their own data.

For SageMaker, HyperPod was announced, with AWS stating it removes the undifferentiated heavy lifting of building and optimising ML infrastructure for training models, claiming it can cut training time by 40 per cent.

Also new for SageMaker is Inference, which the cloud giant stated could reduce model deployment costs and latency by enabling customers to deploy multiple models to the same instance to better utilise the underlying accelerators, claiming an average 50 per cent reduction in deployment costs.

New updates were provided for AWS’ vector engine for Amazon OpenSearch Serverless, a scalable, high-performing similarity search capability, which has left preview and entered general availability. Vector search capabilities were also made available for Amazon DocumentDB and DynamoDB, as well as Amazon MemoryDB for Redis. Amazon Neptune Analytics, an analytics database engine that lets data scientists and application developers quickly analyse large amounts of graph data, was also announced as generally available.
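Underneath, a vector similarity search is conceptually simple: given a query vector, rank the stored vectors by distance and return the nearest. The brute-force Python sketch below shows the idea only; production engines such as the OpenSearch vector engine use approximate indexes (for example HNSW graphs) to make this scale, which this toy version does not attempt.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_search(store, query, k=2):
    """Return the ids of the k stored vectors nearest the query.
    store: dict mapping item id -> embedding vector (illustrative only)."""
    ranked = sorted(store.items(), key=lambda item: euclidean(item[1], query))
    return [item_id for item_id, _ in ranked[:k]]
```

For example, searching a three-document store with a query near the origin returns the two closest documents first; a real vector database performs the same ranking over millions of vectors using an index rather than a full scan.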

In addition to the four integrations for data connection and analysis without building and managing complex extract, transform and load (ETL) data pipelines that were announced yesterday, Sivasubramanian announced one for Amazon OpenSearch and S3, which lets users analyse infrequently queried data in cloud object stores while simultaneously using the operational analytics and visualisation capabilities of OpenSearch.

A preview for Amazon Clean Rooms ML was also announced, which allows users and their partners to apply privacy-enhancing ML to generate predictive insights without having to share raw data with each other.

More details were also shared about Amazon Q, AWS’ AI-powered business chatbot announced by AWS CEO Adam Selipsky, including its functionality with Redshift, such as Amazon Q generative SQL in Amazon Redshift Query Editor, an out-of-the-box web-based SQL editor for Redshift that lets users express queries in natural language and receive SQL code recommendations.

Additionally, Sivasubramanian said that the capability to use Q to create data integration pipelines using natural language would be coming soon.

Wrapping up Sivasubramanian’s announcements was the preview of model evaluation for Amazon Bedrock, which can help users evaluate, compare, and select models for their specific use case, using either automatic or human evaluations.
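Automatic model evaluation of this kind boils down to scoring each candidate model's outputs against reference answers with a metric, then comparing the scores. The deliberately simplified Python sketch below uses exact-match accuracy; Bedrock's actual evaluation offers richer metrics (and human review), and the function names here are invented for illustration.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference answer."""
    hits = sum(p == r for p, r in zip(predictions, references))
    return hits / len(references)

def pick_best_model(model_outputs, references):
    """model_outputs: dict mapping model name -> list of predictions.
    Returns (best model name, all scores) by exact-match accuracy."""
    scores = {name: exact_match_accuracy(preds, references)
              for name, preds in model_outputs.items()}
    return max(scores, key=scores.get), scores
```

Given the same benchmark prompts run through two candidate models, the model whose outputs match more reference answers wins; a managed evaluation service automates exactly this loop at scale across multiple metrics.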

Sasha Karen travelled to re:Invent 2023 as a guest of AWS.

