Developers can now use simple natural language to build or enhance chatbots with Amazon Lex, a tool for crafting conversational interfaces. Using new generative AI features, programmers can describe tasks they want the service to perform, like "organise a hotel booking including guest details and payment method," as highlighted in a recent blog post by the company.
“Without generative AI, the bot developer would have to manually design each element of the bot — intents or possible paths, utterances that would trigger a path, slots for information to capture, and prompts or bot response, among other elements,” Sandeep Srinivasan, a senior product manager of Amazon Lex at AWS, said in an interview. “With this approach, you get started easily.”
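The elements Srinivasan lists — intents, trigger utterances, slots, and prompts — can be pictured with a small sketch. This is illustrative plain Python, not the actual Lex API schema; the field names and the hotel-booking values are assumptions based on the article's example.

```python
# A minimal sketch of the elements a Lex bot developer would otherwise
# design by hand, using the hotel-booking example from the article.
# Field names here are illustrative, not the exact Lex API schema.
hotel_booking_intent = {
    "intentName": "BookHotel",        # a possible path through the bot
    "sampleUtterances": [             # phrases that trigger this path
        "I want to book a hotel",
        "Reserve a room for me",
    ],
    "slots": [                        # pieces of information to capture
        {"name": "GuestName", "prompt": "What name is the booking under?"},
        {"name": "CheckInDate", "prompt": "When would you like to check in?"},
        {"name": "PaymentMethod", "prompt": "How would you like to pay?"},
    ],
    "closingResponse": "Your hotel booking is confirmed.",
}

def missing_slots(intent, captured):
    """Return the slots the bot still needs to prompt the user for."""
    return [s["name"] for s in intent["slots"] if s["name"] not in captured]

print(missing_slots(hotel_booking_intent, {"GuestName": "Ada"}))
# → ['CheckInDate', 'PaymentMethod']
```

With the new generative features, a developer describes the task in a sentence and the service drafts this scaffolding rather than the developer writing each element out.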
Lex can also help with tricky human-bot interactions. If Amazon Lex can't resolve part of a conversation, it asks a foundation large language model (LLM), chosen by the bot builder, for help.
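The fallback flow just described can be sketched as follows. This is a toy stand-in, not Lex's implementation: the exact-match intent lookup and the `ask_llm` placeholder are assumptions used to show the control flow of deferring unresolved turns to a chosen model.

```python
def recognise_intent(utterance, known_utterances):
    """Toy intent matcher: exact phrase lookup, standing in for Lex's NLU."""
    return known_utterances.get(utterance.lower())

def ask_llm(utterance):
    """Placeholder for a call to the bot builder's chosen foundation model
    (in Lex's case, hosted on a service such as Amazon Bedrock)."""
    return f"[LLM interpretation of: {utterance!r}]"

def handle(utterance, known_utterances):
    """Resolve the turn locally if possible; otherwise defer to the LLM."""
    intent = recognise_intent(utterance, known_utterances)
    if intent is not None:
        return intent
    # The matcher couldn't figure out this part of the conversation,
    # so the chosen LLM is asked for help instead.
    return ask_llm(utterance)
```

For example, `handle("book a room", {"book a room": "BookHotel"})` resolves locally, while an unrecognised utterance falls through to the model.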
Another new Amazon Lex feature simplifies creating chatbots by automatically handling frequently asked questions (FAQs). Developers set up the bot's primary functions, and a built-in AI finds answers from a provided source — a company knowledge base, for example — to answer users' questions.
Amazon is also introducing a built-in QnAIntent feature for Lex, which incorporates the question-and-answer process directly into the intent structure. This feature utilises an LLM to search an approved knowledge base and return a relevant answer. The feature, available in preview, uses foundation models hosted on Amazon Bedrock, a service that offers a choice of FMs from various AI companies. Currently, the feature allows you to switch between Anthropic models, and “we are working to expand to other LLMs in the future,” Srinivasan said.
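The idea behind QnAIntent — an intent whose answers are retrieved from an approved source rather than scripted — can be sketched with a toy retriever. The configuration field names and the keyword-overlap lookup below are assumptions for illustration, not the real QnAIntent schema or Bedrock retrieval logic.

```python
# Illustrative only: field names are assumptions, not the QnAIntent API.
# The real feature is configured in Lex and backed by foundation models
# hosted on Amazon Bedrock (currently Anthropic models).
qna_intent_config = {
    "intentName": "QnAIntent",
    "dataSource": {"kind": "knowledgeBase", "id": "company-kb"},  # hypothetical id
    "foundationModel": "anthropic.claude",  # hypothetical model identifier
}

# A stand-in knowledge base: in practice this would be a company's
# documents indexed for the model to search.
faq = [
    {"question": "what are your opening hours", "answer": "We are open 9 to 5."},
    {"question": "where are you located", "answer": "Our office is in Seattle."},
]

def answer_from_kb(question, knowledge_base):
    """Toy retrieval: return the stored answer whose question shares the
    most keywords with the user's question, or None if nothing overlaps."""
    def overlap(entry):
        return len(set(question.lower().split())
                   & set(entry["question"].lower().split()))
    best = max(knowledge_base, key=overlap)
    return best["answer"] if overlap(best) > 0 else None
```

In the real feature, the LLM performs this search-and-answer step against the approved knowledge base instead of a keyword match.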
Amazon Lex can be thought of as a system of systems — and many of those subsystems employ generative AI, Kathleen Carley, a professor at the CyLab Security and Privacy Institute at Carnegie Mellon University, said in an interview.
“The key is that putting a large language model into Lex means that if you build or interact with an Amazon Lex bot, it will be able to provide more helpful, more natural human-sounding, and possibly more accurate responses to standard questions,” Carley added. “Unlike the old style analytic system, these bots are not task focused and so can do things other than follow a few preprogrammed steps.”
Lex is part of Amazon’s broader AI strategy, which includes building its own LLM. The model, codenamed “Olympus,” is customised to Amazon’s needs and reportedly has 2 trillion parameters, roughly twice the size of OpenAI’s GPT-4, which is rumoured to have over 1 trillion parameters.
“Amazon’s LLM is likely to be more flexible than GPT-4, better able to handle nuance, and may do a better job with linguistic flow,” Carley added. “But it is too early to really see the practical differences. The differences will depend on both differences in what the tools are trained on and the number of parameters.”
The latest features in Amazon Lex could be part of a coding revolution powered by generative AI. Developers are already experimenting with ChatGPT for coding tasks, with promising early results, especially for code review. Genuinely complex software will still require hands-on coding, but AI is likely to change how we use simpler no-code and low-code tools that require little technical knowledge.
When GitHub Copilot came out in 2021, it sometimes made mistakes or failed outright, but it was still helpful, and many expected it to improve and save developers time. Two years later, Copilot has improved and now requires a paid subscription, even for individual use. Coding assistants like Copilot also do more, such as explaining code, summarising updates, and checking for security problems.