
Large language models: The foundations of generative AI

Large language models evolved alongside deep-learning neural networks and are critical to generative AI. Here's a first look, including the top LLMs and what they're used for today.


Large language models (LLMs) such as GPT, Bard, and Llama 2 have caught the public’s imagination and garnered a wide variety of reactions. This article looks behind the hype to help you understand the origins of large language models, how they're built and trained, and the range of tasks they are specialised for. We'll also look at the most popular LLMs in use today.

What is a large language model?

Language models go back to the early twentieth century, but large language models (LLMs) emerged with a vengeance once deep neural networks were applied to language modelling. The Transformer deep neural network architecture, introduced in 2017, was particularly instrumental in the evolution from language models to LLMs.

Large language models are useful for a variety of tasks, including text generation from a descriptive prompt, code generation and code completion, text summarisation, translating between languages, and text-to-speech and speech-to-text applications.

LLMs also have drawbacks, at least in their current developmental stage. Generated text is usually mediocre, and sometimes comically bad. LLMs are known to invent facts, called hallucinations, which might seem reasonable if you don’t know better. Language translations are rarely 100% accurate unless they’ve been vetted by a native speaker, which is usually only done for common phrases. Generated code often has bugs, and sometimes has no hope of running. While LLMs are usually fine-tuned to avoid making controversial statements or recommending illegal acts, it is possible to breach these guardrails using malicious prompts.

Training large language models requires at least one large corpus of text. Training examples include the 1B Word Benchmark, Wikipedia, the Toronto Books Corpus, the Common Crawl dataset, and public open source GitHub repositories. Two potential problems with large text datasets are copyright infringement and garbage. Copyright infringement is currently the subject of multiple lawsuits. Garbage, at least, can be cleaned up; an example of a cleaned dataset is the Colossal Clean Crawled Corpus (C4), an 800GB dataset based on the Common Crawl dataset.

Along with at least one large training corpus, LLMs require large numbers of parameters, also known as weights. The number of parameters grew over the years, until it didn't. ELMo (2018) has 93.6 million parameters; BERT (2018) was released in 110-million and 340-million parameter sizes; GPT (2018) uses 117 million parameters; and T5 (2020) has 220 million parameters. GPT-2 (2019) has 1.5 billion parameters; GPT-3 (2020) uses 175 billion parameters; and PaLM (2022) has 540 billion parameters. GPT-4 (2023) reportedly has about 1.76 trillion parameters.

More parameters generally make a model more accurate, but models with more parameters also require more memory and run more slowly. In 2023, we've started to see some relatively smaller models released at multiple sizes: for example, Llama 2 comes in sizes of 7 billion, 13 billion, and 70 billion parameters, while Claude 2 has 93-billion and 137-billion parameter sizes.

A history of AI models for text generation

Language models go back to Andrey Markov, who applied mathematics to poetry in 1913. Markov showed that in Pushkin’s Eugene Onegin, the probability of a character appearing depended on the previous character, and that, in general, consonants and vowels tended to alternate. Today, Markov chains are used to describe a sequence of events in which the probability of each event depends on the state of the previous one.
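To make Markov's idea concrete, here is a minimal character-level Markov chain in plain Python. This is an illustrative sketch, not code from any production model; the sample text and function names are invented for the example.

import random
from collections import defaultdict, Counter

def build_markov_chain(text):
    # Count how often each character follows each other character.
    # The counts approximate P(next character | current character).
    chain = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        chain[current][nxt] += 1
    return chain

def generate(chain, start, length=40):
    # Sample one character at a time, conditioned only on the previous character.
    out = [start]
    for _ in range(length):
        chars, counts = zip(*chain[out[-1]].items())
        out.append(random.choices(chars, weights=counts)[0])
    return "".join(out)

chain = build_markov_chain("the quick brown fox jumps over the lazy dog " * 20)
print(generate(chain, "t"))

The output is gibberish, but it is gibberish whose letter-to-letter statistics mimic the training text, which is exactly the property Markov observed in Eugene Onegin.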

Markov’s work was extended by Claude Shannon in 1948 for communications theory, and again by Fred Jelinek and Robert Mercer of IBM in 1985 to produce a language model based on cross-validation (which they called deleted estimates), and applied to real-time large-vocabulary speech recognition. Essentially, a statistical language model assigns probabilities to sequences of words.

To quickly see a language model in action, just type a few words into Google Search, or a text message app on your phone, with auto-completion turned on.

In 2000, Yoshua Bengio and co-authors published a paper detailing a neural probabilistic language model in which neural networks replace the probabilities in a statistical language model, bypassing the curse of dimensionality and improving word predictions over a smoothed trigram model (then the state of the art) by 20% to 35%. The idea of feed-forward auto-regressive neural network models of language is still used today, although the models now have billions of parameters and are trained on extensive corpora; hence the term “large language model.”
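As a rough sketch of that idea, the following PyTorch snippet defines a Bengio-style feed-forward model that predicts the next word from a fixed window of previous words. The class name, layer sizes, and vocabulary size are arbitrary choices for illustration, not the architecture from the 2000 paper.

import torch
import torch.nn as nn

class FeedForwardLM(nn.Module):
    # Toy neural language model: embed a fixed window of previous words,
    # pass them through one hidden layer, and score every word in the
    # vocabulary as the possible next word.
    def __init__(self, vocab_size, context_size=3, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(context_size * embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context_ids):              # (batch, context_size) word IDs
        e = self.embed(context_ids)              # (batch, context_size, embed_dim)
        h = torch.tanh(self.hidden(e.flatten(1)))
        return self.out(h)                       # unnormalised next-word scores (logits)

model = FeedForwardLM(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (8, 3)))  # a batch of 8 three-word contexts

Today's LLMs follow the same basic recipe of learning word representations and predicting the next token, just with vastly more parameters and data.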

Language models have continued to get bigger over time, with the goal of improving performance. But such growth has downsides. The 2021 paper, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, questions whether we are going too far with the larger-is-better trend. The authors suggest weighing the environmental and financial costs first and investing resources into curating and documenting datasets rather than ingesting everything on the web.

Language models and LLMs explained

Current language models are built for a variety of tasks and goals and take various forms. For example, in addition to predicting the next word in a document, language models can generate original text, classify text, answer questions, analyse sentiment, recognise named entities, recognise speech, recognise text in images, and recognise handwriting. Customising language models for specific tasks, typically using small to medium-sized supplemental training sets, is called fine-tuning.

Some of the intermediate tasks that go into language models are as follows:

  • Segmentation of the training corpus into sentences
  • Word tokenisation
  • Stemming
  • Lemmatisation (reducing each word to its dictionary form, or lemma)
  • POS (part of speech) tagging
  • Stopword identification and (possibly) removal
  • Named-entity recognition (NER)
  • Text classification
  • Chunking (breaking sentences into meaningful phrases)
  • Coreference resolution (finding all expressions that refer to the same entity in a text)

Several of these are also useful as tasks or applications in and of themselves, such as text classification.
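As a quick illustration, several of these steps can be run in a few lines of Python with the spaCy library. This is just one possible toolkit, and the sample sentence is made up; the snippet assumes the small English pipeline has been installed with python -m spacy download en_core_web_sm.

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Google released BERT in 2018. It changed natural language processing.")

for sent in doc.sents:                      # sentence segmentation
    print("Sentence:", sent.text)

for token in doc:                           # tokenisation, lemmas, POS tags, stopwords
    print(token.text, token.lemma_, token.pos_, token.is_stop)

for ent in doc.ents:                        # named-entity recognition
    print(ent.text, ent.label_)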

Large language models differ from traditional language models in that they use a deep learning neural network and a large training corpus, and they require millions to billions of parameters, or weights, for the neural network. Training an LLM is a matter of optimising the weights so that the model has the lowest possible error rate for its designated task. An example task is predicting the next word at any point in the corpus, typically in a self-supervised fashion.
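Here is a minimal sketch of that self-supervised objective using the Hugging Face transformers library, with the small GPT-2 model as an illustrative stand-in for any causal language model. When the input IDs are also passed as labels, the library shifts them internally so that each position is scored against the token that actually comes next.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Large language models are trained to predict the next word in a sequence."
inputs = tokenizer(text, return_tensors="pt")

# The loss is the average cross-entropy between the model's predicted
# next-token distribution and the token that actually follows; training
# an LLM means driving this number down across the whole corpus.
outputs = model(**inputs, labels=inputs["input_ids"])
print(f"Next-token cross-entropy: {outputs.loss.item():.3f}")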

A look at the most popular LLMs

The recent explosion of large language models was triggered by the 2017 paper, Attention is All You Need, which introduced the Transformer as, “a new simple network architecture ... based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.”
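The core of that architecture is scaled dot-product attention, sketched below in PyTorch as a simplified, single-head version without masking or the learned query, key, and value projections a full Transformer layer would add.

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: tensors of shape (batch, sequence_length, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # how relevant each key is to each query
    weights = F.softmax(scores, dim=-1)            # attention weights sum to 1 per query
    return weights @ v                             # weighted sum of the values

q = k = v = torch.randn(1, 5, 16)                  # self-attention over a 5-token sequence
out = scaled_dot_product_attention(q, k, v)        # shape: (1, 5, 16)

Each output position is a mixture of information from every other position, weighted by relevance, which is what lets the Transformer model long-range dependencies without recurrence or convolutions.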

Here are some of the top large language models in use today.

