How does generative AI work?

Generative AI technology typically uses large language models (LLMs), which are powered by neural networks—computer systems loosely modeled on the structure of the human brain. These LLMs are trained on huge quantities of data (e.g., text, images) to recognize patterns, which they then follow in the content they produce.

For example, a chatbot like ChatGPT generally has a good idea of what word should come next in a sentence because it has been trained on billions of sentences and “learned” what words are likely to appear, in what order, in each context.
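That next-word guessing can be sketched with a toy bigram model: count which word most often follows which in a tiny "training corpus," then predict accordingly. This is a drastic simplification for illustration only; real LLMs use neural networks trained on billions of examples, not simple word-pair counts, and the corpus below is made up.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" (a real model trains on billions of sentences).
corpus = (
    "the cat sat on the mat "
    "the cat sat on the rug "
    "the dog chased the cat"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" appears after "the" more than any other word
print(predict_next("sat"))  # "on"
```

The model has no understanding of cats or mats; it only knows which words tended to co-occur in its training data, which is also why it can confidently produce a fluent but wrong continuation.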

This makes generative AI applications vulnerable to the problem of hallucination—errors in their outputs such as unsupported factual claims or visual glitches in generated images. These tools essentially “guess” what a good response to the prompt would be, and because of the vast amount of training data they draw on, their guesses are often right, but they can and do go wrong.