
In artificial intelligence, the context window is the maximum amount of text, measured in tokens (roughly 0.75 words each), that a large language model can process at once. This window must accommodate the model's instructions, the chat history, and room for the generated response. Exceeding the limit can cause older parts of the conversation to be dropped. Larger context windows require significantly more computational power and are therefore more expensive to operate. Models may also struggle to retrieve information buried in the middle of a long context, a phenomenon known as "lost in the middle".
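The budgeting described above can be sketched in code. The snippet below is a minimal illustration, not any particular model's implementation: it uses the rough 0.75-words-per-token estimate from the text (real systems use subword tokenizers, so counts differ) and drops the oldest messages when the conversation no longer fits the window.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: 1 token ~ 0.75 words, so words / 0.75 ~ tokens."""
    return max(1, round(len(text.split()) / 0.75))

def fit_to_window(system_prompt: str, messages: list[str],
                  window: int = 4096, reserve: int = 512) -> list[str]:
    """Drop the oldest chat messages until the prompt fits the context
    window, keeping `reserve` tokens free for the model's response."""
    budget = window - reserve - estimate_tokens(system_prompt)
    kept, used = [], 0
    # Walk the history from newest to oldest, keeping whatever fits;
    # anything older than the cutoff is lost, as the text describes.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

For example, with a 30-token window, a 5-token reserve, and ten 3-word messages, only the most recent six messages survive; the rest fall out of the window.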