Ted Chiang, writing in the New Yorker (New Yorker link, Archive.org link), finds a neat analogy for what ChatGPT does with the information it was trained on:
Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
Lossy JPEGs served the World Wide Web well for the first decade or so of its existence, but nobody should mistake them for the original images. And yet almost everyone does precisely that: they're good enough for most practical purposes, so long as they're not littered with compression artefacts.
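To make the lossiness concrete, here's a minimal sketch in Python using Pillow and NumPy (the filename photo.png is a placeholder for any image you have lying around): round-trip a picture through an aggressively compressed JPEG and the decoded pixels only approximate the originals; the exact original bit sequence is gone for good.

```python
from PIL import Image
import numpy as np

# Placeholder input file; any RGB image will do.
original = Image.open("photo.png").convert("RGB")

# Round-trip through an aggressively lossy JPEG encoding.
original.save("compressed.jpg", format="JPEG", quality=10)
decoded = Image.open("compressed.jpg").convert("RGB")

a = np.asarray(original, dtype=np.int16)
b = np.asarray(decoded, dtype=np.int16)

# The decoded image is an approximation: close on average,
# but not bit-for-bit identical to the original.
print("identical bits:", bool((a == b).all()))   # almost certainly False
print("mean abs error:", np.abs(a - b).mean())   # small, but nonzero
```

That's the shape of Chiang's analogy: close enough to pass at a glance, never bit-exact.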
[Via MetaFilter]