Author:: [[Noam Chomsky, Ian Roberts, Jeffrey Watumull]]
DateFinished::
URL:: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html?smid=nytcore-ios-share&referringSource=articleShare
Rating::
Tags::

# Noam Chomsky: The False Promise of ChatGPT

![rw-book-cover](https://static01.nyt.com/images/2023/03/08/opinion/08Chomsky-Promo/08Chomsky-Promo-facebookJumbo.png)

[[Large Language Models are limited by overly moral and unbiased language]]
[[Large Language Models deepest flaw is their inability to critically think about what they are saying]]

## Highlights

However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. ([View Highlight](https://read.readwise.io/read/01gv9kx61a78fkatjsfk9j66gs))

The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question ([View Highlight](https://read.readwise.io/read/01gv9ktrgqbf3ddkfjtxzq13xz))

[[Large Language Models deepest flaw is their inability to critically think about what they are saying]]. ([View Highlight](https://read.readwise.io/read/01gv9krynxj0vfa0kj0d1ef54h))

ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible ([View Highlight](https://read.readwise.io/read/01gv9kzmwyxcp6jkqa4bdn4aj4))
- Note: This is why they are currently so useful for uncovering, summarizing, and introducing topics, but lose their worth when creating new knowledge or going a level deeper. To do so, they would have to create their own rules and code.

Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time. For this reason, the predictions of machine learning systems will always be superficial and dubious ([View Highlight](https://read.readwise.io/read/01gv9m0w8zhgwjg659x0wax5n6))

True intelligence is also capable of moral thinking. ([View Highlight](https://read.readwise.io/read/01gv9m5wkgq1zkkfrem8nc8hcr))

To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance ([View Highlight](https://read.readwise.io/read/01gv9m5ms3yf2959xzkmzn1z05))