>[!question] How does a machine *know* something?

Infant metaphysics is a [concept](https://www.sciencedirect.com/science/article/abs/pii/S0010028596900055) in cognitive science that describes the development of basic ideas such as space and [[Types of Time|time]] in young children. Applied to artificial intelligence, it is a useful concept for describing how [[foundation model|foundation models]] may arrive at first principles through final ones.

Large language models (LLMs) are one step in the development of natural language understanding in artificial intelligence. Most LLMs are built from "neural networks" that ingest large amounts of data and form linguistic rules based on probability chains (a minimal sketch of this idea follows at the end of this note). Whether these models or a similar AI can *understand* input or output is a [point of contention](https://www.quantamagazine.org/what-does-it-mean-for-ai-to-understand-20211216/) in the AI research community.

Turing's [seminal argument](https://archive.org/details/MIND--COMPUTING-MACHINERY-AND-INTELLIGENCE/mode/2up?q=%22computing+machinery+and+intelligence%22) for computer intelligence equates *thinking* with *understanding*.[^1] His argument depends on a number of unrealistic premises, but rests mainly on the subjectivity of participants and judges to determine machine intelligence. #saymore

The [Winograd schema challenge](https://en.wikipedia.org/wiki/Winograd_schema_challenge) narrows the scope of LLM understanding by providing an explicit structure in which to test it. Each schema pairs two nouns with an [[cognitive ambiguity|ambiguous]] pronoun, introducing the challenge of composability: the machine is forced to evaluate the relationship between the referents in order to resolve the pronoun (a sketch of such a test also appears below). As of 2019, this benchmark was [believed](https://www.sciencedirect.com/science/article/abs/pii/S0004370223001170) to have been met; however, more recent research suggests that LLMs and similar foundation models use [[Third Intelligence|shortcuts]] to produce the *appearance* of comprehension.<sup>[source]</sup>

LLMs might be considered an interesting application of the philosophical zombie. They also offer compelling applications in Law and legal language, since Law is an abstraction from real-world messiness.

[^1]: My personal opinion is that machines achieved the milestone of *thinking* when they began to outperform human computers. This is because I equate thinking with *calculating*, rather than [[consciousness]].
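
To make "probability chains" concrete, here is a minimal sketch, my own toy illustration rather than anything from the sources above: a bigram counter that estimates next-word probabilities from raw co-occurrence counts. Actual LLMs replace the counting with learned neural-network weights conditioned on far longer contexts, but the basic object is the same: a conditional distribution over the next token.

```python
from collections import Counter, defaultdict

# Toy corpus -- a stand-in for the web-scale text an LLM would ingest.
corpus = "the dog chased the cat because the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # how often `nxt` follows `prev`

def next_word_probs(word):
    """Estimate P(next word | word) from the co-occurrence counts."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

# "the" is followed by "cat" twice and "dog" once in the corpus:
print(next_word_probs("the"))  # {'dog': 0.33..., 'cat': 0.66...}
```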
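
The Winograd structure itself can be sketched the same way. Everything here is hypothetical scaffolding: `WinogradSchema` and `resolve` are names I made up, with `resolve` standing in for whatever model is under test. The schemas themselves are Levesque's well-known trophy/suitcase pair, whose defining trick is that flipping a single word ("big" to "small") flips the correct referent.

```python
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str      # contains the ambiguous pronoun
    pronoun: str       # e.g. "it"
    candidates: tuple  # the two noun referents
    answer: str        # the referent a competent reader picks

schemas = [
    WinogradSchema(
        "The trophy doesn't fit in the suitcase because it is too big.",
        "it", ("the trophy", "the suitcase"), "the trophy"),
    # One word changed ("big" -> "small") flips the answer:
    WinogradSchema(
        "The trophy doesn't fit in the suitcase because it is too small.",
        "it", ("the trophy", "the suitcase"), "the suitcase"),
]

def score(resolve, schemas):
    """Fraction of schemas where `resolve` picks the correct referent."""
    return sum(resolve(s.sentence, s.pronoun, s.candidates) == s.answer
               for s in schemas) / len(schemas)

# A deliberately shortcut-like baseline: always pick the first-mentioned noun.
first_mention = lambda sentence, pronoun, candidates: candidates[0]
print(score(first_mention, schemas))  # 0.5 -- exactly chance
```

The twinned sentences are the design point: any resolver that ignores the flipped word must get exactly one schema in each pair wrong, which is what the structure is meant to do to the surface-level [[Third Intelligence|shortcuts]] mentioned above.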