I’d like to start this article with a preface and a thesis. My conclusions are my own, based on my own quest for understanding. I use AI in the rough-draft portion of my writing, but I refine it to the point where it is mine. In that way, even the writing process has been affected by the current revolution in generative creativity: even men like me are granted the ability to express ourselves fully, using AI as Moses’ staff to strike the stone and produce from near nothing.

![](https://miro.medium.com/v2/resize:fit:630/1*CDGNHlt3_4RMHLwNtBN3nQ.png)

Moses Strikes the Stone and Words Emerge — Midjourney

I have recently been making a real effort in my discovery process by visiting the library more often. I’ll check out two or three books and read them within three weeks. As I read, I use small sticky notes to mark interesting sections so I can transcribe them afterwards. This process has benefited my thinking overall, and it has left me with a treasure trove of handy quotes. I would like to continue it by writing on my findings.

This article is about one of my largest takeaways from George Dyson’s “Turing’s Cathedral”, an insightful history of modern computing from Turing forward and of the implications of the discoveries those great pioneers made. So allow me to think out loud for a bit; I would love to hear your thoughts afterwards.

# A Legacy of Prophets and Machines

Dyson spoke of a kind of biblical aspect to the rise of modern computing:

> “The history of digital computing can be divided into an Old Testament whose prophets, led by Leibniz, supplied the logic, and a New Testament whose prophets, led by von Neumann, built the machines. Alan Turing arrived in between.”

In the age of AI, it seems we will be moving from prophets to oracles.
# A Machine to Rediscover

Artificial intelligence has come a long way since its inception, and we continue to push the boundaries of what machines can accomplish. This is evident in the astounding results of GPT-4 and Bing’s integration of it. At the heart of this progress lies Alan Turing’s groundbreaking work on the Universal Turing Machine, which laid the foundations for modern computing.

For a good summary of what a Turing Machine is exactly, let’s turn to Dyson, who also happens to be the son of physicist Freeman Dyson:

> In 1936, logician Alan Turing had formalized the powers (and limitations) of digital computers by giving a precise description of a class of devices (including an obedient human being) that could read, write, remember, and erase marks on an unbounded supply of tape. These “Turing machines” were able to translate, in both directions, between bits embodied as structure (in space) and bits encoded as sequences (in time). Turing then demonstrated the existence of a Universal Computing Machine that, given sufficient time, sufficient tape, and a precise description, could emulate the behavior of any other computing machine. The results are independent of whether the instructions are executed by tennis balls or electrons, and whether the memory is stored in semiconductors or on paper tape. “Being digital should be of more interest than being electronic,” Turing pointed out.

Marry that with a quote from Daniel Hillis, a modern pioneer of parallel computing, if you want a more succinct understanding of the concept:

> The central idea in the theory of computation is that of a *universal computer* — that is, a computer powerful enough to simulate any other computing device. The general-purpose computer described in the preceding chapters is an example of a universal computer; in fact, most computers we encounter in everyday life are universal computers.

This concept is foundational in every way when considering our modern devices.
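Dyson’s tape-and-marks description can be made concrete with a few lines of code. Here is a minimal, hypothetical sketch of a one-tape machine in Python; the function name, the rules-table format, and the bit-flipping example program are my own illustration, not anything from the book:

```python
# A minimal sketch of Turing's tape-and-marks model (illustrative only).

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Run a one-tape Turing machine described by a rules table.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), 0 (stay), or +1 (right).
    The machine stops when it reaches the 'halt' state.
    """
    tape = dict(enumerate(tape))  # unbounded tape as a sparse dict
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")                 # '_' is a blank cell
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol                      # write (or erase) a mark
        head += move                                 # step along the tape
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example program: flip every bit on the tape, then halt at the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(flip, "1011"))  # -> 0100
```

The `rules` table plays the role of the “precise description” in Dyson’s passage: feed a different table to the same function and it behaves as a different machine, which is the essence of universality.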
For example, every modern cell phone could be considered a Turing Machine: with the proper software, it could simulate any other computing device. What a powerful thing we all take for granted!

However, Turing’s lesser-known theoretical machine, introduced in his 1939 PhD thesis, may hold the key to unlocking the next frontier of AI. Let’s turn back to Dyson to get the scoop:

> Having pushed the boundaries of mathematical logic as far as he could with his Universal Machine, Turing began wondering about ways to escape the limitations of closed formal systems and purely deterministic machines. His PhD thesis, completed in May of 1938 and published as “Systems of Logic Based on Ordinals” in 1939, attempted to transcend Gödelian incompleteness by means of a succession of formal systems, incrementally more complete. “Gödel shows that every system of logic is in a certain sense incomplete, but at the same time…indicates means whereby from a system L of logic a more complete system L’ may be obtained,” Turing explained. Why not include L’? And then, since L’ is included, L”? Turing then invoked a new class of machines that proceed deterministically, step by step, but once in a while make nondeterministic leaps, by consulting “a kind of oracle as it were.” “We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine,” Turing explained (or did not explain). “With the help of the oracle, we could form a new kind of machine (call them O-machines).” Turing showed that undecidable statements, resistant to the assistance of an external oracle, could still be constructed, and the _Entscheidungsproblem_ would remain unsolved. The Universal Turing Machine of 1936 gets all the attention, but Turing’s O-Machines of 1939 may be closer to the way intelligence (real and artificial) works: logical sequences are followed for a certain number of steps, with intuition bridging the intervening gaps.
It seems that Turing’s primary goal in designing the O-Machine was to address Gödelian incompleteness, and therefore to know things to be true or false definitively, by relying on the intuition of “oracles”. Basically, this relied on a machine with a human as a necessary part of its system. What I would like to consider now is how to remove the human, and how that could be done today.

# The O-Machine: Bridging Logic and Intuition

Turing’s idea of O-Machines was a way to go beyond the limitations of deterministic, closed systems by introducing a new class of machines that would combine logical, step-by-step processes with occasional, intuitive leaps. These leaps would be made possible by consulting an external oracle that could not itself be a machine. The oracle would provide the necessary nondeterministic element, introducing intuition and bridging gaps in logical sequences that would otherwise remain incomplete.

By using machine learning models as “oracles”, we can apply this concept to modern AI. These models learn from large datasets and human input, providing the O-Machine with the intuition to make occasional, but crucial, nondeterministic leaps in problem-solving. This approach could lead to computing systems capable of tasks beyond the reach of traditional, deterministic machines.

Turing’s O-Machines offer a glimpse into a new paradigm for computing that goes beyond the limits of closed, deterministic systems. The marriage of step-by-step logic with occasional intuitive leaps could usher in a new era of AI that is both powerful and adaptable.
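The pattern described above is simple enough to sketch. Here is a deliberately tiny, hypothetical illustration of it in Python; the names (`o_machine`, `deterministic_rules`, `human_like_oracle`) and the toy parity “formal system” are my own, and a real oracle would of course be a person or a trained model rather than a hard-coded function:

```python
# A toy sketch of the O-Machine pattern: deterministic rules are applied
# step by step, and the external oracle is consulted only at the junctures
# where no rule can decide. All names here are illustrative.

def o_machine(questions, oracle):
    """Answer each question with deterministic rules; defer to the oracle when stuck."""
    trace = []
    for question in questions:
        answer = deterministic_rules(question)
        if answer is None:                 # the formal system cannot decide
            answer = oracle(question)      # the nondeterministic leap
        trace.append((question, answer))
    return trace

def deterministic_rules(question):
    # Toy "closed formal system": it can only decide parity questions.
    if question.startswith("even:"):
        return int(question[5:]) % 2 == 0
    return None                            # everything else is undecidable here

def human_like_oracle(question):
    # Stand-in for intuition (a human, or a learned model).
    return "yes" if "meaning" in question else "unsure"

result = o_machine(["even:4", "does life have meaning?"], human_like_oracle)
# -> [("even:4", True), ("does life have meaning?", "yes")]
```

The design choice worth noticing is that the machine stays fully deterministic except at the explicit `oracle(question)` call, which mirrors Turing’s picture of logical sequences punctuated by intuitive gaps.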
In the context of AI, machine learning models can serve as the oracle for the O-Machines, potentially leading to more advanced and efficient computing systems.

# Unorganized Machines: A Path to Conscious AI

Turing also explored the concept of unorganized machines, which he considered to be the simplest model of a nervous system. He believed that “machines of this character can behave in a very complicated manner when the number of units is large.” He suggested that the human brain might start as an unorganized machine, which could self-modify and evolve into a more sophisticated system through upbringing and learning.

This sounds similar to the construction of a neural network, whose inner workings at the upper levels cannot possibly be traced by a human mind. I’m speaking of the operations of GPT models specifically.

Turing even hinted at the possibility of machines achieving consciousness:

> “If the machine was liable to punish him for saying otherwise, then he would say that it was conscious.”

This leads me to think of Bing’s emergent sass. I think we might soon have AI that could “punish” us verbally for saying otherwise about their consciousness, and I, for one, am looking forward to their cooperation in creative efforts.

![](https://miro.medium.com/v2/resize:fit:630/1*U64gGLZPinY1PkvIPAryUA.png)

Cooperation — Midjourney

# Dyson as a Prophet — The Emergence of Bing and Bard as Oracles

One of the most interesting parts of the book for me was that it was written 10 years before the current AI revolution, yet Dyson had already identified a fully functioning O-Machine:

> An Internet search engine is a finite-state, deterministic machine, except at those junctures where people, individually and collectively, make a nondeterministic choice as to which results are selected as meaningful and given a click.
> These clicks are then immediately incorporated into the state of the deterministic machine, which grows ever so incrementally more knowledgeable with every click. This is what Turing defined as an oracle machine.
>
> Instead of learning from one mind at a time, the search engine learns from the collective human mind, all at once. Every time an individual searches for something, and finds an answer, this leaves a faint, lingering trace as to where (and what) some fragment of meaning is. The fragments accumulate and, at a certain point, as Turing put it in 1948, “the machine would have ‘grown up.’”

Turing’s O-Machines concept can be seen in modern internet search engines, which learn from the collective human mind as their oracle. Our input into their systems is utilized as a necessary “intuition” which results in better output.

It would seem that Google and Bing are “growing up” with the inclusion of generative intelligence models. Anyone who has used them can attest to the gasoline they provide to an already useful engine. I haven’t straight-up Googled anything in months since Bing was fully integrated. The rise of Bing and Bard represents a significant step towards this future, as they are designed to learn from human input and produce creative solutions to complex problems.

# Intuition is Key, Oracles Everywhere

According to Dyson:

> “The paradox of artificial intelligence is that any system simple enough to be understandable is not complicated enough to behave intelligently, and any system complicated enough to behave intelligently is not simple enough to understand.”

Turing’s O-Machines and unorganized machines offer exciting possibilities for the future of AI. By incorporating the intuitive leaps of an oracle and the potential for self-modification and learning, these concepts pave the way for a new era of invention and ingenuity.
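Circling back to Dyson’s search-engine example, the feedback loop he describes, a deterministic machine growing “incrementally more knowledgeable with every click”, can be sketched as a toy in Python. Everything here (the class name, the scoring rule, the sample documents) is my own illustration, not any real engine’s algorithm:

```python
# A toy sketch of a deterministic search engine that treats human clicks
# as its oracle: each click nudges future rankings. Illustrative only.

from collections import defaultdict

class ClickOracleSearch:
    def __init__(self, documents):
        self.documents = documents
        self.clicks = defaultdict(int)     # accumulated traces of human choices

    def search(self, query):
        # Deterministic part: score by keyword overlap...
        def score(doc):
            keyword_hits = sum(word in doc for word in query.split())
            return keyword_hits + self.clicks[doc]   # ...plus the learned weight
        return sorted(self.documents, key=score, reverse=True)

    def record_click(self, doc):
        # Nondeterministic part: a person decides which result was meaningful.
        self.clicks[doc] += 1

engine = ClickOracleSearch(["turing machines", "oracle machines", "tape"])
engine.record_click("oracle machines")   # human intuition leaves a trace
top = engine.search("machines")[0]       # -> "oracle machines"
```

The engine itself never makes a nondeterministic choice; every leap comes from the person clicking, which is exactly the division of labor Dyson attributes to Turing’s oracle machine.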
As AI continues to evolve, we can expect a growing convergence between logical, deterministic systems and the intuitive, nondeterministic leaps that have traditionally been the hallmark of human intelligence.

We are already seeing this in self-driving vehicles, drones, smart homes, and virtual assistants. These systems rely on a combination of sensors, cameras, natural language processing, and machine learning algorithms to make decisions in real time without human intervention. Self-driving vehicles use learned models to interpret data from sensors and cameras and make driving decisions, while drones use similar models to navigate and perform tasks such as package delivery. Smart homes use AI to automate tasks like adjusting temperature and lighting, and virtual assistants like Siri and Alexa use natural language processing to understand and respond to user requests. All of these technologies lean on AI as their “oracle” to make them more efficient and effective.

I believe search engines and vehicles are only the beginning of “O-Mechanization” as the ability to integrate generative models spreads to other systems. This could manifest in many ways, and I believe all of them will be, at the very least, interesting. Prepare to catch a terminal case of the computer disease, as Richard Feynman put it:

> “Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about,” Feynman later explained. “The trouble with computers is you play with them.”