Pondering the future of all intelligent life.

Illogica is the name I've given to my lifelong quest to develop and understand the nature of intelligence, artificial intelligence in particular. My pursuit has been two-pronged: creating code platforms on which to directly develop and test AI technologies, and continuing to explore the greater philosophical and religious issues that will arise with the future emergence of human-level AI.

I first started to ponder AI when I was a child. At the time, I knew nothing whatsoever about AI, but that didn't stop me from developing theories about how an AI agent could be created. When I was 12, I put these theories to the test by writing a turn-based video game called Nuke the World as a testbed for the AI algorithms I came up with. I finished the core game in only a few months, but I continued to tweak the AI algorithms until my junior year of high school.

In 1995, my first senior year in college, I took an AI class. The class was long on theory but short on practical application. I remembered the Nuke the World project and decided to create a new testbed game to play with the AI theories I had just learned about. Illogica LRS was born.

The Illogica LRS Project

The LRS in the engine's name stands for Logic Really Sucks, a tongue-in-cheek jab at the engine's purpose. The platform itself is a text-based multi-user dungeon (MUD) engine I wrote from scratch. Illogica LRS has had two incarnations thus far: Illogica LRS 1.x and Illogica LRS: 21st Century and Beyond.

The first incarnation, Illogica LRS 1.x, was a MUD I started in 1995 and brought online in January 1996. It was only briefly available for public use; I took it offline in March of that year after I graduated from college and started work as a professional game programmer. Every game development company I've worked for has had a non-compete clause in its employment contract that prevents me from putting Illogica online for public use. Even so, I've continued to work on it from time to time and have occasionally brought it online for private use.

Illogica LRS 1.x used a home-grown world generation algorithm that created realistic geography and cave systems, and a weather simulation further added to the dynamic feel of the game. The AI techniques I tested on this platform included pathfinding, group behavior, genetic algorithms, and hazard avoidance. I used a food chain system that, while adding very little to the gameplay, served well to test the various AI technologies I was interested in. In doing so, however, it became apparent that the original engine design couldn't efficiently handle the large number of AI agents a dynamic system would require. I stopped work on this version of Illogica in 2003.
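To give a flavor of the kind of pathfinding a tile-based world like this calls for, here is a minimal breadth-first search over a small grid. This is a hypothetical sketch: the map, the function name, and the representation are all my own for illustration, not the actual Illogica code.

```python
from collections import deque

# A hypothetical tile map: '#' is impassable terrain, '.' is open ground.
WORLD = [
    "....#....",
    ".##.#.##.",
    ".#.......",
    ".#.###.#.",
    ".........",
]

def find_path(start, goal):
    """Breadth-first search over the tile grid; returns the shortest
    list of (row, col) steps from start to goal, or None if unreachable."""
    rows, cols = len(WORLD), len(WORLD[0])
    frontier = deque([start])
    came_from = {start: None}          # breadcrumb trail for path recovery
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk the breadcrumbs back to the start, then reverse.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and WORLD[nr][nc] == '.' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None

path = find_path((0, 0), (4, 8))
```

A production engine with many agents would typically use A* with a heuristic instead, but the breadcrumb-map structure stays the same.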

Illogica LRS: 21st Century and Beyond is the name of the game I'm currently working on to test more advanced AI technologies. The focus of the project is less on gameplay and more on creating a better test environment for AI technology. So far, I've never put this version online, and it may be years before I do so. The world generation algorithms from the first Illogica game will be brought over to this version, but instead of powering a text-based multi-user game, the engine's focus is on providing a simulation environment. The core engine technologies are also being used to create client applications that can connect with the server running the world simulation.

When I bring this version online, the "players" will be AI clients connecting to the game in much the same way a human player would connect to a text-based MUD. The emphasis will be on the survival and growth of AI agents, not on gameplay. I will create a way for humans to connect to this game, but that is the least of my priorities. Human interaction is an important element of AI that I want to research, but it's something I'll work on at a much later date. Even then, I expect the support for human interaction will be minimal. After all, this is a game I can never put online for public use as long as I'm a professional game developer, so there's no point in creating a slick user interface.
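The idea of AI "players" speaking the same line-based protocol a human telnet user would can be sketched as a tiny rule-based agent reacting to server text. Everything here is invented for illustration (the class name, the command words, the sample room descriptions); the real Illogica protocol is not shown.

```python
# A hypothetical sketch of an AI "player" driving a line-based,
# telnet-style protocol like the one a text MUD presents to humans.

class ReflexAgent:
    """A trivial rule-based agent: reacts to what the server says."""

    def decide(self, server_line: str) -> str:
        text = server_line.lower()
        if "food" in text:
            return "eat"        # survival first
        if "exit north" in text:
            return "north"      # explore when nothing better to do
        return "look"           # default: gather more information

# In the real system these lines would arrive over a network socket,
# exactly as they would for a human player; here we feed a canned
# transcript to show the request/response shape.
agent = ReflexAgent()
transcript = [
    "You are in a mossy cave. There is food here.",
    "You are in a mossy cave. You see an exit north.",
    "A cold wind blows.",
]
commands = [agent.decide(line) for line in transcript]
```

The point of the sketch is the architecture, not the intelligence: because the agent consumes and emits plain text lines, anything from this three-rule reflex agent up to a learning system can be swapped in behind the same connection.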

Since I have a family and work long hours as a gameplay programmer, Illogica has become a hobby I work on when I have the time. Often I won't touch it for months, then I'll work feverishly on it for a few weeks. My hope, and the hope of virtually everyone working in AI, is to create a self-aware machine intelligence capable of interacting with the outside world in a meaningful way. Needless to say, this is a long-term goal and one I'm not in a hurry to accomplish. Even if I were working on this project non-stop with access to the latest and greatest algorithms and theories, I'd come nowhere near reaching my goal for many years, for one simple reason: it'll be several decades before we can create a computer able to rival the human brain's sheer processing power. I'm not in a rush, nor do I hope to be the first to accomplish the lofty goal of creating a human-equivalent intelligent machine; someone else is welcome to that title. If I don't accomplish this goal until I'm in my 80s or 90s, that's fine with me. I expect most of the work I'll do on this project will come after I retire from game development, and I don't plan on retiring until I'm a very old man. Until then, I'll continue to tinker with these test platforms and read up on the latest research when I find the time.

Illogical Philosophy

In the meantime, I often ponder philosophical issues related to AI. Some of these are themes touched on in science fiction stories, such as the question of whether or not a machine can have a soul. But the topics I find most fascinating go beyond that simple question and ponder the existence of faith itself: can an AI agent have faith? Is the desire to see God and understand Him a purely human instinct, or will intelligent machines wonder about spiritual things too? Can a machine sin, if there is such a thing? More importantly, if a machine sins, can it be forgiven?

There's a good reason why I ponder such things: I'm a Christian, but not because I was raised that way. I've had many experiences with the supernatural that, in addition to leading me to Christ, have convinced me beyond a shadow of a doubt that the supernatural realm is very real and very active. As both AI and computer technology continue to advance, the likelihood that human-equivalent intelligent machines will be built in our lifetime keeps growing. It's only natural for someone like me to wonder what role these machines will have in God's kingdom and in the overall development and evolution of the Church. Could a machine accept Christ as its Lord and Savior? If it does, could such a machine be filled with the Holy Spirit? What would happen to that machine during the Rapture and/or the Resurrection?

In addition to such ecclesiastical ponderings, I wonder what impact the rise of human-level AI will have on people. It's likely that the presence of such machines will greatly affect all of the world's religions. Would the rise of machine intelligence cause most people to fall away from their faith? I don't believe so, but I do think it'll cause many people to really examine what they believe and why. Some people believe that the rise of human-level AI will trigger the Tribulation, and indeed the book of Revelation does talk about an idol of the Beast (aka, the Devil) that is given the power to speak blasphemies against God. But does that mean that the first and only android will be a creation of the Devil? For that matter, will such an idol be a creation of technology or a manifestation of the Devil's power? Many theologians claim to know the answers to such mysteries but the reality is that no one will truly know until the events in question take place. Still, they are interesting things to think about.

In the links below, I discuss many of these issues in greater detail.

Home | The Science of A.I. | The Philosophy of A.I. | A.I. and Religion | The Future of A.I.

About Me | My Supernatural Experiences