
The Accidental Chatbot

Sheldon Linker, Ph.D. — [email protected]

Abstract

This article details how a research project turned into a chatbot, and how that chatbot turned into a robot. It contains both history and technical information.

Sections

The History of the Research Project

The Chatbot

How It Works

Deeper In

The Robot

Future Work

Usefulness

More Detailed Info

The History of the Research Project

Quite some time ago, Dr. James Cooke Brown began some research into the Sapir-Whorf hypothesis. Basically, the Sapir-Whorf hypothesis states that language limits thought: you will find it hard to conceive of something that you don’t have words or grammar for, and conversely, if you have words and grammar for a concept, it’s much easier for you to understand it. In order to test this hypothesis, Dr. Brown devised a language that was minimal (and thus experimentally controllable); formal (having a grammar that is 100% known); completely unambiguous in its word list, structure, and grammar; unassuming (meaning that no assumptions are placed on a world-view and no world-view is required); and suitable for quick logical calculations (up to and including Boolean calculus, which sounds hard, but English speakers do this in their heads all the time). He called this language Loglan, short for Logical Language. Dr. Brown published his first materials on this subject in the same year I was born.

When I was in college, Dr. Brown’s research, and his publishing of papers, was still going on. I found that I was able to invite professors from elsewhere to give lectures, and that Dr. Brown was willing and available to give a talk on the progress of the Loglan project. We spent some time together before, during, and after the lecture. One of the problems he mentioned was proving whether the Loglan grammar was unambiguous. Since unambiguous SLR-1 grammars (Simple Left-to-Right parsers with 1 look-ahead) are a subset of unambiguous grammars, and since YACC (a program to help in the writing of compilers, named Yet Another Compiler Compiler) would tell me whether an SLR-1 grammar was unambiguous (having no reduce-reduce conflicts), I decided to try running the Loglan grammar of the time through YACC. It wasn’t free of conflicts, and wasn’t even close: there were a number of proven ambiguities. With Dr. Brown’s consent, I tried to make slight grammar changes so that Loglan’s grammar would be unambiguous SLR-1. It took years, but we finally got to the point where we had a grammar that Dr. Brown, YACC, and I all liked.

The hypothesis is true, by the way. My mind changed. On the good side, understanding the rules of grammar that encompass the capabilities of every grammar in the world, and a vocabulary that does the same, allowed me to go from being very bad at learning languages to reasonably good at it. On the down side, my expanded concept of what is possible sometimes means that I’m not sure what someone means. For instance (Dr. Brown’s favorite example), “pretty little girls school” (punctuation left out to mimic hearing this) probably means one or two things to you. I hear all 26 possible meanings.

I very much liked Arthur C. Clarke’s 2001: A Space Odyssey, best known for its HAL-9000 character. That enters into the story because I realized that, given a parseable grammar, one could develop a system that understood that grammar. I figured (and so stated in an article) that it would take about five years’ work to create such a program. At the time, I thought someone else would do it. Much later on, when I was doing graduate work, I realized that it was time to do that work myself, as I needed a thesis topic. In fact, it did take five years’ work.

The Chatbot

What I ended up with was a program (first written in Java and later in C [for a speed improvement of about 30:1]) that could understand a written language extremely similar to English in its look and sound and extremely similar to Loglan in its grammatical design. I hadn’t realized it at the time, but I had written a chatbot with a completely different purpose than any other I could find. I called this new language JCB-English, and the ‘bot simply JCB (both named after Dr. Brown).

Most chatbots have either the design criterion of looking like a person carrying on a conversation (such as Eliza, Parry, or Cleverbot), or the design criterion of carrying out some business function: in some cases giving out information, and in other cases doing two-way transactions, giving data out and accepting certain data to be stored or acted upon.

JCB is quite different. In a way, it has a mind of its own. JCB has a dictionary, a knowledge base, and logic. JCB, like you, doesn’t rely heavily on its dictionary. Instead, the dictionary is used only to record similarities between words, and special logic rules for words. For instance, the dictionary might show that “small” is the same concept as “tiny”; that “small” is the opposite concept of “large”; that “small” follows the chain rule, in that if A is smaller than B and B is smaller than C, then A is smaller than C, and B is not smaller than A; and that “small” is “pequeño” in Spanish. The dictionary also contains definitions for classifications: for instance, that “car” and “truck” are exclusive members of the class “vehicle”.

JCB’s knowledge base is a list of facts about the world, stored in a data structure that closely matches the JCB-English grammar. Again, the knowledge base doesn’t have to start out with an encyclopedic knowledge of the world. The knowledge base is segmented for different views of reality. For instance, the fact that Pluto is not considered a planet as of this writing would be stored as global knowledge in the knowledge base. But the fact that liver tastes bad would be in my personal knowledge base, and is not assumed to be a fact for others.
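
To make this concrete, here is a minimal sketch in C (the language of the current implementation) of the kind of information a dictionary entry might carry. The names and layout are simplified for illustration and are not the production code:

    /* Simplified illustration of a dictionary entry, not the production code. */
    struct dict_entry {
        const char *word;         /* e.g. "small" */
        const char *same_as;      /* "tiny": the same concept */
        const char *opposite_of;  /* "large": the opposite concept */
        int         chain_rule;   /* 1 if A<B and B<C imply A<C */
        const char *spanish;      /* translation used for replies in Spanish */
        const char *member_of;    /* e.g. "car" belongs to the class "vehicle" */
        int         exclusive;    /* 1 if class members exclude one another */
    };

    static const struct dict_entry examples[] = {
        { "small", "tiny", "large", 1, "pequeño", NULL,      0 },
        { "car",   NULL,   NULL,    0, NULL,      "vehicle", 1 },
        { "truck", NULL,   NULL,    0, NULL,      "vehicle", 1 },
    };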

JCB never initiates a conversation, but it does reply. It can accept text in JCB-English, and can respond in JCB-English, in English, or in any other language in its dictionary. If you use a word it doesn’t know, it will begin to learn about the word from context. For instance, “the slithy toves” (from Jabberwocky) contains words that don’t appear in the dictionary, but even so, it tells JCB that a “tove” is capable of being “slithy”. When JCB sees a statement, it evaluates it. If the statement is known or can be proven from what’s known, then JCB states that it knew that. When a statement is neither known nor provable, then JCB accepts the statement as plausible and thus true. (In the case of an error, it can be told to forget a fact, or to modify a fact as having an end date and time.) If the speaker is fully trusted, then the fact is a new global fact, and is stored in the knowledge base. If the speaker is not fully trusted, then the statement is accepted as true in the speaker’s worldview. For instance, if I say that Kermit is a frog, JCB will accept the fact. In accepting the fact, the conversation has permanently modified the knowledge base. If I then say that Kermit is a pig, JCB will reject the assertion, since it already knows that Kermit is a frog, and given dictionary entries for “frog” and “pig”, it knows that something can’t be both a frog and a pig. This shows the biggest difference between JCB and most other chatbots: the determination that a frog can’t be a pig is not the result of any specific programming. It’s purely the result of knowledge and logic.
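
In outline, the handling of a statement looks something like the following sketch. The helper names (prove, reply, store_fact, and so on) are invented for illustration:

    /* Sketch of the statement flow; the externs stand in for the real machinery. */
    typedef struct tree Tree;                 /* the grammar tree */
    typedef struct { int fully_trusted; void *worldview; } Speaker;
    enum verdict { KNOWN_TRUE, KNOWN_FALSE, UNPROVABLE };

    extern enum verdict prove(Tree *stmt);
    extern void reply(const char *text);
    extern void store_fact(void *kb, Tree *stmt);
    extern void *global_kb;

    void handle_statement(Tree *stmt, Speaker *who)
    {
        switch (prove(stmt)) {
        case KNOWN_TRUE:
            reply("I knew that.");                   /* known or provable */
            break;
        case KNOWN_FALSE:
            reply("That contradicts what I know.");  /* the Kermit-the-pig case */
            break;
        case UNPROVABLE:                             /* plausible, and thus true */
            store_fact(who->fully_trusted ? global_kb : who->worldview, stmt);
            break;
        }
    }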

JCB also answers yes/no questions. The answer can be the result of a calculation or of a query against the knowledge base. For instance, if I ask whether the sum of two and three is four, the answer would be “No”. If I ask whether Kermit is an animal, the answer would be “Yes”, since JCB has already been told (maybe in today’s session, maybe in a previous one) that Kermit is a frog. If I ask whether Kermit wears glasses, JCB would respond that it doesn’t have enough information to answer the question.
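
Answering a yes/no question is then just a matter of mapping the prover’s verdict onto a reply. A sketch, reusing the invented names from the sketch above:

    /* Sketch: a yes/no question runs through the same prover as a statement. */
    const char *answer_yes_no(Tree *question)
    {
        switch (prove(question)) {
        case KNOWN_TRUE:  return "Yes";
        case KNOWN_FALSE: return "No";
        default:          return "There is not enough information to answer.";
        }
    }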

JCB also answers fill-in questions. For instance, if (given the above) we ask who is a frog, the answer will be “Kermit”. If we ask what Kermit is, the answer will be “frog”. We can also ask relational questions. For instance, if I tell JCB that Rachael is older than Shannah, and that Shannah is older than Rebecca, I can ask “Rachael is blank than Rebecca” and get “older”. Of course, fill-in questions can also get the response that there is insufficient data to answer.
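
The relational case works by following the chain rule through stored facts. Here is a self-contained toy version in C; the real knowledge base is a grammar-tree store, not a flat table like this one:

    #include <string.h>

    struct fact { const char *rel, *a, *b; };

    static const struct fact kb[] = {
        { "older", "Rachael", "Shannah" },
        { "older", "Shannah", "Rebecca" },
    };
    enum { NKB = sizeof kb / sizeof kb[0] };

    /* Does rel(a, b) hold, directly or via the chain rule?
       Assumes the stored relation has no cycles. */
    static int chain_holds(const char *rel, const char *a, const char *b)
    {
        for (int i = 0; i < NKB; i++) {
            if (strcmp(kb[i].rel, rel) || strcmp(kb[i].a, a))
                continue;
            if (!strcmp(kb[i].b, b))
                return 1;                      /* direct fact */
            if (chain_holds(rel, kb[i].b, b))
                return 1;                      /* A rel B, B rel C => A rel C */
        }
        return 0;
    }

    /* chain_holds("older", "Rachael", "Rebecca") returns 1. */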

The separate knowledge bases come into effect in answering questions. If I ask whether liver tastes good, the answer will be “No”. If you ask the same question, the answer would be prefaced with “According to Sheldon,”, depending on your level of trust in me.

When facts are stored, anything in the form “Both A and B” is stored as two separate facts. There is also a stage called factoring. For instance “The ball is either red or green” factors to “Either the ball is red or the ball is green”.
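
A sketch of that storage step, with factoring assumed to have already run; the node layout and kb_append are illustrative:

    /* Sketch: "Both A and B" is stored as two separate facts;
       anything else is stored as one fact. */
    typedef struct tree { int op; struct tree *left, *right; } Tree;
    enum { BOTH = 1 /* other operators elided */ };
    extern void kb_append(Tree *fact);

    void store(Tree *t)
    {
        if (t->op == BOTH) {
            store(t->left);
            store(t->right);
        } else {
            kb_append(t);
        }
    }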

How It Works

JCB was originally written in Java. I find Java to be a nice prototyping language for some jobs, and a fine final development environment for others. I started in Java because of the lack of memory-management concerns. However, there is a lot of computation and a lot of I/O involved. Use of the C language for the current version allowed me to use a memory manager optimized for the problem space, and allowed certain objects (or at least portions of all objects) to be read and written in binary form, without having to undergo any form of conversion. Together, these gave huge increases in speed.

The main JCB module is written as a service, listening on a given TCP port or to a keyboard/display pair. When a message arrives (containing session info, language selection, and the text), the session is verified, and then the text is parsed. The grammar in use is SLR-1, but with some context rules (shift-reduce conflicts, always resolved in favor of the shift, much like the IF/ELSE construct in most computer languages). The language allows for separate executable text blocks, separated by the word “execute”. From this point, everything is handled as operations on a grammar tree (much like the sentence diagramming you did in elementary school). Those executable blocks are evaluated one at a time.
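
In skeleton form, with all of the helper names invented for illustration, the service loop looks something like this:

    #include <sys/socket.h>
    #include <unistd.h>

    /* Sketch of the service loop; Message and every helper below are
       illustrative stand-ins for the real machinery. */
    typedef struct tree Tree;
    typedef struct { int session; int language; char *text; } Message;

    extern Message read_message(int fd);
    extern int session_ok(int session);
    extern Tree *parse(const char *text);     /* SLR-1; shift wins conflicts */
    extern Tree *first_block(Tree *program);  /* blocks split on "execute" */
    extern Tree *next_block(Tree *block);
    extern const char *evaluate(Tree *block);
    extern void send_reply(int fd, const char *reply, int language);

    void serve(int listen_fd)
    {
        for (;;) {
            int fd = accept(listen_fd, 0, 0);
            Message m = read_message(fd);       /* session, language, text */
            if (session_ok(m.session)) {
                Tree *prog = parse(m.text);
                for (Tree *b = first_block(prog); b; b = next_block(b))
                    send_reply(fd, evaluate(b), m.language);  /* one at a time */
            }
            close(fd);
        }
    }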

Each block is first sent to the optimizer. The optimizer looks for any logical construct that can be simplified, and simplifies it. For instance, “not not A” would be simplified to “A”, and “either not A or not B” would be simplified to “not both A and B”. It can also cut out chunks of logic. For instance, “Either A or True” simplifies to “True”. Next, the block is evaluated as to type. Each block must be a statement, a yes/no question, or a fill-in question. Something malformed, such as a yes/no fill-in question, is rejected as too complicated.
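
The three rewrites just mentioned, sketched as operations on the grammar tree (again with illustrative names):

    /* Sketch of the optimizer's rewrites; node layout, operator names,
       and helpers are illustrative. */
    typedef struct tree { int op; struct tree *left, *right; } Tree;
    enum { NOT = 1, EITHER, NAND, TRUE_NODE };

    extern int is_true(Tree *t);      /* t->op == TRUE_NODE */
    extern Tree *true_node(void);

    Tree *optimize(Tree *t)
    {
        if (t == 0)
            return 0;
        t->left  = optimize(t->left);
        t->right = optimize(t->right);

        /* not not A  =>  A */
        if (t->op == NOT && t->left->op == NOT)
            return t->left->left;

        /* either not A or not B  =>  not both A and B */
        if (t->op == EITHER && t->left->op == NOT && t->right->op == NOT) {
            t->op    = NAND;
            t->left  = t->left->left;
            t->right = t->right->left;
            return t;
        }

        /* Either A or True  =>  True */
        if (t->op == EITHER && (is_true(t->left) || is_true(t->right)))
            return true_node();

        return t;
    }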

A statement is the simplest case. JCB contains three provers, called the Fast Prover, the Medium Prover, and the Slow Prover. The Fast Prover attempts to find a match to a statement component in the knowledge base. For instance, in “Either A or both B and C”, A, B, and C are all separate components. The Medium Prover (categorized in AI terms as admissible [never wrong] but incomplete [not always returning an answer]) does some more complicated matching, but still runs relatively quickly (up to a few seconds). Finally, the Slow Prover comes in and does a complete solution (hopefully in a reasonable amount of time). Each level of prover will typically make calls to the next lower level of prover to do portions of the work. If a component can be shown to be true or false, then the component is replaced by True or False, and the statement is re-optimized and re-evaluated. If the optimization results in True or False, then the statement is rejected as either already known or false. If the optimization can’t reduce to True or False, then the statement is plausible and is stored as new knowledge.
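
The cascade itself can be pictured like the following sketch. This is a simplification; as noted, each prover also calls the levels below it for portions of the work:

    /* Sketch of the three-level prover cascade; all names illustrative. */
    typedef struct tree Tree;
    enum verdict { KNOWN_TRUE, KNOWN_FALSE, UNPROVABLE };

    extern enum verdict fast_prover(Tree *c);    /* knowledge-base match */
    extern enum verdict medium_prover(Tree *c);  /* admissible, incomplete */
    extern enum verdict slow_prover(Tree *c);    /* complete, maybe slow */

    enum verdict prove(Tree *component)
    {
        enum verdict v = fast_prover(component);
        if (v == UNPROVABLE)
            v = medium_prover(component);
        if (v == UNPROVABLE)
            v = slow_prover(component);
        return v;
    }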

Yes/no questions are first proven, and then answered as Yes, No, or Unknown. Fill-in questions are similarly reduced, but start with the Medium Prover, which tries to find something to put in the blank (“who”, “what”, “when”, or “blank”) that will result in a true statement; JCB then reports the word, phrase, or sentence (or in some cases, train of logic) that makes the sentence true. Of course, “unknown” is always a possible answer.

Deeper In

Java, C++, and just about everything else use a memory manager in which you ask for a new object and get one. That object has a type locked in for its entire lifetime. Since JCB uses an optimizer, objects can change. Let’s take the example “Either not A or not B”. That gets stored as something on the order of EITHER(NOT(A),NOT(B)). When the optimizer finishes with it, what we’re left with is NAND(A,B). One way to handle this would be to create a new NAND object, have it point to the A and B objects, and then go to every object that points at the old expression and re-point it to the new expression. But that takes the creation of a new object and a lot of backtracking. The memory manager in JCB’s C implementation allows for the conversion of an object of one type to an object of another type, saving the allocation and free steps, and all of the backtracking.
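
Because every node shares one layout, that conversion amounts to retagging the node. A sketch, with an illustrative node layout:

    /* Sketch: rewrite EITHER(NOT(A),NOT(B)) into NAND(A,B) in place,
       reusing the EITHER node so nothing pointing at it has to change. */
    typedef struct tree { int op; struct tree *left, *right; } Tree;
    enum { NOT = 1, EITHER, NAND };

    void to_nand(Tree *n)           /* n is the EITHER node */
    {
        Tree *a = n->left->left;    /* operand under the left NOT */
        Tree *b = n->right->left;   /* operand under the right NOT */
        n->op    = NAND;
        n->left  = a;
        n->right = b;
        /* The two NOT nodes can now go back on the free list. */
    }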

The memory manager doesn’t call on the base-level memory manager at each allocation. Since allocations are more costly than memory in modern systems, the memory manager will allocate an array of a million objects at a time, and not bother cleaning things up until it needs to.

Rather than accessing items by address, the memory manager uses object numbers. This allows object trees to be written to the knowledge base as binary structures, and later loaded into different parts of memory. Since object references are numbers rather than addresses, it doesn’t matter where the tree is loaded.
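
Putting the last two ideas together, a toy version of such an allocator might look like this; the chunk size, node layout, and growth policy are illustrative:

    #include <stdlib.h>

    /* Sketch of a pooled, number-addressed allocator. */
    typedef struct { int op; int left, right; } Node;  /* children by number */

    #define CHUNK (1024 * 1024)     /* grow a million objects at a time */

    static Node *pool;
    static int pool_size, next_free;

    int node_alloc(void)            /* returns an object number, not an address */
    {
        if (next_free == pool_size) {
            pool_size += CHUNK;
            pool = realloc(pool, pool_size * sizeof *pool);  /* error check elided */
        }
        return next_free++;
    }

    Node *node_ref(int number)      /* compute the address on demand */
    {
        return &pool[number];       /* never cache this across allocations */
    }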

Logical states go further than just True or False. Instead, there is a likelihood and a veracity. A likelihood of 0 with a veracity of 100% means known to be false. A likelihood of 100% with a veracity of 100% means known to be true. A veracity of 0 means not worth considering.
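
A sketch of that two-axis logical state:

    /* Sketch: logical state as likelihood plus veracity. */
    typedef struct {
        double likelihood;   /* 0.0 = false ... 1.0 = true */
        double veracity;     /* 0.0 = not worth considering ... 1.0 = certain */
    } Truth;

    static const Truth KNOWN_FALSE = { 0.0, 1.0 };
    static const Truth KNOWN_TRUE  = { 1.0, 1.0 };
    static const Truth IGNORABLE   = { 0.5, 0.0 };  /* any likelihood; veracity 0 */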

The Robot

Michael S. P. Miller (first a coworker, then a friend, and then a co-researcher) has an interest in human-like AI and robotics. Toni Poper (first a coworker, then a friend, then my wife, and then a co-researcher) is an algorithm expert. Together, we added to the JCB system, coming up with an extended chatbot, JCB-Robotics. This version of JCB allows for imperatives and commands: for instance, items beginning with “You must” or “You shall”. Since one mind can have several bodies, and the bodies can have names, you can command a specific body to do something, as in “Huey shall” or “Dewey must” or “Either Huey or Louie must”. Toni Poper pointed out that such a system should adhere to something akin to Asimov’s Laws, and reject dangerous instructions.

So, JCB-Robotics was born. If a block of text can be categorized as an imperative, then JCB attempts to prove that it is legal to execute. For instance, if I say “You must kill Julio”, the system will respond with something on the order of “Kill is a subset of harm. Julio is a human. This conflicts with ‘You shall not harm a human.’” Given that a command is valid for execution, it goes into the execution queue. Items in the execution queue are handled on an as-available basis. Once something has been executed, it is marked as True in the execution queue, and the item is then re-optimized. For instance, if the command is “Make a quesadilla or a PBJ”, then this gets factored to “Either make a quesadilla or make a PBJ”. Assuming the quesadilla ingredients are more readily available than the PBJ ingredients, the quesadilla gets made. Then, “make a quesadilla” is replaced in the queue with True, leaving “Either True or make a PBJ”, which optimizes to “True”, which drops out of the queue.
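
In outline, with every helper name invented for illustration, the command path looks something like this:

    /* Sketch of the command path: prove legality, queue, execute,
       then mark the done branch True and re-optimize. */
    typedef struct tree Tree;

    extern Tree *find_conflicting_law(Tree *cmd);  /* NULL if the command is legal */
    extern void explain_rejection(Tree *cmd, Tree *law);
    extern void queue_push(Tree *cmd);
    extern void queue_drop(Tree *cmd);
    extern void replace_node(Tree *old, Tree *new_node);
    extern Tree *true_node(void);
    extern Tree *optimize(Tree *t);
    extern int is_true(Tree *t);

    void handle_command(Tree *cmd)
    {
        Tree *law = find_conflicting_law(cmd);  /* e.g. "You shall not harm a human." */
        if (law) {
            explain_rejection(cmd, law);        /* cite the violated law */
            return;
        }
        queue_push(cmd);    /* e.g. "Either make a quesadilla or make a PBJ" */
    }

    void on_action_done(Tree *queued, Tree *done_branch)
    {
        replace_node(done_branch, true_node()); /* "Either True or make a PBJ" */
        queued = optimize(queued);              /* optimizes to True */
        if (is_true(queued))
            queue_drop(queued);                 /* the command is complete */
    }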

Future Work

The Slow Prover would benefit greatly from being rewritten to run in a massively parallel environment, such as a multi-core system, a cooperative network, or Watson. Having a relatively complete dictionary and encyclopedic knowledge base already built would be nice, too.

Usefulness

Right now, when we do a Google search, we get a list of documents that might have the answer to the question we’re really trying to ask. JCB has the potential to answer questions directly. JCB also has the potential to give us systems that operate with no domain-specific programming whatsoever, or in other cases with no domain-specific programming beyond a simple explanation of how things work. For robotic systems, it gives the robot the capability to do its own planning in a logical manner.

More Detailed Info

You can find out more about all of this, including references, at the thesis and the latest patent.

