Sunday, January 24, 2010

The difference between AI and AGI

I think it can be difficult to understand what people are talking about when they discuss AI (Artificial Intelligence) and AGI (Artificial General Intelligence). The difference is actually fairly simple on the surface, but AGI is much more difficult to accomplish. So much more difficult, in fact, that I would venture to say no one has done it yet.

Definitions:

Both AI and AGI are typically characterised as the intelligence of machines and the branch of computer science that aims to create it.

AI is also known as Narrow AI or Weak AI

AGI is also known as Real AI or Strong AI

The Difference:

With AI, the outcome is essentially preprogrammed: the domain is already known, and the decision process is simply a matter of weighting a set of inputs and attaching those weights to the desired results. AI has very little chance of producing an unpredictable outcome if the training process is thorough. The training process is what enables the developer to attach the weighted sums to the desired outcomes.

Inputs -> a process that creates a summation of weights -> Outcome
where the trainer decides which sets of weights equate to the desired outcome
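To make that picture concrete, here is a minimal sketch in Python. The perceptron-style update rule, the mortgage-flavoured feature values, and the function names are all illustrative assumptions of mine rather than any particular system; the point is only that the weights get adjusted until the summation matches the outcomes the trainer already decided on.

```python
# Minimal sketch of a "narrow AI" decision process: weighted inputs summed
# to a preprogrammed outcome inside one fixed, known domain.
# The features, data, and perceptron-style rule are illustrative assumptions.

def decide(inputs, weights):
    """Weighted sum of the inputs -> yes/no outcome."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(examples, weights, rate=0.1, epochs=100):
    """The trainer attaches weights to the desired outcomes by correcting errors."""
    for _ in range(epochs):
        for inputs, desired in examples:
            error = desired - decide(inputs, weights)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Hypothetical mortgage-risk domain: (income ratio, credit score, bias) -> approve?
examples = [
    ((0.9, 0.8, 1.0), 1),
    ((0.7, 0.9, 1.0), 1),
    ((0.2, 0.3, 1.0), 0),
    ((0.3, 0.1, 1.0), 0),
]
weights = train(examples, weights=[0.0, 0.0, 0.0])
print(decide((0.8, 0.7, 1.0), weights))  # prints 1, but only meaningful inside this domain
```

Notice that the system never leaves its domain: the trained weights say nothing about web searches or anything else.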

We can do a lot with AI, and we do. We have also been able to create some very complex systems, but we cannot escape the domain configured for the AI we wish to use. As an example, I can create an AI system to decide mortgage risks, but I cannot "ask" that same AI system to tell me the best results for my web search. I would have to create and train a separate AI system on the web search domain to assist me with my web searches.

AGI, on the other hand, is a system where the domain is unknown and the AGI process has to figure out the correct choices on its own. The training process is the same as teaching any animal (humans included): the developer must allow the AGI system to review the input(s) and then tell it when it has produced a correct outcome.

Input 1 \
         -> cortical correlates -> outcome
Input 2 /

With AGI, the domain is everything we present as inputs, and there are no predefined outcomes. The system learns when we tell it that it is correct, which solidifies the neuronal correlates. This requires a means of giving positive and negative reinforcement, and it can produce less than optimal results.
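Here is a rough Python sketch of that reinforcement idea. Nothing tells the system ahead of time what the right outcome is; it tries responses and we strengthen or weaken its input-to-outcome associations when we tell it whether it was correct. The stimuli, the reward values, and the ReinforcementLearner class are made-up assumptions for illustration, not an actual AGI design.

```python
# Sketch of reinforcement-driven learning: no preprogrammed outcomes, only
# associations that get solidified or weakened by positive/negative feedback.
# Everything here (names, rewards, stimuli) is an illustrative assumption.

import random
from collections import defaultdict

class ReinforcementLearner:
    def __init__(self, actions, rate=0.2, explore=0.1):
        self.actions = actions                  # possible outcomes it can produce
        self.strength = defaultdict(float)      # (stimulus, action) -> association strength
        self.rate = rate
        self.explore = explore

    def respond(self, stimulus):
        """Pick the most strongly associated outcome, with occasional exploration."""
        if random.random() < self.explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.strength[(stimulus, a)])

    def reinforce(self, stimulus, action, reward):
        """Positive reward solidifies the association; negative reward weakens it."""
        self.strength[(stimulus, action)] += self.rate * reward

# The learner does not know what "dog" or "cat" means; it only discovers which
# response earns positive reinforcement for each stimulus.
learner = ReinforcementLearner(actions=["bark", "meow"])
for _ in range(200):
    stimulus = random.choice(["dog", "cat"])
    action = learner.respond(stimulus)
    correct = (stimulus == "dog" and action == "bark") or \
              (stimulus == "cat" and action == "meow")
    learner.reinforce(stimulus, action, reward=1.0 if correct else -1.0)

print(learner.respond("dog"), learner.respond("cat"))  # usually "bark meow" after training
```

The less-than-optimal part shows up here too: if the feedback is sparse or inconsistent, the associations that get solidified may not be the ones we intended.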

AGI can be set up to isolate its inputs so that, for instance, it recognizes specific types of images and is taught to interpret only those types of images. But if it is released into an open domain, it is impossible to keep it interpreting only the image domain it was taught unless you set it up to stop learning altogether. This is why AGI is so difficult to work with; it is almost an all-or-none proposition. You can narrow the domain so the system only analyzes the images you want it to see, so that it can continue to learn and discriminate, but then you have to take those extra measures to stop it from being exposed to other domains, and you risk turning it into narrow AI.
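A small sketch of that all-or-none trade-off: a gate in front of the learner that only passes one kind of input, and a switch that freezes learning entirely. Both the gate and the toy dictionary "learner" are assumptions invented for illustration.

```python
# Either gate the inputs so only one domain reaches the learner, or freeze
# learning altogether. Both mechanisms here are illustrative assumptions.

ALLOWED_KINDS = {"image"}        # the only domain we let the learner see

class OpenLearner:
    def __init__(self):
        self.associations = {}   # stimulus -> label it has picked up
        self.learning = True     # setting this to False stops all further learning

    def observe(self, stimulus, label):
        if self.learning:
            self.associations[stimulus] = label   # it learns whatever it is exposed to

def domain_gate(stimulus_kind):
    """Extra measure to keep the learner from being exposed to other domains."""
    return stimulus_kind in ALLOWED_KINDS

learner = OpenLearner()
for stimulus, kind, label in [("photo_1", "image", "cat"),
                              ("search query", "text", "mortgage rates")]:
    if domain_gate(kind):
        learner.observe(stimulus, label)

learner.learning = False          # all-or-none: freeze it and it behaves like narrow AI
print(learner.associations)       # {'photo_1': 'cat'} -- the text input never reached it
```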

This is the problem: narrow AI is effectively congruent with a narrowed AGI. To make our AGI system truly useful and effective, we have to teach it our world and everything about it. We can lower the intelligence of our AGI, which does give us the advantage of a more narrow AGI: we can train it at, say, more of a dog-level intelligence and thus train it for specific domains above and beyond its general intelligence. But we always run the risk that our AGI system might be more "interested" in something other than what we want it to learn. I'll talk about these challenges in later posts.
