The Frame Problem Is Already Solved - The Dark Side of AI History Seen from the Frame Problem


The biggest knot in artificial intelligence, "the frame problem", does not exist.


 

What is the Frame Problem?

When you look up the history of AI, this problem always comes up as the biggest knot in the history of artificial intelligence: "the frame problem".

Every time the frame problem was mentioned, I used to think, "How can this be the hardest problem to solve?
Isn't this actually a simple problem, easily solvable but dressed up as a difficult one?"
Nonetheless, when I researched the frame problem with its historical context in mind, I slowly began to see its true significance.

The frame problem was proposed by computer scientist John McCarthy.
Among its many illustrations, the most popular is a story about robots told by the philosopher Daniel Dennett.

Here is an excerpt from Dennett's article telling the story of the robots.

 

Once upon a time there was a robot, named R1 by its creators.
Its only task was to fend for itself.
One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon.
R1 located the room, and the key to the door, and formulated a plan to rescue its battery.
There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT (Wagon, Room, t) would result in the battery being removed from the room.
Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off.
Unfortunately, however, the bomb was also on the wagon.
R1 knew that the bomb was on the wagon in the room, but didn’t realize that pulling the wagon would bring the bomb out along with the battery.
Poor R1 had missed that obvious implication of its planned act.
Back to the drawing board.

`The solution is obvious,’ said the designers.
`Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side-effects, by deducing these implications from the descriptions it uses in formulating its plans.’ They called their next model, the robot-deducer, R1D1.
They placed R1D1 in much the same predicament that R1 had succumbed to, and as it too hit upon the idea of PULLOUT (Wagon, Room, t) it began, as designed, to consider the implications of such a course of action.
It had just finished deducing that pulling the wagon out of the room would not change the colour of the room’s walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon – when the bomb exploded.
Back to the drawing board.

`We must teach it the difference between relevant implications and irrelevant implications,’ said the designers, `and teach it to ignore the irrelevant ones.’
So they developed a method of tagging implications as either relevant or irrelevant to the project at hand, and installed the method in their next model, the robot-relevant-deducer, or R2D1 for short.
When they subjected R2D1 to the test that had so unequivocally selected its ancestors for extinction, they were surprised to see it sitting, Hamlet-like, outside the room containing the ticking bomb, the native hue of its resolution sicklied o’er with the pale cast of thought, as Shakespeare (and more recently Fodor) has aptly put it.
`Do something!’ they yelled at it.
‘I am,’ it retorted.
`I’m busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and…’ the bomb went off.

All these robots suffer from the frame problem.
If there is ever to be a robot with the fabled perspicacity and real-time adroitness of R2D2, robot-designers must solve the frame problem.
It appears at first to be at best an annoying technical embarrassment in robotics, or merely a curious puzzle for the bemusement of people working in Artificial Intelligence (AI).
I think, on the contrary, that it is a new, deep epistemological problem – accessible in principle but unnoticed by generations of philosophers – brought to light by the novel methods of AI, and still far from being solved.
Many people in AI have come to have a similarly high regard for the seriousness of the frame problem.
As one researcher has quipped, `We have given up the goal of designing an intelligent robot, and turned to the task of designing a gun that will destroy any intelligent robot that anyone else designs!’

End of citation

 

So what? What’s Difficult About the Frame Problem?

We certainly understand what the theory states.
According to the explanation, the world presents infinitely many possibilities, so instead of considering every one of them, we need a frame that excludes irrelevant matters.
The problem with this method is that there is no way to judge what counts as "inside the frame" (relevant matters) and what counts as "outside the frame" (irrelevant matters), so we can never reach a solution.

I understood the explanation, but I felt there must be some other solution.
Why should a robot think about the wall changing color when it wants to remove a bomb?
What kind of database does this assume?
If it were a relational database, I have no idea what sort of SQL statement would give rise to the frame problem.

Moreover, what kind of design lets a robot stand still until the bomb explodes?
An OS for controlling robots should be implemented as a real-time OS. And if a robot is to perform critical processing such as handling explosives, a timeout setting should have the utmost priority. Yet the robots kept thinking until the explosion…
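The timeout argument can be sketched in a few lines. This is a minimal illustration, not any real robot OS API: the candidate actions, the scoring function, and the function names are all invented for the example. The only point is that deliberation is bounded by a hard time budget, so the robot always commits to some action before the deadline.

```python
import time

def plan_with_deadline(candidate_actions, evaluate, deadline_s):
    """Evaluate candidate actions, but never deliberate past the deadline.

    `candidate_actions` and `evaluate` are hypothetical stand-ins for
    whatever planner the robot runs; the point is only the time budget.
    """
    start = time.monotonic()
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        if time.monotonic() - start > deadline_s:
            break  # out of time: act on the best option found so far
        score = evaluate(action)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# With a hard time budget the robot commits to *some* action before the
# bomb goes off, instead of deliberating forever like R1D1 and R2D1.
action = plan_with_deadline(
    ["pull wagon out", "lift battery off wagon", "do nothing"],
    evaluate=lambda a: {"pull wagon out": -1,
                        "lift battery off wagon": 2,
                        "do nothing": -5}[a],
    deadline_s=0.1,
)
```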

 

Why Can’t Robots Solve Such A Simple Problem?

But when we look at the historical background of the frame problem, we start to see what John McCarthy was trying to say.
The frame problem was published in 1969.
This is about when the first wave of AI ended.

The relational database was first introduced in 1974, and the real-time OS in 1979. I suppose John McCarthy could not have anticipated such inventions.

Then, what was he anticipating?
In the first wave of AI, the mainstream was simple estimation/exploration. Artificial intelligence that could solve a maze was the popular kind in that era.
When a maze branches, the program explores one branch at a time; when it hits a dead end, it goes back to the previous branch point and explores the next one.
Starting from square one, this method eventually reaches the goal. This is estimation/exploration.
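The branch-and-backtrack search described above is easy to sketch. Here is a minimal depth-first maze solver; the grid representation (a set of walkable cells) is assumed only for illustration:

```python
def solve_maze(maze, start, goal):
    """Depth-first search: explore one branch at a time and backtrack
    at dead ends, the estimation/exploration style of first-wave AI.

    `maze` is a set of walkable (row, col) cells.
    """
    stack = [(start, [start])]
    visited = {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) in maze and (nr, nc) not in visited:
                visited.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route to the goal

# A tiny maze: a corridor with one dead-end branch at (1, 0).
cells = {(0, 0), (0, 1), (0, 2), (1, 0)}
print(solve_maze(cells, (0, 0), (0, 2)))  # [(0, 0), (0, 1), (0, 2)]
```

On a maze this works well: each junction has only a handful of branches, so exhaustive search terminates quickly.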

Let us interpret the robot story in terms of estimation/exploration.
For the problem of retrieving only the battery from a wagon that carries both a bomb and a battery, there are various paths: what happens if the robot moves the wagon, what happens if it moves the bomb, and so on.
The secondary consequences correspond to these branching paths of the maze.
Each of the first paths then branches into further paths.

What happens if the robot moves the wagon out of the room, and what happens if it does not?
If it moves the wagon out, does the color of the wall change or not?
If it does not move the wagon out, does the color of the wall…

Infinitely many paths could occur in the actual world, so it would take an infinite amount of time to think through them all.
This is R1D1.

The same goes for R2D1, the model improved from R1D1: there are infinitely many paths for possibilities irrelevant to the actual purpose, so it too takes an infinite amount of time.
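The blow-up the story describes can be made concrete: every yes/no side-effect question an exhaustive deducer like R1D1 considers doubles its search space. The property names below are invented for illustration:

```python
from itertools import product

# Each binary "does property X change?" question doubles the number of
# world states an exhaustive deducer must consider: n questions -> 2**n.
properties = ["wagon moved", "wall colour changed", "wheels turned",
              "ceiling height changed", "door creaked"]
states = list(product([True, False], repeat=len(properties)))
print(len(states))  # 32, i.e. 2**5
```

Five properties already give 32 combinations; a real room has an open-ended list of properties, so the count has no finite bound.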

What John McCarthy was pointing out was that with estimation/exploration, AI could solve simple problems like a maze, but the real world is far more complicated, and not something AI could easily handle.

 

The First Wave of AI

Then why did he start proposing such ideas?
Let’s look back on the historical context at the time.

The first wave of AI, from the 1950s to the 60s, was the era in which computers were shown to solve problems like mazes, and people anticipated that AI would go on to solve everything. That is why the field attracted heavy investment from governments and companies.
The atmosphere was just like today's third wave of AI, which began with the rise of deep learning.
John McCarthy was perhaps the equivalent of Demis Hassabis, the CEO of DeepMind, whose product defeated the world champion of Go (a board game popular in East Asia) and whose company was acquired by Google.
Nonetheless, first-wave AI could only solve toy problems like mazes and had no impact on the real world. In the 1970s the flow of investment into AI research suddenly dropped, and the first wave of AI came to an end.

From a shrewd perspective, John McCarthy, once a star of the first wave of AI, may have been trying to put an end to a boom whose swelling expectations and pressures he could no longer keep up with.
Perhaps he proposed the frame problem to prove, at the root level, that AI cannot be used to solve real-life problems.

Once McCarthy himself, of all people, argued that AI is theoretically incapable of solving real-life problems, the AI boom began to fade in the 1970s.
Considering this historical background, the peculiarity of the frame problem starts to make sense.
We can almost feel McCarthy's frustration and desperation at AI being unable to solve real-life problems, stuck wondering, say, whether the color of the wall would change if it moved the bomb.

The frame problem was made up to forcefully explain why AI is incapable of solving real-life problems.
The theory was John McCarthy's excuse.

 

Another Star of the First Wave of AI

There is another star of the first wave of AI I need to mention: Marvin Minsky, famous for his work on neural networks.
The neural network, a replication of the human neural circuit, was found to have a similar ability to learn, and it was widely talked about during the first wave of AI. Minsky was one of the field's passionate researchers at the time.

However, in the book "Perceptrons", Minsky (with Seymour Papert) proved that the perceptron, the basic unit of the neural network, has certain limits. This shut down neural network research at a stroke.
"Perceptrons" was published in 1969.

Yes, that's right – the same year "the frame problem" was proposed.

McCarthy's frame problem argued that even a simple task, such as retrieving a battery from a room, is impossible for AI to solve; Minsky proved the theoretical limitation of the neural network. Both hastened the end of the first wave of AI.
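The perceptron limitation is easy to demonstrate with the classic XOR example: a single-layer perceptron cannot classify XOR, because XOR is not linearly separable. This is a hand-rolled sketch of the standard perceptron learning rule, not Minsky's own formulation:

```python
# Perceptron learning rule on XOR: because XOR is not linearly
# separable, a single threshold unit can never get all four points
# right, no matter how long it trains.
def train_perceptron(samples, epochs=100, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w0, w1, b = train_perceptron(xor)
correct = sum((1 if w0 * x0 + w1 * x1 + b > 0 else 0) == t
              for (x0, x1), t in xor)
print(correct)  # never 4 out of 4
```

Adding a hidden layer removes this limit, which is exactly the path the field later took with backpropagation.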

Then, is the neural network really useless?
That can't be true.
Neural networks later evolved through the backpropagation algorithm and convolutional neural networks into deep learning, and they now achieve higher accuracy at image recognition than humans.

This is what usually happens in the history of science.
Something believed to be impossible becomes possible thanks to advances in technology.
Chess and shogi (Japanese chess) were once believed to be games at which a computer could never defeat humans.

 

Let’s Solve the Frame Problem

So, now that more than 50 years have passed since the first wave of AI, can we solve the frame problem?
Let's give it a try.

First, the robot needs to perceive the external situation.
This can be done with image recognition.
It perceives that there is a battery and a bomb on a wagon inside the room.
This can be done with deep learning.

Next, it builds 3D models of the perceived objects, replicating the situation, a bomb on a wagon inside a room, as a 3D model.
This too can be achieved with today's technology.
As the robot perceives the objects, it also absorbs other information about them,
such as the fact that the room has a ceiling and walls, and what color the walls are.

Moreover, with the 3D models it can simulate the physics of the situation.
By running the simulation, it can confirm that if it pulls the wagon out of the room, the bomb moves out along with the battery.
So it can figure out that it should not simply pull the wagon out.

The goal is to move only the battery out of the room, leaving the bomb that sits on top of the wagon inside the room.

Here, let's break down the steps needed to reach the goal.
First, the current situation: there is a bomb on top of the wagon.

Next, what is the goal situation?
The battery is outside the room, and the bomb is inside the room.

If the bomb were not on the wagon, all the robot would need to do is pull the wagon out, and the battery would be outside the room.
I'll call this situation, where the bomb is not on top of the wagon, the interval situation.

Next, we compare the current situation with the interval situation.
The difference is whether or not the bomb is on top of the wagon.
Hence, all we need to do is change the situation from one where the bomb is on the wagon to one where it is not.
To change the situation, we need to move the bomb from the wagon to somewhere else.
To move objects, the robot can use its arm.
Once the robot uses its arm to lift the bomb off the wagon, the interval situation is reached, and it can pull just the wagon out of the room.
As long as the simulation holds up, all the robot needs to do is execute the actions accordingly.
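The step-by-step comparison above (current situation, interval situation, goal) can be sketched as a tiny means-ends planner. All the fact strings and action names below are invented for illustration; states are sets of facts, and each action lists the facts it requires, deletes, and adds:

```python
# A minimal means-ends sketch of the steps above (names hypothetical).
# "pull wagon out of room" requires the bomb to be off the wagon, which
# forces the planner through the interval situation first.
actions = {
    "lift bomb off wagon": {
        "pre": {"bomb on wagon"},
        "del": {"bomb on wagon"},
        "add": {"bomb on floor"},
    },
    "pull wagon out of room": {
        "pre": {"battery on wagon", "bomb on floor"},
        "del": {"battery in room"},
        "add": {"battery outside"},
    },
}

def plan(state, goal, actions):
    """Greedy means-ends analysis: while the goal differs from the
    current state, apply an applicable action not yet used."""
    state, steps = set(state), []
    while not goal <= state:
        for name, a in actions.items():
            if a["pre"] <= state and name not in steps:
                state = (state - a["del"]) | a["add"]
                steps.append(name)
                break
        else:
            return None  # no applicable action: planning fails
    return steps

start = {"bomb on wagon", "battery on wagon", "battery in room"}
goal = {"battery outside", "bomb on floor"}
print(plan(start, goal, actions))
# ['lift bomb off wagon', 'pull wagon out of room']
```

Because the facts the planner compares come from the simulated 3D model, it only ever reasons about the handful of facts the model contains, not every conceivable implication.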

How’s that sound?
No problem, right?  

 

Hold on, where did the frame problem go? 

Well then, where did we manage to avoid the frame problem?

The premise behind the frame problem is a problem like maze exploration.
A maze has different paths, and only the right path leads to the goal.

In a maze, each junction branches into only about two or three paths, so it is possible to enumerate every available path.
In the real world, however, new factors keep arising: the situation includes the wall, and the wall in turn has a color, a hardness, and more.

It would take an infinite amount of time to go through every factor in the real world.
But can you really say that maze exploration and the problem of retrieving only the battery from the room are the same kind of problem?

In a maze, all paths look exactly the same.
You cannot tell which path is correct from its appearance.
You only find out by actually entering a path and discovering whether it leads to a dead end or to the goal.

Do you think the only way to find the right answer in the battery-retrieval case is to go through every possible option?
Can't we tell which options matter just from their appearance?
For instance, compare these two questions: "If I move the wagon, will the bomb move as well?" and "If I move the wagon, will the color of the wall change?"
Which one is more important?

It should be obvious.
Whether the color of the wall changes is so unimportant that it is not worth examining.

"Humans know it immediately, but a robot can't tell which is important and which isn't."

I see. Then let's map the degree of importance to physical distance.
The closer something is to the robot, the more important; the farther away, the less important.
Put that way, the robot too can easily figure out that the bomb is important and the wall of the room is less important than the bomb.

In the real world, unlike in a maze, we can easily assign degrees of importance.
The important things go inside the frame; the unimportant ones stay outside.
If we differentiate importance like this, we can easily avoid the frame problem.
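The distance heuristic can be sketched in a few lines; the object names, coordinates, and radius below are invented for illustration:

```python
import math

# Hypothetical sketch: keep only objects near the robot "inside the
# frame" and ignore everything else as irrelevant.
def inside_frame(objects, robot_pos, radius):
    """Return the names of objects within `radius` of the robot."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return [name for name, pos in objects.items()
            if dist(pos, robot_pos) <= radius]

objects = {"bomb": (1.0, 0.5), "battery": (1.0, 0.0), "wall": (5.0, 5.0)}
print(inside_frame(objects, robot_pos=(0.0, 0.0), radius=2.0))
# ['bomb', 'battery'] -- the wall falls outside the frame
```

Distance is only one possible importance measure, of course, but it shows that the real world, unlike a maze, offers cheap cues for drawing the frame.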

John McCarthy pointed out that if we let AI robots operate in the real world, they will stop working as they start thinking through every possible scenario.
However, that only happens in special situations like a maze, where all paths look identical and cannot be told apart; it never happens in the real world.
Yet this quite unrealistic problem has been called the biggest problem in AI to this day.

Why don’t we stop saying that the frame problem can never be solved?

 
