There are many problems in which it makes little difference how we reach a particular node in the graph or tree; the water jug problem is an example. On the other hand, an intelligent program that is asked to discover new, interesting facts must know what is meant by interesting. Such a system needs heuristic rules to sieve facts into interesting and not so interesting, so that at each stage it can choose the most interesting ones. This corresponds closely to best first search, which chooses the most promising nodes. The additional factor in this problem is that the paths reaching a given node are important: if a node is common to many paths it seems important, or in this context interesting, and each path gives a reason why the fact is interesting. An agenda is defined to be a list of tasks a system can perform, and associated with each task are two things: a list of reasons why the task is proposed as interesting, called its justifications, and a rating giving the weight of evidence that suggests the task is useful.
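One possible shape for a single agenda entry, sketched as a small Python data class; the field names are illustrative rather than taken from any particular system, and the rating would be recomputed whenever a new justification arrives.

```python
from dataclasses import dataclass, field

@dataclass
class AgendaEntry:
    """One task on the agenda, together with its justifications and rating."""
    task: str                                            # e.g. "expand node N"
    justifications: list = field(default_factory=list)   # reasons the task seems interesting
    rating: float = 0.0                                   # weight of evidence that the task is useful
```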

The Agenda-Driven Search

1 Repeat

2 Choose the most promising task from the agenda; the task can be expressed in any suitable form, such as which node to expand next;

3 Execute the task, giving it the time and space it deserves, in the process creating additional successor nodes;

4 For each successor, check whether it is already present on the agenda:

If present and the reason is new, add the reason to its list of justifications;

If absent, insert the task on the agenda together with its reason;

5 Compute the rating of each new or amended task using all of its justifications;

6 Until goal state reached or agenda empty
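A minimal sketch of this loop, assuming each task is simply "expand this node" and representing agenda entries with plain dictionaries; the callables successors(node) (yielding (successor, reason) pairs), is_goal and rate are supplied by the problem and their names are illustrative, not part of the algorithm as stated.

```python
def agenda_search(start, successors, is_goal, rate):
    """Agenda-driven best first search over tasks carrying justifications and ratings."""
    agenda = {start: {"justifications": ["initial task"], "rating": 0.0}}
    while agenda:
        # Step 2: choose the most promising task (here, the node with the highest rating).
        node = max(agenda, key=lambda n: agenda[n]["rating"])
        entry = agenda.pop(node)
        if is_goal(node):
            return node, entry["justifications"]
        # Step 3: execute the task, generating successor nodes, each with a reason.
        for succ, reason in successors(node):
            # Step 4: check for presence on the agenda and merge or insert.
            if succ in agenda:
                if reason not in agenda[succ]["justifications"]:
                    agenda[succ]["justifications"].append(reason)
            else:
                agenda[succ] = {"justifications": [reason], "rating": 0.0}
            # Step 5: recompute the rating from all of the justifications.
            agenda[succ]["rating"] = rate(succ, agenda[succ]["justifications"])
    return None, []   # Step 6: agenda empty without reaching a goal
```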

Maintaining the full agenda in rating order is expensive and unnecessary, since the objective at any time is simply to choose the highest-rated task first. Keeping a list of the top five or ten elements gives a good compromise: at each stage new nodes are compared against this short list and are only inserted if they belong in it. This approach also caters for problems where negative evidence is encountered. The method is good for focussing attention on points which are highlighted by a number of indicators, but care must be taken in deciding on the level of detail, otherwise the system becomes bogged down in housekeeping. Agenda-driven control structures provide an excellent way of integrating information from a range of sources into one program, for example as in a blackboard system.
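A sketch of one way to keep such a shortened list, using Python's standard heapq module; the limit of ten entries follows the suggestion above, and tasks are assumed to be comparable values such as strings (a tie-breaker would be needed for arbitrary task objects).

```python
import heapq

def insert_if_promising(shortlist, task, rating, limit=10):
    """Keep only the `limit` highest-rated tasks; ignore anything worse."""
    if len(shortlist) < limit:
        heapq.heappush(shortlist, (rating, task))     # min-heap ordered by rating
    elif rating > shortlist[0][0]:                    # better than the worst task kept so far
        heapq.heapreplace(shortlist, (rating, task))  # drop the worst, insert the new task
```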

Problem Reduction

AND-OR Graphs

See figs 3.6 and 3.7 onwards. For problems whose solution involves decomposing the problem into smaller subproblems and then synthesising a solution from some or all of the partial solutions, an AND-OR graph based method is useful. Here the alternatives often involve branches where some nodes, or all of them, must be satisfied before we can progress.

For example: if retired or unemployed, and healthy, and free to go on holiday, then either

1 enter a competition and win, or

2 save up money and find a vacant holiday in the brochure.

Note the use of arcs to indicate that a group of nodes must all be satisfied before the parent node is achieved. To find solutions using an AND-OR graph, the best first algorithm is used as a basis, with a modification to handle sets of nodes linked by an AND arc. The basic algorithm is inadequate: in fig 3.7a the top node A has been expanded giving three nodes B, C and D, the latter two being linked by an AND. The costs, or values of f', are indicated, and each arc has a uniform cost of 1 for simplicity. The cost of B is less than the combined cost of C and D, so proceeding along B would appear best. The difficulty in this graph is that, as nodes are expanded, new AND links appear and this can upset the expected result; see especially fig 3.7b, where nodes E and F make the value of B become 17, which is worse. Then, expanding C and D, we get into an even worse position: the most promising node G is part of so many subtrees that its individual value is lost, and it would appear best to investigate another route, i.e. the previous one.
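The backed-up cost of a node in an AND-OR graph can be sketched as follows: an OR choice takes the cheapest alternative, while an AND arc must pay for every member of the group. The dictionary representation and the uniform arc cost of 1 are illustrative assumptions matching the simplification used for fig 3.7, and the recursion assumes an acyclic graph.

```python
ARC_COST = 1  # uniform arc cost, as assumed for fig 3.7

def backed_up_cost(node, estimate, children):
    """Compute f' for `node` from its successors.

    `children[node]` is a list of alternatives (OR choices); each alternative
    is a group of nodes joined by an AND arc and must be paid for in full.
    Unexpanded leaves simply use their heuristic estimate f'.
    """
    groups = children.get(node)
    if not groups:
        return estimate[node]
    return min(sum(ARC_COST + backed_up_cost(s, estimate, children) for s in group)
               for group in groups)
```

Applied to the situation in fig 3.7a this reproduces the initial preference for B; repeated after E and F appear below B in fig 3.7b, the same calculation is what pushes B's backed-up value to 17 and reverses the preference.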

In order to develop an algorithm to handle an AND-OR graph we introduce a value called FUTILITY; if the estimated cost of a solution exceeds FUTILITY we abandon that route as not worth pursuing.

Problem reduction is a simplified form of the AO* algorithm, which is itself an amendment of the A* algorithm to handle AND-OR graphs.

Algorithm

1 Initialise the graph to the start node

2 Repeat

3 Traverse the graph following the current best path, accumulating the nodes on it that have not yet been expanded or solved

4 Pick one of these nodes and expand it; if it has no successors, assign it the value FUTILITY, otherwise calculate only f' for each of its successors

If f' is 0 then mark the node as SOLVED

5 Change the value of f' for the newly expanded node to reflect its successors by back propagation, using the most promising routes wherever possible; if all the successor nodes required by a node's chosen arc are marked as SOLVED, then mark that node as SOLVED. This algorithm differs from best first search in that expanded nodes need to be re-examined

6 Until the start node is marked SOLVED or its value exceeds FUTILITY
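A compact sketch of this loop, assuming the graph is given as a table of OR alternatives, each alternative being a non-empty AND group of successors, with a heuristic estimate for every node and the graph acyclic; the representation, the FUTILITY value and the helper names are illustrative rather than part of the algorithm as stated.

```python
FUTILITY = 100     # illustrative cost ceiling beyond which a route is abandoned
ARC_COST = 1       # uniform arc cost, as in the figures

def problem_reduction(start, alternatives, estimate):
    """Simplified AND-OR search (problem reduction) on an acyclic graph.

    `alternatives[n]` lists the OR choices below n; each choice is a non-empty
    AND group of successors. `estimate[n]` gives f' for every node, with 0
    meaning the node is already solved (a primitive problem).
    """
    cost = dict(estimate)                                # f' values, revised as we go
    solved = {n for n, e in estimate.items() if e == 0}
    expanded = set()

    def best_group(node):
        # OR choice: the AND group with the lowest backed-up cost.
        return min(alternatives.get(node, []),
                   key=lambda g: sum(ARC_COST + cost[s] for s in g),
                   default=None)

    def revise(node):
        # Step 5: back propagation of f' and of the SOLVED marking.
        group = best_group(node)
        if group is None:
            return
        cost[node] = sum(ARC_COST + cost[s] for s in group)
        if all(s in solved for s in group):
            solved.add(node)

    # Step 6: stop once the start node is solved or its estimate reaches FUTILITY.
    while start not in solved and cost[start] < FUTILITY:
        # Step 3: traverse the current best path to an unexpanded, unsolved node.
        node, path = start, [start]
        while node in expanded and node not in solved:
            group = best_group(node)
            if group is None:                            # dead end on the best path
                break
            node = next((s for s in group if s not in solved), group[0])
            path.append(node)
        # Step 4: expand it; with no successors its value becomes FUTILITY.
        expanded.add(node)
        if not alternatives.get(node) and node not in solved:
            cost[node] = FUTILITY
        # Step 5: propagate revised values back up the path just traversed.
        for n in reversed(path):
            revise(n)
    return start in solved, cost[start]
```

The inner loop implements step 3, the FUTILITY assignment step 4, and revise the back propagation of step 5; unlike plain best first search, nodes already expanded are revisited whenever their backed-up values change.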

Commentary

In fig 3.8 the arrows indicate the chosen nodes, remembering to add 1 in each case for the arc length. In step 1, A exists and is expanded into B, C and D. D is chosen for expansion because its value of 6 (5+1) is less than 9 (3+4+2). Proceeding to E and F we find that the value of D becomes 10, and so B is reconsidered. As a result B is revalued at 6 and its path at 12, making D more attractive again.
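The comparisons quoted in this walk-through, written out with the uniform arc cost of 1 added to each estimate (the node values are those stated above for fig 3.8):

```python
# Step 1: A expands to D alone, or to B and C joined by an AND arc.
via_D       = 5 + 1              # 6: estimate of D plus one arc
via_B_and_C = (3 + 1) + (4 + 1)  # 9: both B and C must be paid for
assert via_D < via_B_and_C       # so D is chosen for expansion

# After expanding D to E and F its backed-up value rises to 10, while B,
# once revalued at 6, makes its arc cost (6 + 1) + (4 + 1) = 12,
# so D at 10 becomes the more attractive choice again.
via_D_revised       = 10
via_B_and_C_revised = (6 + 1) + (4 + 1)  # 12
assert via_D_revised < via_B_and_C_revised
```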

In fig 3.9 we have an interesting situation where there are two ways to get to E, which perhaps is the road to glory. The nodes have been expanded and labelled in alphabetical order, and on reaching J an additional link to E is detected. This means that E is present in two separate paths, or that E can be reached from two separate routes. We cannot make progress on the natural short route: A can only be solved on this pathway by solving both C and D, and D is impossible to solve at the moment. However, bearing in mind that this is a graph, we can solve B by solving G and H; H is solved by solving I and J; and J relies on E. This is an example of a case where a longer route can yield success in preference to a shorter route that is impossible.

This algorithmic method fails to take account of the interaction between nodes, as indicated in fig 3.10. There is no recognition of the fact that C is needed twice, to solve both A and D. In fig 3.10 it appears necessary to solve nodes C and D independently, whereas in practice solving D requires both C and E to be solved; thus C must be solved before D can be, and solving D means that A is solved. This is far too subtle for the algorithm in its current state.

Considering the AO algorithm in 2(c)v, ancestors of nodes are revised, which can result in nodes being revised on paths that appear not to be optimal. This is essential, otherwise the algorithm may break down. For example, in fig 3.11 it is clear that reaching E through B is worse than through C, but revaluing E could invalidate that argument: if the cost of E is revised from 5 to 10, this could make B more attractive and lead to D being expanded. The value of B would then be reconsidered, and in the process the updated value of E would be included, so no damage is done; but if the change had occurred at greater depth it might not have been detected. An example of this occurs in fig 3.12, where a revision of G from 5 to 10 would otherwise never be detected: on expanding C the value of G is changed from 5 to 10, and without back propagation the route through B seems the best. The better path is through C, but if the estimate of E is changed to 11 this makes C 15 and B appears better. B must then have its value back propagated to bring it up to date, which can involve further work on D; this proves abortive, as C is still better, but failure to back propagate could give false impressions. If the cost of G is not adjusted and propagated, a non-optimal path through B results.
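A sketch of the kind of propagation this paragraph argues for: when a node's estimate is revised, the change is pushed to every ancestor, not only to those on the currently preferred path. The parent table and cost-revision callable are illustrative assumptions, and an acyclic graph is assumed.

```python
def propagate_revision(node, cost, parents, revise):
    """Push a revised cost from `node` up through all of its ancestors.

    `parents[n]` lists every node with an arc into n (including ancestors on
    apparently non-optimal paths); `revise(p, cost)` recomputes p's backed-up
    cost from its successors and returns the new value.
    """
    frontier = [node]
    while frontier:
        n = frontier.pop()
        for p in parents.get(n, []):
            new_cost = revise(p, cost)
            if new_cost != cost.get(p):   # only continue upward if p actually changed
                cost[p] = new_cost
                frontier.append(p)
```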