In theory the algorithm functions quite well, and the same goes for the actual implementation. However, during the implementation of the algorithm I ran into a few details that needed to be fixed. This once again shows that theory and practice usually do not match completely :)
The following is a (small?) list of some of the refinements made during the implementation.
The rules(et)
One assumption that was made during the design of the algorithm is that rules must rewrite the current term into a term that is closer to the answer. For example, the rule A + B => B + A is correct from a mathematical point of view, but it does not help a student get closer to an answer.

A second assumption made for the algorithm is that the ruleset is extended with combined rules. For example, the rules A => B and B => C are combined into a single rule A => C. The calculation of the combined rules is not very difficult, but getting the calculation to stop is a bit trickier.
The first experiment, with rules from the domain of fractions, went reasonably well; the only restriction that needed to be added was that restrictions on variables may not depend on themselves. So the restriction ... where B := B is not allowed.
The second experiment, with rules from the domain of logic, went into endless recursion, a clear sign of an additional problem. It turned out that rules were merging with themselves, resulting in an extra restriction to prevent this kind of behavior.
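To make the combination step concrete, here is a minimal Haskell sketch (my own reconstruction, not the actual implementation). It uses a small term type with meta-variables, combines two rules only when the first rule's RHS is syntactically equal to the second rule's LHS (the real implementation would match modulo meta-variables), and includes the self-merge restriction mentioned above:

```haskell
import Data.List (nub)

-- A small term language with meta-variables (all names here are my own).
data Term = Var String          -- meta-variable, e.g. A, B
          | Con String          -- constant, e.g. 4
          | App String [Term]   -- operator with children, e.g. App "+" [x, y]
          deriving (Eq, Show)

-- A rewrite rule: left-hand side => right-hand side.
type Rule = (Term, Term)

-- Combine A => B and B => C into A => C. This sketch uses syntactic
-- equality; real matching modulo meta-variables is more involved.
combine :: Rule -> Rule -> Maybe Rule
combine (a, b) (b', c)
  | b == b'   = Just (a, c)
  | otherwise = Nothing

-- Extend the ruleset with combined rules until nothing new appears,
-- refusing to merge a rule with itself (the extra restriction that
-- stopped the endless recursion in the logic domain).
closure :: [Rule] -> [Rule]
closure rs
  | null new  = rs
  | otherwise = closure (rs ++ new)
  where
    new = nub [ r | r1 <- rs, r2 <- rs, r1 /= r2
                  , Just r <- [combine r1 r2]
                  , r `notElem` rs ]
```

With purely syntactic combination this closure is trivially finite; with real meta-variable matching, getting it to stop is exactly the tricky part described above.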
Calculating the rule
My first idea about the calculation of the rewrite rule was a simple bottom-up traversal that returns a meta-variable as long as the (sub)terms are equal. This naive algorithm worked reasonably well during the first tests, but failed to perform well on larger examples. In these larger examples the calculated rule contained too much noise to see the important parts of the rewrite rule.

In an attempt to cut down the noise, the algorithm was extended to find out whether a subtree of a rule can be removed completely. An example of this is the calculation of the rewrite rule between 4 + 5 + 2 and 4 + 6. The naive algorithm would give A + B + C => A + D as an answer. However, inspecting the terms gives the rule B + C => D as a more accurate match. This last rule still contains the same information; we have simply removed the noise of the 'A + ·' part of the tree.
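Reusing the Term and Rule types from the sketch above, the naive bottom-up traversal could look roughly like this (again my own reconstruction; it only recurses when the operators and the number of children coincide, and it does not yet remove any noise):

```haskell
-- Naive rule calculation: equal (sub)terms collapse into one shared
-- meta-variable; nodes with the same operator and arity are compared
-- child by child; anything else becomes a fresh variable on each side.
-- An Int counter supplies fresh meta-variable names.
calcRule :: Term -> Term -> Rule
calcRule s t = fst (go s t 0)
  where
    go x y n
      | x == y = let v = Var ("M" ++ show n)   -- equal subterms share
                 in ((v, v), n + 1)            -- one meta-variable
    go (App f xs) (App g ys) n
      | f == g && length xs == length ys =
          let (ps, n') = goAll xs ys n
          in ((App f (map fst ps), App g (map snd ps)), n')
    go _ _ n =                                 -- no structure in common
      ((Var ("M" ++ show n), Var ("M" ++ show (n + 1))), n + 2)

    goAll (x:xs) (y:ys) n =
      let (p,  n1) = go x y n
          (ps, n2) = goAll xs ys n1
      in (p : ps, n2)
    goAll _ _ n = ([], n)
```

With binary trees this sketch yields A + B => C + D for the example above; reproducing the A + B + C => A + D answer requires matching operators with different numbers of children, which is part of what the extension below improves.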
The first implementation of this extension only checked operators with an equal number of children, and only with a left-to-right match. Even though this worked in some situations, it was still not good enough to be useful in general. Therefore, the matching part of the extension was improved by taking into account the associativity and commutativity of the operators. The different combinations of these properties result in either an exact-match (left-to-right), an assoc-match (left-to-right, right-to-left) or an assocComm-match (all combinations of children are tried with an assoc-match).
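The three kinds of match can be captured in a small data type, together with the child orderings each of them tries. This is only a sketch: the names follow the description above, and the mapping from operator properties to a match kind is my own guess.

```haskell
import Data.List (permutations)

-- How the children of two nodes may be lined up, depending on the
-- properties of the operator.
data MatchKind = ExactMatch      -- left-to-right only
               | AssocMatch      -- left-to-right and right-to-left
               | AssocCommMatch  -- all combinations of children

-- One plausible mapping from (associative?, commutative?) to a match kind.
matchKind :: Bool -> Bool -> MatchKind
matchKind _    True = AssocCommMatch
matchKind True _    = AssocMatch
matchKind _    _    = ExactMatch

-- The child orderings that each kind of match tries (reusing Term from
-- the sketches above).
childOrders :: MatchKind -> [Term] -> [[Term]]
childOrders ExactMatch     ts = [ts]
childOrders AssocMatch     ts = [ts, reverse ts]
childOrders AssocCommMatch ts = permutations ts
```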
Distance between rules
As with the calculation of the rules, the distance between the rules started with a naive algorithm covering several cases:
- When we have two equal (sub)-trees we return a distance of 0.
- When we have two nodes in the trees, we compare the names of the nodes and all the children from left to right, adding distance when the two operators are not equal.
- When either one of the trees is a leaf, we check whether it is a meta-variable. If it is not, we simply add a distance of 1 (for the leaf) plus the size of the other (sub)-tree. If the leaf is a meta-variable, we check whether it is a free variable, and return a distance of 0 if it is. Otherwise the same '1 + size of the (sub)-tree' is returned as the distance. A variable is free when it is not previously bound by a match or a restriction.
Note that the above algorithm does not specify what should happen when two nodes have a different number of children. After some testing I found that it works quite well to just take the size of the extra children together with a penalty.
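Put together, the naive distance from the cases above, extended with the penalty for extra children, could look like this sketch (same Term type as before; how the set of bound meta-variables is threaded through the real algorithm is glossed over here, and the penalty of 1 is my own choice):

```haskell
import qualified Data.Set as Set

-- Number of nodes in a term.
size :: Term -> Int
size (App _ ts) = 1 + sum (map size ts)
size _          = 1

-- Naive distance between two (sub)-trees, given the meta-variables that
-- are already bound by an earlier match or restriction.
dist :: Set.Set String -> Term -> Term -> Int
dist _ x y
  | x == y = 0                                  -- equal subtrees
dist bound (Var v) t                            -- leaf that is a meta-variable
  | v `Set.notMember` bound = 0                 -- free variable matches anything
  | otherwise               = 1 + size t
dist bound s v@(Var _) = dist bound v s         -- same, mirrored
dist bound (App f xs) (App g ys) =              -- two nodes
  (if f == g then 0 else 1)                     -- unequal names add distance
    + sum (zipWith (dist bound) xs ys)          -- shared children, left-to-right
    + penalty                                   -- extra children cost their
    + sum (map size extra)                      -- size plus a penalty
  where
    extra | length xs >= length ys = drop (length ys) xs
          | otherwise              = drop (length xs) ys
    penalty = if null extra then 0 else 1
dist _ s t = 1 + max (size s) (size t)          -- non-variable leaf vs anything
```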
Another thing that is not considered in the above algorithm is the 'forgetting' of an operator. An example of this is the distance between the rules ~~A -> A and ~~A -> ~A. With the naive algorithm the distance between these rules can be quite large when the A in the rule is a large subtree. In order to model this 'forgetting', the algorithm checks whether a node has a single child on one side. If this is the case, the algorithm calculates the distance between the nodes with the operator and the distance between the nodes without the operator. It then takes the smaller of the two, taking into account some extra cost for adding the operator.
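A sketch of that extension on top of dist: whenever one side is a node with a single child, also try the distance with that node skipped, and keep the cheaper alternative. The cost of 1 for the forgotten operator is again my own choice.

```haskell
-- Distance that also models 'forgetting' a unary operator, e.g. so that
-- ~~A -> A stays close to ~~A -> ~A even when A is a large subtree.
distF :: Set.Set String -> Term -> Term -> Int
distF bound s@(App _ [x]) t =
  min (dist bound s t) (1 + distF bound x t)   -- keep or forget the operator
distF bound s t@(App _ [y]) =
  min (dist bound s t) (1 + distF bound s y)
distF bound s t = dist bound s t
```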
Putting it together
Now that all the sub-parts worked correctly, the top-level algorithm needed to be implemented and tweaked. The first thing that was tweaked was the filtering of a list of matches. When more than one rule had the smallest distance, the algorithm originally returned the first element of that list. Since this behavior makes the ordering of the rules important, it needed to be changed. Now the algorithm takes the rule from the list in which the LHS of the rule is closest to the current Previous Term.

Speaking of Previous Terms, the original Previous Term also influences the choice of a rule. When we go into recursion by applying rules, the current PT changes. The changed PT matches better with rules that have a similar LHS, but this LHS differs from the original PT. Assuming that students apply rules that at least partly match the LHS of a rule, we need to take the distance between the original PT and the LHS of the rule into account. Furthermore, we also take the number of rules that have been applied to the PT into account when choosing a rule.
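As a rough sketch of this tie-breaking, continuing with the definitions above (the Match representation and the equal weighting of the two criteria are my own assumptions):

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- A candidate: the recognised rule, its distance to the calculated rule,
-- and how many rules were applied to reach it.
data Match = Match { rule :: Rule, distance :: Int, steps :: Int }

-- From the matches with the smallest distance, prefer the rule whose LHS
-- is closest to the original previous term, penalising candidates that
-- needed more rule applications. Assumes a non-empty list of matches.
pick :: Term -> [Match] -> Match
pick pt ms = minimumBy (comparing score) best
  where
    d0      = minimum (map distance ms)
    best    = filter ((== d0) . distance) ms
    score m = dist Set.empty (fst (rule m)) pt + steps m
```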
Does it work?
After all the tweaking and the bug-fixing the answer to this question is: yes! You can take a look at some of the test cases, which list a PT, a CT and a desired result from the defined set of rules.

The test modules also include examples of situations that do not produce the correct result. This has to do with the fact that the virtual student made more than one mistake in the application of the rule. It shows that the algorithm does not work in all situations, but that was also not the goal of this algorithm. Luckily, the other examples show that the algorithm is capable of producing desirable output in a number of situations.