It has been three months since I started writing about the third phase of my thesis. Given that I thought it only needed some fine-tuning a few weeks ago, you might think that there is nothing new to write about it. If so, the next few paragraphs will prove you wrong.
During the development of the third phase I worked with a set of test cases that needed to pass in order to prove that the algorithm works at a reasonable level. Whenever tests failed, the algorithm was tweaked to make them pass. This resulted in adding heuristics to the algorithm until everything worked. Unfortunately, the heuristics seemed to work in practice, but were not always correct in theory. Even though I still believe that purity is not strictly necessary for the correct functioning of tools, it is needed in order to argue that an algorithm works.
So the first heuristic that was wrong was the introduction of meta-variables in the calculation of rewrite rules. Even though this works for a large number of cases, it also introduces dependencies that are not necessarily correct. For example, the rewrite rule between 1+1 and 2 would be A + A -> B. The LHS of this rule implies that it can only be matched against a plus with two equal terms, creating a dependency that may well be a coincidence. Therefore, the generated rewrite rules should not contain meta-variables, only 'normal' terms. This considerably simplifies the algorithms for rule calculation and term distance, while the results are still as expected. So exit meta-variables.
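The thesis code itself is not shown in this post, so here is a minimal sketch in Haskell of what a meta-variable-free rule and a naive term distance might look like; the Term type, the Rule type, and the distance function are all my own assumptions, made purely for illustration.

```haskell
-- Hypothetical sketch, not the thesis implementation.
data Term = Lit Int
          | Plus Term Term
  deriving (Eq, Show)

-- A generated rule is now just a pair of concrete terms: no meta-variables,
-- so no accidental dependencies like the A + A -> B example above.
type Rule = (Term, Term)

-- The rule between 1+1 and 2 is simply the pair (1+1, 2).
exampleRule :: Rule
exampleRule = (Plus (Lit 1) (Lit 1), Lit 2)

-- A naive term distance: walk both ASTs in parallel and count the
-- positions where they differ.
distance :: Term -> Term -> Int
distance (Plus a b) (Plus c d) = distance a c + distance b d
distance s t
  | s == t    = 0
  | otherwise = 1

main :: IO ()
main = do
  print exampleRule                                 -- (Plus (Lit 1) (Lit 1),Lit 2)
  print (distance (Plus (Lit 1) (Lit 1)) (Lit 2))   -- 1: the trees differ at the root
```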
The second heuristic that was introduced was the penalty given to guesses coming from recursion. I thought this was needed in order to support the case in which a student applies a rule incorrectly, followed by a correct application of a different rule. Thinking about this a bit more, I realized that it is impossible to support this situation. When two rules are applied and the first one is applied incorrectly, you can never know from the terms alone whether the second one is applied correctly. Therefore, a guess for either rule can be a valid guess. So exit penalty.
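To see why, consider a hypothetical two-step derivation; the rule IdemOr: A \/ A => A is made up here for illustration and does not come from the thesis.

```
start:    P <-> Q
step 1:   (P /\ Q) \/ (P /\ Q)    -- EqElem applied incorrectly: the negations are forgotten
step 2:   P /\ Q                  -- IdemOr applied correctly to the (wrong) intermediate term
```

If the student only submits the final term, the algorithm sees just P <-> Q and P /\ Q. The intermediate term is unknown, so there is no way to tell that EqElem went wrong while IdemOr was fine, rather than the other way around; any rule that plausibly bridges the two terms is an equally valid guess.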
Now that the algorithm has been made even simpler by removing the penalty, six test cases still failed. In two of these cases the returned result is as good a guess as the expected result. The other four cases simply show the limitations of the algorithm.
In all four cases the expected outcome is the rule EqElem: A <-> B => (A /\ B) \/ (~A /\ ~B), but the actual outcome is ImpElem: A -> B => ~(A) \/ B. As you can see, the rules have a pretty similar LHS, the only difference being the operator, and a somewhat similar RHS. It turns out that as soon as the student makes a mistake with the priorities in applying the rule EqElem, for example by forgetting the parentheses, the algorithm fails and returns the rule ImpElem as a result. This is because a change in priorities heavily transforms the AST of the RHS, which results in a large distance to the rule EqElem. This rule encodes a specific priority in its RHS, creating a distance when that priority is not followed.
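The concrete failing test cases are not shown here, but the following hypothetical step illustrates the pattern. Suppose the student applies EqElem inside a negated formula and forgets the parentheses around the result:

```
term:             ~(A <-> B)
correct step:     ~((A /\ B) \/ (~A /\ ~B))    -- EqElem, parentheses kept
student writes:   ~(A /\ B) \/ (~A /\ ~B)      -- parentheses forgotten
```

In the mistaken term the top of the AST is now a \/ with a negation as its left child, which happens to be exactly the shape of ImpElem's RHS ~(A) \/ B. The distance to ImpElem therefore ends up smaller than the distance to EqElem, whose RHS expects the \/ to sit below the negation.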
The conclusion that can be drawn from all this is that the algorithm of this phase can be kept simple, while still supporting a considerable number of situations. For the situations in which it does not work, the problem is usually a (large) change in priorities. Luckily, these kinds of errors are easily captured in rules for the fourth phase. So although the battle might be lost, in the end the war is won.