Calculating the complexity of PHP

Not only is the question of how to calculate cyclomatic complexity showing up in our mailing list and bug tracker, it is also showing up on other mailing lists! This shows that the time has come to introduce a solution to this problem based on PHP-Front:
          php-cyco
(indeed, yet another creative name :)

This tool is added to the PHP-Tools project as of revision 421. The implementation is not very big, the code that does the actual work is about 20 lines, but it relies heavily on PHP-Front. That's what libraries are for, right?

So what does it do? The tool basically counts the number of branches that exist in your code. This is done for the main-flow and separately for each function. By default, each function (and the main-flow) starts with a complexity of 1, since there is always a minimal path through the code. For each construct that introduces a branch within the control-flow, for example if-, for- and while-statements, the complexity is increased by 1. In the end the number represents the number of paths that exist from the start to the end of the function (or main-flow). The higher the number, the more difficult it becomes to understand and test the block of code.
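To make this concrete, here is a small example of my own (not actual php-cyco output) with the counting written out in comments:

<?php
function classify($n) {     // start of the function: complexity 1
    if ($n < 0) {           // if-statement: +1
        return -1;
    }
    while ($n > 10) {       // while-statement: +1
        $n = $n / 2;
    }
    return $n;
}                           // total complexity: 3
?>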

The complete list of constructs that increase the complexity can be found at the bottom of the source. I think that the list is understandable, even though it uses 'Sorts'. If this is not the case, please let me know and I will put up a list using PHP syntax.

As you can see, the complexity is also increased by logical operators. This is because the semantics of the code:
if ($foo || $bar) { /* code */ }
is equal to the following code:
if ($foo) {
    // code
} elseif ($bar) {
    // code
}
in other words, we can rewrite a logical expression as a set of if-else-statements. Since they share the same semantics they should share the same complexity.

One last note: the tool actually calculates the minimum complexity of a function. Even though php-cyco is capable of including files, using the usual --*-inclusion flags, it might encounter situations in which it cannot resolve a filename. In this case the calculated complexity can be lower than the actual complexity, because the included file could contain some of the complexity-increasing constructs.

So if you are curious to know which function within your PHP project has the largest complexity, I suggest you download the PHP-Tools project and start analyzing your code!

Please let me know if you have any comments or feedback.

[Trivia] Pentaho Mondrian Slow MDX Query

Finally, another trivia-post. I could not find the answer easily on the web, but there might be people that are struggling with the same problem we had.

While working on my current project we ran into a problem with an MDX query. In short, it retrieved about 14,000 rows from a database and displayed a subset of these rows on the screen in a variable ordering. The problem we were facing was that it took about 4-5 minutes before the data showed up.

We knew that the query was not completely trivial, but 14,000 rows is not that much data. Furthermore, one would expect that an OLAP-database could handle this kind of volume very easily. In the beginning we thought that it took so long because all of the data was first retrieved from the database and then sorted within the Java-code. On the other hand, sorting a list of 14,000 items should not take that long. Another theory we had was that the bottleneck was the number of queries generated (about 4,000) to retrieve the 14,000 rows. This was not the case: every query took only 1-2 ms to complete.

So we fired up YourKit, a Java memory/CPU-profiler. I can highly recommend this tool to anyone who wants to profile their code. It provides detailed reports about all sorts of interesting things and comes with a nice interface.

While going over the profiling output we noticed that the code was spending about 50 percent of the time in a 'Sort'-method, and 40 percent of the time in a 'Debug'-method. Since we didn't see that much debug-output we did what any programmer would do when (s)he has not written the tool (s)he works with: we assumed that the profiler was wrong and concentrated on the 'Sort'-method.

Examining the profiler output some more we discovered that 40 percent of the time in the 'Sort'-method was actually spent calling the 'Debug'-method above! So we looked up the code and found out that the people of Mondrian were actually calling this 'Debug'-method every time they compared two objects.

Question: When sorting 14,000 objects, how many lines of debug-output would you get?
Answer: A lot.

The funny thing was that we did see some debug-output, but not as much as you would expect from the code. Furthermore, the 'Debug'-method was only called when the log4j-level was set to DEBUG. We did use a log4j-configuration, the default that was shipped with the application, but this file did not mention Mondrian anywhere. It did mention a file called 'server.log', which we eventually found. The size of this log-file was about 1 gigabyte, which might explain where all the debug-lines went.

Our current solution is to simply remove the log4j-configuration from our project. This still gives us enough feedback to track the usual things, but it also prevents Mondrian from going into debug-mode. It reduced the running time from 4-5 minutes to 4-5 seconds, a considerable speed-up, don't you think?
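An alternative, which we did not try ourselves, would be to keep the configuration but raise the log-level for Mondrian explicitly. Assuming Mondrian logs under the 'mondrian' category, a single line in the log4j-configuration should do:

log4j.logger.mondrian=WARN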

The new release plan

Last week I made a list of things that I wanted to do after reading Producing OSS. Today I took the time to implement two of the items on the list, which reduces the list to only thirteen items. Luckily, the tasks on the list are not difficult to implement, but they do take some time to think through the best way to implement them.

The first thing on the list was bringing the website up to date. It still had some pointers to the build-farm in Utrecht and some tips to work around missing functionality that actually exists nowadays. I also tried to make it clearer what the intention of the project is, although I think it can still be made more explicit. Fortunately, this is also an item on the list :)

A second item on the list was the cause of a large number of mails from Jira sent to the psat-commit-list. I made a new release plan for PHP-Sat / PHP-Front and also removed some issues/features that were no longer wanted. Let us take a look at the new milestones for the two main projects:

PHP-Front:
The 0.1 release must be capable of parsing, pretty-printing and reflection on PHP4 and PHP5 code. Furthermore, constant propagation needs to be supported for procedural code without function calls. Within the 0.2 release this support is to be extended to function calls and object creation. Support for more information about the environment of PHP, for example the types of internal functions, is planned for version 0.3.

PHP-Sat:
The release structure is closely related to that of PHP-Front. Within the 0.1 release it must support procedural code without function calls and object creation. These language constructs must be supported within release 0.2. The purpose of the 0.3 release is to add many new bug patterns.

In both projects we can see that the number of issues that need to be fixed before the first release is quite low. So hopefully we can make a first release around the end of the year!

So much interesting to read ...

... and less time to do it!

Since I started my full-time job I have noticed that it is a lot harder to stay up to date with all the latest (online) developments. During the writing of my thesis I had a considerable amount of time which I could spend on reading blogs and news-items, and following mailing-lists. When you spend most of your day actually working you learn to prioritize which things you want to read :)

One of the things I did manage to read, albeit a bit late, is this thread on the PHP.internals list written by Wietse Venema. It is a follow-up on this thread in which a first proposal was made to integrate a perl-style taint-mode into the core of PHP. The results posted in the follow-up thread look promising. It is also interesting to see that he has gone from a black-and-white taint-mode to a more leveled approach. Currently the proposal only contains a subset of the levels available in PHP-Sat, but I think that the most fundamental ones are definitely there.
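To sketch the general idea with my own (simplified) example: in a taint-mode the first line below would be rejected because user-controlled input reaches the output directly, while the second would be accepted because the escaping function removes (a level of) the taint:

<?php
echo $_GET['name'];                // tainted data reaches output: error
echo htmlentities($_GET['name']); // escaped, (HTML-)taint removed: ok
?>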

Even though Wietse is developing a prototype I think he will have a hard time getting this taint-mode into the actual core of PHP. Within both threads the general opinion seems to be that the idea is nice, but the developers of PHP seem to think of many situations in which it could fail. I hope to see more results of this idea soon!

Going over some other interesting threads in the internals-list I found a reference to a tool called PHPLint. (Isn't it funny to see that there are all sorts of initiatives popping up that try to make PHP more secure/stricter?) I haven't had time to take a better look at this tool, but a first glance definitely showed potential. I'll try to examine it more thoroughly at the end of this week.

While there is less time to read on-line material, there is more time to read off-line stuff. Since I am using public transportation to get to work I have an extra hour a day to read actual books and publications. One of the books I have read in the past few weeks is a printed version of "Producing Open Source Software" (ProducingOSS), written by Karl Fogel. My conclusion: absolutely worth reading!

ProducingOSS contains all sorts of tips, hints and best practices. Even if you are not involved in an open source project it is still useful to read. Almost everything in the book can also be applied to closed-source projects. Furthermore, it contains many pointers to other interesting literature. One of these pointers led me to The Cathedral and the Bazaar, my current read-while-traveling-to-and-from-work book.

I intend to use several things from ProducingOSS within PHP-Sat. I will just have to think about how I can fit the project in my current schedule, but it will definitely be fitted in.

Attending J-Fall 2007

This Thursday (October 10th) I was lucky enough to get a ticket (and approval) to go to the J-Fall conference organized by the NL-jug. My conference day started when I met some colleagues at the train station to travel to Bussum-zuid. In this nice-looking town J-Fall found its home in 't Spant, a professional conference facility.

The keynote of the conference was given by the "Sun Java Technology Outreach Team", which is basically a group of four engineers that talk about what you can do with Java. The keynote started with a nice overview of the different technologies Sun is working on. After this, a demo of JavaFX was given. Chuck Munn-Lee showed how easy JavaFX makes it to create a circle that morphs into a rectangle when you click on it. I also liked the fact that his editor not only supported code-completion, but also 'color-completion'. Unfortunately, the other demos were a bit disappointing because of several technical problems.

The first 'real' talk I attended was called From Java to Ruby … and Back. The speaker provided lots of material on the slides, which made them a little bit overwhelming. Fortunately, he assured us that the slides would also be distributed digitally. The content of the presentation was an introduction to Ruby and a comparison with Java. One of the conclusions was that the speaker would happily code 80 percent of his applications in Ruby instead of Java. Quite a statement at a conference promoting the use of Java :)

The second talk I visited was about JUnit, or actually JUnit 4. I was a little bit disappointed by this presentation because all of the content had already been discussed in an article printed in the last issue of the Java magazine. Furthermore, the pace of the presentation was terribly slow, which made it hard to focus. The only interesting bit of information I extracted was that JUnit 4 allows you to define parameterized tests, something I might have missed in the article.

After a little bite, a longer walk and a stroll along the companies on display it was time for the second keynote. This keynote was very slick, had nice graphics and a strong story. Three different people talked about what Adobe offered, which activities are done by Adobe for the open source community, and what we can expect from Adobe in the future. They even tried to answer the question why Adobe was sponsoring the J-Fall. The answer was probably not very clear because I cannot reproduce it. The talk basically felt like one long commercial for the Adobe company.

Right after the second keynote I went to the talk of Peter Hendriks about Eclipse Mylyn. I very much enjoyed this presentation. The story was clear without going into too much detail and the demo was well-prepared. Peter even turned on the magnifying glass in Windows to zoom in on the interesting parts of the demo. These are the kind of details that show how much care went into the presentation.
On to the content: the Eclipse Mylyn plug-in has all kinds of nice features that can help you focus on your task. For example, the file-list can be filtered to show only those files that you have edited during the current task. I especially liked the fact that you can define a test-target that only runs the tests associated with the current task. If you use Eclipse you should definitely take the time to check this plug-in out!

After getting another cup of coffee I sat down in front of the biggest stage to listen to Java-specialist Heinz Kabutz. He explained The Secrets of Concurrency using ten (humorously named) laws. Each of the laws had an explanation of the name, a link to the actual problem and a solution. The talk was relatively straightforward, which made it easy to follow. A perfect talk for the late afternoon!

The last talk I visited was about the new JSR-286 specification for portals and portlets. Although the announcement implied that the talk was only about the specification, we hoped for a dynamic talk. Unfortunately, we got what was announced: an overview of the current version of the spec, a small list of shortcomings and a larger list of new features. Even though I was interested in the topic it was hard to stay focused during the complete talk, especially when the volume of the music in the bar next door was turned up.

As a conclusion I can say that, despite some of the less entertaining talks, I really enjoyed the day. It was interesting to hear about a lot of different topics, and fun to see different companies present themselves. If I get the chance I will definitely attend the upcoming J-Spring conference.

My last words ...

... about my study that is!

After seven years of attending college I am no longer a student. It still has to sink in a little bit, but I will probably get used to it very soon. It is quite confronting to transfer all of your (bank-)accounts to a non-student one :(

Thesis

I defended my thesis yesterday morning in front of a crowded room. Although I was a bit nervous before the talk this feeling faded away when I started the presentation. I thought it went reasonably well and this was confirmed by my supervisor(s). Please allow me to thank Johan Jeuring and Jurriaan Hage for their time and effort to supervise me.

If you are interested in the final result of the thesis you can visit my thesis page, or just download the complete thesis here. The slides of my presentation can be found on this page. Even though the project is finished (in the sense that I am not obligated to work on it anymore) I am still interested in remarks and comments you might have.

Work

And what do people usually do when they graduate college? Right, they take a vacation!

In my opinion I already had some vacation the past couple of weeks. I finished the major part of my thesis at the end of August, so in September I only had to integrate comments from my supervisors. This gave me lots of spare time which I used to relax a little bit.

Therefore, I took some time to find a nice job. After visiting several companies I decided to accept a job at the Software Improvement Group. During the interviews I got the feeling that this is a challenging job where I can learn a lot. Whether this feeling is correct is something that only time will tell.
I will just wait and see what happens Monday morning!

A little migration helper

You have probably already heard about it, but support for PHP version 4 is partly dropped as of 31-12-2007, and after 08-08-2008 the support ends completely.

Luckily, the PHP documentation team provides you with a set of migration guides. Going through these guides can take some time, but it enables you to upgrade your code easily to be PHP5 compatible.

To make your life even easier, the PHP-tools project is extended with the tool test-migration. This tool performs some checks that are described by the migration guide to detect whether the code can be run under version 5 of PHP. These checks include:
  • Is there a function definition with the same name as a function newly defined in PHP5?
  • Is an object created without the class being defined first?
  • Where are the functions strrpos, strripos and ip2long used?
  • Is PHP's reflection used in any place where its behavior has changed?
The first two checks are rather easy to understand: PHP5 will simply halt execution with an error when these issues are detected at runtime. Therefore, the warnings that are generated for these kinds of patterns are shown with a 'serious'-level.
On the other hand, the last two checks do not find constructions that can halt execution. They detect places in which certain constructions are used. These constructions were already available within PHP version 4, but their behavior changed in version 5. To make them easy to find, these constructions are flagged with a 'minor'-warning. More details about the changed behavior can be found here.
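An example of such changed behavior, documented in the migration guide (the code itself is my own), is strrpos with a string needle:

<?php
// PHP4 only uses the first character of the needle and searches for
// the last 'a'; PHP5 searches for the complete string "ab".
$pos = strrpos('ababab', 'ab');
?>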

The first version of this tool is quite basic and performs only a few checks. If you would like to see more checks included, don't hesitate to drop me an email or put them in the comments.

Finished coding (?)

Monday was the day that I got back to working on my thesis. I spent the past few days coding and I think I am reasonably finished! The basics of the algorithm of the fourth phase are implemented, the GUI is updated with all kinds of small improvements and some documentation is written. The only coding that is left to do is making a solver for equations, a small todo compared to the overall work.

Now that the coding is done you might want to take a look at it. Unfortunately, the actual tool is not that nice to look at. Some people may like a command-line interface, but I guess most people prefer a graphical interface. Those that really want to run the tool from the command-line can do a checkout of the svn-repository, others can just visit the RFG web interface. There is some documentation available on using the interface, but you can always send an email to my gmail-account with any questions. It would be great if you could just visit the test-page, do some exercises and send me some feedback about it.

I know that there are some things in the GUI that can be improved, but I am mostly interested in whether the feedback helps you in solving the exercises. A correct step generates standard feedback which is not going to help you get to the answer. However, a step which contains an error triggers the algorithm of the third phase and gives more detailed feedback. Even though the feedback may be wrong sometimes, it is hopefully more useful than the message 'This step is incorrect'.

Experiencing a World Jamboree

It has been one month since the last blog post. The reason for this is that I traveled to England to (finally) attend the World Scout Jamboree. Describing everything that happened in the past weeks would not only result in an overly long post, it would also be a bit boring. On the other hand, only giving the facts is also boring. Therefore, let us just scan through the trip and talk about the more interesting highlights.

Pre Jamboree Tour

Our trip started with a pre-jamboree tour organized by our Contingent staff. We camped on Kibblestone, a reasonably big camp-site in England. Getting there involved two buses, a baggage-lorry and a boat. All the luggage of our troop, consisting of 40 persons, had to go with us on each of these vehicles. The boat was easy, but getting everything into the other three was a bit of a challenge. However, everything arrived safe and sound around 2 AM. After putting up the tents everybody slept like a baby.

The weather

The first day of our stay in Kibblestone was accompanied by lots of sunshine. During my first meeting there were already complaints that some of the female participants had to put on more clothing when they walked around. Luckily for the complainer, this was forced upon us by the weather-gods. It did not rain all the time, but it rained enough to turn the grass, the roads and the toilets into a big mud-pit. During the jamboree itself it was common to pull an item out of its storage place and see the original Kibblestone-mud still sitting on it. It was just everywhere!

Activities

The weather also influenced the activities we did. Most of the activities could be done in the rain, in between two showers or indoors. Most of the participants enjoyed the numerous excursions, sport games (of all kinds) and campfires. For some of the participants the visit to Stratford-upon-Avon was a good excuse to buy all sorts of goods, including the new Harry Potter. Unfortunately for them they had to wait to get it, because the first event in this town was a visit to the Shakespearience. However, upon arriving in the right street it was quite clear that we would not be entering the theater. Between us and the theater was not the expected road, but a small river about 30 centimeters deep. More time to shop, more time to start with Harry Potter!

Home Hospitality

An even more relaxing and special activity in the pre-jamboree tour was the home hospitality. The original concept was that each duo in our troop would stay with a family for two days. This was canceled by the English organization because there was a shortage of families that would host our scouts. Luckily, there were several scout-troops that wanted to host complete troops! After traveling for about six hours we arrived in Cardigan, a small city on the coast of Wales. We stayed with the "3rd Cardigan Sea Scouts", a great group of people that spoiled us terribly. Our stay came at the end of our pre-jamboree tour and the scouts of Cardigan made sure that we got our well-needed rest before we headed off to the real Jamboree. I cannot thank them enough for this fantastic time!

Surprising News?

On the last day of the pre-jamboree tour I finally revealed the solution to a game that a part of our Contingent staff had been playing for over a week. Each day, my girlfriend got one or two letters that together would form a sentence. Several people tried to work it out, but they all failed before I finally showed the answer to my girlfriend. For those of you that are curious, please visit the news-site of our troop on the 26th of July.

World Jamboree

It took a while before we left Kibblestone (one of the buses broke down after loading), but everybody of the Dutch contingent was on their way to the Jamboree. Some of the participants thought that there were a lot of people on Kibblestone (roughly 1,350); they had not seen the Jamboree site yet. After only 3 hours on the bus we got out at the main bus terminal, only a short 20-minute walk from our campsite. I was told that many of our participants finally understood that taking too much luggage can really, really hurt your back.

Opening Ceremony

Even though the 20-minute walk showed a little bit of the size of the Jamboree, the opening ceremony did the rest. After 3.5 hours everybody was assembled in the arena, roughly 40,000 scouts from over 150 countries. During the opening ceremony, every country was welcomed by both a real and a digital flag. One can imagine that it took a while before every country was named.
The opening ceremony was to be held in the evening, around 8 pm. This was changed once, twice and again because of a special agenda that needed to be respected. It turned out that Prince William himself was attending the opening ceremony. Our places in the arena gave us a chance of seeing him up close and even hitting him on the head, although nobody thought that would be wise.

Organization

Speaking of being wise, it was a good habit to pack as much as possible for a complete day. Walking to the other end of the campsite took about 45 minutes, and that is without any chitchat-time. Furthermore, it could take some time to get your food, a place to shower or a place on the bus. The shortest waiting time for a seat in the bus was about one hour, but it was not uncommon to wait for two or more. This showed that the English organization is pretty good at PR, everything was said to be completely organized, but not so good at actually making things happen. I have yet to receive a staff handbook explaining rules, procedures and activities for the Jamboree, something that could have been arranged a few months back. Furthermore, some of the activities themselves could have been prepared better. Sometimes even the people in charge of an activity did not know how things should be done.
Don't get me wrong, the majority of the activities were fun and exciting. Our participants and the staff had a great time! Everyone just expected the English organization to be more organized.

Scouting Sunrise

One of the biggest events during this World Jamboree was the Sunrise ceremony, held at the exact moment that the Scouting movement turned 100 years old. During the opening (and closing) ceremony the arena was filled with all sorts of different flags; during this ceremony everyone had the same purple Scouts flag. Even though the ceremony was a bit too religious for my taste, I enjoyed it very much, especially the moment that everybody was trying to collect 100 signatures on their sunrise scarf. 40,000 people wanting each other's autograph is kind of cool to see. The rest of the day was filled with the food festival: each country on the sub-camp prepared a dish typical for its country and shared it with the other countries. It was very nice to see so many cultures sharing their own ideas and rituals. This was truly one of the highlights of the Jamboree.

Departure

You might expect that the departure wasn't that well organized, but would you believe that the English organization began to think about the departure 3 days before the event ended? All of the previous arrangements with the Dutch Contingent were suspended, which resulted in a rather huge challenge. In the original scenario we could stay on the Jamboree site until Thursday and Friday, but this was not possible anymore. After long debates and short nights our Contingent staff provided a solution that was workable. The first group, including me, departed from the site on Wednesday to London for an excursion and from there to Essex University to sleep. It was very nice to sleep in a real bed and have access to your own shower! The next morning we started traveling around 6:00 AM; I was home 14 hours later. The second group first had to relocate to a single spot on the Jamboree site, after which they had a day of active excursions. They slept one more night on the Jamboree site and then had the same program as the first group.

Conclusion

I have not yet begun to tell everything that happened on the Jamboree. For more information about our contingent you can visit http://www.wj2007.scouting.nl/, for more information about the activities of the Jamboree you can visit http://eng.thejamboree.org/ when it is back up.

All there is left to say is that I had a great time at the Jamboree. Hopefully I will be able to join the 22nd World Jamboree in Sweden in 2011. Now let's get back to making sure I can actually graduate in October 2007 :)

Keeping it simpler

It has been three months since I started writing about the third phase of my thesis. Given the fact that I thought it only needed some fine-tuning a few weeks ago, you might think that there is nothing new to write about it. If so, the next few paragraphs will prove you wrong.

During the development of the third phase I have worked with a set of test-cases which needed to pass in order to prove that the algorithm works at a reasonable level. When there were failing tests, the algorithm was tweaked to let those tests pass. This resulted in putting heuristics into the algorithm until everything worked. Unfortunately, the heuristics seemed to work in practice, but were not always correct in theory. Even though I still believe that purity is not necessarily needed for the correct functioning of tools, it is needed in order to argue that an algorithm works.

The first heuristic that was wrong was the introduction of meta-variables in the calculation of rewrite rules. Even though this works for a large number of cases, it also introduces dependencies which are not necessarily correct. For example, the rewrite rule between 1+1 and 2 would be A + A -> B. The LHS of this rule implies that the rule can only match a plus with two equal terms, creating a dependency that can easily be a coincidence; the rule would, for instance, not match the very similar step from 1+2 to 3. Therefore, the generated rewrite rules should not contain meta-variables, only 'normal' terms. This considerably simplifies the algorithms for rule-calculation and term-distance, while the results are still what is expected. So exit meta-variables.

The second heuristic that was introduced was the penalty given to guesses coming from recursion. I thought that this was needed in order to support the case in which a student applies a rule incorrectly, followed by a correct application of a different rule. Thinking about this a bit more I realized that it is impossible to support this situation. When two rules are applied and the first one is applied incorrectly, you can never know whether the second one is applied correctly from just looking at the terms. Therefore, a guess for either rule can be a valid guess. So exit penalty.

Now that the algorithm is made even simpler by removing the penalty, there were 6 test-cases that still failed. In two of these cases the returned result is as good a guess as the expected result. The other four cases simply show the limitations of the algorithm.
In all four cases the expected outcome is the rule EqElem: A <-> B => (A /\ B) \/ (~A /\ ~B), but the actual outcome is ImpElem: A -> B => ~(A) \/ B. As you can see, the rules have a pretty similar LHS, the only difference being the operator, and a somewhat similar RHS. It turns out that as soon as the student makes a mistake with the priorities in applying the rule EqElem, for example by forgetting the parentheses, the algorithm fails and returns the rule ImpElem as a result. This is because a change in priorities causes the AST of the RHS to be heavily transformed, which results in a large distance to the rule EqElem. This rule encodes a specific priority in its RHS, creating a distance when this priority is not followed.

The conclusion that can be drawn from all this is that the algorithm of this phase can be kept simple, while still supporting a considerable number of situations. For those situations in which it does not work, the problem is usually a (large) change in priorities. Luckily, these kinds of errors are easily captured in rules for the fourth phase. So the battle might be lost, but in the end the war is won.

Testing a contest

Wow, that was a rather interesting experience. I spent the last three days testing the ICFP-contest. There is not much I can say about the actual content of the contest, but I can say that I learned several new things. Some of them are not so funny, while others are very cool!

Anyway, let's just say that our solution was not exactly optimal :) I am sure that others will do better. So if you have not done so, register here.

Have fun and hack away.

It is getting noticed

Within the last two weeks I noticed that PHP-Front and PHP-Sat are being discovered by people that are looking for PHP-specific solutions. It is not that we are flooded with requests, but I am still happy with every question :)

The first question was sent to the psat-dev-mailinglist and was about the cyclomatic complexity of PHP code. I replied that it would not get into PHP-Sat because it is not a bug-pattern. However, it would be a nice tool for the PHP-Tools project. I made a similar tool for Java because of an assignment in the past, so it is probably just a matter of renaming the strategies to use the PHP-Front API. Unfortunately, I didn't get an answer about how the report should look. If you have any ideas please let me know in the comments, or in the issue.

A second question was about the grammar of PHP-Front, or actually the license of this grammar. The people behind TXL have derived a PHP-grammar for TXL from the SDF-grammar in PHP-Front. Since our license does not state anything about derived work without common source, we were asked for our permission to distribute this new grammar. Naturally, this permission was given very quickly and we were also allowed to take a peek at the source. I must say that I find it interesting, but I currently do not have time to look into it all. I imagine that the definitions of the grammars and TXL itself are similar to how things work in Stratego, but I have to look into that.

The last question in this series is about defined functions. Finding out which functions are defined in a project is easy when classes are ignored: a simple grep over the project will do. When a project also includes classes it becomes trickier to get all functions defined outside of a class. The question was whether PHP-Front could help with this issue, and the answer is of course yes!
Within the reflection part of the library the list of defined functions and classes is already available. This makes it possible to write a tool that shows all defined functions in just a few lines of code. Since it was also a nice tool for the PHP-Tools project, I added it last Friday.

Another issue that was brought up by the last e-mail is the issue of our implementation language. Since Stratego is relatively unknown, the project has a steep learning curve. On the other hand, if I had chosen a different implementation language it would have taken me way longer to implement the current features. And besides, this piece of code is not that hard to understand, right?
  defined-functions-main =
      include-files-complex              // resolve the include-statements
    ; get-php-environment                // build the reflection environment
    ; get-functions                      // retrieve the list of defined functions
    ; if ?[]                             // is the list empty?
        then !"No functions defined."
        else map(transform-to-message)   // one message per function
           ; lines                       // join the messages with newlines
      end

Where you at?

Now that the propagation of safety-types seems to go smoothly, it was time to dive into another subject: accessing the location of terms. In this case, the location of a term is defined as the location of the text in the original file that was parsed into that term. Since the strategy to annotate the AST with position-info is available in the standard libraries nowadays, it should be easy to access these locations and finally solve PSAT-91, right? Let's find out!

The first thing I did was add a separate module to handle the low-level stuff of getting the location annotations. This module contains several getter-strategies that can retrieve, for example, the number of the start-line. The location info is captured in six different numbers: start-line, end-line, start-column, end-column, offset and length. A getter-strategy is available for all of them. Furthermore, the name of the file in which the term is defined can be retrieved.

Although these getter-strategies are useful, they are not meant to be called directly. I figured that the most common use of these functions would be reporting the values in some kind of (formatted) message. In order to capture this kind of behavior the strategy format-location-string(|message) is defined. This strategy takes a message with holes in the form of [STARTLINE] as a parameter and fills these holes with values from the current term. For example, a message like "variable accessed at line [STARTLINE]" gets its hole replaced by the start-line of the term the strategy is applied to. A rather useful strategy if I say so myself.

To practice with this new piece of functionality I have added an extra option to the tool input-vector of the php-tools project. This option allows the user to choose between the normal list and the same list with line-numbers printed for each access. More information about this option, and how to add an option yourself, can be found here.

After this was done I moved to php-sat to make the output more concise. It was actually pretty easy to implement; the algorithm is nothing more than get-terms-with-annotations, make-nice-output. I actually spent more time on creating a test-setup for calling php-sat through a shell-script than on generating the more concise format. The only problem was that adding position-info everywhere interfered with the dynamic-rules. A few well-placed rm-annotations were needed to fix this. Please let me know if you like the new output, or whether something should be added.

The next application of the location info is tracking where tainted data enters an application. When a function is called with a parameter $foo which is tainted, it would be nice to show where it became tainted. I think this is not too difficult to add, but bugs always seem to lurk in 'I-think-it-is-easy-to-add'-features.

A last remark about locations concerns a small problem without an actual solution yet. Eventually php-sat must support function-calls. The algorithm to analyze function-calls is not complicated, but how can bug-patterns within a function be reported? With a message before each call to this function? Within the file in which the function is defined? And what about cases in which one call is flagged and the other one isn't? And can we also handle object-creation in the same way? I haven't figured out how to handle this, so if you have any ideas please let me know.

Finetuning the third phase

You might remember an earlier post about the algorithm for the third phase of generating feedback. This algorithm takes two terms and a set of (allowed) rewrite rules. After calculating the rewrite rule to transform the first term into the other, the algorithm chooses the rule from the set that is 'closest' to the calculated rewrite rule. This chosen rule can now be used to generate better feedback because it gives some insight in the intentions of the student.

In theory the algorithm functions quite well, and the same goes for the actual implementation. However, during the implementation of the algorithm I ran into a few details that needed to be fixed. This once again shows that theory and practice usually do not match completely :)
The following is a (small?) list of some of the refinements made during the implementation.

The rules(et)

One assumption that was made during the design of the algorithm is that rules must rewrite the current term into a term that is closer to the answer. For example, the rule A + B => B + A is correct from a mathematical point of view, but it does not help a student to get closer to an answer.
A second assumption that was made for the algorithm is that the ruleset is extended by combined rules. For example, the rules A => B and B => C are combined into a single rule A => C. The calculation of the combined rules is not very difficult, but getting the calculation to stop is a bit trickier.
The first experiment, with rules from the domain of fractions, went reasonably well; the only restriction that needed to be added was that restrictions on variables may not depend on themselves. So the restriction ... where B := B is not allowed.
The second experiment, with rules from the domain of logic, went into endless recursion, a clear sign of an additional problem. It turned out that rules were merging with themselves, resulting in an extra restriction to prevent this kind of behavior.

Calculating the rule

My first idea about the calculation of the rewrite-rule was a simple bottom-up traversal which returns a meta-variable as long as the (sub)-terms are equal. This naive algorithm worked reasonably well during the first tests, but failed to perform well on larger examples. In these larger examples the calculated rule contained too much noise to see the important parts of the rewrite rule.
In an attempt to cut down the noise, the algorithm was extended to find out whether a sub-tree of a rule can be removed completely. An example of this is the calculation of the rewrite rule between 4 + 5 + 2 and 4 + 6. The naive algorithm would give A + B + C => A + D as an answer. However, inspecting the terms gives the rule B + C => D as a more accurate match. This last rule still contains the same information; we have simply removed the noise of the A + .-sub-tree.

The first implementation of this extension only checked operators with an equal number of children, and only with a left-to-right match. Even though this worked in some situations, it was still not good enough to be useful in general. Therefore, the matching part of the extension was improved by taking into account the associativity and commutativity of the operators. The different combinations of these properties either result in an exact-match (left-to-right), an assoc-match (left-to-right, right-to-left) or an assocComm-match (all combinations of children are tried with an assoc-match).

Distance between rules

As with the calculation of the rules, the distance between the rules started with a naive algorithm covering several cases:
  • When we have two equal (sub)-trees we return a distance of 0.
  • When we have two nodes in the trees we compare the name of the nodes and all the children from left-to-right, adding distance when the two operators are not equal.
  • When either one of the trees is a leaf we check whether it is a meta-variable. If this is not the case we simply add a distance of 1 (for the leaf) plus the size of the other (sub)-tree. If the leaf is a meta-variable we check whether it is a free variable, and return a distance of 0 if it is. Otherwise the same '1 + size of the (sub)-tree' is returned as distance. A variable is free when it is not previously bound by a match or a restriction.
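As a small worked example of the leaf-case (my own numbers, counting the size of a tree as its number of nodes): the distance between the leaf 1 and the tree 2 + 3 is 1 + 3 = 4, because the leaf is not a meta-variable and the tree consists of three nodes. Had the leaf been a free meta-variable A, the distance would have been 0.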

Note that the above algorithm does not explain what should happen when two nodes have a different number of children. After some testing I found out that it works quite well to just take the size of the extra children together with a penalty.

Another thing that is not considered in the above algorithm is the 'forgetting' of an operator. An example of this is the distance between the rules ~~A -> A and ~~A -> ~A. With the naive algorithm the distance between these rules can be quite large when the A in the rule is a large sub-tree. In order to model this 'forgetting', the algorithm checks whether a node has a single child on one side. If this is the case the algorithm calculates both the distance with the operator kept and the distance with the operator dropped. It then takes the smallest of the two, taking into account some extra cost for adding the operator.

Putting it together

Now that all the sub-parts worked correctly, the top-level algorithm needed to be implemented and tweaked. The first thing that was tweaked was the filtering of a list of matches. When more than one rule has the smallest distance, the algorithm used to return the first element of that list. Since this behavior makes the ordering of the rules important, it needed to be changed. Now the algorithm takes the rule from the list in which the LHS of the rule is closest to the current Previous Term.

Speaking of Previous Terms, the original Previous Term also influences the choice for a rule. When we go into recursion by applying rules, the current PT changes. The changed PT matches better with rules that have a similar LHS, even though this LHS can be different from the original PT. Assuming that students apply rules that at least partly match the LHS of a rule, we need to take the distance between the original PT and the LHS of the rule into account. Furthermore, we also take into account the number of rules that are applied to the PT in choosing a rule.

Does it work?

After all the tweaking and the bug-fixing the answer to this question is: yes! You can take a look at some of the test-cases, which list a PT, a CT and a desired result from the defined set of rules.
The test-modules also include examples of situations which do not produce the correct result. This has to do with the fact that the virtual student made more than one mistake in the application of the rule. This shows that the algorithm does not work in all situations, but that was also not the goal of this algorithm. Luckily, the other examples show that the algorithm is capable of producing desirable output in a number of situations.

Some new features

Last Friday I had a conversation with nEUrOO in #stratego about the results he was getting with php-sat (see his blog for more details). After the conversation, and after looking at the test-file he mentioned in his blog, I got started and added two new features to php-sat.

The first new feature is aimed at the usability of the tool itself, and is visible through a new command-line option: --ra CODE. This option accepts one of MCV, COR, EI, STY or OPT as input, and makes sure that the only analysis that is run is the one that belongs to the given code. In other words, you can now run, for example, only the correctness checks by calling php-sat with --ra COR. This gives you somewhat coarse-grained control over the behavior of the tool. A plan for more fine-grained control (on the level of patterns) was also mentioned some time ago, but the implementation of that level of control requires some more thought. In the meantime, please enjoy running a single kind of bug-pattern :).

A second new feature is added to the analysis of safety-levels. Consider the following example:
<?php
echo addslashes(htmlentities($_GET['name']));
?>
The default configuration for echo requires a parameter to have both the level EscapedHTML and the level EscapedSlashes. Furthermore, the default configuration defines the return-types of the functions as:
function: addslashes     level: escaped-slashes
function: htmlentities   level: escaped-html
So this piece of code should not be flagged by php-sat. Unfortunately, previous revisions did flag this piece of code!

The problem here is that the analysis uses the safety-level of a function that is mentioned in the configuration file without considering the parameter of the function. This behavior works well for most functions, but for functions that only add a property to their parameter it is incorrect. Because of this behavior, the echo-statement is flagged: the call to addslashes is only annotated with EscapedSlashes.

Fortunately, the solution is not that complicated. Since we know that there are several functions that add a certain property to their parameter, we add the possibility to specify this in the configuration file. The new syntax for functions that add a safety-level is a '+' after the level of the function. This '+' forces a function to inspect the level of its (only) parameter and combine this with the level that is specified as its safety-level. The combination of the levels is the result of the function call. (Small note: this behavior is (currently) only supported for functions with a single parameter.)

So from now on, when the following configuration is used:
function: addslashes     level: escaped-slashes +
function: htmlentities   level: escaped-html +
the example above is no longer flagged, because the call to addslashes is annotated with its own safety-level (EscapedSlashes) as well as the safety-level of its parameter (EscapedHTML). A pretty useful feature, I would say.

A scouts week

Those of you that (try to) read the posts of this blog every week might have noticed the two-week gap between this post and the previous one. This gap originates from the hobby I have practiced since I was a (very) little boy: Scouting. Within the past week I had the chance to go to two camps, with a three-day rest-period in between. This results in a large amount of dirty clothes, little sleep, a voice like a grinder and lots and lots of fun!

The first camp I attended was the 'clusterweekend' of the Dutch Contingent. My role in this camp was that of quartermaster of the Bontbekplevieren, one of the 25 troops visiting the World Jamboree this summer. Because it is almost impossible to get all of the Dutch participants together on one terrain, the 25 troops were divided into three clusters. There were eight troops in our cluster, each troop consisting of 40 persons. Adding a few people for camping staff, working staff and general staff, we had about 350 people attending this camp.

Even though the weather forecast was a bit disappointing at first, all the rain fell down on us during the first night. This immediately showed that our new tents are water-proof, which is a good thing considering the normal weather conditions in England. During the rest of the weekend we played some games on the beach, sorted out a massive amount of badges, played in a casino and had a big party. Luckily, the sun was shining on Sunday so we could pack everything in dry conditions. The weekend was a great success, and of course there are some general photos, as well as photos of our troop.

When I came home on Sunday I quickly unpacked everything and washed some of the clothes. I worked a bit on Monday and Tuesday, but I also had to take care of some things for Ontmoeting, the second camp of this week. Ontmoeting is organized once a year for all the scouts of our region aged 11-15. They are mixed into sub-camps to compete for the first place. Each year the competition is wrapped into a certain theme; this year's theme was 'Bond maakt het Bond'. The mission was to become the replacement of James Bond. The 153 participants were divided into 6 camps, each representing a 'superhero' of some kind. I was part of the Bassie en Adriaan camp, a rather famous duo in the Netherlands.

This camp started on Wednesday with the packing of 'Diana', the trailer of our group. After loading all sorts of stuff we went to the campsite, which was relatively small, but this made the whole thing kind of cosy. On Wednesday evening we had a BBQ and a campfire with the staff; the children arrived on Thursday morning. The next three days were filled with all sorts of larger games, smaller games and some pretty cool ticket-activities. Naturally, you can check these out on some of the photos, made by the same people that also made the camp newspaper. It is always hard to tell about the whole camp in just a few words, but it all boils down to having a lot of fun. This year the games were really fun, thank you 'spelstaf', and our sub-camp won the cup! To quote a famous clown: 'Alles is voor Bassie!'

These kinds of weeks are a great way to relax: no worries about making deadlines or missing important stuff. It is simply not possible to read a real newspaper, so you just ignore the rest of the world for a couple of days. Also, it gives me a lot of energy to start working again, so let's get to it!

After catching some more sleep of course...

Merging fun

Yesterday, commits 379 and 380 introduced a renewed implementation of the constant-propagation, and new functionality for finding vulnerabilities. You guessed it: the merging of constant-propagation and vulnerability analysis has taken off! The cool things are that 1) the old tests all pass, 2) some new tests pass and 3) I get to write a lot more tests!

The main reason for starting the integration of both analyses was the fact that I saw a lot of code duplication popping up. This duplication was caused by the fact that the bookkeeping of internal structures is the same for all strategies; the code only differs for the properties of values.

With this last piece of information I started the merging. My first approach consisted of trying to pass a list of get-set strategies to the main strategy and calling these strategies dynamically. This was obviously not a very good or nice start, because it always seemed to result in numerous segmentation faults.

Thinking about the problem made the second attempt somewhat more pragmatic. Instead of generalizing, I just wanted to make it work for constant propagation in combination with a second analysis. So I rewrote the strategy to receive two strategies: one for getting the properties of literals and one for getting the properties of operators. The choice for these language constructs is based on the reasoning that they are the only ones that make or manipulate the actual properties of values. The other language constructs manipulate the flow of the values instead of the actual values.

When this all seemed to work, the more difficult challenge had to be solved: manipulation of variables and arrays. This turned out to be simpler than I thought because of the indirection between the variables and their values. For the purpose of dealing with aliasing, a variable does not point to a value but rather to a value-identifier. This identifier points to the actual value. This makes the creation of a reference easier: we just create a mapping from a variable to the value-identifier of the referenced variable. Because of this indirection we can simply make the value-identifier point to more than one property, implemented by a dynamic-rule for each property. This makes the merging of sets of dynamic rules a bit harder, but not impossible.
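A little sketch of this indirection (the identifiers are my own notation, not the actual dynamic-rule names):

$foo -> id1     id1 -> value of $_GET['asdf'], property: tainted
$bar -> id2     id2 -> value 1

After the reference-assignment $bar =& $foo, both variables map to id1, so every property attached to id1 is immediately visible through $bar as well.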

It might sound simple (or incomprehensible), but getting everything right was still a bit tricky. For example, when an assignment is made, the property of the RHS must be known before the LHS can be assigned this property. So what happens when the constant propagation cannot compute a value? Should we simply fail to assign a property?
The answer is no: the second analysis might still be successful. These kinds of little problems made the implementation a little less straightforward, and the code a little less beautiful.
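A small illustration of such a case (my own example):

<?php
// Constant propagation cannot compute a literal value for $foo here,
// but the taint-analysis can still conclude that $foo is tainted.
$foo = $_GET['id'] . "-suffix";
?>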

However, the result of it all is that the following example is now flagged correctly:
<?php
$foo = $_GET['asdf'];
$bar = 1;
$bar =& $foo;
echo $bar;
?>
In this case, echo $bar will be flagged by the latest php-sat.

I experienced one problem with the implementation that is related to the semantics of Stratego. My first attempt at adding an annotation to a term was something like this:
  add-php-simple-value(|val):
    t{a*} -> t{annos}
    where b*    := a*
        ; annos := [PHPSimpleValue(val) | b*]
This works perfectly: the annotations are matched as a list by the *-syntax, and a list is added as an annotation to the term again. The only problem is that the second time this rule is applied, it matches the annotations as a list of a list of annotations, which was not the behavior I desired. This problem is easily solved by also adding a * when building the term:
  add-php-simple-value(|val):
    t{a*} -> t{annos*}
    where b*     := a*
        ; annos* := [PHPSimpleValue(val) | b*]
Now the list of annotations is not wrapped in an actual list anymore. I know it is documented somewhere, but this little explanation might save some others from a headache or a long debug-session.

The next step in the analysis for vulnerabilities is a rather important one: testing. Even though the basic parts of variables and assignments are already tested, there is a large number of scenarios that needs to be tested on this new integration-strategy.
But hey, testing is fun!

Passing time

After I stopped working on the GUI for the RFG I have been working in Haskell again. It has been quite a while since I worked with this language, so it takes me some time to get used to it again. Since it is a strongly typed language, as opposed to both Stratego and PHP, some things take longer to implement, but some mistakes are found by the type-checker. Unfortunately, this means that I have not done anything terribly interesting for my thesis.

So let's look at the other interesting project, PHP-Sat. I have been working on the integration of a second analysis within the constant-propagation. This is coming along nicely, but it requires some heavy thinking and careful consideration. I have already worked my way up to the expression-level, so I hope to finish the rest of the constructs that were already supported by the end of this weekend.

In order to tell at least one interesting thing, and to not waste your time completely, I wanted to point out some interesting video-presentations. The first one is also the first one I ever saw through the internet: Drupal, Joomla! & GSoC. The title explains why I wanted to see it, and it is interesting for anyone who wants to know more about the GSoC from a project's point of view. The second also comes from the Google Tech Talks and is called How Open Source Projects Survive Poisonous People (And You Can Too). It is given by the people behind Subversion and I especially liked the story about the bikeshed. The third presentation comes from Bram Moolenaar, the creator of VIM (yes, that is the editor I mostly use). He talks about 7 Habits For Effective Text Editing; be sure to check it out even if you do not use VIM.

The presentations mentioned above are (some of) the presentations I have already seen; the following are on my todo-list. Please let me know if any of them is super-great, or a total waste of time.

Doing it with javascript

I have been working in the wondrous world of JavaScript for the last couple of (work-)days, and I have to say that it is kind of fun to work with. It turns out that the functionality I want for the GUI is mostly captured in libraries, which makes it a lot easier to implement everything.

People that are interested in the current status of the GUI can always take a look here. Currently, the best way to view the page is through Firefox. I still have to do some debugging for other browsers, in particular Internet Explorer. I managed to get the designs the same across browsers by using the IE4linux project, but unfortunately the JavaScript-error-pop-up does not seem to work on my box. I am afraid I will have to use a real Windows-installation for debugging this.

So what is the JavaScript supposed to provide in the interface? It enables the graphical view and the merged view. Without JavaScript the interface can still be used, but only through the text view. After the graphical view is enabled, the engine of Scriptaculous makes it possible to drag the keyboard to any position on the screen. I can recommend this library to anyone who wants to add nice animations to their website; it is easy to use and seems to work quite well. Scriptaculous builds upon the Prototype library, which extends the objects of JavaScript with several handy properties. I have used some elements of Prototype in my own code as well, it is just too handy to ignore.

When the effects of the keyboard were finished, I started to search for a way to display mathematics and logic without too much effort. During my search I stumbled upon a library called jsMath which is capable of turning LaTeX-code into nice HTML. Even though the library is rather big, the fonts alone take up about 5MB, it is pretty easy to use. You can simply pass some LaTeX-code to a function which transforms it into HTML that can be shown in a normal website. The library seems to support most of the mathematical notation of LaTeX, so I can at least use it for fractions, equations and logic.

The usage of jsMath is easy and attractive, but forcing a student to enter LaTeX-code is not my idea of a friendly user-interface. A normal Dutch student can understand the expression 2 / (3 * 5), but when it is written as \frac{2}{3 \times 5} it becomes harder to understand. So in order to allow a student to enter the expression in a normal notation, and view it in a nice graphical notation, I wanted to transform the first notation into the second one.

A (very naive) first approach involved splitting the string on the operators and putting the parts back together in the right notation. This works for very simple expressions, but as soon as you want priorities and parentheses it breaks. So the second approach needed to involve some kind of parsing.

It probably does not come as a surprise that finding a parser-generator for JavaScript is quite hard. There are some people that want one, but nobody who has actually written one. The only thing I found was an example of an expression evaluator in JavaScript. This project has the elements that I want, but it has too many operators and fancy bits to be of immediate use in my project. Furthermore, it evaluates the expression, while I want it to print LaTeX.

Even though I could not use the expression evaluator in the project, it served as a pretty good reference for the implementation of a more generic tokenizer and parser.
This generic code makes it easier to instantiate a parser for a specific domain: you only have to give information about operators and literals. Just like the expression evaluator it tokenizes the input string, after which it uses the shunting-yard algorithm to transform the expression into an AST.
In this case the AST consists of objects holding other objects, which are clones of main objects. The objects need to be cloned to prevent infinite recursion from occurring when the AST is printed (yes, something I found out the hard way).
I haven't had time to turn it into a library with a website, documentation and comments in the code; this is probably something for the future. However, I did have time to develop some tests for the code. Long live jsUnit!
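To give an impression of the pipeline, here is a small sketch in PHP (the real code is JavaScript, and the tiny operator set and all names here are simplifications of my own):

// tokenizer: numbers, the operators + * / and parentheses
function tokenize($s) {
    preg_match_all('/\d+|[+*\/()]/', $s, $m);
    return $m[0];
}

// pop one operator and build an AST node from the two topmost operands
function reduceTop(&$out, &$ops) {
    $op = array_pop($ops);
    $r  = array_pop($out);
    $l  = array_pop($out);
    $out[] = array($op, $l, $r);
}

// shunting yard: turn the token list into an AST of nested arrays
function toAst($tokens) {
    $prec = array('+' => 1, '*' => 2, '/' => 2);
    $out = array(); $ops = array();
    foreach ($tokens as $t) {
        if (ctype_digit($t)) {
            $out[] = $t;
        } elseif ($t == '(') {
            $ops[] = $t;
        } elseif ($t == ')') {
            while (end($ops) != '(') reduceTop($out, $ops);
            array_pop($ops);                  // drop the '('
        } else {
            while ($ops && end($ops) != '(' && $prec[end($ops)] >= $prec[$t])
                reduceTop($out, $ops);        // '>=' makes operators left-associative
            $ops[] = $t;
        }
    }
    while ($ops) reduceTop($out, $ops);
    return $out[0];
}

// pretty-print the AST as LaTeX
function toLatex($ast) {
    if (!is_array($ast)) return $ast;
    list($op, $l, $r) = $ast;
    if ($op == '/') return '\frac{' . toLatex($l) . '}{' . toLatex($r) . '}';
    return toLatex($l) . ($op == '*' ? ' \times ' : ' + ') . toLatex($r);
}

echo toLatex(toAst(tokenize('2 / (3 * 5)')));  // \frac{2}{3 \times 5}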

Other functionality that is currently available is the buttons on the keyboard. They work by adding text-snippets to the input field, so in theory you can enter an expression using only the virtual keyboard. It took some time to figure out how to work with the caret position, but after combining several JavaScript-snippets it seems to work well.

After reading all this you might wonder whether the GUI is done. The answer is no, there are still many things to be done! First of all, the functionality needs to be debugged for IE and Safari. Second, when the text-view is hidden the keyboard does not function well. Third, using AJAX should make the updates go smoother. Lastly, it would be nice to be able to point to a place in the graphical view and have the caret placed at that position in the text-view. The first three todo's are definitely needed before the interface can be used for testing; the last one is a nice thing to have.

However, the real thing needed for testing the framework is .... the framework! It has been interesting to work on the interface with JavaScript and all, but it is not the core of my thesis. Therefore, the GUI is set aside. On to coding the framework!

The third phase

At the beginning of this week I wrote down a description of the third phase of feedback generation. This phase has access to the previous term (PT), the current term (CT) and a set of rewrite rules. The rules describe valid actions in the domain, for example the addition of two fractions:
A/B  + C/B -> (A+C)/B.
In the case that the PT is 1/3 + 1/3, the CT obtained after application is 2/3.

So if this is the input, what is the output? You guessed (or read) it: a feedback-message! This feedback-message is supposed to contain more information than 'this is incorrect', but it can only use the CT, the PT and the allowed rules. Furthermore, the algorithm is assumed to only start when a mistake has been made.

A first try

When we know that a mistake has been made, we also know that there is no path of rule-applications which leads from the PT to the CT; if there were such a path, the step would have been correct. In order to get a path between the two terms we introduce a magic rule R which exactly encodes the rewrite between the PT and the CT. In the case that the PT is 1/3 + 1/5 and the CT is 3/15, we have the following R, in which A != B != C != D:
A/B + A/D -> B/C
Now let's assume that we have the following (conditional) rewrite-rules:
(1) A1/B1 + C1/B1 -> (A1 + C1)/B1  
(2) A2/B2 + C2/D2 -> (A2*D2)/(B2*D2) + (C2*B2)/(D2*B2)
(3) A3/B3 + C3/D3 -> ((A3*N) + C3)/D3 where N = D3/B3
Given this set, our task becomes to identify the rewrite-rule which the student wanted to apply, but in which he made a mistake. This can be translated into finding the allowed rule that is most similar to R. We can use some form of tree-edit distance for this problem, since the rules actually represent a structured action.

There exist many different tree-edit distance algorithms; for this problem we can use a rather simple one. When we encounter two nodes, internal or leaf, which cannot be matched, we replace the one with the other. This operation costs the size of the deleted tree plus the size of the inserted tree; for two single nodes that is 2. When we encounter a leaf node which needs to be matched against an internal node, we check whether the leaf is a free variable. A variable is free if there is no previous match and the variable is not subject to any restrictions. When the variable is free we simply match it against the sub-tree; otherwise we replace the variable with the sub-tree, which costs 1 + size(inserted-tree).
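As a sketch, this is the cost function as I read the description above; terms are nested PHP arrays like array('+', 'A', 'B'), rule-variables are strings starting with an uppercase letter, and encoding a whole rule as array('->', $lhs, $rhs) makes the matches made in the LHS constrain the RHS:

// size of a term: 1 for a leaf, 1 plus the sizes of the children otherwise
function sizeOf($t) {
    if (!is_array($t)) return 1;
    $n = 1;
    for ($i = 1; $i < count($t); $i++) $n += sizeOf($t[$i]);
    return $n;
}

// rule-variables are strings that start with an uppercase letter
function isVar($t) { return is_string($t) && preg_match('/^[A-Z]/', $t); }

// $bind collects the matches of rule-variables against sub-terms
function distance($rule, $r, &$bind) {
    if (isVar($rule)) {
        if (!isset($bind[$rule])) {          // free variable: match for free
            $bind[$rule] = $r;
            return 0;
        }
        if ($bind[$rule] === $r) return 0;   // consistent with earlier match
        return 1 + sizeOf($r);               // bound variable: replace it
    }
    if (!is_array($rule) && !is_array($r)) {
        return $rule === $r ? 0 : 2;         // equal leaves match, others cost 2
    }
    if (!is_array($rule) || !is_array($r)
        || $rule[0] !== $r[0] || count($rule) !== count($r)) {
        return sizeOf($rule) + sizeOf($r);   // delete one tree, insert the other
    }
    $d = 0;                                  // same node: compare the children
    for ($i = 1; $i < count($rule); $i++) {
        $d += distance($rule[$i], $r[$i], $bind);
    }
    return $d;
}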

To illustrate the distance we calculate the difference between (1) and R. Within the LHS, the two occurrences of B1 cannot match both B and D, so one of them has to be replaced, which costs us 2. The rest of the variables can be matched against each other (so A1 = A, and so on). Within the RHS we need to replace A1+C1 with B, which costs 3 + 1 = 4. This replacement is needed because B is already matched against B1 in the LHS. The last step is to replace B1 with C in the RHS, which costs 2; this has to be done because we already matched B1 to B in the LHS, and we know that B != C. This results in a total distance between R and (1) of 8. If we perform the same algorithm on the other rules we end up with distance(R,(2)) = 16 and distance(R,(3)) = 8.
Even though (1) and (3) have the same score, we end up with rule (3) as our guess. This is because rule (1) cannot be applied to the PT, so it is less likely that this was the intention of the student.

Unfortunately, we are not done yet, because the student could have taken a correct step before making a mistake. To model this we calculate a new set of PT's by applying the rules where possible. This results in two new PT's: PT1' = (1*5)/(3*5) + (1*3)/(5*3) and PT2' = ((1*5)+1)/5. Calculating the rules R1 and R2 and all the distances gives us six new distances. None of these distances is smaller than the distance of 8 we already have (the proof of this claim is left as an exercise for the reader), so we stop the recursion and return a feedback-message containing rule (3) as a guess.

Although I knew we needed to test this algorithm more thoroughly, it seemed correct to me. However, the assumption that the recursion can be stopped when there is no distance <= the lowest distance found so far cannot be made out of the blue. This has to do with confluence, a property that is not guaranteed for the set of rewrite-rules. Sigh, just when I thought that I had a fitting key for the lock, I could not turn it to unlock the door.

Getting the facts

The concept of confluence came up Thursday, and I have been trying to get around it until today. The first thing I did was sum up some properties about the scope of the problem:
  • The terms are rather small (100 nodes or less)
  • There are few steps between the start and the answer (15 or less)
  • All rules are applied with a reason
The first two assumptions follow from the fact that teachers want students to practice the same routine in many different situations. The last one comes from the fact that students know, for example, that the denominator and the numerator of a fraction may be multiplied by the same number, but they do not do this at random. Such rules always have a context in which they are used. Therefore we can assume that the allowed rules always rewrite a term towards an answer.

With these assumptions in mind I noticed that the domain of fractions has only a few allowed rules. The same holds for the domain of equations and the domain of rewriting logic formulas to CNF; there are only 6 allowed rules for this last domain! Even if we combine rewrite-rules, by unifying the RHS of a rule with the LHS of another rule, we only get 9 rules for the domain of logic. A computer can easily handle these kinds of numbers within a few milliseconds.

When we introduce new rules by unification, this can be viewed as applying the rules after each other on the same (part of the) term. For example, if we unify the rules (2) and (1) from above we get:
1.2 A2/B2 + C2/D2 -> (A1+C1)/B1
    where B1 = B2*D2, A1 = A2*D2
        , C1 = C2*B2, B1 = D2*B2
If we perform the same algorithm as above, this rule surfaces as the result with a distance of only 4. The only right thing to do now is to return both rules (1) and (2) as the result. We cannot deduce which of the two rules went wrong, but we know that the error is somewhere along this path.

If we compare this with the first result of the algorithm we see that it is completely different. However, if we look at the example we see that it is indeed more likely that the student wanted to apply rules (1) and (2), a guess which matches my intuition better because of the matching denominators in the answer. Furthermore, this adapted version of the algorithm also works on the other examples that I tried, even the ones for the domain of rewriting logic!

The solution?

Now that the improved algorithm seems to work, it would be nice if we could prove the claim 'we can stop when the distance becomes greater'. We can view the path between the PT and the CT as a sequence of allowed rule-applications in which there is an error. This error can be anywhere on the path. So if i_n is an allowed rule, we have something like i_n . ... . i_m . error . i_k . ... . i_j. The recursion in the algorithm strips off the first set of rules, so we end up with error . i_k . ... . i_j.
Because we assume that all rules rewrite the term towards an answer, we argue that stripping off allowed rules results in a smaller distance. When we 'jump over' the error the distance becomes bigger, because we then get strange R's which do not look like an allowed rewrite rule at all; otherwise we could have stripped off another rule.

When we combine this reasoning with the fact that we only have a small sequence of steps, I believe that this algorithm will at least give better feedback in many practical situations. Whenever it is not possible to make a good guess we can always fall back to a message like: 'We do not know what you did, can you try to do it in smaller steps?'. I don't see a problem here because the tools are focused on practicing a task in small steps. Now let's see what my supervisor thinks of it.

Thinking of Merging

I have been dealing with some challenges over the last couple of days: for example my practical assignment for DBA, to be implemented in MIL, and styling the GUI for my thesis, which is coming along just fine I guess. Both of these tasks are easy for people who invented them or work with them every day, but I sometimes find it hard to wrap my mind around the problem or to keep track of every detail. Luckily for me there exists a topic I can work on without repeatedly searching on Google: analyzing PHP!

Speaking of analyzing PHP, Martin has written a blog-post about the generation of rules for operator-precedence in PHP. He mentions some interesting ideas for work on grammar engineering, so if you are looking for a (thesis-)project you should definitely check it out. Otherwise it is just an interesting post to read.

As for my work on operators in PHP, I have finished the rest of the operators in PHP-Sat. Furthermore, the implementation of the constant-propagation for operators has been revised within PHP-Front. This was done because I am thinking about merging the constant-propagation and the safety-type analysis into one big analysis. I know, it sounds like premature optimization (a.k.a. the root of all evil), but I can explain why it is necessary.

Consider the following piece of code:
  $foo = array(1,2);        // values at indexes 0 and 1
  $foo[] = $_GET['bar'];    // appended at internal index 2
  echo $foo[2];             // echoes the user-supplied value
When we consider the constant-propagation, we first assign the values 1 and 2 to the first two indexes of the array. The value of $_GET['bar'] is then assigned to the third index of the array, which is the parameter to echo in the last statement. We know that the value is assigned to the third index because PHP-Front keeps track of the internal index-count of arrays.
Now let's look at the safety-type analysis. We first assign the safety-type IntegerType to the first two indexes of the array. The safety-type of $_GET['bar'] is then assigned to the third index of the array, which is the parameter to echo in the last statement. We know that the safety-type is assigned to the third index because PHP-Sat keeps track of the internal index-count of arrays.

You might have noticed that both paragraphs are almost identical, except for the kind of value that is assigned. Thinking about this case, about cases for function calls and about cases for objects, it turns out that performing the safety-analysis involves a lot of bookkeeping. This bookkeeping, for example the internal index-count, is not specific to the propagated values; it encodes the internal semantics of PHP. Therefore, in order to provide a good safety-type analysis, we have to embed this bookkeeping.

In order to avoid code duplication, which might be worse than premature optimization, I believe we must merge the two analyses together. By separating the assignment of values from the bookkeeping we can visit a node, perform the bookkeeping and then perform as many analyses on the node as we want. The only thing needed is a list of strategies, each of which encodes a single analysis on the node.
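A rough sketch of what I have in mind, in PHP with closures instead of Stratego strategies, for the array-append example above; all names are invented, and the safety decision (is_int means IntegerType, everything else TaintedType) is a gross simplification:

// one shared piece of bookkeeping: the internal index-count per array
$indexCount = array();

// every analysis only decides which value to propagate for this node
$analyses = array(
    'constants' => function ($value) { return $value; },
    'safety'    => function ($value) {
        return is_int($value) ? 'IntegerType' : 'TaintedType';
    },
);

$stores = array('constants' => array(), 'safety' => array());

// visiting '$var[] = $value': bookkeeping once, then run every analysis
function visitAppend($var, $value, $analyses, &$stores, &$indexCount) {
    if (!isset($indexCount[$var])) $indexCount[$var] = 0;
    $index = $indexCount[$var]++;          // the shared index bookkeeping

    foreach ($analyses as $name => $propagate) {
        $stores[$name][$var][$index] = $propagate($value);
    }
}

visitAppend('foo', 1, $analyses, $stores, $indexCount);
visitAppend('foo', 2, $analyses, $stores, $indexCount);
visitAppend('foo', '<script>', $analyses, $stores, $indexCount); // stands in for $_GET['bar']

The index bookkeeping happens exactly once per node; each analysis only decides which kind of value ends up in its own store.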

The idea might sound a bit vague, but while I was working on the operators I already saw some duplication creeping in. Some of the strategies for both analyses only differ in the strategies used to fetch or add some value; a perfect chance to generalize a bit, I would say.

The operator type

My first intention was to name this entry after one of those quotes about assumptions, because in the past week I found out (again) that they are totally correct.

Last Sunday I started to work on typing the operators of PHP. My first attempt was really basic: simply follow the documentation and everything will be alright. So I started with the arithmetic operators '+', '-' and '*'. They all act the same on the type level, in the sense that the result is always an integer unless there is a float involved. The next operator in line was division ('/'), which always returns a float (according to the documentation). I finished off with the modulus operator, whose behavior turns out to be ...... not documented.

Since I want PHP-Sat to follow the semantics of PHP, I needed to know the exact semantics of this operator in PHP. Naturally, this is not hard to figure out: we simply run PHP on some test-data and use the function gettype. I wrote a little script with some lines like:
...
echo "Type of 5/2 = ", gettype(5/2), "<br />";
...
which is not only tedious, but also a complete violation of the DRY principle.

In order to comply with DRY I turned to the eval function of PHP. This function evaluates a string as PHP code and can actually return the result of its computation! I used this function within a loop to dynamically calculate the result of applying an operator to different types. You can find the complete script here, but I will show the critical part:
foreach ($types2 as $key2 => $val2) {
    // build a tiny PHP program that applies the operator to both values
    $code   = 'return ' . $val1 . ' ' . $op . ' ' . $val2 . ';';
    $result = eval($code);

    echo gettype($result) . ' (' . showval($result) . ') ';
}

The foreach-loop shown resides within another foreach-loop, which loops through a copy of the array $types2. This gives us access to two different types and the current operator. These are combined into a string which is executed; we then extract the type and the result of the execution and display them.

Basically, you simply pass an array with types and an operator to the function and it will output the type and result of all possible combinations. I put the output of this script here; you can find the types and values that I used before the actual results. The first few sections contain most of the unary operators; the binary operators are organized in tables. Within the tables, each row contains the first value given to the operator, each column the second value. I apologize for the ugly formatting, just trying to be a good programmer.

It was now a trivial task to read the type(s) of the modulus operator from the table and be amazed. The result type is almost always an integer, but it can also be a boolean! This comes from the fact that whenever the RHS is a '0'-value the modulus cannot be calculated. Where other languages would simply halt execution, PHP returns a 'false'-value and issues a warning. This is also the case for division, and it is now handled correctly in PHP-Sat.
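A quick way to see this behavior on the PHP 5 releases of that time (current PHP versions throw a DivisionByZeroError instead):

var_dump(5 % 2);   // int(1)
var_dump(5 % 0);   // Warning: Division by zero ... bool(false)
var_dump(5 / 0);   // Warning: Division by zero ... bool(false)
var_dump(5 / 2);   // float(2.5)
var_dump(4 / 2);   // int(2): division can return an integer after all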

You might wonder why I have chosen to test all of the operators instead of just the modulus. The first reason is that I simply could not resist the temptation; it was just too easy to add all the operators. The second reason is that the first run of the script also involved testing the division operator. According to the documentation it always returns a float, but the table tells us that it returns either a float, a boolean or an integer! I might be able to see why the boolean return-value is not documented, it is almost certainly a mistake to divide by zero, but the documentation specifically states that division never returns an integer! Luckily (yes, I believe it is a good thing), this problem has already been reported by someone else and I hope it will be fixed before the year ends.

Back to PHP-Sat: the todo-list regarding operators has been heavily shortened. The only ones left are the assignment- and string-operators. The first group involves some dynamic rules which I want to implement when I am in a more awake state of mind. For the second group I still have to figure out which safety-type is to be assigned when the safety-types of both sides differ. I currently believe it should be the lowest safety-type, but please feel free to provide me with a counter-example showing that this belief is wrong.
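To illustrate the 'lowest safety-type wins' idea, a tiny sketch with invented levels (not PHP-Sat's actual lattice):

// invented safety-levels: a lower number means less trusted
$levels = array('TaintedType' => 0, 'EscapedType' => 1, 'SafeType' => 2);

// combining two safety-types, e.g. for $a . $b: the lowest level wins
function combineSafety($a, $b, $levels) {
    return $levels[$a] <= $levels[$b] ? $a : $b;
}

echo combineSafety('SafeType', 'TaintedType', $levels);  // TaintedType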

Interfacing Feedback

My framework for feedback generation does not need a nice graphical user-interface. The input is simply a set of strings (current term, previous term and maybe some file names), and the output is also a string (the generated message). Since I am perfectly capable of using a command-line interface, why should I spend time on designing a fancy interface?

For one simple reason: Testing.

In order to know whether the feedback that is generated can actually be of use to students, I am going to test the tools on real-life persons! This will not only give me a chance to get out of the lab and mingle with normal people, it will also provide useful feedback about the usability of the generated feedback. I intend to test the tools on students and teachers in a normal school setting, something which will probably result in nice anecdotes of system crashes, hanging computations and incomprehensible error-messages. Definitely something to look forward to :)

I thought that the easiest way of making sure that the tools can be tested everywhere is by providing a web-interface. No problems with installation or configuration: simply fire up the browser and let the fun begin.

Unfortunately, the choice for a web-application raises some other problems, like different interpretations of web-standards and dealing with response time.
The first issue is a matter of using the standards and applying some hacks here and there. The second issue is (hopefully) solved by making use of AJAX (yes, my thesis is buzzword-compliant). I haven't had the pleasure of programming with this technique yet, so this is a nice opportunity.

A different issue with the design of a graphical interface is more personal: I am not that great at designing user interfaces. Most of the GUIs I have designed share the same design or are nothing more than input-output fields. Not that this makes them bad, but I will probably not bring home any awards either. This is definitely something I want to learn.

Fortunately I love a challenge, so I intend to make a high-quality web-interface for my feedback generation framework. I have read several papers and books and I think that my current design is (theoretically) pretty good. It is rather minimalistic, gives visual indications for different zones and allows the users to adapt it to their personal taste.

Please feel free to comment on the design. After all, feedback is critical for learning ;)

Summer of Code 2007

For those of you who are unaware of the occasion: today is the day that students can start sending in their applications for the Summer of Code 2007!

Another chance for students to work on open-source solutions, get a t-shirt, gain experience in working with larger projects, get a t-shirt, and to get paid for doing this!

Oh yeah, did I mention you also get a cool T-shirt?

Unfortunately, I will not be able to join this year because of another world-wide event, so your chances of getting accepted have just increased :)
This year I will just try to promote the Summer of Code, because I think it is a wonderful chance to do something useful and interesting.

And what better way to spread the word than with some flyers! We (that is, my girlfriend and I) have translated the flyer into Dutch, but it is also available in other languages.
Please mail/print/fax these to everyone who might or should be interested!

Taking precedence

It has finally been done, PHP-Front has perfect operator precedence!

And why can this be written down as a fact? Because of the work of Martin! He has worked on the grammar-engineering tools for generating SDF priorities. When I worked on them I ran into some problems because of decisions that were made during the initial development (read: when the tools had to be finished yesterday). This resulted in a common format for PHP precedence rules that was neither YACC-like nor SDF-like. Martin has rewritten the tools to use most of the SDF representation, and judging by all the solved issues this worked very well. The operator precedence is now encoded into PHP-Front as separate modules for version 4 and version 5, both of them over 6k LOC.

The (technical) details about how these files are generated will probably make a long and interesting blog-post. Since Martin says it is hard to find such a subject, not to mention (again) that it is his work, who am I to take this subject away from him? Maybe he can find some time after preparing the LDTA-presentation for Thursday. I won't be able to make it, but if you are close to Delft you might want to sneak into the Research Colloquium. (If you do, please tape it!)

Besides bringing perfect precedence to PHP-Front, this work has also surfaced a new issue. It was not very hard to figure out: I had just misinterpreted a note in the documentation. So I immediately fixed it by removing a special case which did not have to exist, just to put the icing on the cake.

Downs and ups

Some weeks are filled with good things, others with bad things. For me, the last week was filled with both. My parents celebrated their 30th wedding anniversary, my little brother turned 21 and my last remaining grandmother passed away. You can probably figure out for yourself which ones are good and which one is bad.

So the schedule of this week was a bit out of balance because of these events. However, I still managed to get some work done. The analysis for constant-propagation and the one for the safety-levels now share the same structure. This will make it easier to generalize the analyses into a more generic framework, something which will reduce code duplication.

I have also finished my thesis proposal, which means that I can now start the real graduation process. The official start date will be March 5th, next Monday. This process is supposed to take 22 weeks, in this case until the 6th of August. Since there are some other events in between, it will take some additional weeks. However, this schedule still allows me to graduate before the first of September 2007, the start of the new academic year and the end of my 5th year.

Unfortunately, people keep telling me that the chances of keeping this schedule are pretty slim. Each year, only 1 or 2 students manage to graduate within 5 years, the minimal amount of time for this study. A nice challenge, I would say :)

Oh, for those who are interested, my thesis proposal can be downloaded from this page. Please let me know if you have any questions/comments.

Phases in rewriting

Now that my proposal is really coming along, the exact details of the approach are becoming clearer. I have already written down all the steps of the research and implementation; the only thing I need is a green light from my supervisor. When I get it I will put up a link to the full proposal. For those that cannot wait, I have written a little explanation of the functionality of my thesis.

As explained before, the subject of the thesis is to create a generic framework that can handle the generation of rule-feedback. An essential property of any generic framework is that it can be instantiated for a particular domain. In this case the instantiation will need a parser and a solver for the specific domain. This is not a real surprise, since these objects are always needed when such a tool is constructed. Furthermore, an equality module can be given to the framework. Such a module can be used to check two terms for semantic equality instead of syntactic equality. This is needed because in some cases a teacher would want to allow an answer to be either 3/2 or 6/4, as long as the student calculates it correctly. In other situations the answers may only consist of simplified terms, and hence the equality may only succeed on the term 3/2.
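For the fraction domain such an equality module could look like the following sketch; the representation of fractions as array(numerator, denominator) is just an illustration of my own:

// semantic equality of two fractions: cross-multiply to avoid floats
function semanticallyEqual($a, $b) {
    return $a[0] * $b[1] == $b[0] * $a[1];
}

var_dump(semanticallyEqual(array(3, 2), array(6, 4)));  // bool(true)
var_dump(array(3, 2) == array(6, 4));                   // bool(false): the syntactic check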

After instantiation the framework is able to generate rule-feedback for the particular domain. This generation is done in four phases, each of them requiring more information and resources. Phases that need more input are of course able to generate better feedback. Notice that they 'are able to'; they do not necessarily generate better feedback. The instantiated framework gives you a chance of better feedback, but the default feedback will be on the same level as that of current tools. Thus it is a chance of a gain, never a loss.

Phase zero is able to generate feedback with only the current term and the previous term. These two terms are always available, since a student has to take a step before rule-feedback can be generated. The algorithm of this phase calculates the answer, i.e. the term a student has to supply as the final result, of both terms and compares those. When these answers are equal a correct step has been taken; otherwise the student has done something wrong. The feedback generated in this phase is only 'correct' or 'incorrect', because we have no further information about the intentions of the student.

Within phase one the student also supplies the rule he wanted to apply. The algorithm of this phase will check whether the rule was correctly applied and whether the rule was correct. The latter is easily done by using the algorithm of phase zero: when this delivers an 'incorrect', the rule was wrong. Checking for a correct application is a matter of applying the rule to the previous term and checking the result against the current term. When these are syntactically equal the rule was applied correctly.
The feedback in this phase can let the student know in which area he made a mistake: either in the rule itself or in its application. This will at least be more of a help to a student than a simple 'incorrect'.

The second phase needs more input from the programmer: a set of rewrite-rules defining the allowed actions of the domain. From this set the algorithms of this phase will 'guess' the rule a student wanted to apply. When an error has been made, the feedback can contain hints about this rule and pointers to explanations or examples.

In the third and last phase the programmer can also supply a set of 'buggy'-rules. These rewrite-rules contain patterns of common misconceptions. By finding a path between the current and previous term using the allowed and buggy rules, the algorithm of this phase can identify a mistake with high precision. The algorithm for this is well defined by Hennecke; a short English overview can be found here. Feedback from this phase can involve a specific explanation of the mistake, maybe accompanied by an example of a correct rule.

The explanation of the second phase mentions that the algorithm 'guesses' the rule. This 'guessing' is not completely worked out yet; it is actually a large part of the thesis. Algorithms for the other phases are already defined and only need to be implemented in a generic fashion. The result will be a practical implementation combined with formal research.
Everything a master's thesis needs, right?