Darkness 2009-11-04 12:00:00 UTC
Twilight 2009-11-05 12:00:00 UTC
Daylight 2009-11-06 12:00:00 UTC
Finish 2009-11-11 12:00:00 UTC

Color Bridge - Winners

The Fall 2009 MATLAB® Programming Contest has come to an end. We hope you enjoyed it as much as we did.

Thanks to all the participants and spectators who have helped make the Color Bridge Contest such a great success. Congratulations to all our winners! We also want to give a special "high-five" to those winners who are already in our MATLAB Contest Hall of Fame, yet made the effort to participate again in this contest.

Alfonso Nieto-Castañón
Grand Prize

This was my fifth time participating in a MATLAB contest (after Ants, Sudoku, Blackbox, and Wiring), and as always I found it to be an incredibly fun learning experience. I was only able to participate in the beginning and end stages of the contest, but even this was enough to come away with a bag full of new tricks, algorithms, and implementation ideas. In this context I was impressed by some of Oliver Woodford's contributions. He not only crafted an excellent albeit quite obscure (should I say optimized?) Bellman-Ford implementation, which deservedly made a huge impact on the entire contest, but also, along the way, brought to our attention and improved Tim Davis' elegant find_components function, and added dmperm and accumarray to the contest lexicon. Not bad at all for a newcomer!
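
For readers curious about those functions, here is a minimal sketch (not the contest code; all variable names are illustrative) of how connected components can be read off a sparse adjacency matrix with dmperm, in the spirit of Tim Davis' find_components, with accumarray tallying component sizes:

    % Connected components of an undirected graph via dmperm.
    % A is an n-by-n sparse symmetric adjacency matrix (assumed given).
    n = size(A, 1);
    [p, q, r] = dmperm(A + speye(n));  % zero-free diagonal => blocks are components
    labels = zeros(n, 1);
    for k = 1:numel(r)-1
        labels(p(r(k):r(k+1)-1)) = k;  % vertices belonging to component k
    end
    sizes = accumarray(labels, 1);     % size of each component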

Regarding the development of the winning entry, I used Nicholas Howe's qcost3 entry as a starting point. It was an extremely time-efficient implementation of some of the best ideas in this contest (including his own duotab2), so I was able to use the time saved to run a set of alternative search algorithms on every board. These additional algorithms were all based on a greedy depth-first search through the solution space, and each used a different heuristic to choose a (generally suboptimal) path to the solution, trying to cover to some extent the variety of scenarios in the test suite. All of the heuristic cost functions used a combination of three basic measures: the true cost associated with a given color change; the distance-to-target measure, representing the minimal number of color changes necessary to reach the target at any step of the search, estimated through a separate standard Bellman-Ford run; and the number of clusters of each color remaining on the board. Overall, the winning entry implemented, in addition to qcost3, five alternative search algorithms, each with up to three possible variations (avoiding one color altogether). As always, I left the contest feeling that I would have loved more time to try different approaches (if only I could have added the ability to change strategy in the middle of a search), and that I cannot wait to see the new problem the MATLAB team comes up with for us in the next contest!
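
As a rough illustration (not Alfonso's actual code; the weights and every name below are hypothetical), one greedy step driven by such a composite heuristic might look like this:

    % Score each candidate color by a weighted mix of the three measures
    % described above, then greedily take the cheapest-looking one.
    function c = pickNextColor(trueCost, distToTarget, clustersLeft, w)
    % trueCost(c)     - actual cost of switching to color c now
    % distToTarget(c) - Bellman-Ford estimate of color changes still needed
    % clustersLeft(c) - clusters of color c remaining on the board
    score = w(1)*trueCost + w(2)*distToTarget + w(3)*clustersLeft;
    [~, c] = min(score);
    end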

Sergey Yurgenson
1000 Node Challenge

Most of the contest's mainstream code was based on code initially developed by Oliver Woodford. Thus, I was not surprised to find that same code, simplified to 995 nodes, leading the 1000 Node Challenge. Studying other submissions, I realized that most attempts aimed to accelerate the code rather than modify the algorithm. The dominant code was very efficient and solved the problem precisely, with one caveat: it solved a slightly different problem, because it did not take into account the expansion of the flooded area. I decided that one way to deal with this was to adjust the values of clusters and thus change the criteria for the best path.

At that moment, I was working on code that modified cluster values by taking into account the values of surrounding clusters. However, I soon realized that even the shortest implementation of that idea would require 25 nodes, and I would not be able to fit it into a 1000-node entry. I set it aside, to be used in the final submission, and started looking for a simpler modification. A modification that adjusted cluster values based on their size showed promising results during tests, and I submitted several versions of it shortly before the deadline. Apparently it was not shortly enough, as the cyclist and Abhisek Ukil still had time to resubmit my code with their modifications. Fortunately for me, they were not able to pick out the winning version, and my best submission stayed on top.
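
A minimal sketch of the size-based idea (the form, sign, and constant are hypothetical, not Sergey's actual formula):

    % Bias each cluster's value by its size so the path search roughly
    % accounts for the expansion of the flooded area.
    alpha = 0.1;                                   % hypothetical tuning constant
    adjustedVal = clusterVal + alpha*clusterSize;  % clusterVal, clusterSize assumed given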

I used my initial idea in the final submission, and it improved the result significantly; however, it was not able to beat the code by Alfonso Nieto-Castañón, who won the grand prize.

Mike Russell
Sunday Push

Bio: I completed a B.S. in Computer Engineering at Clarkson University in 2005, and am nearly done with an M.S. in Electrical Engineering at Syracuse University, with expected completion in Summer 2010. I am a full-time employee at SRC, a not-for-profit research and development company, where I work in the Systems Technology Center as a systems engineer. I primarily use MATLAB to collect and validate data derived from signal processing chains on a variety of systems. I have also performed system modeling tasks that relied on Simulink. I enjoy finding ways to make things run faster within MATLAB.

I was quite surprised that I won the Sunday Push by improving the score by only ~5 points. When you scan the other days' score improvements, there are usually a few in the 20s. My strategy was fairly simple: experiment with the cost matrix used within the solvers. Doug has described the various boards and how they challenged each solver; like his analysis, I chose to focus on a few of the odd boards. I especially liked the board with the 80s as outliers, as it showed some of the widest variance in scores after I made changes.

I started by running a few of the leading solvers through the example test suite and saving each board's scores. I then compared all of these solvers and identified the boards that produced a high score, far from the mean. This let me run quicker iterations on small changes to the solvers, typically producing a lower overall result because I had lowered the scores of these high-scoring boards.
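
A sketch of that comparison, with every helper and variable name hypothetical:

    % Score several solvers on the example test suite and flag boards
    % whose scores sit far above the overall mean.
    scores = zeros(numBoards, numSolvers);          % counts assumed given
    for b = 1:numBoards
        for s = 1:numSolvers
            scores(b, s) = scoreSolver(solvers{s}, boards{b});  % hypothetical helper
        end
    end
    mu = mean(scores(:));
    sd = std(scores(:));
    hardBoards = find(any(scores > mu + 2*sd, 2));  % boards worth targeting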

Simply put, I used the rand function to drive to the top of the Sunday Push. I do not endorse the use of the rand function during the contest, and would like to see it go away in future contests. But because it is available and almost always finds its way into the top solvers, I decided when to insert it into the mix. The entry "I didn't want to do this" was the one that added a random integer to the calculated cost array in Oliver's solver. This resulted in a ~4-point decrease in the overall score. I believe that after this point the rand function took off, with considerable fine-tuning to identify the exact seed that produced the lowest overall score.
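
For illustration only (the entry's exact form isn't reproduced here; the seed and magnitude are hypothetical), perturbing a cost array with random integers might look like:

    % Jitter the cost array with small random integers, in the spirit of
    % the "I didn't want to do this" entry.
    rand('twister', 5489);                    % fix the seed (contest-era syntax)
    k = 3;                                    % hypothetical jitter magnitude
    cost = cost + round(k*rand(size(cost)));  % cost assumed to exist already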

Oliver Woodford
Twilight

I'm currently a Research Engineer at Toshiba Research Europe Ltd., where I develop computer vision algorithms, usually in MATLAB. I first used MATLAB in 2000, for a lab project during my undergraduate degree, but became a habitual user during my PhD, which I started in 2004 and finished in 2009.

This was my first MATLAB contest. I've wanted to take part before, but never had enough time. This time when I checked out the problem it seemed really fun, and I had an idea for an approach I thought would work well:

  1. Make each same-colour island a node in a graph, and compute the cost of each island becoming the target square's colour.
  2. Use the Bellman-Ford algorithm (which I'd used in my work the year before) to compute the (approximate) lowest cost colour sequence to flood each island, taking into account the cost of each flood (see the sketch after this list).
  3. Output the lowest cost sequence for the target island.
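
A minimal sketch of step 2, assuming the island graph has already been built as an edge list (all names are hypothetical; this is not Oliver's optimized implementation):

    % E is an m-by-3 edge list [from to weight] between islands,
    % n the number of islands, s the source island.
    function [d, pred] = bellmanFord(E, n, s)
    d = inf(n, 1);  d(s) = 0;
    pred = zeros(n, 1);
    for pass = 1:n-1                       % at most n-1 relaxation passes
        changed = false;
        for k = 1:size(E, 1)
            u = E(k,1); v = E(k,2); w = E(k,3);
            if d(u) + w < d(v)             % relax edge (u,v)
                d(v) = d(u) + w;
                pred(v) = u;
                changed = true;
            end
        end
        if ~changed, break; end            % converged early
    end
    end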

At that stage I wasn't paying much attention to the various contest deadlines, and by the time I'd implemented a working version it had already become twilight (next time I'll pay more attention). The algorithm was immediately very competitive score-wise, but quite slow. A little time with the MATLAB profiler improved things enough to give me the twilight phase win comfortably, which was great. After that my code immediately became a part of all the leading entries, which was flattering, but because of the complexity of my implementation (which I'd tried to make understandable!) the improvements were fairly superficial. However, despite several algorithmic and code improvements of my own, I wasn't able to win another stage - hybrid algorithms which performed noticeably better on certain boards had taken over.

Nick Howe
Saturday Leap, Early Bird Special and Darkness

In Darkness you never know where you stand compared to everyone else, so I just keep working to improve my own solution as measured on the trial set. Usually I start with some simple heuristics that I don't expect to win, but that may gain the top spot for a while if they are submitted early enough. In this case, one very simple approach (SimpleSolver4) did quite well. By the time the end of Darkness rolled around, my programs were much more complicated but only modestly better in score.

The entry that won the prize used a flood-filling approach but avoided one or two expensive colors to improve the score. It was fast enough to try several possible taboo combinations. Interestingly, all my entries in this stage used a flawed connected components finder I had written, but in some cases when I later fixed the error I found that the scores got worse! Sometimes you get lucky even when you don't deserve it.
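
As a rough illustration of the taboo idea (solver and names hypothetical, not Nick's code):

    % Solve the board repeatedly, each time forbidding one or two
    % colors, and keep the cheapest result.
    tabooSets = {[], 1, 2, [1 2], [1 3]};   % candidate colors to avoid (illustrative)
    bestScore = inf;
    for t = 1:numel(tabooSets)
        [moves, sc] = floodSolve(board, tabooSets{t});  % hypothetical solver
        if sc < bestScore
            bestScore = sc;
            bestMoves = moves;
        end
    end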

By Friday Oliver had won the Twilight prize with a nice solution based on graph theory. But I noticed that the results produced by his entry and those descended from it left some high-payoff squares accessible on the board at the end. My entry that won the Early Bird prize did so by deliberately flooding those squares just before the goal tile was reached. I called the function that did this 'gleaner' after the people who go into a field following the harvest, collecting grains that have fallen to the ground.

For the Saturday Leap, I had noticed a category of boards where Oliver's solver was not working as well as it could have. These were the "high-80" boards that had a sprinkling of high-value tiles scattered throughout. Oliver's code focused on efficiently reaching the goal, but on these boards you could actually get a better score by exploring more rather than less, because you would ultimately collect more of the tiles with high payoff. I developed a split solver that detected the board type and switched to random exploration on the high-80 boards. (Incidentally, although random exploration improved on fast progress to the goal, it turned out ultimately that you could do even better. Alfonso's entry that won the Grand Prize succeeded by explicitly *avoiding* reaching the goal, thus reaching even more high-payoff tiles than the random methods.) Anyway, I submitted several entries with variants of the random solution right after midnight in an attempt to lock up the Saturday Leap right as it started.
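
A toy version of such a split, with the detection threshold and helper names entirely hypothetical:

    % Detect "high-80" boards by their sprinkling of high-value tiles,
    % then pick a strategy accordingly.
    isHigh80 = mean(board(:) >= 80) > 0.02;  % threshold is illustrative
    if isHigh80
        moves = exploreRandomly(board);      % hypothetical exploratory solver
    else
        moves = graphSolve(board);           % hypothetical goal-directed solver
    end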

But the next day, on a whim I decided to try one more that I had not attempted the night before. To my surprise it did much better than any of the previous entries -- and I am lucky it did, because a few hours later Gwendolyn Fisch submitted a very strong competitor that would have beaten any of my other attempts.