Reg Programming Compo: 22 countries, 137 entries and... wow – loads of Python

We have a winner, ladies and gents

Roundup Our latest programming competition was our most popular yet in terms of the number of entries – 137 in all once we'd de-duplicated them.

The judges were intrigued to see just how popular Python is these days: nearly half (a smidge over 48 per cent) of the entries used this as the language of choice, with Java the next most popular at a “mere” 32 per cent. Even more surprising was that almost 12 per cent of entries were in PHP – looks like there are plenty of web developers out there who decided to use their favourite language for our challenge despite it being devoid of any requirement for funky webness.

Entries to the competition, which was sponsored by IBM, came in from 22 countries – a fantastic variety. The UK topped the list with 79 entries (58 per cent of the total), followed by 15 from the USA, six each from Ireland and the Netherlands, and five each from Australia, Canada and Denmark.

Our favourite entry had to be the one in Fortran 90. The author (Simon: you know who you are ...) noted in the comments: “Not eligible of course because of the weird language constraints but if you're forward thinking and open minded like me you'll at least be interested in this entry”. We're not sure we'd agree that the language constraints are all that “weird” (can't say we see a lot of code in our day jobs where the source filename ends in “.f90”) but we do now know that we can run Fortran 90 programs on a Mac, and it was a fun entry to deal with.

We also discovered an interesting fact about Python: throwing stuff into hash-based collections such as sets and dicts without forcing a sort sequence can end up with non-deterministic ordering of the results (string hashing has been randomised between interpreter runs since Python 3.3). Our test data contained some examples where two competitors ended up with the same score. In no fewer than 12 of the solutions, running the program multiple times sometimes produced A before B in the output file but sometimes plonked B before A. Seems it's a Python thing, as we didn't see it – at least with our test data on our platform – with other languages. Incidentally, where there were two competitors with the same score, some of the solutions only put one of them in the output file – which smells like a dict keyed on the score alone, where a second competitor silently overwrites the first.
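
For the curious, here's a minimal sketch of the trap in action (names and scores invented for illustration). Collect tied competitors in a set and the output order can change from run to run; an explicit sort key nails it down:

    # Sketch of the tie-ordering trap – names and scores are invented.
    # Since Python 3.3, string hashing is randomised per interpreter run,
    # so iterating a set (or a dict, before 3.7) of names can give a
    # different order each time the program is executed.
    scores = [("ADAMS", 7500), ("BAKER", 7500), ("CLARK", 7103)]

    tied = {name for name, score in scores if score == 7500}
    print(list(tied))  # order may differ between runs

    # Deterministic fix: sort on score (descending), then name as tiebreak.
    ranked = sorted(scores, key=lambda entry: (-entry[1], entry[0]))
    print(ranked)      # same order every run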

There were refreshingly few schoolboy/schoolgirl errors. We stipulated that the solution should be presented as a single code file, and only a handful of people didn't do this. Those of you who commented that this isn't best practice, programming-wise: yes, we agree, but in this case it was a compromise to make a potentially large number of solutions to a relatively simple algorithm uniform and easy to run via our scripts. And only two entries managed to fail by misspelling “Decathlon” as “Decathalon” in the name of their program and/or the name of the input/output files.

Some of the solutions – 16, or nearly 12 per cent – managed to calculate scores incorrectly. One was a bit weird because it got all but one right, and that single incorrect one was among the less complicated examples. In a handful of cases it looked like scores overflowed the upper limit of a fixed-width integer type – some of the higher scores were incorrect where the lower ones were right. Testing solutions prior to submission against a robust set of test data is always a good idea, and in this case there's no excuse for not doing so because there's shedloads of it out there on the internet.
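
For reference, the standard IAAF decathlon formula is simple enough to sanity-check by hand. Here's a sketch using the published constants for the 100m; the helper function name is ours, not anything from the competition spec:

    import math

    # IAAF decathlon scoring: track events score floor(A * (B - P) ** C),
    # field events floor(A * (P - B) ** C), where P is the performance.
    # Python's ints won't overflow, but entries in languages with
    # fixed-width integer types need a type wide enough for the totals.
    def track_points(a: float, b: float, c: float, performance: float) -> int:
        return math.floor(a * (b - performance) ** c)

    # Published 100m constants: A=25.4347, B=18, C=1.81. Daley Thompson's
    # 1984 Olympic 100m time of 10.44s scores 989 points on current tables.
    print(track_points(25.4347, 18.0, 1.81, 10.44))  # -> 989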

So, for example, when we were writing our sample solutions some of the test cases we used were actually Daley Thompson's real scores from the 1980 and 1984 Olympics, as provided by Mr. Google (noting that the scoring system did change in 1984). And if you're too young to know who Daley Thompson is, ask someone over 40 to bore you with tales of what a legend he was back then.

Output formatting was a problem in some cases, too. Most common was a failure to right-justify some of the higher scores correctly – competitors who assumed that scores would never run to more than four or five digits found that some of our more extravagant test cases pushed their output past the 25th column. And in a couple of solutions the names in the output file weren't in capital letters, even though both the instructions and the sample output made clear that they should be.
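
Getting this right takes a couple of lines. A hypothetical sketch, with field widths chosen for illustration rather than lifted from the competition spec:

    # Hypothetical formatting sketch: upper-case the name and right-justify
    # the score so the line always ends at column 25, however many digits
    # the score runs to. Widths are illustrative, not the actual spec.
    def format_line(name: str, score: int) -> str:
        return f"{name.upper():<15}{score:>10}"  # 15 + 10 = 25 columns

    print(format_line("Thompson", 8798))  # line is exactly 25 characters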

Finally, some solutions fell down by not handling junk in the input file. So for instance we stated in the instructions: “You should ... be prepared for the possibility that there may be extra characters and/or lines after the marker that denotes the end of the input file”. Translated from judge-speak, this means you can be absolutely certain that our test data will have exactly this kind of garbage thrown in to test how you handle it.
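
In Python terms the defence is discipline rather than cleverness – stop reading at the marker and ignore the rest. A sketch (the marker string "END" is our assumption; substitute whatever the spec actually used):

    # Defensive input reading: stop at the end-of-input marker and ignore
    # anything after it. "END" is our stand-in for the real marker.
    def read_records(path: str) -> list[str]:
        records = []
        with open(path) as handle:
            for line in handle:
                stripped = line.strip()
                if stripped == "END":  # marker reached...
                    break              # ...trailing garbage is ignored
                if stripped:
                    records.append(stripped)
        return records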

On to the results, then. Obviously we immediately discarded the solutions that either didn't run error-free to completion or simply gave incorrect answers when run against our test data. We then looked at the remaining solutions – the ones that had processed the data correctly – and decided which we liked most with regard to how they were written: the clarity of the code, the efficiency of the algorithm (not the actual execution time – that would be an unrealistic comparison) and so on.

And after some deliberation... we've decided that the winner this time round is Brett Fernandes from Gauteng in South Africa. Well done, Brett.
