Now that the code war is over, here are my thoughts on how we did.
First off, I think the problem was perfect. There were a number of facets to it, so teams had to weigh what mattered most and then implement those parts. And while the problem was simple enough to be understandable, there were enough variables that taking them all appropriately into account was a real challenge.
My goal in this is two-fold. First, to create a challenge that pushes the students to the limits of what they can do, because there is tremendous joy in accomplishing an extremely challenging task. Second, to create a challenge that is won not by raw programming skill, but by the mix of abilities that jobs out in the real world require – understanding trade-offs, determining what must be done vs. what can be ignored, working well in a team, etc. And boy did this problem test that.
What we got Right
I also think some of the feedback we got points to things we did well. The biggest complaint was that the problem wasn’t clearly defined. Yep. Part of the challenge was that the teams had to figure out what they should do, unlike pretty much every problem assigned in school, where what is required is laid out very specifically. This was one of the key attributes that made this problem so good.
Another complaint was that the provided client code was a bit messy and poorly documented. Now this was not done on purpose; it is simply the state of most code out in the commercial world (including ours). But again, I think it made for a good problem because it mirrors what the students will face in the real world, and so it became a legitimate part of the challenge.
The third common complaint was not enough time. People wanted another 4 – 8 hours. And that is perfect. If there’s enough time, no one has to decide what is most important. On the flip side, if the amount of time is way too little (not an issue here), then you can’t do anything. But at the level we set – I think it was just right.
What we got Wrong
First off, we had a bug where the Company.Passengers property was not updated. We fixed it in 5 minutes and got the fix out to everyone before most teams hit it, but still, this should not have gotten through our testing. (We ran the contest here as a final play test, and it turned out everyone here walked the Passenger.Lobby properties instead of Company.Passengers.) This is 100% my fault, as I wrote the initial client and it was a dumb bug.
Second, we had a bug in the server, but only on some versions of Windows, due to the order in which events fired. We fixed this one in 2 minutes, and it only affected the few (2?) people with that Windows configuration.
The third thing we got wrong was running the finals. When we ran them Saturday night, the games hit connection issues, and we were all so tired that no one was watching them. So we ran them and pushed the results up. When the quarter-finals went up we got emails from some people, so we pulled the results down, dove into the problems, and fixed them. But it meant a delay in seeing the results, and that sucked for everyone who had to wait. I’m very sorry about this.
What students tripped over, but shouldn’t have
In the emails we sent out, we said to turn off your firewall. In the documentation, we said to turn off the firewall. In the FAQ, the first item was to turn off the firewall. So the main question we got? “Why can’t my client connect to the server?” I’m as guilty as anyone of not reading the instructions, but this was an unnecessary delay.
I had our full support staff here to help, and when people had problems getting the system running, we immediately did a screen share (the most common issue was configuring IntelliJ to see the JDK for the Java client). We were happy to do it, and we did it quickly. And again, in the emails, web pages, and documentation, we told people to call or email us. Yet many spent an hour or more getting their systems configured instead of asking for help.
The server running under Windows tripped up students who only use Linux. (And if the server were Linux, we’d have the opposite complaint from students who live in Windows.) Yes, we’re all most comfortable with the O/S we mostly work on. But a computer science student needs to be competent on Windows for about 90% of the jobs out there (you may develop on Linux, but you’ll probably use Word and Outlook regularly). And we made it trivial to run.
It was very interesting watching the clients. Most of the finalists did well on the small maps, but some did poorly on the complex maps. An A.I. that would win regularly on the small maps would then come in last on the large maps. Here’s what I saw while watching (and I could be off, as this was just observation, not looking at the code or messages):
- Teams that did not rewrite the A* pathfinder pretty much bombed on the 2 maps where the provided A* gave really poor paths. This appeared to hurt not just how long it took the cars to make their trips (because they took the long route) but probably also their decisions about which passengers to take.
- Almost every A.I., when trying to drop off a passenger who had an enemy at the destination company, would circle and keep trying, basically waiting for another car to come pick the enemy up. On the small maps this generally resolved quickly. But on the large maps, especially the spiral map, it could take a long time.
- Some A.I.s clearly focused on moving the high-point passengers, as they almost always carried all of them. That worked well most of the time, but sometimes it sent them on very long trips. I saw games where Shirley was being transported on a very long trip and the game ended just before she was delivered.
- A lot of the A.I.s appeared to use straight-line calculations to decide whom to carry. On the spiral map this was a very bad approach, and you would see it in cars that travelled well over half the map to deliver a passenger who was close as the crow flies, but not as the cars drove.
- A lot of the A.I.s did not appear to estimate whether they would get to a pick-up first. Many times we would see 4 cars all headed to the same company. The first got Shirley and the remaining 3 got nothing. Yet none of them, even after Shirley was picked up, changed direction until they arrived at the now-empty bus stop.
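The last two mistakes are easy to demonstrate. Here’s a minimal sketch – this is not the contest’s code; the grid format, the BFS standing in for the game’s A*, and the passenger names are all invented for illustration – showing how ranking pick-ups by crow-flies distance picks exactly the wrong target on a map with walls, and how a simple road-distance comparison can tell a car whether it will even win the race to a bus stop:

```python
import math
from collections import deque

def path_len(grid, start, goal):
    """Shortest road distance on a 4-connected grid ('#' = wall).

    Plain BFS stands in for the game's pathfinder; assumes goal is reachable."""
    rows, cols = len(grid), len(grid[0])
    seen, q = {start}, deque([(start, 0)])
    while q:
        (r, c), d = q.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return None  # unreachable

def crow_flies(a, b):
    """Straight-line distance - cheap, but blind to walls."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def wins_race(grid, me, rivals, stop):
    """Re-target check: am I strictly closest, by road, to this bus stop?"""
    mine = path_len(grid, me, stop)
    return all(path_len(grid, r, stop) > mine for r in rivals)

# A wall with one opening at the right end: passenger A is 2 squares away
# as the crow flies but 20 squares away by road; B is 6 by either measure.
grid = [
    "..........",
    "#########.",
    "..........",
]
car = (2, 0)
passengers = {"A": (0, 0), "B": (2, 6)}

by_air = min(passengers, key=lambda p: crow_flies(car, passengers[p]))
by_road = min(passengers, key=lambda p: path_len(grid, car, passengers[p]))
print(by_air, by_road)  # A B - the crow-flies pick is the wrong one
print(wins_race(grid, car, [(0, 9)], passengers["B"]))  # False - a rival is closer
```

Even a crude check like `wins_race`, re-run whenever the board changes, would have let three of the four cars in the Shirley race turn away instead of driving all the way to an empty bus stop.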
There were a lot of really good A.I.s written. In the quarter-finals there were a few that didn’t do well, but aside from that it was a very close-run, brutal competition. Many times the cars in places 3 – 8 in a round were all within 2 – 3 points of each other. Because the car that wins a round gets a final 3 – 5 points, we would see 2 or 3 cars fighting for first place, and then the 6th-place car would win the round because it delivered passenger #8, who turned out to be Meg Whitman, worth an additional 4 points.
The luck of the draw had an impact. If the 10 maps included a significant number of the complex maps, then the A.I.s that did not handle those would lose. If the 10 maps were mostly the easy ones, then those same A.I.s would do well. The teams that handled all maps well did well regardless, but luck did affect whether the teams that only did well on the simple maps advanced.
The other big luck of the draw was whether an enemy sat a long time in a lobby where an A.I. wanted to drop off a passenger. We saw some very strong A.I.s do poorly in 1 or 2 games because of this. From what I saw (and again, this is just from watching the games), I think this may have been the biggest issue the best A.I.s didn’t have time to address.
Really good game. Very, very impressive teams. The students who did well in this are going to do very well when they go out into the real world, because they have what it takes to design & deliver a product.