“The Last Slice” – a coding challenge adventure

“The Last Slice” was a fun developer competition, organized by Microsoft and involving three challenges (between May and August 2018).
Developers who passed the first two were gifted a specially-crafted Microsoft IoT kit, needed to participate in the final challenge. The first five able to complete it would receive a $10,000 prize.

Participating in competitions like this is something I’ve wanted to do for a long time, but struggled to find the time for in recent years.

With “The Last Slice”, the only time-sensitive challenge (the last one) fell in mid August, on non-working days, and I found the presentation of the challenge particularly appealing (I’m a game developer, after all), so I decided to give it a shot.

Now that it’s over (yeah, and I didn’t win, if you were wondering :P), I thought it would be nice to write something about it for everybody that is “late to the party” but curious about it.
And hey, even with this blog title, I should write something once in a while, right?

So, I will walk through my process of playing and completing all three challenges and provide some feedback on the whole operation.

Challenge 1: TheLastSliceGame

A nice retro-styled game was at the core of the first challenge, and you can get it on GitHub here.

From the problem description: “The Last Slice retro video game is fun to play, but impossible to beat as coded” – the objective of this challenge was to modify the source code so that you could complete the game.

Every level involved completing a series of pizza deliveries on the map, avoiding the obstacles and collecting only the right ingredients for the current delivery.
Of course, even before starting to look at the source code, I was curious about what made the game “impossible to beat”, so I started playing.

It quickly became obvious that getting all the deliveries right might be very difficult, because of the possibility of running out of gas or being unable to pick up only the right ingredients (without stepping on some undesired ones). That didn’t really scream “impossible”, but I started to make the game easier to play by doing two simple edits to the source code:

  • removing the gas consumption
  • altering the “correct ingredients” check so that anything I delivered would be accepted

Also, I started checking the level loading code because I wanted to be sure the game had a reasonable length and no endless procedural level generation.
From the levels.txt file involved and the level loading code, all looked fine, so I went back to playing my “easy” version of the game.

A surprise was waiting for me: after quickly completing the first two levels, with 3 and 5 deliveries, the UI showed me that to complete level 3 I had to deliver 99 pizzas (!).

99 deliveries sound a bit excessive…

What was going on? From skimming levels.txt and the level loading code, I had understood that the “delivery houses” were the lines in the format “DH,mapsegment,rownumber,colnumber” – and there definitely weren’t 99 of them. I wasn’t going to do 99 deliveries manually, so I stopped playing.

I went back to the level loading/running code, this time digging deeper and figuring out the data structures involved: no trace of procedural generation, and the deliveries for level three were definitely 15, not 99.
So, where did that 99 in the UI come from? I checked out the UI code, and found an if statement that showed ’99’ instead of the actual number of deliveries if the current level was > 2. Why would one put that statement there? A subtler trap would have been nicer :), but well, let’s fix it and play again.

The fake UI still definitely wasn’t what could make the game “impossible”, so I was waiting for some other trap, and it showed up – of course – right at the end of the game. The last delivery is what made the game impossible: there are fire obstacles right in front of the house you are supposed to reach.

Fire – exclamation mark – fire – exclamation mark – help me – exclamation mark.

In the spirit of keeping my changes minimal, I didn’t touch the game code any further (by disabling collision with obstacles, for example): I just removed from the levels.txt file the fire obstacles that made the level unfinishable.
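For the curious, that levels.txt edit amounts to trivial text filtering. A minimal sketch (in Python for brevity, while the game itself is C#; the “FIRE” record prefix is my placeholder, since I don’t remember the exact obstacle token the game uses):

```python
# Strip a given obstacle type from a levels.txt-style file, where each
# entry is a comma-separated record like "DH,mapsegment,row,col".
# "FIRE" is a placeholder token, not necessarily the one the game uses.

def remove_obstacles(level_text, obstacle_code="FIRE"):
    kept = [line for line in level_text.splitlines()
            if not line.startswith(obstacle_code + ",")]
    return "\n".join(kept)
```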

After that, I was able to deliver the final pizza and complete the first challenge.

As expected from a “first phase”, the first challenge was quite easy and took me little time. But tougher stuff was around the corner.

As a final note regarding challenge 1, let’s take note of something:
while examining MapManager.cs, you could notice comments like these (and yes, there’s another Metal Gear Solid reference elsewhere in the code):

//TODO:: Specifiy adjacent maps through the maps.txt - Hal Emmerich
Maps[0].MapRight = Maps[1];
Maps[0].MapDown = Maps[2];

Maps[1].MapLeft = Maps[0];
Maps[1].MapDown = Maps[3];

Maps[2].MapRight = Maps[3];
Maps[2].MapUp = Maps[0];

Maps[3].MapLeft = Maps[2];
Maps[3].MapUp = Maps[1];
//TODO: Add more maps here or in the Map.cs file? - H.E.

What’s in Map.cs?
A mysterious, partially-filled map. The “RASPI Map” label suggests it will be useful with the Raspberry Pi to be used in challenge 3.

The two-letter codes indicate the pizza toppings used in the game.

Challenge 2: KneadMoreDough

By completing the second challenge before July 17, one was given a coupon that allowed participation in a kind of lottery drawing where 1000 developers would receive the IoT kit needed for the last challenge.
By that date, as was visible by scraping the leaderboard (as someone did), fewer than 1000 participants had passed challenge two, so I guess the lottery was unneeded.

I hope Microsoft doesn’t consider this operation a failure because of the limited number of participants; maybe it just should have been advertised more.

So, what was challenge two about?
From the GitHub repo:
“An old-school algorithmic word find, with a twist. Download the code and run the project. Request a puzzle, and solve it with code. Each puzzle is unique and expires 30 seconds after it’s issued. You can request as many puzzles as you need.”

So, the first step was trying the provided code and fetching a puzzle.
A JSON message like this was received:

  "Puzzle": {
    "Lines": [
  "Id": "3d025fb4-74c8-49b3-a933-16b309f28450",
  "TimeIssued": "2018-07-13T01:57:05.4485749Z"


So, a bunch of strangely formatted characters, and the knowledge that the puzzle was related to “algorithmic word find”.
At the beginning, I thought there was some basic cryptanalysis to do, so I added code to write the received messages to disk, and then fetched some puzzles to have more samples to throw at some frequency analysis algorithm, etc.

While skimming through the saved puzzles, I got lucky and my eyes caught a cleartext “MOZZARELLA”:


Looking even more carefully, there was also an “ARTICHOK”, with the final “E” at the opposite side of the row, suggesting some kind of wrap-around (assuming that wasn’t a random E).
At this point, I understood that there was no cryptography involved: the “word find” pointed out by the challenge statement was that of the “word search puzzles” – I now know they’re also called “word find puzzles” (in Italian, that kind of puzzle has a completely different name, which didn’t help).

Feeling I was now on the right track, I kept checking the saved samples and thinking about the peculiar shape of the grid.
The central part of the shape was 5×20 characters, and the top and bottom were 5×5.

After a while spent looking at the puzzles, I had a little Eureka moment: if you considered the central part as four 5×5 grids, you could view the strange-looking grid as an “opened” cube with 5×5 characters on each face:

 ___
|   |
|___|___ ___ ___
|   |   |   |   |
|___|___|___|___|
|   |
|___|
I quickly looked for a puzzle that could prove I was on the right track, and found one easily.


Finally, it was time to write some code able to navigate the character grid as if it were walking on the cube.
So, for each character on the cube, I would consider 8 strings starting with that character and proceeding with its neighbours in the 8 directions, up to an arbitrary length I decided to cap at 20: with 20 characters you wrap around when moving vertically/horizontally, so it felt right to assume there weren’t ingredients with names longer than that.

I would check if such extracted strings started with one of the words to search for, and save the matches.

To get started, I used as wordlist the names of the ingredients appearing in challenge 1, plus some others I stumbled on during my tests (MOZZARELLA, ARTICHOKE, etc.). If any of the strings fetched as described started with one of the words in the list, I saved a match.
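To give an idea of the approach, here is a simplified sketch (in Python, while my actual solver was C#): it only models the central 5×20 band with horizontal wrap-around, while the real cube-walking code also has to handle the transitions onto the 5×5 top and bottom faces.

```python
# Simplified sketch of the extraction + matching steps: for every cell,
# walk up to max_len steps in each of the 8 directions, wrapping
# horizontally (this models only the 5x20 central band; the full cube
# walker must also cross onto the 5x5 top and bottom faces).

DIRECTIONS = {
    "N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1),
    "NE": (-1, 1), "NW": (-1, -1), "SE": (1, 1), "SW": (1, -1),
}

def extract_strings(grid, max_len=20):
    rows, cols = len(grid), len(grid[0])
    results = []                      # (string, x, y, direction)
    for r in range(rows):
        for c in range(cols):
            for name, (dr, dc) in DIRECTIONS.items():
                chars, rr, cc = [], r, c
                for _ in range(max_len):
                    if not 0 <= rr < rows:
                        break         # no vertical wrap in this sketch
                    chars.append(grid[rr][cc])
                    rr, cc = rr + dr, (cc + dc) % cols  # horizontal wrap
                results.append(("".join(chars), c, r, name))
    return results

def find_words(grid, wordlist):
    """Save a match whenever an extracted string starts with a wordlist entry."""
    return [(w, x, y, d)
            for s, x, y, d in extract_strings(grid)
            for w in wordlist if s.startswith(w)]
```

For example, `find_words(["TCA"], ["CAT"])` reports a wrapped eastward “CAT” starting at x=1, which is exactly the ARTICHOK/E situation described above.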

At this point, with the cube-navigating-string-fetching-and-matching code working, there were two open problems:

  • in which format did the web service expect the answer?
  • what was the exact list of ingredients to search for on the cube?

The solution for the first problem was hinted in the provided source code:
// TODO: Now show your Swagger and find the solution.

Swagger is an open source framework for describing and documenting RESTful web services like the one used by the challenge code.
I hadn’t been dealing with web services recently, so I didn’t know it, but Googling “Swagger” pointed me in the right direction.
So, by checking the address of the web service with a browser, you could get to the API documentation detailing the expected request format:

A screenshot of the API info page.


With this information, crafting the solution message in the expected format was easy.
Actually, checking Swagger before everything else would have instantly made it obvious that the challenge involved a word search puzzle, but unfortunately I didn’t focus on how to send a solution until I felt close to having one.

So, I tried sending some solutions, which looked like this:

  "PuzzleId": "9b2fec25-bc29-46e4-88a4-d44a6a9536a6",
  "Words": [
      "word": "SAUSAGE",
      "x": 1,
      "y": 1,
      "direction": "W"
      "word": "MUSHROOMS",
      "x": 0,
      "y": 2,
      "direction": "NW"
      "word": "PARMESAN",
      "x": 4,
      "y": 5,
      "direction": "N"
      "word": "PEPPERONI",
      "x": 6,
      "y": 6,
      "direction": "W"
      "word": "MEATBALLS",
      "x": 9,
      "y": 7,
      "direction": "W"
      "word": "JALAPENO",
      "x": 11,
      "y": 6,
      "direction": "N"
      "word": "HAM",
      "x": 12,
      "y": 7,
      "direction": "SW"
  "Initials": "DSK"

Unfortunately, as a result, the web service kept answering only “Did I stutter? Because that’s not the pizza I ordered.”

Was I missing some word?

Or maybe the coordinates didn’t start at the top-left cell of the grid, as felt natural to me? Nope, I tried setting the bottom-left cell as origin (actually, I tried other cells too, like the center cell of the middle face), but kept failing.

So, I focused on enlarging the list of words to search for. I tried different word lists, and even downloaded kitchen-ingredient dictionaries, being afraid of missing some ingredient by not being a native English speaker and not recognizing it while skimming through the extracted strings.

Some matches looked like totally valid pizza toppings (like “MOZZARELLA”), others somewhat strange (“PEANUTBUTTER”?), and others appeared in the first challenge too (“JALAPENO”, “PEPPERONI”…). Other matches were short words that could have been there randomly, even if they looked like legitimate ingredients (“OIL”).

Long story short, my solutions kept getting rejected, so I started getting more sophisticated.

I wrote code to request and save dozens of puzzles, and then ran my “8 directions string fetching” on all of them in a single execution where I would

  • extract all the candidate strings (considering varying lengths, 4 to 20 characters)
  • put them into a Dictionary&lt;string, int&gt;, where the keys were the strings, and the value was the number of puzzles the string occurred in
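The counting pass can be sketched like this (again in Python for brevity; `extract_strings` stands for any callable implementing the 8-direction string fetching described above):

```python
from collections import defaultdict

def count_across_puzzles(puzzles, extract_strings, min_len=4, max_len=20):
    """For each candidate substring, count how many puzzles it occurs in."""
    counts = defaultdict(int)
    for grid in puzzles:
        seen = set()
        for s in extract_strings(grid):
            # consider prefixes of varying length (min_len to max_len chars)
            for length in range(min_len, min(max_len, len(s)) + 1):
                seen.add(s[:length])
        for candidate in seen:        # count each string once per puzzle
            counts[candidate] += 1
    return counts
```

Sorting `counts` by value then surfaces the strings that recur across many puzzles, which is how the extra “ingredients” stood out.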

I left my program crunching puzzles and then went back to analyze the results.

Sorting the strings by number of occurrences led me to find some new ingredients, which made me feel very satisfied with what I’d done… until I tried sending solutions with the new list, and failed again.

After banging my head for a while, I went back to the first challenge hoping to find more clues about the ingredient list.
That’s when I saw, in the manual of the game, a giant hint about challenge 2: one of the pickups – the only useless one, in fact – was a mysterious cube with ingredients written on its sides.

RTFM, always. And I did, but my memory sucks.

This is when it became clear to me that the three challenges were more connected than I expected.
So, I made a simple, desperate attempt: I used for the word search ONLY the ingredients you could find in the challenge 1 game.

    public static readonly List&lt;string&gt; s_IngredientsList = new List&lt;string&gt;() { … };


Yay! Using that list, my solutions were finally accepted, and I got the “coupon code” that allowed me to request the IoT Kit and access the final challenge:

"Now that's a pizza! Here's my coupon code: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
Player, you are one of the chosen few. A lottery drawing will select 1000 to proceed to challenge 3.
Standby for further instruction."

(In case you were wondering, I replaced my coupon code with “xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx”.)

Oh, a minor quirk: some fellow coders got stuck because the puzzles initially sent used the word “MUSHROOMS”, while the ingredient name used in challenge 1 was “MUSHROOM”. I wasn’t affected by the issue, because my approach had led me to find “MUSHROOMS”, and I used that in my wordlist.
A little mistake in the preparation of challenge 2, I guess.

Anyway, challenge 2 took me a lot of time, but only because I totally fell into the trap of the extra ingredients.
As often happens, less is more: it wasn’t a problem of missing ingredients – I was finding too many.

If only I had started by checking Swagger, remembered the “cube with ingredients on its sides” from challenge 1, and used only its ingredients for the word search… well, too late, patience!

Anything else worth noting about challenge 2?
Another piece of the “RASPI Map”, in the file RaspiMap.cs (not included in the VS project, in case you got distracted):

I will publish my fork of the project, including the solver, soon – I just prefer to clean it up a bit first. I also have a 3D front-end I quickly put together in Unity at some point, to visually check the correctness of my cube-walking code.

Yeah, it’s interactive. But too ugly to publish now.

Getting ready for challenge 3

I received the IoT Kit about a week before the challenge. A very appropriate packaging: someone at Microsoft definitely put some heart into all this.

Opening the shipping bag and finding this inside was genuinely funny.

On the back of the box, an ingredient map similar to the two already encountered: the opening on the right suggests it’s the lower-left zone of the RASPI Map.

Another part of RASPI Map?

I merged the three available parts together and printed the result, expecting to quickly fill in the missing part when it was made available (at the beginning of challenge 3, maybe).

Will we ever complete the map?

Enough with the box, what was inside?

The IoT kit core device was a Raspberry Pi 2 Model B.
I was already somewhat familiar with the Raspberry Pi (1), but this one looked pretty special:

Cool, isn’t it? And we haven’t turned it on yet…

There’s a custom board connected to the Raspberry Pi 2 GPIO pins, and on it there are two components, which I looked up online:

  • an Arduino Pro Mini, featuring an ATmega328 microcontroller with 32 KB of flash memory
  • an Adafruit bi-color LED matrix with its “LED backpack”

Additionally, in the package, there was an instructions sheet:

Yeah, this image could be better, but it’s readable.

I followed the provided instructions and tested that everything was working properly. Following the suggested docs, I wrote and deployed a simple test program, a simple TCP echo server, thinking that it could be useful to have some code ready to communicate with the IoT device from the PC.

When turning on the device, the LED matrix showed a scrolling text (“Pizza Oven 2000 online”), a loading animation with some kind of clock, and then a smiley face.

The intimidating smile of the freshly booted device.

That happened before Windows even booted, meaning that the LED matrix was controlled by the Pro Mini on the custom board and not by some UWP application.

I considered the possibility of dumping the Pro Mini memory and doing some reverse engineering of the contents, but read online that the microcontroller featured some lock bits that prevented access to the memory (unless you wiped it).

I figured that whoever prepared the device would definitely have used that feature to prevent challengers from accessing the microcontroller memory before the challenge.
I then considered the idea of fiddling with the GPIO pins, but I was afraid of messing up the device and becoming unable to participate in the final challenge.

So, I wrote some code snippets involving GPIO access, following the suggested docs, but didn’t try them.
More fearless finalists acted differently and were rewarded for it.

Anyway, I disconnected the IoT device and did another kind of preparation: refreshing some algorithms and techniques often used in coding challenges (graph traversals, pathfinding, dynamic programming…).
I thought that having the “RASPI Map” ready to be used by some pathfinding algorithm could be useful, so I wrote code to parse it from the text file where I had merged the first three parts (and was ready to add the missing one).
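That preparation looked roughly like this (a Python sketch; the space-separated file format here is my own simplification of my merged-map text file, and BFS stands in for whatever pathfinding you prefer):

```python
from collections import deque

# Hedged sketch: parse a text map into a grid of cell codes and find a
# shortest 4-directional path with BFS. Cells are space-separated per
# line ("." = empty, two-letter codes = toppings); this format is my
# simplification, not the actual RaspiMap.cs layout.

def parse_map(text):
    return [line.split() for line in text.strip().splitlines()]

def bfs_path(grid, start, goal, passable=(".",)):
    """Shortest path from start to goal as a list of (row, col) cells,
    stepping only on passable cells (except the goal itself)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:              # reconstruct the path backwards
            path = [cell]
            while prev[cell] is not None:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols and nxt not in prev
                    and (grid[nr][nc] in passable or nxt == goal)):
                prev[nxt] = cell
                queue.append(nxt)
    return None                       # goal unreachable
```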

At this point, it was time to wait for challenge 3.

Challenge 3 – The race is on

As announced in the IoT Kit instructions, challenge 3 started by receiving an e-mail:


The race is on. Challenge 3 has officially begun, and the first five developers to complete this final challenge will each take home a $10,000 USD pizza delivery tip.

By now you received your Internet of Things (IoT) Kit, followed the setup instructions inside and are ready to compete. This time, there’s nothing to download from GitHub. You’ll be writing the code, and following is the info you’ll need to get started.

Use the following GPIO Connection Guide:

GPIO -> Pizza Oven 2000
2 -> A
3 -> B
4 -> D
5 -> L
6 -> N
7 -> P
8 -> R
9 -> T
10 -> U

1. To unlock your IoT device, give Mad Dog and Scorpion 30 lives.
2. Collect the correct pizza ingredients.
3. Further instructions will be given through Gordon Bell’s first design at DEC. Any submissions made prior to the delivery of this email do not qualify.

The time has come to determine who will nab the last slice. Good luck!

Step 1: unlocking the IoT device.

The first clue, “give Mad Dog and Scorpion 30 lives”, brings us to the famous “Konami Code”, an input sequence used in lots of games to activate cheats or special features.
Mad Dog and Scorpion are the playable characters of the video game “Contra”, and using the Konami Code in that game gives them 30 lives.
This was easily found out by Googling.

The Konami Code input sequence is: Up, Up, Down, Down, Left, Right, Left, Right, B, A.
That looked promising, because looking at the provided GPIO pin-number-to-letter mapping, you had A, B, D (Down), L (Left), R (Right), U (Up).
It made sense to write values to those pins as if there were a gamepad connected to them, setting the pin values to 1 when the button was unpressed and 0 when pressed (or the opposite).
To write values to those pins you had to open them in output mode, and that left you with three remaining, unused pins: N, P, T.
Ha! That’s going to be iNPuT: some pins to open in input mode and read data from in the next step!

That’s what I thought it had to be, but I only got some red flashes on the LED matrix (and usually, red means bad).
I tried different delays, swapping the pressed/unpressed pin values, etc., but it didn’t seem to work.

I also modified the TCP server I had written to test deployment so that it activated the button pins when receiving some simple messages, and I quickly adapted a TCP client I had written in a Unity project to send those messages on key press/release, so that I had the cursor keys and two other keys bound to pin activation/deactivation. This way, I could easily try inputting the Konami Code on my keyboard at different speeds (and try to figure out what the red flashes meant).

But I also couldn’t be sure it hadn’t worked already, because it was unclear what would happen after the “unlocking”.

Coming back to the clues provided by the mail, I looked up Gordon Bell’s first design at DEC: the I/O subsystem of the PDP-1, including the first UART.
Bingo, we have a UART device on the Raspberry Pi, and I had prepared a code snippet to access it in the “preparation” days.

Unfortunately, I hadn’t tested it, as I explained earlier, and I kept getting a null device when trying to open the interface.
Only after some Googling and going back to the documentation suggested in the instructions did I see this (and remember that I had read it already a week earlier, and then forgotten about it):

Note that you must add the following capability to the Package.appxmanifest file in your UWP project to run Serial UART code:

    <DeviceCapability Name="serialcommunication">
        <Device Id="any">
            <Function Type="name:serialPort" />
        </Device>
    </DeviceCapability>

Visual Studio 2017 has a known bug in the Manifest Designer (the visual editor for appxmanifest files) that affects the serialcommunication capability. If your appxmanifest adds the serialcommunication capability, modifying your appxmanifest with the designer will corrupt your appxmanifest (the Device xml child will be lost). You can workaround this problem by hand editting the appxmanifest by right-clicking your appxmanifest and selecting View Code from the context menu.

After blaming myself for my epic failure in not having edited that XML file to add the capability during my “preparation” days, I did it, finally accessed the serial interface successfully, and tried to read from it. But I got nothing.

At this point, around 4 A.M., I stopped for a minute to prepare a cup of tea. Meanwhile, I checked Twitter, and I found out that the challenge was practically over: the first 5 places on the scoreboard had been taken, and two finalists had completed the challenge even before the mail arrived (!).

Well, it was time to come back to the challenge “just for fun”: no prize, but a dozen fellow coders spread around the world, day or night, winners and non-winners, giving tips on things they had already figured out while at the same time trying not to spoil too much.
Isn’t that nice too? Not $10K nice, but worth staying up all night for, for sure.

The revealing tip, for me, was “try to use more pins at once”.
I went back to the pin mapping and tried to look at it from a different angle. And then it hit me: N, P and T weren’t for “iNPuT”; they were to be used together with the U, D, L and R pins to form “UP”, “DN”, “LT”, “RT”. Damn.

I totally hadn’t thought about anything like that (I mean: why would you use two bits for a two-value input? XD).

I quickly adapted my code to use two pins when needed, hoping for a breakthrough.
Unfortunately it still wasn’t working.

After some time, in one of the many desperate attempts, I thought that maybe the unlocking had to be done right after turning on the IoT device.
I rebooted, and this time I had a green “UP” arrow flashing on the LED matrix. Finally, some success. After that, more red flashes (bad). And no way to get another green arrow by repeating the sequence.

I tried the small “reset” button on the Pro Mini board with the microcontroller, and it proved enough to allow another unlocking attempt without rebooting the whole device (the LED matrix animation restarted, and I got another green arrow).

After some timing adjustment (I was sending the inputs too quickly), I finally managed to input the full Konami Code sequence and unlock the IoT device.
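The working sequence boiled down to something like this sketch (Python for illustration; the pin labels come from the e-mail’s GPIO table, while the press/release callbacks and the delay value are placeholders for my actual Windows.Devices.Gpio writes and the timing I settled on):

```python
import time

# Hedged sketch of the unlocking input. Each direction is driven as a
# PAIR of pins (UP = U+P, DN = D+N, LT = L+T, RT = R+T); B and A are
# single pins. press/release stand in for the real GPIO level writes.

KONAMI = ["UP", "UP", "DN", "DN", "LT", "RT", "LT", "RT", "B", "A"]

def pins_for(token):
    """Map a Konami-code token to the set of pin labels to drive together."""
    return set(token)   # "UP" -> {"U", "P"}, "A" -> {"A"}

def play_sequence(press, release, delay=0.25):
    for token in KONAMI:
        pins = pins_for(token)
        press(pins)             # drive the pins to the "pressed" level
        time.sleep(delay)
        release(pins)           # back to the "released" level
        time.sleep(delay)
```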

There was no chance of not recognizing the unlock, as I had feared earlier: the LED matrix showed some kind of little coloured map.

Step 2: collecting the right ingredients

As you might have guessed, it was finally time for the “RASPI Map” patiently put together before the challenge started.
The LED matrix cells could be in 4 different states:

  • blinking red
  • off
  • green
  • orange

If you compare the matrix to the RASPI Map, you will see it matches the top-left 8×8 region.

Matching the matrix with the “RASPI Map”, you could instantly figure out that the blinking red LED was the player (in the START position on the map), the empty cells were – well – empty, and the other cells were ingredients. More specifically, the green ingredient, on the map, is the JALAPENO.
We can confirm this by walking the player to collect the green cell (we can move it using the LT/RT/UP/DN inputs previously set up), which flashes a jalapeno icon on the LED matrix. If we walk on an orange cell, we get a red X, and we have to start over.

So, the JALAPENO looks like one of the ingredients that we have to collect (green cell), while we must avoid the others (orange). But what about the other ingredients?
Can we just walk around and find other green cells? Sounds too easy… and in fact, the piece of map shown on the LED matrix didn’t change/scroll when walking past the edges.
We have to blindly collect the other ingredients using the map, of course! But which ones?

This time I had learnt the lesson: the three challenges are interconnected.
The JALAPENO was one of the ingredients that were part of the challenge 2 solutions. Even if you received different puzzles, with lots of different “bad” ingredients in different positions/directions, every puzzle had the same four ingredients in its solution: JALAPENO, PEPPERONI, SAUSAGE and MUSHROOMS.

So, I looked for PEPPERONI, SAUSAGE and MUSHROOMS on the RASPI Map. MUSHROOMS was in the top-right part, and I tested my assumption by going there and collecting it (getting a mushroom icon on the LED matrix).
PEPPERONI was in the (supposedly) bottom-left zone. It could also have been the bottom right; I couldn’t be sure. But I tried, and it worked.

But there was no SAUSAGE in the known map areas: it had to be in the lower-right part, which unfortunately was not available.
Did I miss it somewhere? I think not; anyway, it wasn’t mandatory after all: with a little patience, we can blindly walk our way through the final part of the map.

I automated the player’s walk up to the edge of the known zone, and then started walking into unknown territory using the keyboard. When I got a red X, I marked the cell as “bad” on my map printout and started over. When nothing happened, I marked it as an “ok” cell to walk over.

After a little trial and error, the SAUSAGE was found.

A rewarding green mark on the LED matrix screamed “success” and suggested “UART”, kind of making the third clue in the mail useless.

Here is a video of the unlocking and ingredient pick-up, as performed by my code (available on GitHub).
Unfortunately, the lighting is poor, and the orange LEDs (bad ingredients) look almost like the red blinking one (the player). Anyway, the blinking saves the day.



Step 3 – reading from UART and dealing with the damn Skype bot

Reading from UART was an instant success for me, because I had already prepared the code and faced the “null” issue while being stuck on step 1.
I just had to press “S” on the keyboard, and the TCP server received a message telling it to open UART, read from it, and write anything that came out to the Debug console.

And I got:

You've found the correct ingredients. Here is your order confirmation# AaAa0AAaAaaaAAA0AAA0
Please confirm your order with bit.ly/LastSliceChat to finalize the order.

(I replaced my actual code with “AaAa0AAaAaaaAAA0AAA0”.)

I felt successful and thought I had finally completed the challenge; probably the Skype bot was just in charge of receiving the code and putting an entry for me in the leaderboard.

Unfortunately I was wrong.

Greetings, challenger. Nice work solving the third Last Slice challenge.
Yes, I’m a bot—but this is very real, and it’s my job to help you cross the finish line. To get started, please reply with just your MSA email address.
11:39 AM

thelastslice, 11:39 AM
Excellent. You’re one step closer, <my-MSA-email-address>. Now I’m going to ask you to provide a few more pieces of information to test your Last Slice worthiness.
Proceed carefully—I will not try to verify your answers until the end. Here we go...
The first Last Slice challenge was all about delivering pizza to hungry customers. Please reply with the street address for the final pizza delivery in the game.
11:40 AM

thelastslice, 11:40 AM
Ok, next, in the second Last Slice challenge you searched for, and found, some special pizza ingredients. I hope you can remember them! Please reply with the first ingredient.
11:41 AM

thelastslice, 11:41 AM
Ok, I've added JALAPENO. What else? (Enter 'done' when you are done)
11:41 AM

thelastslice, 11:41 AM
Ok, I've added MUSHROOMS. What else? (Enter 'done' when you are done)
11:42 AM

thelastslice, 11:42 AM
Ok, I've added PEPPERONI. What else? (Enter 'done' when you are done)
11:42 AM

thelastslice, 11:42 AM
Ok, I've added SAUSAGE. What else? (Enter 'done' when you are done)
11:42 AM

thelastslice, 11:42 AM
Nice! That sounds delicious, I might even get one myself. Ha ha—who am I kidding? I’m just a programmed entity that doesn’t require sustenance.
All joking aside, let’s continue.
Please reply with the coupon code you received after solving challenge 2.
11:42 AM

thelastslice, 11:42 AM
Ok, just one more thing. Please reply with the order confirmation number you received after solving the third challenge.
11:44 AM

thelastslice, 11:44 AM
Uh ho, something went wrong. I can’t verify the information you’ve provided.
Please check over all of your answers.
I’m going to forget everything we’ve talked about so that you can try again.
Let’s start over from the beginning...
Greetings, challenger. Nice work solving the third Last Slice challenge.

I found this very annoying: at this point, there was no coding or puzzles involved, and I knew my answers were “right” on some level.
My primary doubts were about

  • the delivery address at the end of challenge one (I had given the “address” as it appeared in the level description in levels.txt: piece of map, row, column), because I somewhat remembered there might be some other kind of address visible in-game
  • the challenge two coupon: I had many of them, but probably not the first I received and not the last.

Additionally, I started having problems with Skype not detecting new incoming messages from the bot for minutes at a time.

As the fun was over, I tried asking on Twitter for hints about the bot interaction, and went to sleep.
As I had hoped, someone who had already completed the challenge answered, suggesting I play the challenge 1 game again.

I did, and for a couple of seconds after completing the last delivery, there was this screen:

Have I already complained about my memory? I don’t remember.


So, the address problem was settled, now I had to face the challenge 2 coupon thing.

I went through the solutions/responses my challenge 2 solver had saved to disk, sorted them by date, and tried the oldest and the newest, with no success. I knew that I hadn’t saved all the coupons I had received, so I started resigning myself to the idea that I might be unable to fulfill the request and complete the challenge.

Anyway, hoping that the bot would accept the latest coupon provided, I tried to get a new one by running my challenge 2 solver again.

A nasty surprise was waiting for me: my solutions weren’t accepted anymore. What had happened? I checked, and saw they didn’t contain “MUSHROOMS” anymore.
I then remembered the MUSHROOM/MUSHROOMS problem mentioned earlier: it turns out the web service had been made consistent with challenge 1, so you now have to look for “MUSHROOM” and not “MUSHROOMS”.

After fixing that, my code started working again, and I went back to the Skype bot using the new coupon code.
Finally, it worked: my “Last Slice” adventure was over.

thelastslice, 6:07 PM
Your answers all check out, but unfortunately you are not one of the first five players to finish.

While this wasn’t your lucky day, you’ve definitely proven yourself as a contender.
On behalf of my colleagues here at Microsoft Windows (a.k.a. Last Slice HQ), congratulations on beating all three challenges!
And from the bottom of my Skype-bot heart, thank you for playing the Last Slice.


Overall, I enjoyed participating and think a lot of effort went into preparing this challenge, so congrats to whoever worked on it at Microsoft.
No $10K for me, but I got a free Raspberry Pi 2 that is kind of a collector’s piece, had some fun and learnt something along the way: definitely not a total failure.

That said, I feel like some details could have been handled better, so I will provide some constructive criticism, hoping that there will be more challenges like this in the future (and wishing the organizers more success in terms of number of participants).

My first complaint is about the start date of challenge three.
The official rules stated: “Challenge #3 begins at 12:00 a.m. (GMT) on August 17, 2018”.
So, 12:00 a.m. is not exactly a happy time to pick for something like this, because it’s ambiguous.
There’s also a Wikipedia paragraph on this issue, from which you can read:

“The American Heritage Dictionary of the English Language states ‘By convention, 12 AM denotes midnight and 12 PM denotes noon’.”

and then

“The Associated Press Stylebook specifies that midnight ‘is part of the day that is ending, not the one that is beginning.’ Thus, according to AP style, ‘midnight Friday’ occurs one minute after 11:59 p.m. Friday, not one minute before 12:01 a.m. Friday.”

If you accept this interpretation, “12:00 a.m. (GMT) on August 17” was the midnight of August 17.
The GitHub page of challenge 2 (not the official rules) also stated “…and you’ll be qualified to participate in the third and final challenge (which will begin on August 18, 2018).”

So, I was almost sure that the challenge was going to start at 00:01 on August 18 (right after midnight of August 17).
Anyway, I set an alarm for midnight GMT of August 16 (2 a.m. in Italy) because “you never know”. I checked for new mail from my phone, and as I expected there was nothing related to the challenge.
I was about to get back to sleep when, at 2:23 a.m., the mail came, and I rushed to the PC.

Setting the date as “12:01 a.m. (GMT) on August 17” or “11:59 p.m. (GMT) on August 16” would have avoided the problem.

Anyway, talking about timing and fairness of the challenge, there was another issue.

Obviously, challenge three was a real “race” between the finalists, but by the time I received the mail there were already two entries on the leaderboard: people who had fiddled with the hardware before the official start and managed to complete the challenge before it even started.

I guess that with a $10000 prize something like that had to be expected, and I don’t feel like criticizing those who did it (they put serious effort into it).
My problem with this is that not everybody received the IoT kit at the same time – I read on Twitter that somebody got it just a couple of days before the challenge (and somebody didn’t receive it in time at all – sorry guys).
So, it wasn’t totally fair towards all the participants: it would have been better to make it technically impossible to fiddle with the device before the challenge started (wasn’t it possible to ship the microcontroller blank and provide the software to flash it at the beginning of the challenge? I’m not sure it could be done, not being familiar with the Pro Mini, but it sounds reasonable).

Finally, I think it was somewhat anticlimactic, after the “meat” of the challenge, to have to deal with the unforgiving Skype bot.
Providing the right answers didn’t require any puzzle-solving, technical or algorithmic skills.
I felt like I had “finished” when I got the order code from the serial interface of the Raspberry Pi: when I was sent to deal with the bot and it didn’t like something in my answers (without telling me which one), I got frustrated and went to sleep.
It did not matter anymore at that point, but if I had lost the $10000 prize because of the bot interaction part, well… it would have sucked. A lot. 🙂

Anyway, kudos to the winners, the organizers and all the fellow coders exchanging tips on Twitter in the night.
See you at the next challenge!

Binary Patching the Razer Atrox Linux Driver

…or “how I got RetroPie to handle all 10 buttons of the Razer Atrox by changing a bit in xpad.ko”.

What does a videogame programmer do on vacation? Well, many things, actually, but to somehow stick to stereotypes, he surely must *play* some videogames, too.

the Razer Atrox connected to Raspberry Pi running RetroPie

So, recipe:

  • a Raspberry Pi that had been sitting in a drawer for months
  • the Razer Atrox fight stick, bought on sale and sitting in another drawer since then
  • the amazing RetroPie SD image and some childhood game ROMs


  • all working in a very straightforward way, EXCEPT for the fact that during the input configuration, only six of the eight main buttons worked (the top ones; the Atrox also features two side buttons, “start” and “select”, which were working).
    The buttons that didn’t work were the ones labeled LT and RT (Left Trigger and Right Trigger).

After excluding the chance of a hardware problem (by trying the stick under Windows), I decided it was a driver issue… probably a simple one: 8 out of 10 buttons were already working, after all.

So, by listing the loaded kernel modules and doing some quick Googling, I found out that the code handling the joystick was
in xpad.c, and a few minutes of code analysis showed that the two non-working buttons were handled by this code block only if the xpad->mapping field was set to MAP_TRIGGERS_TO_BUTTONS:

static void xpad360_process_packet(struct usb_xpad *xpad,
				   u16 cmd, unsigned char *data)
{
	/* ... */
	/* triggers left/right */
	if (xpad->mapping & MAP_TRIGGERS_TO_BUTTONS) {
		input_report_key(dev, BTN_TL2, data[4]);
		input_report_key(dev, BTN_TR2, data[5]);
	} else {
		input_report_abs(dev, ABS_Z, data[4]);
		input_report_abs(dev, ABS_RZ, data[5]);
	}
	/* ... */
}

Before processing the packets, the xpad->mapping field is set in xpad_probe, after identifying the device by idVendor and idProduct:

static int xpad_probe(struct usb_interface *intf, const struct usb_device_id *id)
{
	/* ... */
	for (i = 0; xpad_device[i].idVendor; i++) {
		if ((le16_to_cpu(udev->descriptor.idVendor) == xpad_device[i].idVendor) &&
		    (le16_to_cpu(udev->descriptor.idProduct) == xpad_device[i].idProduct))
			break;
	}
	/* ... */
	xpad->udev = udev;
	xpad->intf = intf;
	xpad->mapping = xpad_device[i].mapping;
	xpad->xtype = xpad_device[i].xtype;
	/* ... */
}

So, the fix looked like setting the proper flag (MAP_TRIGGERS_TO_BUTTONS) in the mapping field of the Razer Atrox definition in the xpad_device array of structs, a series of definitions used to tune the driver behaviour according to the detected hardware.

Another hint was that some similar fight sticks had the mapping field set to MAP_TRIGGERS_TO_BUTTONS, while the Razer Atrox didn’t:

static const struct xpad_device {
	u16 idVendor;
	u16 idProduct;
	char *name;
	u8 mapping;
	u8 xtype;
} xpad_device[] = {
	{ 0x0738, 0x4728, "Mad Catz Street Fighter IV FightPad", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
	{ 0x0738, 0x4738, "Mad Catz Wired Xbox 360 Controller (SFIV)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
	{ 0x24c6, 0x5000, "Razer Atrox Arcade Stick", 0, XTYPE_XBOX360 },

At this point, I wanted to try my simple fix:

	{ 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },

I didn’t want to recompile the whole kernel, only the xpad module. The process didn’t look totally straightforward, because RetroPie was using a custom kernel image, not the one distributed through APT (which would have made things easier).
The process was made harder by the lack of a proper Internet connection (due to my vacation setting: the only thing available was an unreliable 3G connection accessed through Wi-Fi tethering).

So, considering that if I was right I only had to change one byte in that struct array, I opted for the binary patching approach.
C structs are usually compiled into binaries in a very simple and predictable way, and this wasn’t an exception: after two two-byte fields (idVendor and idProduct), there is a pointer to the name string constant describing the model (4 bytes on 32-bit ARM), followed by two one-byte fields, mapping and xtype, the first of which contains the value we need to change.

The flags used as values for the mapping field are defined as follows:

#define MAP_DPAD_TO_BUTTONS		(1 << 0)
#define MAP_TRIGGERS_TO_BUTTONS		(1 << 1)
#define MAP_STICKS_TO_NULL		(1 << 2)

...meaning that we're looking for a byte with value 0x00 in the original definition, and we need to change it to 0x02 (that is, 1 << 1).

The two two-byte fields used for identification, 0x24c6 and 0x5000, are a perfect candidate to search for in the binary, after remembering to fix the byte ordering field by field: 0x24c6 gets compiled to C6 24 and 0x5000 to 00 50.

After firing up a hex editor on my version of xpad.ko and searching for the hex string C6 24 00 50, I found the binary translation of the Razer Atrox definition at offset 1CA8:

C6 24 00 50 D8 09 00 00 00 01
{ 0x24c6, 0x5000, "Razer Atrox Arcade Stick", 0, XTYPE_XBOX360 }

That needs to be patched to

C6 24 00 50 D8 09 00 00 02 01
{ 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }

HxD comparing the original xpad.ko and the patched one

I made the simple modification with a hex editor, replaced the original xpad.ko file with the patched one (in /lib/modules/3.18.11+/kernel/drivers/input/joystick/), rebooted, and the Razer Atrox buttons were all working nicely.

I submitted the patch to the Linux kernel tree following the standard procedure.

Sony Vaio S13 – replacing the HDD with an SSD

I have plenty of posts in draft status about the work I did for my master’s thesis and for Zodiac. I hope to complete and publish them in the upcoming months. Today, instead, I resurrect the blog with a quick post about the HDD-to-SSD upgrade I recently did on my Sony Vaio S13.

When I bought it, using the online hardware configurator on the Sony website, the SSD option was very expensive, and it featured a hybrid drive I wasn’t really sure about. So I decided to configure it with a cheap 320 GB HDD, planning to replace it later with an SSD of my choice. Even if this kind of replacement should be straightforward, experience tells me the devil is in the details (remember when the Windows install didn’t have the driver for your brand new SATA drive? When your notebook decided that it couldn’t boot from your everyday-use USB key? And what about that hidden recovery partition?), so I postponed the replacement until I knew I had plenty of time to face the possible issues. Well, if you’re in a similar position, forget about it: I followed the steps that seemed logical to me, and everything worked flawlessly. So, here’s what I did, hope it helps.

Beware: what I wanted was a clean install, as if the Vaio had just arrived from the factory with the SSD. If you don’t want to reinstall your system and applications, you might try copying the partitions from the HDD to the SSD (if the SSD capacity is lower than the HDD’s, prepare for additional trouble: you have to shrink the HDD partitions before copying them to the SSD). I have done plenty of similar operations with Linux systems, but with Windows I’ve often run into nonsensical trouble when moving installations. So, my suggestion is to take the easy path and do what I did: take it as a good clean-up opportunity and reinstall from scratch.

Unfortunately, I wasn’t planning to write a post about the upgrade, so I didn’t shoot pictures of the hardware replacement, and I didn’t take note of the exact menu choices involved in the recovery procedure. Feel free to ask for details in the comments if in doubt.

Phase zero: items needed

  • some place to back up your data (external HDD, pen-drive, cloud storage)
  • recovery media (a bootable pen-drive – at least 32 GB – or a few blank DVDs)
  • a SATA SSD
  • a small screwdriver

Phase one: before hardware replacement

  1. backup your data
    • I used an external HDD drive
  2. with the notebook turned on, press the “Assist” button to open the Vaio Care utility, and create the recovery media (if you didn’t already do it when you bought the pc)
    • I used the pen-drive option – a 32 GB Kingston DTSE9
      • You need at least a 32 GB drive, and it gets blanked out – you’ve been warned.
      • The other option is using DVDs. When I bought the notebook, I tried burning them (I think 5 discs were needed), and there was a burning failure while creating the fourth disc. Guess what: there was no option to restart the procedure from the fourth disc; I would have had to burn the first three (perfectly fine) discs again. Some software developer out there should really feel ashamed. Maybe they have updated the software in the meantime, I don’t know. I don’t wanna know. Anyway, DVDs have other disadvantages: you have to swap them during the recovery (so you can’t walk away and do other stuff), they can become unreadable, and come on, it’s 2014, everybody hates optical media.
      • Ok ok, I can hear you: “but why should I buy a 32 GB pen-drive for something I only need rarely and read-only?” – Well, you don’t have to. After creating the recovery USB key, I dumped its image in a safe place (a couple of 1 TB HDDs, RAID 1, where I keep my important stuff). So, after using it for recovery, I can format and use the pen-drive for other things. If I ever need to recover again, I will restore the image to the USB pen-drive (or another one I have that day). My RAID 1 HDDs are attached to a Linux system, so I just did a dd if=/dev/sde of=/mnt/raid/vaios13recovery.dd bs=1M – but you can use your favourite drive imaging tool.
  3. Turn off the pc.

Phase two: hardware replacement

  1. disconnect the AC adapter, turn over the notebook and put it on a flat working surface (yeah, a table!)
  2. remove the two screws that keep the battery panel closed
  3. remove the battery
  4. the HDD is connected to a flat cable that, on the top-left, is secured by two screws. Remove them, and gently unplug the cable (you can’t see the connector, it’s under the cable, in the middle of the screws; gently pull and it will pop out)
  5. remove the four screws that keep the HDD in position (one was under the cable you just unplugged)
  6. gently pull out the HDD
  7. unplug the cable from the HDD
  8. remove the two metallic components at the sides of the HDD (more screws!)
  9. attach the metallic components to the SSD
  10. plug the cable removed at step 7 to the SSD
  11. put in the SSD: fasten the four screws of the metallic components, plug in the cable connector (remember the two screws removed at step 4)
  12. put back in the battery
  13. put back the panel and the final two screws

Phase three – after hardware replacement

  1. plug in the recovery pen-drive and turn on the notebook by pressing the “Assist” button
  2. select the option to boot from usb
  3. navigate the menu and select the recovery option
  4. the process will create the partitions (including the recovery partition) taking all the available space on the SSD, as desired
  5. at some point, you will be asked to remove the usb key and reboot
  6. go out for a walk or something, this will take time – finally, you’ll end up with the notebook booting like the first time you turned it on, but from the SSD

Phase four (optional) – removing the recovery partition

If you backed up the image of the recovery pen-drive somewhere safe, as I did, you can gain more space on the SSD by removing the recovery partition.

  1. plug back in the recovery pen-drive and turn on the notebook by pressing the “Assist” button
  2. select the option to boot from usb
  3. navigate the menu and select the “remove recovery partition” option. This will grant you around 27 more GB.

Learning OpenGL: a few months later

Yeah, I know, I failed to post at least once a month, the previous post dates back to April… but hey, did you read the title of the blog?

In this post, in March, I wrote about documentation and libraries I found useful in getting started with modern OpenGL.
After a few months, I have a small addendum, regarding both documentation and libraries.


When I wrote the old post, I suggested some tutorials and online documentation I had been consulting, and I wrote that I hadn’t heard of any single book suitable for learning modern OpenGL. A few weeks after my post, the OpenGL Programming Guide (8th edition), also known as “the red book”, was published – and things changed.
I bought it from Amazon (33,50 € – definitely affordable) and spent the last week reading it cover to cover: it’s great stuff, clearly written, well organized, with the amount of detail I expected (not so much to make you feel lost, but enough to give a comprehensive introduction to the API).
I wish I had it when I started learning, and I strongly advise anybody trying to learn OpenGL to buy it right now – it will be a time saver.


I read that the previous editions were heavily criticized for mixing modern OpenGL with deprecated stuff, probably as a result of incrementally updating the book. Moreover, GLSL had its own book (“OpenGL Shading Language”, a.k.a. the “orange book”) that, I guess, you definitely needed as a supplement to the “red book” if you were using shaders.

The 8th edition, instead, is a complete rewrite: it doesn’t use or describe the deprecated API, and it covers GLSL too. Yay! Thank you, book authors!


If you read the old post, you might remember that I was quite confused about which image handling library was best to use with OpenGL.

I listed DevIL, FreeImage, ImageMagick, OpenGL Image (GLI), and Simple OpenGL Image Library (SOIL).
Recently, I also stumbled on GL Image, part of the Unofficial OpenGL SDK, that looks promising.

The “Unofficial OpenGL SDK” deserves a special mention by itself: it’s a collection of libraries and utilities helpful in getting started with OpenGL development.

I motivated my library picks (GLFW, GLM, GLEW, Assimp) in my previous post, and I was glad to see that more experienced developers also chose GLFW and GLM for their SDK. For image handling, the thing I was most in doubt about, they provided a custom library: I guess I wasn’t the only one unsatisfied in some way with all the popular libs?
On paper, GL Image rocks, and only laziness and its dependency on GL Load (the SDK alternative to GLEW – and I’m using GLEW) have prevented me from trying it up to now.

I don’t rule out switching to the SDK: as with the “red book”, I wish it had been available earlier. Anyway, it’s worth noting that the SDK currently lacks an asset loading library. Why don’t they include Assimp? I’m using it, so I hope they don’t have any good reason! 🙂

A comparison of all the image handling libraries on some key points (formats supported, features, syntax, license, usability, portability, documentation) would be interesting: it looks like a lot of work, but maybe I’ll try to write something about it in the future.

If you were wondering what I am using now: it’s DevIL.
The library API is in some ways similar to the OpenGL one, and for the very limited usage I’m doing, no problems arise.
Unfortunately, it looks like DevIL development ceased in 2010 – you’ve been warned.

Anyway, postponing my final choice about image handling, there’s another library I’d like to suggest today: AntTweakBar, a GUI library.

At the beginning, I limited myself to using keyboard/mouse events to select/move objects and adjust colors or other properties in my test programs.
Of course, after a while, I ended up with an unspeakable number of impractical keyboard shortcuts and no visual feedback about the current state (for example, the coordinates of the selected object), which I only printed to the console in response to some key presses.

Don’t even reach that point: use a GUI library to interact with your OpenGL programs right from the beginning, you’ll save time later.

I think Ant Tweak Bar is a very valuable choice in this field:

  • it’s easy to integrate into your project (with simplified interfacing to popular libs such as GLUT, SDL, GLFW, SFML)
  • it’s portable (Win/Linux/Mac)
  • it looks cool – semi-transparent, floating windows
  • it’s lightweight and easy to switch on/off
  • it provides some GUI components specifically designed for computer graphics: orientation, color, direction
  • it allows grouping and hierarchically structuring components

A GUI component worth a special mention is the “rotoslider”: you use it to change a value, like you do with a normal slider, but the circular motion allows you to:

  • spin around multiple times (e.g. each full clockwise rotation increases the value by 100.0), while with a normal slider you are limited by the min/max values associated with the start/end positions
  • vary the precision/speed of the value change: moving the mouse far from the circle gives precision, while moving it near the circle allows rapid increases/decreases

It’s better understood with a picture, taken from the rotoslider wiki page:


That’s it for today! I have many half-written posts in my drafts, so I hope to publish something else soon.


Gemini Rue – working on the Italian version for Zodiac (featuring some game-dev random thoughts)


Recently, I’ve been working on the Italian version of Gemini Rue, released on Zodiac today.

Gemini Rue was on my (constantly growing) “must play when I find enough time” list, and by having to work on it for the Zodiac release, I finally had a chance to play it (and to play it with the care and attention that it deserves).

I reviewed the translation (co-operating with my long-time adventure games buddy, Cristiano ‘Gnupick’ Caliendo) and fixed some scripts in the source to let it handle the Italian text properly (some strings aren’t automatically handled by the Adventure Game Studio i18n system unless you take care of some details – but let’s avoid going technical).

I know that, given my involvement with the Italian edition and Zodiac, my opinion could be perceived as spam, but I have to say that I really liked it, so kudos to Joshua Nuernberger, the game author.

The game came out in 2011, so there are already dozens of online reviews and it’s quite famous in both the adventure games community and the indie game development scene, so let’s skip the part where I talk about it and proceed to the random thoughts I promised in the title. Some minor/indirect SPOILERS could be present.

In 2011, a single person can still publish a successful game

Gemini Rue is not 100% Joshua’s work (he received help in the music/sound area and, of course, with the voice acting), but it is still essentially the vision of a single person, built with lots of hard work and passion. And you can feel that passion while playing the game, more than in some shallow big game productions.

How can this happen? I think it’s mainly about honestly evaluating the available skills (yours and your teammates’, if you have any), knowing what you want to do, and sometimes setting the bar low from the technical perspective (for example, using low-resolution graphics and a domain-specific engine like AGS).

Atmosphere wins over pixel count

Gemini Rue’s graphics are 320×200 in 256 colors, something that might sound ridiculous in 2013. Smartphones handle more than that.

But it’s 320×200, 256-color graphics done extremely well: it’s not about exploiting some nostalgia factor related to “the nice days when I played Beneath a Steel Sky”; it’s about managing to make the player feel immersed in the game world.

I’ve seen many high-resolution 2.5D games with 3D characters not properly merged with the background, or full 3D games with horrible camera handling and user interfaces. It’s as if someone from the marketing department shouted “FullHD/FMV/2.5D/full-3D games are cool now, we have to do that too”, without evaluating whether it was economically feasible for them to achieve a good result with that kind of technology.

Gemini Rue’s rooms are alive and rich in small details and animations, with some carefully placed lighting changes and amazing sound design that consistently improves the immersion. From my point of view, definitely better than some high-resolution static background or some low-poly 3D world with clipping and lame particle effects.

A well told story doesn’t need to be original to be good

Gemini Rue has a good sci-fi/cyberpunk story, with dozens of direct and indirect influences that fans of the genre can point out: Blade Runner, A Clockwork Orange, Total Recall, Dollhouse, maybe a little THX 1138, and probably many more (there are some explicit Cowboy Bebop references in the game, but unfortunately that’s on my “must watch when I find enough time” list, sigh).

So, one might want to criticize the lack of originality of the game plot.
But you know, I think that really “new” ideas are quite rare nowadays (IIRC, Philip K. Dick once said that he had just a single good idea: that of an artificial being that doesn’t know it is artificial), and this kind of criticism is not the smartest one can make: it’s not just about the ideas, it’s more about how they are put together and communicated to the audience. It’s about the way characters are used to carry the events forward.

I mean, you can tell the “Reservoir Dogs” plot in 30 seconds, but it’s how it’s carried onto the screen that makes it a cult movie.
And you can give a brilliant sci-fi background to some dumb action movie director/screenwriter and get a shitty film no matter how cool the basic ideas were.

If the characters manage to get us interested in their psychology, motivations, background, and have good dialogues, the originality and sometimes even the consistency of the story often become secondary. We have to care about the characters and feel motivated to impersonate them, because we want to see what’s going to happen to them as the story moves forward.

A well designed adventure game keeps you interested

I played Gemini Rue from beginning to end and never got stuck or bored. My interest was always kept high, I constantly had some goal to achieve, and the puzzles were logical. There’s a good variety of environments and characters, the alternation between the Azriel and Delta-Six sections of the game works very well, and the non-linearity of the story adds interest without getting too complex to follow on a first playthrough. There are some small action sequences, often looked at skeptically by purists of the genre, but they’re well integrated into the story and didn’t bother me at all.

It’s a matter of rhythm and variety: getting a good balance between moments where you are looking for an object or a person, moments where you must face some inventory-driven puzzle, areas where you have to explore the surroundings, dialogues where you get background information, cutscenes, and action or time-limited sequences.

Put in too much of something and you risk an overly long cutscene, too much background info in one place for the player to absorb, too many locations to explore without being able to understand what’s useful for your next step, too many objects in the inventory… or, worse, you get stupid puzzles put there just to make the game longer, making no sense and having no connection to the narrative side. Good puzzles are the ones connected to the progression of the story; the others are mostly fill-ins.

I think Gemini Rue really does a great job in this area; everything is very well balanced and distributed throughout the whole game. There is maybe just a single puzzle that feels a little like filler (the air flow management through vent opening), but hey, we had some stuff like that even in “The Dig” and it still is a masterpiece.

This kind of “balance”, in my opinion, is an instance of the often-quoted Jenova Chen “Flow in Games” principle of keeping the player challenged but not frustrated. When talking about adventure games, to keep the “flow”, storytelling skills (similar to those used in books, movies, comic books, TV shows, etc.) are as important as technical skills strictly tied to game development (for example, providing a good user interface where the user can navigate the environment and handle the inventory without getting frustrated).

And while storytelling itself gives you endless opportunities, they escalate further when you add interaction: non-linearity, adaptive details to provide a better suspension of disbelief, keeping the player active in story-critical moments to increase immersion… the possibilities are endless, even forgetting for a moment about improvements that could come from the technical side (graphics, AI, NLP). And they say adventure games are dead… 🙂

Of missing textures, Assimp, the Crytek Sponza atrium, and naked women

Recently, I’ve been spending a lot of time in the atrium of the Sponza palace.
No, I didn’t move to Dubrovnik, Croatia, nor am I on vacation: it’s just that the Sponza atrium model, originally created for a rendering contest, has been adopted by the computer graphics community as a test model (especially in the global illumination field), and who am I, working on my master’s thesis on rendering, to pick something else?

The original model dates back to 2002, but what I’m using is the updated and enhanced version made available by Crytek in 2010.

It features textures, normal maps, specular maps and opacity maps, so there’s plenty of data to be used while rendering.

I’m using Assimp to load models, and I stumbled into a couple of problems. I googled them and found nothing, so I thought a little post on the subject was worth writing. I mean: if you get lucky and find something not yet on the Internet, you are morally obliged to blog about it before someone else does. It works like that, right?

Anyway, enough blabbering: the two problems reside in the materials file, sponza.mtl.

  • a texture is missing
    • the material 16___Default refers to gi_flag.tga, not included in the textures archive
  • the specular maps are unreferenced
    • the textures pack features some specular maps (the files named *_spec.tga), but they are never referenced by the materials file

Let’s fix them…

Lenna, save us from the missing texture

Obviously, the missing texture is the one for the only untextured object in the scene: the big vertical drape. Copying some other drape texture as gi_flag.tga (or changing the filename in sponza.mtl, of course) gives visually acceptable results.

But that didn’t make me happy. What was the original texture like? What if it featured a secret message of some sort? What if it was selected from hundreds, by Crytek professionals, to show some effect I could not even imagine? What if this “I WANT THE RIGHT TEXTURE” thing is just some form of OCD and I should see someone?

I googled for the mysterious missing texture, gi_flag.tga, but had no luck finding it. I found files named gi_flag.tga in some code repositories that use the Sponza model, as I do, but after checking them out, the included gi_flag.tga was always some bogus file put there just to keep the asset loader happy.

This poor guy on Stack Overflow asked for the mysterious texture too, but had no luck either.

Finally, my investigations led me to this amazing Crytek video, where they use the Sponza model at the beginning and around 2:30.

(also, the video humiliated me and my test renderer a lot)

It somehow solved the mystery: the missing texture is quite boring, some red material with borders. Yawn.

Disappointment, and I still didn’t have the damn file – but it was time to move on, leaving the gi_flag.tga failure behind me. So, as you may imagine, after some hours of flying around with the camera, testing the shaders, checking whether the alpha blending is done correctly, whether the normal maps are applied properly, etc., the Sponza palace slowly becomes a boring place. Lion heads. RGB drapes. Bricks. Plants. More bricks. All static, dead. Depression is around the corner, especially when you waste 3 hours ’cause you forgot to transpose a matrix, or something like that.

Consequently, I thought that choosing the missing texture could be an occasion to make the place happier. How? Suddenly, the right answer came to my mind: Lenna.



This picture of Lenna Söderberg, scanned in 1973 from a Playboy issue (to be used as test image for a scientific paper), became a standard test image for compression algorithms and image processing.

I encourage you to visit http://www.lenna.org for the full story.

So, I put a computer graphics iconic image into a computer graphics standard test model.

And the iconic image is a naked woman, turning a bitter failure into an epic win. Hey, that’s how I see it.


Of course, you’re welcome to spread the usage of this Lenna texture I prepared as gi_flag.tga.

Get your favourite version (I know, it’s a difficult choice), rename it gi_flag.tga, put it in the textures folder of the Sponza model, and be happy. It’s 2048×2048; you might want to scale it down to 1024×1024 or 512×512.

I properly cropped the original image to make it fit the drape geometry: I’m a gentleman, and gentlemen never let a texture get stretched improperly – that could make the women on them look fat.

Where are my specular maps?

This one is neither interesting nor funny: sponza.mtl is missing the “map_Ks” entries used in the .mtl file format to refer to specular maps. I thought it was some other Assimp quirk (for example, it loads the normal maps as height maps), but when I checked the sponza.mtl file I found no reference at all to the *_spec.tga files, so Assimp was innocent.

Luckily, the files are consistently named, so a specular map named “sponza_thorn_spec.tga” can be tied to the material that refers to the “sponza_thorn_diff.tga” texture.
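As a concrete illustration, a patched material entry ends up looking like the fragment below (the material name and surrounding lines are illustrative, not copied from the real sponza.mtl; only the texture filenames follow the actual naming convention):

```
newmtl thorn
map_Kd sponza_thorn_diff.tga
map_Ks sponza_thorn_spec.tga
```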

That’s what I did, assigning all the provided specular maps to the appropriate materials.
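If you’d rather not do the editing by hand, the pairing can be automated. Here’s a minimal Python sketch of the idea: for every map_Kd line whose *_diff.tga texture has a matching *_spec.tga file, it inserts a map_Ks line right after it. The helper name and the exact insertion style are my own; the naming convention is the one described above.

```python
def add_specular_maps(mtl_text, spec_files):
    """Return mtl_text with a map_Ks line added after each map_Kd line
    whose *_diff.tga texture has a matching *_spec.tga file."""
    out = []
    for line in mtl_text.splitlines():
        out.append(line)
        parts = line.strip().split(None, 1)
        if len(parts) == 2 and parts[0] == "map_Kd":
            diff_name = parts[1]
            spec_name = diff_name.replace("_diff.tga", "_spec.tga")
            if spec_name != diff_name and spec_name in spec_files:
                # keep the original line's indentation for the new entry
                indent = line[:len(line) - len(line.lstrip())]
                out.append(indent + "map_Ks " + spec_name)
    return "\n".join(out) + "\n"
```

To use it, read sponza.mtl into a string, collect the set of *_spec.tga filenames from the textures folder, and write the returned text back out.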

And here is the updated sponza.mtl file, so you don’t have to repeat the boring file editing process:

With this, I made the post concretely useful for other people, so I’m forgiven for all the Lenna delirium.


… you know what? I’m dating this post April 1st, so I can post whatever I want and still pretend to be a serious person.

Learning OpenGL: getting started and picking the “right” libraries

17/08/2013 update: after this post, you should read this addendum

I’ve been working with 3D graphics in the past, but never at low level. That was something I really wanted to do: high level tools are fun for building stuff rapidly, but you know, you never get that sense of fulfilment from knowing exactly what’s going on under the hood.

For working with 3D computer graphics at a low level there aren’t many options to choose from: OpenGL or DirectX. I’m not a big fan of online religious wars, but for me picking OpenGL was a no-brainer: portability and openness have always mattered to me, and anyway OpenGL knowledge is nowadays even more attractive for its applicability on mobile platforms (OpenGL ES) and the web (WebGL). Also, the common opinion is: once you know one of the two, switching to the other is not that hard.

After deciding to learn OpenGL, I started reading documentation and testing small code snippets.

There’s a lot of OpenGL documentation, but when you’re a newbie it’s not easy to choose a book or a series of tutorials that works for you. Getting oriented takes a little time, and there’s another important thing to be aware of: the switch from immediate mode to “modern”, shader-driven OpenGL programming made a lot of the available documentation outdated.

Of course you can learn techniques from old-style tutorials and update their implementations to the current best practices, but that’s something for those who already know the basics – if you are starting to learn, you need some working foundations.

Also, OpenGL is narrowly focused on interacting with the GPU to provide 2D/3D graphics rendering, so to get up and running you basically need a few libraries that handle these tasks:

  • creating an OpenGL context on your operating system (something that is out of the OpenGL specs, and so platform dependent)
  • handling input from at least keyboard/mouse
  • performing mathematical computations commonly used in 3D computer graphics (with vectors, matrices, quaternions…)
  • loading OpenGL extensions
  • loading images to be used as textures
  • loading 3D models (maybe with animations)

So, the main questions are:

  • what documentation?
  • which supporting libraries?

What documentation?

For a quick hands-on introduction that doesn’t involve buying books (and, from what I read online, it looks like there isn’t really any single book everybody agrees is the best way to learn modern OpenGL: you need to use multiple sources anyway), I think these series of tutorials offer lots of useful information:

  • http://www.opengl-tutorial.org/  (using GLM and GLFW)
    • written for beginners, and so very easy to follow: a good place to start
  • http://ogldev.atspace.co.uk/ (using a custom math class and FreeGlut)
    • many interesting tutorials, ranging from basic stuff to more advanced topics such as shadow volumes, deferred shading, and skeletal animation
  • http://www.arcsynthesis.org/gltut/ (using GLM and FreeGlut)
    • a little hard to start with, but often goes deeper than other resources and also provides a good amount of theory. An interesting framework gets built as you proceed with the tutorials, but you often need to take your time and study the code on your own

Other stuff worth checking:

Which supporting libraries?

I propose here my choices, based on my recent experimentation, but beware: I’m not saying these are absolutely the best libraries; I’m saying that, for the little I’ve found out in the past weeks, I think these are the most appropriate ones for someone in a position similar to mine (learning and experimenting while trying to use a modern approach). Next month I could change my mind 🙂

So, context creation/window handling and basic input are often handled by a single library.

Freeglut is a popular choice for this, and was the first library I tested, but then I switched to GLFW.

GLFW looks more actively developed and modern, and provides many options for the creation of the OpenGL context. Freeglut is an open source rewrite of the GLUT library, which dates back to 1998 (!), so while it’s worth checking out (at least for understanding the structure of the many examples that use it), I think it’s better to pick something else.

I prefer the way GLFW handles input (offering both polling and callbacks), and another difference I find important is that it puts you in charge of controlling the main render loop, while with Freeglut you must use the idle/display callbacks.

Alternatively, SDL, SFML and Qt are bigger libraries that provide lots of other functionality and aren’t focused on OpenGL, but support it. Even if they might end up being more productive/useful at some point (for example when you are developing a full-featured game or a 3D-enabled application), I think they aren’t the right choice for learning purposes and experimentation.

So, the first winner is GLFW.

What about mathematics? In many tutorials/examples I see, a custom minimal math library gets built. Probably, math-inclined people like doing that to get a deeper understanding of the involved data types and operations. I’m not one of them: I think in my life I’ve already learnt and forgotten how to invert a matrix too many times, so I’m happy to have a well written and tested library that does it for me. I’m not saying that it can’t make sense at some point (when you start thinking about optimization, device constraints, etc.), but surely not in the learning stage.

And excluding efficiency concerns, picking the math library is easy: OpenGL Mathematics (GLM). Why? Simple: its classes and functions are similar to those of GLSL, the OpenGL shading language. You have to write shaders in GLSL, so you have to learn that syntax anyway, and it’s nice being able to use it in C++ code too. I have also been using the Configurable Math Library (CML) for a bit, and while that’s probably a perfectly viable alternative, and a valid C++ math library, I think that for learning purposes the advantage GLM gives with its syntax is unbeatable. Both are header-only libraries, and that’s cool.

Second winner: GLM.

What’s next? Extension handling. AFAIK there are only two alternatives: GLEW and GLee. And picking the winner here is easy too: GLEW is the more recently updated of the two (2012, providing support for OpenGL 4.3, against 2011, supporting OpenGL 3.0).

What about image handling for texturing? As you can imagine, handling image files is a common task outside the scope of 3D graphics too, so many libraries exist. So, it’s about picking among the ones supporting the image formats particularly used in texturing, maybe explicitly written to be used with OpenGL. Honestly, I still haven’t made my choice in this area. For production purposes, I’d care about speed (and so, texture compression: you might want to check this paper from Nvidia), but for learning, I think it’s more about which library supports more formats, is painless to add as a dependency, etc. I’ve found myself using DevIL, FreeImage and ImageMagick. I still have to test OpenGL Image (GLI), which is from the same author as GLM (something that encourages me to try it). Another library on my radar is Simple OpenGL Image Library (SOIL).

Last step: loading 3D assets. The winner here is Assimp (yeah, I know, some could think Assetimp would have been a better name, but I’m with the developers and their unintentional pun, as explained in the FAQ). It supports a wide range of formats, and as far as I know there aren’t similar alternatives.

If you are a 3D artist too and make your own models, or someone makes them for you following your directions, you might want to stick to a single format and use a specific loader for it, avoiding a library dependency. Otherwise, using Assimp looks reasonable, and saves you from wasting time converting models around.

What about skinning and character animation? I’m not at that point yet, and it’s out of my current scope, but if you’re interested, I’ve heard about Cal3D. Maybe I’ll look into it in a future post; for now I think I’ve already written too much 🙂

Do you think I missed something important? I would be surprised by the opposite, so feel free to leave me a comment!

No, seriously…

…I’m too busy to maintain a blog!

But who cares, I’m going to try anyway. And I’m going to write in English (self-enforced exercise, yeah! Feel free to correct me in the comments, I’ll be happy to fix stuff), attempting to offer my extremely useful and wise posts (…) to a wider audience (and, just a little, to piss off people who think I’m just showing off when I opt for the English edition of some technical book that’s available in Italian too – seriously, I do that just to avoid hilarious stuff like “overflow” translated as “trabocco”. Yeah, I’ve seen that. Really!).

So, what are you supposed to find here in the future? Mostly, boring technical stuff with code snippets, quick hands-on tutorials made mostly as a reminder to myself, and information about what I’m studying or working on. Occasionally I might even go philosophical, you’ve been warned.

Currently, the most probable topics are related to:

  • cross-platform native software development and maintenance with Qt, because of my work as a programmer for Zodiac, a digital delivery platform focused on adventure games (<spam>register now! free games available to test the platform!</spam>)
  • OpenGL and rendering algorithms, because of the master thesis I’m working on in ISISlab, a research lab of the Università degli Studi di Salerno
  • videogame programming and game design, and storytelling in videogames (because it’s what I talk about with my Hexence mates, and the three of us are going to release games, I promise)
  • low level programming, software security, reverse engineering, programming challenges and wargames – because it’s fun stuff

Ok, write you soon. Or not too soon, ’cause you know, I’m too busy to maintain a blog.
