This Super Mario World playthrough shows a machine learning to beat a level on its own, using a learning algorithm known as NEAT. Through an evolutionary process, a neural network was built (or learned) that completes the level. The program controlling Mario is called… MarI/O.

  • N: Neuro
  • E: Evolution of
  • A: Augmenting
  • T: Topologies

The initial playthrough is fascinating, as is the breakdown of what is going on. Well worth the 5 minutes.
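To make the "evolutionary process" concrete, here is a minimal sketch of the kind of generate-score-mutate loop that NEAT builds on. This is not SethBling's actual Lua script: real NEAT also evolves the network's topology (adding nodes and connections) and groups genomes into species, and the fitness function below is a stand-in for "how far right Mario gets before dying."

```python
import random

random.seed(1)  # for a reproducible demo run

# Assumed, simplified setup: each genome is just a fixed-size weight
# vector rather than a full network topology.
GENOME_SIZE = 8
POP_SIZE = 20
MUTATION_RATE = 0.2

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_SIZE)]

def fitness(genome):
    # Stand-in objective: in MarI/O this would be Mario's progress
    # through the level. Here we just reward weights near a target.
    target = [0.5] * GENOME_SIZE
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    # Occasionally nudge a weight with Gaussian noise.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve(generations=50):
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]  # keep the fittest half
        # Refill the pool with mutated copies of survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

Because the fittest half survives unchanged each generation, the best score never regresses; progress comes from the occasional mutation that happens to do better, which is exactly the slow, sparse improvement the commenters below describe seeing after many generations.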

[Update… Ryan in the comments provided this MarI/O version modified to factor in score.]

Author

Fan of making things beep, blink and fly. Created AddOhms. Stream on Twitch. Video Host on element14 Presents and writing for Hackster.IO. Call sign KN6FGY.

3 Comments

    • That is very cool. Around 0:30 Mario takes a hit from Bullet Bill. I wonder if future generations would be able to avoid that hit. Maybe MarI/O doesn’t know how to duck yet?

      • Without any major changes to the original script, I’m not sure. By that recording the algorithm had been running for over 200 generations, equating to over a week straight in clock-unrestricted emulation on a decent PC, and as far as I could tell it seemed to be asymptotically approaching a ceiling on that particular run. The score and behavior hadn’t changed much over many generations, and the gene “mutations” were very sparse and unlikely to actually beat the current best or even be successful at all (unlike earlier, when all different kinds of behavior were common in the pool, which was fun to watch).

        I don’t really think it’s a limitation of the algorithm, just this implementation (and to be fair it did the job pretty well). Restarting the level from the beginning for each run makes progress slow, especially once every run is getting all the way to the end, and a more optimized emulator/script setup could help a lot. Also, the algo only has limited information about the objects on screen with this script, so its ability to learn is limited. That’s why it seems to exhibit more of a trial-and-error/memorization type of learning than an intuitive one. Despite that, it still seemed surprisingly organic.

        Having said all that, it is pretty cool. If I get around to it sometime I’d like to try putting this thing to work in other games/situations.
