Systems, Mechanics, Gameplay

To recap the last post, this project is an attempt to capture Caillois' ilinx as a game with an ever-shifting ruleset which tries to keep the player almost failing the whole way through.

Of course, having a computer generate enjoyable and meaningful game rules from scratch is an impossible task, so some groundwork will have to be laid to accommodate the shifting nature of the rules. Besides, the player always needs some ground truths to work from. For example, unless the main gameplay loop is singularly physics-based, the physics of the world must remain constant.

The game will have to be built solidly around the gradually shifting random element, with little in the way of superfluous goals, mechanics and distractions.

Artificial Neural Networks

I want to make the game react to user input, but not in a scripted or predictably linear way. To make this happen, I have to construct the game with known rules that react to procedural inputs (in addition to the player's input device) so I can let the game fiddle with the numbers. The problem is that it's difficult to predict player actions unless I restrict them greatly.

Machine learning has come a long way and is now beginning to be used to train video game AI. The result of such training is a neural network with learned weights that evaluates inputs and quickly produces an output controlling AI behaviour. However, to make the game react to the user's play style rather than to generic situations, I will have to train the "brain" while the game is in progress. This places certain limits on the game design and informs subsequent design decisions.

Here are some design restrictions that help make training a neural network live feasible:

  • lots of input samples in a short time
  • few input and output variables
  • a simply defined and quantifiable goal for the AI
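To make the restrictions above concrete, here is a minimal sketch of what a live-trainable "brain" could look like: a single sigmoid neuron with a handful of inputs and one output, updated one sample at a time by gradient descent. The class name, inputs and learning rate are all illustrative assumptions, not part of the actual project.

```python
import math
import random

class OnlineBrain:
    """A minimal single-neuron 'brain': few inputs, one output,
    trained one sample at a time so it can keep learning mid-game."""

    def __init__(self, n_inputs, lr=0.1):
        self.weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.bias = 0.0
        self.lr = lr

    def predict(self, inputs):
        """Weighted sum squashed to 0..1 by a sigmoid."""
        z = self.bias + sum(w * x for w, x in zip(self.weights, inputs))
        return 1.0 / (1.0 + math.exp(-z))

    def train(self, inputs, target):
        """One online gradient step on squared error toward the
        quantified goal (the 'target' score for this sample)."""
        out = self.predict(inputs)
        grad = (target - out) * out * (1.0 - out)
        for i, x in enumerate(inputs):
            self.weights[i] += self.lr * grad * x
        self.bias += self.lr * grad
```

Because every frame of play can yield a fresh (inputs, outcome) pair, the many-samples restriction is easy to satisfy, and keeping the input and output counts tiny is what makes per-frame updates cheap enough to run alongside the game.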

Flocking Behaviour

Boids are a type of swarm AI in which many agents each follow extremely simple rules of operation, designed to manifest as natural and intelligent-looking behaviour when simulated as a flock. I have been playing around with boid systems for a while and they seem to be perfect for the above restrictions. One can simulate a great number of boids, measure their success in the game world and train the neural network to do better with the next ones, all while the game is in progress.
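For readers unfamiliar with boids, the three classic rules (separation, alignment, cohesion) can be sketched in a few lines. The weights and radii below are illustrative placeholders, not tuned values, and velocities are updated in place for brevity where a real simulation would double-buffer them.

```python
import math

class Boid:
    """One agent: a position and a velocity, nothing more."""
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def step(boids, radius=50.0, sep=0.05, ali=0.05, coh=0.005):
    """One simulation tick applying the three flocking rules."""
    for b in boids:
        near = [o for o in boids
                if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if not near:
            continue
        n = len(near)
        # Cohesion: steer toward the neighbours' centre of mass.
        cx = sum(o.x for o in near) / n
        cy = sum(o.y for o in near) / n
        b.vx += (cx - b.x) * coh
        b.vy += (cy - b.y) * coh
        # Alignment: match the neighbours' average velocity.
        b.vx += (sum(o.vx for o in near) / n - b.vx) * ali
        b.vy += (sum(o.vy for o in near) / n - b.vy) * ali
        # Separation: push away from boids that crowd too close.
        for o in near:
            d = math.hypot(o.x - b.x, o.y - b.y)
            if 0 < d < radius / 3:
                b.vx += (b.x - o.x) / d * sep
                b.vy += (b.y - o.y) / d * sep
    for b in boids:
        b.x += b.vx
        b.y += b.vy
```

The appeal for live training is that the per-agent rule weights are exactly the kind of small, quantifiable knobs a neural network could adjust between waves of boids.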

Designing for Intended Gameplay Modes

A possible interesting outcome of using a live-learning neural network is that different gameplay styles might have somewhat unexpected results:

  • Trying to practice and play in a homogeneous style all the way through results in the AI catching up quickly and winning the game. Straightforward and rational approaches are punished because they work against achieving ilinx.
  • The other extreme - playing haphazardly, almost randomly "button bashing" - keeps the AI off your back, but mistakes come easily. This mode of play should be more effective because it works very well with the idea of ilinx.
  • Middle ground - almost a synthesis of the above - is a type of play in which the player constantly changes their behaviour in order to confuse the AI, but this has to be executed in a deliberate, almost strategic way.

The goal of Project MOSKOE is to discourage the immediate and rational mode of play in favour of either winging it or meta-playing. The last two should work to make the game "easy to learn, hard to master".
