Featured

10 Ways Object-Oriented Code is More Complex than Functional Code

Object-oriented programs tend to be an order of magnitude or more complex than their functional equivalents. Here are ten ways in which object-oriented programs are more complex:

  1. Mutable State: Object-oriented code often relies on mutable state, which can introduce complexity as objects change their internal state over time. Managing state transitions and ensuring consistency can be challenging, especially in large codebases.
  2. Inheritance Hierarchy: Inheritance hierarchies can become complex and difficult to manage as the number of classes and their relationships increase. Deep inheritance trees can lead to tight coupling and make code harder to understand and maintain.
  3. Coupling and Dependency Management: Object-oriented code tends to have higher coupling between objects, making it more challenging to manage dependencies. This can result in cascading changes and difficulties in modifying or replacing objects without affecting other parts of the system.
  4. Side Effects: Object-oriented code often involves methods that produce side effects, modifying state outside of the local context. This can lead to unexpected behavior and make code harder to reason about.
  5. Decentralization of Control: Since each object is in theory responsible for as much of its part of the world as possible, emergent system behavior tends to become distributed across multiple classes. This leads to anti-patterns that OO programmers call ‘Shotgun Surgery’ where a single coherent change to system behavior requires changes across a large and unpredictable number of classes and methods.
  6. Object Lifecycle: Objects have their own lifecycle, including creation, initialization, and destruction. Managing object lifecycles, especially in complex systems, can be challenging and error-prone. Lifecycle concerns are omitted entirely when you focus on transforming data rather than simulants whose state indirectly represents data.
  7. Object Identity and Identity-based Operations: Object identity introduces additional complexity, especially when comparing or manipulating objects based on their identity rather than their value. This can lead to unexpected behavior and bugs.
  8. Polymorphism and Dynamic Dispatch: While polymorphism is a powerful feature of OOP, it can introduce complexity when dealing with dynamic dispatch and resolving method calls at runtime. It can be harder to track and understand the flow of execution.
  9. Inversion of Control and Frameworks: Object-oriented code often relies on frameworks and dependencies, which can add complexity. Understanding and managing the control flow within a framework can be challenging, especially for newcomers to the codebase.
  10. Testing and Mocking: Unit testing object-oriented code can be more complex due to the need for setting up and managing object states, dealing with dependencies, and mocking objects. This can make testing more cumbersome and increase the potential for test failures.
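Several of these points – mutable state (1), side effects (4), and identity-based aliasing (7) – can be seen in just a few lines. This is a minimal sketch in Python, chosen for brevity rather than as a claim about any particular OO language:

```python
# Two references alias one mutable object, so a "local" change is
# visible far away -- the classic shared-mutable-state hazard.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount  # side effect: mutates internal state in place

checking = Account(100)
report_view = checking            # an alias, not a copy
checking.withdraw(30)
assert report_view.balance == 70  # the "independent" view changed too

# The value-oriented alternative: each operation returns a new value,
# so existing references keep their meaning.
def withdraw_fp(balance, amount):
    return balance - amount

old_balance = 100
new_balance = withdraw_fp(old_balance, 30)
assert (old_balance, new_balance) == (100, 70)
```

The functional version makes the state transition explicit in the return value, which is exactly what makes it easier to reason about.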

10 Ways Functional Programming Makes Development Less Painful

Functional programming is a programming paradigm that emphasizes immutability, pure functions, and declarative programming. Here are 10 ways functional programming makes software development less painful:

  1. Improved Readability: Functional programming emphasizes declarative code, where the focus is on what the code should accomplish rather than how it should be done. This leads to more expressive and readable code.
  2. Modularity and Reusability: Functional programming encourages breaking down problems into smaller, composable functions. These functions can be reused in different contexts, promoting modularity and code reuse.
  3. Easier Debugging: With immutable data and pure functions, debugging becomes easier because you can isolate and test individual functions or parts of your code without worrying about hidden dependencies or mutable state stomping over relevant input values.
  4. Concurrency and Parallelism: Functional programming promotes the use of immutable data and side-effect-free functions, which makes it easier to reason about and manage concurrency and parallelism. It reduces the chance of race conditions and makes it possible to write concurrent code with fewer synchronization concerns.
  5. Testability: Pure functions in functional programming are easier to test since they produce the same output for the same input, regardless of the program’s state. This makes it easier to write automated tests and ensure code correctness.
  6. Maintainability: Functional programming encourages writing code with clear boundaries between different components and minimal dependencies. This modular and loosely coupled design improves code maintainability, as changes to one part of the codebase are less likely to have unintended consequences elsewhere.
  7. Scalability: Functional programming principles align well with distributed and parallel computing. By avoiding shared mutable state, functional programs can scale horizontally across multiple machines and take advantage of parallel execution.
  8. Predictability: Functional programming discourages hidden side effects and mutable state, leading to more predictable code behavior. This predictability simplifies reasoning about code, understanding its flow, and predicting its performance.
  9. Resilience to Change: Functional programming’s emphasis on immutability and pure functions reduces the impact of changes in one part of the codebase on other parts. This makes code more resilient to change, as modifications can be localized and their effects are easier to understand and manage.
  10. Error Handling: FP provides robust mechanisms for handling errors and exceptional cases. Techniques like monads, option types, and error handling combinators allow for cleaner and more structured error handling code.
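The testability point (5) is easy to demonstrate concretely. The sketch below uses Python for brevity, with a hypothetical apply_damage function: because it is pure, its tests are just a table of inputs and expected outputs, with no setup, mocks, or shared state:

```python
# A pure function: output depends only on input, so testing it requires
# no object construction, no mocking, and no world state.
def apply_damage(hp, damage, armor):
    effective = max(0, damage - armor)  # armor absorbs part of the hit
    return max(0, hp - effective)       # hp never drops below zero

# The test suite is a table of cases.
assert apply_damage(100, 30, 10) == 80   # partial absorption
assert apply_damage(10, 30, 0) == 0      # clamped at zero
assert apply_damage(100, 5, 10) == 100   # fully absorbed
```

Compare this with testing an equivalent method on a stateful character object, where each case would first have to construct the object and put it into the right state.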

But how does a program like the Nu Game Engine make functional programming as fast or faster than object-orientation? Stay tuned to find out our (open) secrets!


Why ‘Quality of Life’ Features Can (Sometimes) Ruin Your Retro Game Design

I’ve gotten a bit of feedback about OmniBlade (current release here), a portion of which I can’t directly address, specifically in terms of Quality of Life (QoL) features.

As far as OmniBlade goes, it’s in a weird design space as a retro-style game. Firstly, if you give a retro-style game all of the quality-of-life features that people have come to expect from modern games, it ceases in many ways to be a retro-style game. You have to give them SOME quality-of-life stuff, for sure. But the question is where to draw the line.

Take Final Fantasy on the NES for example. One of the reasons dungeons in Final Fantasy are so intense and satisfying is that the game lacks the Quality of Life feature that provides a way to instantly exit them (at least for about the first half of the game). In doing so, it forces players to engage in resource management strategy far more intently than if they could just fly out with an Exit spell or item. Resource management, after all, is at the heart of what makes old school RPGs so interesting and fun (despite all of the technical limitations).

More importantly in this case, the adrenaline rush the player gets when barely escaping a level with their life is hard to match. When you have several minutes of progress on the line, it makes the stakes much higher than if you can just spam continue whenever you die.

Additionally, the lack of an Exit action makes it so levels can require multiple runs before getting to their boss and beating them, creating a ‘looting’ style gameplay mechanic. This is a great game mechanic that newer games don’t at all capture, again due to being thwarted by QoL. This looting style of play is specifically something I was going for with OmniBlade (although due to balance issues, I think it’s currently a little overdone). If I directly relented to players’ QoL feature requests, the core gameplay mechanics around which the game was built would cease to function properly.

However, there is a partial compromise that satisfies player demands. Having a two-way warp gate after each mid-boss keeps players from backtracking through areas with weaker enemies, while still keeping the looting style of gameplay intact (when exploring a new area, you have to at least make it back to the previous warp gate to get out alive). UPDATE: this has been implemented and works as well as predicted.

This dynamic between retro-style mechanics and QoL features is precisely why you have to be extremely careful about which, and what type of, QoL features you give the player when making a retro-style game. Unfortunately, modern reviewers will most likely dock your game for lacking QoL regardless of whether that lack actually improves the intended part of the experience. Some game journalists are probably going to dock your game for having any challenge or stakes at all. But them’s the breaks.

It’s good to remember that retro-style games are not modern games, and that’s intentional. Modern games are often less interesting than retro games, primarily because they don’t draw in the player despite all of their graphical luster. One thing you often see happen in modern game development is that the developers start with a fun and challenging game, but by the time the testers and previewers get all of their pet QoL features in, it’s no longer much fun because many of the game mechanics have been watered down by QoL features. How can you be emotionally invested in your character’s survival if every time you die, you just go back three seconds in time and lose nothing? All those cool gameplay mechanics your development team spent so much time on come to mean nothing because the player can just get out of trouble for free with a QoL feature.

So don’t let QoL requests, as reasonable as they may initially seem, compromise your retro-style game’s design. If you can find a way of addressing the issue for which the QoL is being suggested without sacrificing your existing design goals, that is probably better. Always listen to feedback, but also be cautious when triaging QoL feature requests.


Ant vs. Space Marine


There are two broad categories of software systems – interruptible and non-interruptible. The interruptible system offers high-level capabilities such as dynamic event handling, but does so at the cost of performance due to an impedance mismatch with the underlying commodity hardware. The non-interruptible system eschews such high-level capabilities to avoid their associated impedance mismatches in order to offer greater quantitative scalability. Which approach the working programmer should choose depends on whether he is solving a problem that imposes requirements relating to its qualitative or quantitative aspects.

This choice is not made clear purely based on the model category our programmer is trying to work in, either. Consider a game simulation where there exist both space marines, who are of high consequence to the state and outcome of the simulation, and ants in a massive ant colony, which are not. Both of these abstractions fall into the model category of simulants in a simulation which they all share. Space marines, being armed with giant bazookas and able to pick up keys that afford entry to new areas, have the potential to affect the simulation in the most profound ways and, given the hectic nature of space marine combat, at unpredictable times. Our humble ant, on the other hand, has no profound effect on the simulation – potentially no observable effect at all. The only way in which the ant’s presence dwarfs the space marine’s is in his massive numbers. Even if our lowly ant were able to affect the simulation in some small way, it would be in a very predictable fashion, bounded to only a small number of effects easily enumerated by the programmer up front. As you’ve probably guessed, the space marine best falls into the category of the interruptible system whereas the ant best falls into the category of the non-interruptible system. Like the space marine and the ant, it is a classic disparity between quality and quantity. To the author, all the world seems a war between these two opposites, and software systems are not excluded.

As is now known by many game developers, the latest innovation in the war for scalability is the entity-component-system (ECS) programming model. Entities are little more than an identifier and an indirectly-associated collection of razor-thin ECS components. The ECS offers not just a possible light-weight representation of a simulant, but also a memory-access- and vectorization-friendly means of organizing its components. The ECS therefore makes a lovely home for our masses of ants. After all, ants have such a muted effect on the world that their implementation need know very little about – nor affect much of – the outside world.
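To make this concrete, here is a hypothetical, minimal struct-of-arrays ECS sketch in Python (illustrative only – the names are invented, not Nu's API): entities are just integer ids, each component type lives in its own dense array, and a "system" is a tight loop over plain data.

```python
# Minimal struct-of-arrays component storage: one dense array per field,
# indexed in lockstep, so systems walk contiguous memory.
class PositionStore:
    def __init__(self):
        self.ids = []   # entity id occupying each slot
        self.xs = []    # x position per slot
        self.vxs = []   # x velocity per slot

    def add(self, entity_id, x, vx):
        self.ids.append(entity_id)
        self.xs.append(x)
        self.vxs.append(vx)

def movement_system(store, dt):
    # a "system": a tight loop over the component arrays, touching
    # nothing outside them
    for i in range(len(store.ids)):
        store.xs[i] += store.vxs[i] * dt

# Ten thousand ants are just ten thousand rows of data.
ants = PositionStore()
for ant_id in range(10_000):
    ants.add(ant_id, 0.0, 1.0)
movement_system(ants, dt=0.5)
assert ants.xs[0] == 0.5 and ants.xs[9_999] == 0.5
```

In a real ECS the arrays would be unboxed and vectorizable; the point of the sketch is only the shape of the data and the loop.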

The space marine, however, is the complementary case. Whereas ants might number in the 10s or 100s of thousands, space marines are likely to number in the dozens or hundreds. The quantitative scalability advantage of the ECS isn’t nearly as meaningful here. What’s more, the space marine is the main force of causality in our simulation, meaning that living inside the myopic constraints of an ECS poses increasingly-hard problems at the level of programmability. Sure, our programmer could attempt to sit down and graph out all possible causal sequences involved with the marine in order to appropriately chain the systems together, but that will not be an easy or stable method throughout the lifetime of development. Given that the things that marines like to affect most of all are other marines (via explosive munitions or hails of bullets), circularity in our hand-crafted causal sequence is difficult to avoid where possible, and difficult to sequence where unavoidable. So why should the space marine be force-fitted into an ECS when he would be much more comfortable in the more classic event-based system? Why not accommodate both of these simulants equally by exposing one API for the interruptible, event-based system and another API for the non-interruptible, ECS-based system? In short, why not both?
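The interruptible side can be sketched just as minimally. Below is a hypothetical event bus in Python (again illustrative, not Nu's API): the marine's consequences hang off subscriptions that can fire at unpredictable times and reach arbitrary parts of the simulation.

```python
# A tiny publish/subscribe event bus: handlers register interest in an
# event name, and publishing the event invokes them all.
class EventBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def publish(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)

bus = EventBus()
log = []
# Firing a bazooka can ripple into unrelated subsystems -- the hallmark
# of the interruptible, event-based style.
bus.subscribe("BazookaFired", lambda pos: log.append(f"explosion at {pos}"))
bus.subscribe("BazookaFired", lambda pos: log.append("camera shake"))
bus.publish("BazookaFired", (3, 4))
assert log == ["explosion at (3, 4)", "camera shake"]
```

This flexibility is exactly what the dense-array ECS loop gives up, and why the two styles suit different simulants.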

This is what both Unity and Nu do, and the author suspects that embracing this duality is better than forcing every possible simulant into the new and cool but very constraining ECS model.

Functional Programming and Data-Oriented Design

These two complementary paradigms, rather than object-orientation, could be the future of game development.

Note: This article is a follow up to the previous article, ‘Why Functional Programming Works for Games’.

When we tell the story of functional programming and data-oriented design in games, we tell a story of trade-offs.

What do you get with Functional Programming in games?

  1. A declarative programming interface, such as Elm-style.
  2. Ease in reasoning about program behavior at a high level.
  3. The complete freedom of an open system.
  4. A great debugging experience.

And what do you pay to get it?

  1. Higher performance costs depending on how declarative you want to be.

Conversely, what do you get with Data-Oriented Design?

  1. Unrivaled speed compared to high-level programming paradigms.
  2. Ease in reasoning about program performance at a low level.

And what do you pay to get it?

  1. A prohibition on using abstractive language features and techniques.
  2. Difficulty in reasoning about program behavior at a high level.
  3. A requirement to work in an exclusively closed system.

By looking at these trade-offs, we see that functional programming and data-oriented programming are quite orthogonal. But for games, that’s actually a great thing because orthogonal means complementary!

So how do we know which approach to use when writing games?

It’s all about knowing in what way your game needs to scale. For example, let’s look at game Entities in the Nu Game Engine.

In Nu, you have 3 tiers of Entity configurations out-of-the-box –

  1. Optimized (Omnipresent = true)
  2. Elm-style (use the EntityDispatcher<_, _, _> type, Omnipresent = true)
  3. Cullable (Omnipresent = false)

Depending on the tier of the Entity, different numbers of them can be changing every frame before soaking the CPU –

  1. Optimized => 25,000* On-Screen
  2. Elm-Style => 12,000* On-Screen
  3. Cullable => 12,000* On-Screen

*These numbers are discovered by running Nu’s Metrics project.

As you can see, you can get additional scalability if you’re willing to dial back declarativity. And since most types of Entities won’t need scalability beyond the hundreds, you can stick with Elm-style or Cullable Entities by default. If you need thousands of bullets flying around the screen, you can use the Optimized configuration.

What about bleeding edge games where you have certain game elements on screen that number in the tens or hundreds of thousands? Well, that’s where you have to eschew Nu’s normal path of programming and embrace the data-oriented style – https://github.com/bryanedds/Nu/blob/master/Nu/Nu/Ecs/Ecs.fs. Even if you were using an object-oriented system, you would still need to duck out to an array-based approach like an ECS to get to the next level of scalability.

4. Data-Oriented / ECS => 100,000+

In my view, game engines of the future will use a mixture of functional programming and data-oriented design. Object-orientation will take a back seat to these emerging paradigms, it having only succeeded in giving us the worst of both worlds — insufficient abstraction and inefficient performance.

By using both functional programming and data-oriented design in games, we will aim to get the best of both worlds!

UPDATE:

5. Compute Shaders => 1,000,000+

And I think I should mention the next level of scalability beyond ECS – compute shaders!

Check out what people are doing with GPU-only simulation programming here –

Of course, Nu doesn’t provide anything specific to enable this, but it should be pretty straightforward to expose support for this once we implement OpenGL rendering.

So to summarize, entity scaling goes approximately as follows –

Elmish =>          1000's
Classic Nu =>      10000's (same for OOP APIs like Unity)
ECS =>             100000's
Compute Shaders => 1000000's

Nu supports all of these gradients now except for compute shaders, and those are on the horizon as well.

A Game Engine in the Elm Style!

A ‘Nu’ way to make games!

The Nu Game Engine was the world’s first practical, functional game engine. And it has recently accomplished another first — allowing developers to use Elm-style architecture (AKA, model-view-update) to build their games in the cleanest, most understandable possible way!

This article will go over two examples often used by Elm developers, here written in Nu. The first shows a tiny Elm-style UI, and the second a little Mario-like example.

For a quick intro to Elm and the Elm-style, see here – https://guide.elm-lang.org/

Because it is simpler, let’s start by looking at Nu’s way of implementing the canonical Elm-style UI –

The full code is as follows (see it on Github) –

namespace Nelmish
open Prime
open Nu
open Nu.Declarative

// this is our Elm-style model type
type Model =
    int

// this is our Elm-style message type
type Message =
    | Decrement
    | Increment
    | Reset
    interface Nu.Message

// this is our Elm-style game dispatcher
type NelmishDispatcher () =
    inherit GameDispatcher<Model, Message, Command> (0) // initial model value

    // here we handle the Elm-style messages
    override this.Message (model, message, _, _) =
        match message with
        | Decrement -> just (model - 1)
        | Increment -> just (model + 1)
        | Reset -> just 0

    // here we describe the content of the game including its one screen, one group, three
    // button entities, and one text control.
    override this.Content (model, _) =
        [Content.screen "Screen" Vanilla []
            [Content.group "Group" []
                [Content.button "Decrement"
                    [Entity.Position == v3 -128.0f 96.0f 0.0f
                     Entity.Text == "-"
                     Entity.ClickEvent => Decrement]
                 Content.button "Increment"
                    [Entity.Position == v3 128.0f 96.0f 0.0f
                     Entity.Text == "+"
                     Entity.ClickEvent => Increment]
                 Content.text "Counter"
                    [Entity.Position == v3 0.0f 0.0f 0.0f
                     Entity.Text := string model
                     Entity.Justification == Justified (JustifyCenter, JustifyMiddle)]
                 if model <> 0 then
                    Content.button "Reset"
                       [Entity.Position == v3 0.0f -96.0f 0.0f
                        Entity.Text == "Reset"
                        Entity.ClickEvent => Reset]]]]

This example code creates a Nu game program that shows a + button as well as a – button, which change the numeric value of the counter text when clicked. Additionally, there is a reset button that will revert the counter to its original value (and which only exists when the value has changed).

Let’s step through each part of the code, from the top –

// this is our Elm-style model type
type Model =
    int

Here we have the Model type that users may customize to represent their simulant’s ongoing state. Here we use just an int to represent the counter value shown by the Counter label. If we were to write, say, a custom Text widget, the Model type would be a string instead of an int. You’re not limited to primitive types, however — you may make your model type as sophisticated as you see fit.

// this is our Elm-style message type
type Message =
    | Decrement
    | Increment
    | Reset
    interface Nu.Message

This is the Message type that represents all possible changes that the Message function will handle.

// this is our Elm-style game dispatcher
type NelmishDispatcher () =
    inherit GameDispatcher<Model, Message, Command> (0) // initial model value

This code does three things –

  1. It declares a containing scope for the Elm-style function overrides (seen coming up next) and packages them as a single plug-in for use by external programs such as Nu’s world editor, Gaia.
  2. It allows the user to specify the Elm-style model, message, and command types. Here we pass the empty Command type for the command parameter since this simulant doesn’t utilize commands.
  3. The base constructor function takes as a parameter the initial Model value (here, 0).

Let’s look at the next bit –

        // here we handle the Elm-style messages
        override this.Message (model, message, _, _) =
            match message with
            | Decrement -> just (model - 1)
            | Increment -> just (model + 1)
            | Reset -> just 0

This is the Message function itself. All it does is match each message it receives to an expression that changes the Model value in an appropriate way.

“But what is that just function?”

Good question! Strictly speaking, a Message function returns both a new Model and a list of signals for processing by the Message and (here unused) Command functions. But since we don’t need to generate additional signals, I use the just function to automatically pair an empty signal list with the new model value. It’s just a little bit of syntactic sugar to keep things maximally readable!
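The idea behind just can be sketched in a couple of lines. The Python below is a hypothetical rendering for illustration only – Nu's actual just is an F# function of the same shape:

```python
# 'just' pairs a new model value with an empty list of follow-up signals.
def just(model):
    return (model, [])

# A message handler can then stay terse while still returning the full
# (model, signals) pair the engine expects.
def handle_message(model, message):
    if message == "Increment":
        return just(model + 1)
    if message == "Reset":
        return just(0)
    return just(model)

assert handle_message(5, "Increment") == (6, [])
assert handle_message(3, "Reset") == (0, [])
```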

    // here we describe the content of the game including its one screen, one group, three
    // button entities, and one text control.
    override this.Content (model, _) =
        [Content.screen "Screen" Vanilla []
            [Content.group "Group" []
                [Content.button "Decrement"
                    [Entity.Position == v3 -128.0f 96.0f 0.0f
                     Entity.Text == "-"
                     Entity.ClickEvent => Decrement]
                 Content.button "Increment"
                    [Entity.Position == v3 128.0f 96.0f 0.0f
                     Entity.Text == "+"
                     Entity.ClickEvent => Increment]
                 Content.text "Counter"
                    [Entity.Position == v3 0.0f 0.0f 0.0f
                     Entity.Text := string model
                     Entity.Justification == Justified (JustifyCenter, JustifyMiddle)]
                 if model <> 0 then
                    Content.button "Reset"
                       [Entity.Position == v3 0.0f -96.0f 0.0f
                        Entity.Text == "Reset"
                        Entity.ClickEvent => Reset]]]]

Here we have the Content function. The Content function is mostly equivalent to the View function in Elm. Here the Content function defines the game’s automatically-created (and destroyed as needed) simulants. Studying this structure, we can see that we have a simulant hierarchy like this –

Screen/
    Group/
        Increment
        Decrement
        Counter
        Reset

The Content function declares that the above hierarchy is instantiated at run-time. Each Content clause can also define its respective simulant’s properties and event handlers in a declarative way.

                [Content.button "Decrement"
                    [Entity.Position == v3 -128.0f 96.0f 0.0f
                     Entity.Text == "-"
                     Entity.ClickEvent => Decrement]

Here we have the Decrement button’s Text property defined as “-”, its Position translated up and to the left, and its ClickEvent producing the Decrement message that was handled above.

                 Content.button "Increment"
                    [Entity.Position == v3 128.0f 96.0f 0.0f
                     Entity.Text == "+"
                     Entity.ClickEvent => Increment]

Here is the Increment button, which produces the Increment message.

                 Content.text "Counter"
                    [Entity.Position == v3 0.0f 0.0f 0.0f
                     Entity.Text := string model
                     Entity.Justification == Justified (JustifyCenter, JustifyMiddle)]

Here we see the first use of the := operator. This tells the Text property to set its value to the result of the expression on the right-hand side whenever the result of that expression changes.

                 if model <> 0 then
                    Content.button "Reset"
                       [Entity.Position == v3 0.0f -96.0f 0.0f
                        Entity.Text == "Reset"
                        Entity.ClickEvent => Reset]]]]

And lastly, here we have a button that exists only while the outer if expression evaluates to true. So, as long as the model value is non-zero, the button entity will exist. Otherwise, it will not. The engine takes care of creating and destroying the button entity accordingly.

Now let’s look at Elm-Mario in Nu (see it on Github here) –

The code here is a bit more involved. It demonstrates how Nu uses its scalable built-in physics engine, applying forces rather than using ad-hoc physics routines –

namespace Elmario
open Prime
open Nu
open Nu.Declarative

// this module provides global handles to the game's key simulants.
// having a Simulants module for your game is optional, but can be nice to avoid duplicating string literals across
// the code base.
[<RequireQualifiedAccess>]
module Simulants =

    let Screen = Nu.Screen "Screen"
    let Group = Screen / "Group"
    let Elmario = Group / "Elmario"

// this is our Elm-style command type
type Command =
    | Update
    | Jump
    | Nop
    interface Nu.Command

// this is our Elm-style game dispatcher
type ElmarioDispatcher () =
    inherit GameDispatcher<unit, Message, Command> (())

    // here we define the game's properties and event handling
    override this.Initialize (_, _) =
        [Game.UpdateEvent => Update
         Game.KeyboardKeyDownEvent =|> fun evt ->
             if evt.Data.KeyboardKey = KeyboardKey.Up && not evt.Data.Repeated
             then Jump :> Signal
             else Nop :> Signal]

    // here we handle the Elm-style commands
    override this.Command (_, command, _, world) =
        match command with
        | Update ->
            let physicsId = Simulants.Elmario.GetPhysicsId world
            if World.isKeyboardKeyDown KeyboardKey.Left world then
                let world =
                    if World.isBodyOnGround physicsId world
                    then World.applyBodyForce (v3 -2500.0f 0.0f 0.0f) physicsId world
                    else World.applyBodyForce (v3 -750.0f 0.0f 0.0f) physicsId world
                just world
            elif World.isKeyboardKeyDown KeyboardKey.Right world then
                let world =
                    if World.isBodyOnGround physicsId world
                    then World.applyBodyForce (v3 2500.0f 0.0f 0.0f) physicsId world
                    else World.applyBodyForce (v3 750.0f 0.0f 0.0f) physicsId world
                just world
            else just world
        | Jump ->
            let physicsId = Simulants.Elmario.GetPhysicsId world
            if World.isBodyOnGround physicsId world then
                let world = World.playSound Constants.Audio.SoundVolumeDefault (asset "Gameplay" "Jump") world
                let world = World.applyBodyForce (v3 0.0f 140000.0f 0.0f) physicsId world
                just world
            else just world
        | Nop -> just world

    // here we describe the content of the game including elmario, the ground he walks on, and a rock.
    override this.Content (_, _) =
        [Content.screen Simulants.Screen.Name Vanilla []
            [Content.group Simulants.Group.Name []
                [Content.sideViewCharacter Simulants.Elmario.Name
                    [Entity.Position == v3 0.0f 54.0f 0.0f
                     Entity.Size == v3 108.0f 108.0f 0.0f]
                 Content.block2d "Ground"
                    [Entity.Position == v3 0.0f -224.0f 0.0f
                     Entity.Size == v3 768.0f 64.0f 0.0f
                     Entity.StaticImage == asset "Gameplay" "TreeTop"]
                 Content.block2d "Rock"
                    [Entity.Position == v3 352.0f -160.0f 0.0f
                     Entity.Size == v3 64.0f 64.0f 0.0f
                     Entity.StaticImage == asset "Gameplay" "Rock"]]]]

First we see something new – an explicit simulant reference for our main entity, Elmario –

// this module provides global handles to the game's key simulants.
// having a Simulants module for your game is optional, but can be nice to avoid duplicating string literals across
// the code base.
[<RequireQualifiedAccess>]
module Simulants =

    let Screen = Nu.Screen "Screen"
    let Group = Screen / "Group"
    let Elmario = Group / "Elmario"

This simply allows us to refer to our simulants from multiple places in the code without duplicating their address names.

Also new here is the use of our game physics system. Because the physics here update the engine state (yet still in a purely functional manner), Elm-style commands are used rather than messages.

// this is our Elm-style command type
type Command =
    | Update
    | Jump
    | Nop
    interface Nu.Command

Here we have three commands: one to update the character for left and right movement, one to make him jump, and Nop, which is a command that doesn’t do any operations but does allow us to write bindings that may or may not result in a command (as we’ll see below).

// this is our Elm-style game dispatcher
type ElmarioDispatcher () =
    inherit GameDispatcher<unit, Message, Command> (())

Since we don’t need a model this time, we simply pass unit for the first type parameter. And since we’re using just commands (no messages this time), we pass the empty Message type for the second type parameter and Command for the third type parameter.

Next up, we will use Nu’s ability to define properties explicitly instead of from simulant property lists –

    // here we define the game's properties and event handling
    override this.Initialize (_, _) =
        [Game.UpdateEvent => Update
         Game.KeyboardKeyDownEvent =|> fun evt ->
             if evt.Data.KeyboardKey = KeyboardKey.Up && not evt.Data.Repeated
             then Jump :> Signal
             else Nop :> Signal]

Let’s break each part down –

        [Game.UpdateEvent => Update

The first initializer creates an Update command every frame (when the Game simulant itself is updated).

         Game.KeyboardKeyDownEvent =|> fun evt ->
             if evt.Data.KeyboardKey = KeyboardKey.Up && not evt.Data.Repeated
             then Jump :> Signal
             else Nop :> Signal

The second initializer creates either a Jump or Nop command depending on the state of the keyboard keys as described by its lambda expression.

Let’s take a high-level look at the this.Command function that handles these commands (with some code snipped for brevity) –

// here we handle the Elm-style commands
override this.Command (_, command, _, world) =
    match command with
    | Update -> // snipped here...
    | Jump -> // snipped here...
    | Nop -> just world

Commands are different from Messages in Nu. While a Message can transform the simulant’s Model value, a Command can transform the World itself. Thus, a Command can effectively ‘side-effect’ the world whereas a Message cannot. Think of Commands as a way of controlling effects.

Anyways, let’s look in detail at the Update command handler –

    | Update ->
        let physicsId = Simulants.Elmario.GetPhysicsId world
        if World.isKeyboardKeyDown KeyboardKey.Left world then
            let world =
                if World.isBodyOnGround physicsId world
                then World.applyBodyForce (v3 -2500.0f 0.0f 0.0f) physicsId world
                else World.applyBodyForce (v3 -750.0f 0.0f 0.0f) physicsId world
            just world
        elif World.isKeyboardKeyDown KeyboardKey.Right world then
            let world =
                if World.isBodyOnGround physicsId world
                then World.applyBodyForce (v3 2500.0f 0.0f 0.0f) physicsId world
                else World.applyBodyForce (v3 750.0f 0.0f 0.0f) physicsId world
            just world
        else just world

This code looks at the state of the keyboard to determine whether to apply a linear force to the physics body associated with our entity (looked up via its PhysicsId), and how much force to apply depending on whether the body is on the ground or in the air. This type of code must go inside Command because applying a physical force is a ‘side-effect’ on the World.

Additionally, note that when dealing with physics-driven entities, we usually prefer to move them by applying forces rather than setting their position directly.

        | Jump ->
            let physicsId = Simulants.Elmario.GetPhysicsId world
            if World.isBodyOnGround physicsId world then
                let world = World.playSound Constants.Audio.SoundVolumeDefault (asset "Gameplay" "Jump") world
                let world = World.applyBodyForce (v3 0.0f 140000.0f 0.0f) physicsId world
                just world
            else just world

Jumping is done similarly, but we also tell the engine to play a jump sound when the entity performs his acrobatic feat.

Having studied the code for the previous example, Nelmish, you should find the rest of the code for Elmario self-explanatory.

Zoom Out

So that wraps up the introductory explanation, but let’s zoom out to add some interesting conceptual detail. In Nu, unlike Elm, this approach is fractal. Each simulant, be it a Game, a Screen, a Group, or an Entity, is its own self-contained Elm-style program. In this way, Nu is perhaps more sophisticated than Elm — it’s more like a hierarchy of dynamically-configured Elm programs, where each one can be individually loaded by the game as needed and configured by external programs such as Gaia, the real-time world editor. This gives Nu an additional level of pluggability and dynamism that is required for game development.

Nu’s Elm-style / MVU architecture is a great new way to build games. By leveraging this powerful architecture, we turn game development from a big-ball-of-mud OO nightmare into a task that is fun again!

Likely Questions

Let’s wrap up with some questions that people might be likely to ask about this approach –

Q. “Why not just use Elm to make games?”

A. Well, unfortunately, Elm isn’t really set up for that. Unlike Nu, Elm does not integrate a fast, imperative physics engine. Elm doesn’t include a WYSIWYG editor like Gaia. Elm doesn’t have a game-centric asset pipeline. Elm was not built to scale in the way that a modern game engine must. There are so many game-specific tasks that an engine needs that are entirely out of the scope of Elm’s intended use case. In short, Elm wasn’t designed to build games — but Nu was.

Q. “What about the performance of using this high-level programming interface?”

A. Nu’s MVU implementation is so fast that it should handle just about anything you throw at it, save for things that need to go into an ECS. Even bullets in a bullet hell shooter should be workable with Nu’s declarative Elmish API. However, you can always utilize Nu’s classic API for when you need a slight speed-up, or again, its built-in ECS. This is great because you get simplicity-by-default while being able to opt in to additional scalability where you need it. And that’s what functional programming is all about!

S-Expression The Ultimate Format

A Powerful F# Library Shows How S-Expressions Can be Superior to XML and Json

First, a little historical context…

People used to love XML, especially those in the Microsoft camp. XML represented a powerful enabling technology for the programmers of that day — data-driven programming without ad-hoc plain text formats. Out of this love for XML came many technologies on which we still rely. For example, XML has been used –

As a Data Description Format –

Often good for serializing objects at run-time, XML enabled the following storage solution –

For Visual Studio project files –

In the late 90’s and early 2000’s Microsoft were quite enamored with XML as a solution to many long-standing problems…

As a DSL language such as XAML –

Microsoft also saw XML as a solution to its newer and more interesting problems –

XML is Dead! Long Live… Json?

Over time, the industry has come to know XML as a technology with many disadvantages. The first of its problematic attributes is its verbosity. Tag names are duplicated everywhere, and angle brackets are all over the place. Attribute tags save some space, but their syntax is somewhat bizarre and entirely non-normal, requiring programmers to special-case their interpreters to deal with their alternative structure. The second of its problems is that XML is ‘stringly’ typed — that is, the only type it can explicitly represent is a string. This adds an additional parsing phase to any interpreter that wants to pull out numbers or dates. Third are its many issues with special characters. For those who are familiar, I need not elaborate further.

Around the time people were discovering XML’s disadvantages, the use of JavaScript was becoming very widespread. JavaScript offered up its own ad-hoc data description format, Json, which became decidedly popular. So much so, in fact, that people who weren’t even using JavaScript began turning away from XML toward Json. Unfortunately, Json turned out to have its own disadvantages, some of which are surprisingly complementary to XML’s.

As far as advantages go, Json does have support for limited type information. Json can, for example, tell the difference between a string and a number. Unlike with XML, however, you’d be hard-pressed to put together a DSL like XAML in Json. Without something like XML’s attributes, it’s just not sufficiently information-dense. While XML does a reasonably good job at capturing DSLs, Json does comparatively poorly.

Code is Data Too!

The area where both XML and Json fall down is in enabling scripting languages when you find out that you need them. If you’ve ever seen someone try to implement an ‘if’ or ‘foreach’ form using XML or Json syntax, you will know what I mean. Both of these languages were designed to be vanilla data languages, and neither composes well when the data it describes is program behavior (that is, code). It is pretty damning that there are important forms of data that neither format represents well — even if that particular type of data is code.
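To make the contrast concrete, here is a conditional sketched in each syntax. These are hypothetical encodings of my own invention, not any real language definition; they merely show how each format bends when asked to carry code –

```
; s-expression: code and data share one uniform shape
[if [gt score 100] "you win" "keep trying"]

<!-- XML: the same conditional fights the syntax -->
<if>
  <condition><gt><var name="score"/><int value="100"/></gt></condition>
  <then><string value="you win"/></then>
  <else><string value="keep trying"/></else>
</if>

// Json: structure survives, but operators and operands blur together
{ "if": { "condition": { "gt": ["score", 100] },
          "then": "you win", "else": "keep trying" } }
```

The s-expression reads the same whether it is treated as data or evaluated as code, which is precisely the property the other two formats lack.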

Looking at the trade-offs of each format, you might be best off keeping your data storage in Json, your DSLs in XML, and your scripting languages in whatever ad-hoc syntax you come up with. But the real question is — why would you want a separate format for your data, your DSLs, and your scripting languages? And why would you want to write a custom parser and a custom interpreter for each and every scripting language you need? Why not use a library that provides a single format that solves all of these problems? And rather than spending weeks or months hand-crafting a scripting language, why not use an existing scripting language whose semantics can be extended with a simple plug-in?

In short, why not use Prime for F#? It’s available via NuGet here.

(At the time of this article, there’s actually one reason not to use Prime — it is currently implemented only for F# data structures like algebraic data types and the functional List, Map, and Set. The idea is sound in all types of languages, however, so I also have a partial port of Prime to C# here – https://github.com/bryanedds/Sigma)

Using Prime as an Automatic Serialization Solution

First, let’s take a look at how Prime automatically serializes and deserializes your types in F#. Take the following Person type –

You can construct a Person, serialize it to a string, and write it out to a file with the following code –

To deserialize said person, all you need to do is –

As you can see, there are only two novel functions you need to know about for serialization and deserialization — scstring and scvalue. It really is that simple.

So what does the data look like when serialized?

Compared to XML and Json, this is a very succinct and lightweight format!

However, you may notice one immediate trade-off — because there are no name tags for each element, the order of fields is important. You can’t, say, put the Name after the Age — it will raise a ConversionException since it expects a string for the first value. This is a slight disadvantage in some cases, but is a huge boon for succinctness. However, if you do actually need property names written out along with their values, you can simply attribute the type like so –

And it will be written out like this –

[[Name "John R."]
 [Age 16]
 [FavoritePetOpt [Some "Scruff E."]]
 [BloodType ABPos]]

With this approach, you can also put the fields in any order you like.
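Putting the pieces above together, here is a sketch of what the Person type and its round trip might look like. This is my own reconstruction based on the serialized output shown above; the BloodType cases beyond ABPos and the exact scstring / scvalue call shapes are assumptions on my part, so consult Prime itself for the authoritative API –

```fsharp
open Prime // assumes the Prime NuGet package is referenced

// reconstructed from the serialized output above; extra BloodType cases are hypothetical
type BloodType =
    | ABPos
    | ABNeg
    | OPos
    | ONeg

type Person =
    { Name : string
      Age : int
      FavoritePetOpt : string option
      BloodType : BloodType }

let person = { Name = "John R."; Age = 16; FavoritePetOpt = Some "Scruff E."; BloodType = ABPos }
let str = scstring person        // serialize to an s-expression string
let person' = scvalue<Person> str // deserialize it back
```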

Using Prime for DSLs (Domain-Specific Languages)

Oftentimes you want to encode some data that will later be executed by a little interpreter within your program. Consider this F# type, which is used to implement special effects in an existing game engine –

See the rest of the related type definitions here — https://github.com/bryanedds/Nu/blob/master/Nu/Nu/Effects.fs

Using an attribute, we can declare the keywords that are used to define an effect with text. This data is used for syntax highlighting, auto-completion, as well as determining pretty-printing behavior.

Here is a screenshot of this DSL being used to construct special effects at run-time in the world editor –

Editing an effect while it is running via the DSL created with Prime. Note the syntax highlighting and auto-complete…

Editing these types of constructs at run-time can be essential, and Prime provides fantastic facilities for enabling these types of external DSLs when you need them!

Your Very Own Scripting Language — For Free!

The above features are very powerful. But sometimes you need additional super-powers, such as being able to write program control constructs at run-time. Fortunately, Prime offers an extensible scripting language called AMSL (A Modular Scripting Language), which is built on top of the above features.

Let’s take a look at some example AMSL code from the Prelude file where its standard functions are defined –

You might be able to pull off something like this in XML or Json, but only with a lot of hacks and a lot of syntactic compromises.

But let’s look at some more involved AMSL code…

Out of the box, the scripting language includes the full lambda calculus, functional data structures, dynamic polymorphic functions for user-defined data structures, and much more. Don’t even think about doing this type of thing with XML or Json! With Prime, it’s all based on the same code, all built on the same functions, and all inter-compatible.

One downside that I must mention, however, is that there isn’t yet much documentation for AMSL. Most of what you can learn has to be gleaned from the full Prelude.amsl file. Work on the language is still in progress, and the documentation phase has yet to get under way.

Hopefully we can now see how s-expressions solve some of our most common programming problems in a consistent, succinct, and coherent way. Once we put together a good standard based on s-expressions, XML and Json become, IMO, technically obsolete. And please don’t be confused by the use of F# — these techniques are just as applicable in an imperative / Object-Oriented code base as they are in a functional one. So much so that I don’t know why this approach hasn’t been in use for decades…

Next Time…

In the next article, we’ll look at how to add custom semantics to AMSL with an F# plug-in. For those who want to just see an example RIGHT NOW, have a look here, here and here. Until then, please let me know your thoughts, feedback, and gripes in a comment below!

Until then…

Happy trails!

Why Functional Programming Works for Games

Why I set out to prove that functional programming works for games, and what it might mean for modern game development.

A fellow on the www.fpchat.com Slack channel #gamedev asked me the following question about my functional F# game engine, Nu (source available here) –

“I was curious of how this project started. Do you have previous experience in game/engines development?”

I think it was my very painful experience working with commercial game code bases — most recently on the Sims 4 — that finally convinced me of the fundamental flaws of the object-oriented approach to game development. Those conclusions kept being reconfirmed with my successive experiences with modern engines like Unreal, Unity, and in-house game engines.

The game that convinced me to change my approach.

The bottom line was that, in these code bases, what should have taken 15–30 minutes would typically get estimated at – and actually take! – 3–5 hours. And let me assure you, almost all of that extra time spent was pure pain, most of it in the debugger trying to figure out just how the hell the horrifically complex system got into the relevant part of its current state to begin with.

As an engineer, I became consistently frustrated due to the complexity that seemed unavoidable with current tools.

However, while working on the Sims 4, I was privileged enough to undertake what would become an ongoing conversation with one of the principal engineers on the team. It was this months-long exchange that helped me shape some of my initial ideas for the Nu Game Engine — if only as a contrarian undertaking.

Let me note that my colleague was an awesome chap personally, and gave a great deal of time to these discussions that he did not have to, so even though he argued forcefully, he was one of the nicest and most open-minded people I’ve worked with. As our conversation proceeded, the arguments he gave as to why functional programming could not work for games kept returning to the following two points –

1) The GC would create too many pauses and affect the frame rate of games. This was from his experience of using C# in the Sims 3 engine, and that experience didn’t allow him to conclude otherwise — even though the modern .NET GC was very different from the one that shipped with the Sims 3.

To invalidate his assertion, I did some research on modern GC technology, consuming several white papers and a couple of books along the way.

My favorite!

After spending several weeks doing my homework (it was not easy ramping up on such an unfamiliar topic!), I continued our conversation. I suggested that the design of at least some modern GCs — such as those considered ‘pauseless’ due to their incremental nature — would eliminate the issue in theory.

When I brought this to his attention, he didn’t seem to be able to give a concrete rebuttal, so I concluded the approach would, at least in theory, work. It would have been nicer had he been willing to concede the point outright, but fortunately I was able to make up for his lack of explicit concession with my own stubbornness.

Even better, as I prototyped the engine initially, it turned out that, in practice, the .NET 3.0 (and above) GC did not have the type of pauses he worried about — even without an incremental design! As far as bandwidth is concerned, the GC maxes out at 2% of CPU usage in Nu. Outside of a single GC2 hitch at the start of the program (which can be easily hidden with a manual call to GC.Collect() at a loading screen), there are no frame-delaying pauses.

Through judicious use of mutation encapsulated behind the engine’s functional interface (as we’ll discuss later in this article), I can easily fend off all noticeable GC stalls until long after the CPU is soaked with normal simulation processing. Because we reach CPU soak long before GC stalls kick in, especially considering how tightly tuned and optimized the engine itself is, I consider this a non-issue both in theory and in practice thanks to modern GC technology.

Some people think GC should be at the hardware level anyways. I find it hard to disagree.

2) Functional programming would be too slow.

This is a common concern, and a bit more valid than the other. But is it as extreme as my colleague suggested? And aren’t there caching optimizations and other workarounds that can assuage this concern?

Currently, the Nu Game Engine can process about 25,000 on-screen entities at 60 FPS before saturating the CPU. For perspective, consider that a modern CPU can only handle about 50,000 particles before they need to be implemented with an alternative programming style known as data-orientation — and particles are much cheaper than entities in any game engine.

With this number of entities on the screen at once, the performance limitations depend entirely on the engine’s structure. Consider that in order for an entity to have its current state retrieved, it must be looked up from a map. And not just any map, a purely functional map! How can we process that many entities when we have to rely on this type of data structure?

There are three optimizations that make this fast in Nu. First, we use an innovative purely functional unidirectional map, UMap, rather than the normal F# Map. While the vanilla Map’s look-up time is O(log n), UMap’s look-up time is O(1) in Nu’s use case!

(It’s called the ‘unidirectional map’ because its performance is near that of the .NET Dictionary’s so long as most past instances of it are discarded — just as they fortunately are in a game simulation such as Nu!)
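The ‘unidirectional’ idea can be sketched in a few lines. The following is a toy illustration of the concept, not Prime’s actual UMap implementation (which also handles reads on older handles): each update mutates one shared Dictionary and returns a fresh handle while marking the old handle stale, so as long as only the newest handle is used, every look-up is a single O(1) Dictionary hit –

```fsharp
open System.Collections.Generic

// toy sketch: a 'unidirectional' map whose updates mutate shared storage,
// handing back a fresh handle and invalidating the old one
type UniMap<'k, 'v when 'k : equality> private (dict : Dictionary<'k, 'v>) =
    let mutable stale = false
    static member makeEmpty () = UniMap (Dictionary ())
    member this.Add (key, value) =
        if stale then failwith "stale handle: use the newest UniMap"
        dict.[key] <- value
        stale <- true // this handle is now out of date
        UniMap dict   // the fresh handle shares the same storage
    member this.TryFind key =
        if stale then failwith "stale handle: use the newest UniMap"
        match dict.TryGetValue key with
        | true, value -> Some value
        | false, _ -> None

let m0 = UniMap.makeEmpty ()
let m1 = m0.Add ("Elmario", 1)
let m2 = m1.Add ("Rock", 2)
printfn "%A" (m2.TryFind "Elmario") // Some 1, found in constant time
```

Discarding past instances is exactly what a game loop does with past worlds, which is why this trade-off pays off so well in Nu.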

Check out the timings –

Some timing comparisons for the various mapping primitives. Source at https://github.com/bryanedds/Nu/blob/master/Prime/Prime/Program.fs. UMap is almost as fast as the highly-optimized .NET Dictionary!

Second, the most recently-found entity is cached by the engine so that subsequent state retrievals on the same entity require no entity look-up at all! So as far as subsequent reads go, we’re as fast as we’d like to be.

Third, there is the ability to specialize types of entities as ‘imperative’. That is, operations on them mutate the state in-place rather than copying and updating. Because imperative entity operations are in-place, their data can be cached directly in the handle, requiring no look-ups even when dealing with different entities! The above 25,000 number is for when the engine is configured to update entity states imperatively on the back-end. If you want your entities to be purely functional and work with systems such as undo / redo in the editor, you can only have about 12,500 on-screen. Still, that’s a surprisingly small perf loss considering the theoretical performance costs of functional data structures.

Additionally, I’ve included an ECS API that allows entities to scale into the millions. In a later article, I describe how functional programming and ECS have a synergistic complementation – https://vsyncronicity.com/2020/03/01/functional-programming-and-data-oriented-design/

So, as it stands, we can say the following today with certainty –

Functional game programming should work out-of-the-box for casual, non-AAA games. Concerns 1 and 2 have been demonstrated to be non-blocking both in theory and in practice. With concern 2, you do need an escape hatch to alternate approaches when in need of different scalability properties – and that’s just what Nu’s ECS provides!

The open question is: do these types of techniques work in the context of AAA games like Uncharted 4?

The static environments won’t be a problem as it’s nearly all on the GPU. It’s the dynamics that are intimidating.

I cannot answer this with certainty… yet. Nu’s non-ECS entities perform nearly as well as Unity’s GameObjects, and its Archetype-based ECS system is extremely performant. There also seems to be a general tax on .NET code versus C++ – but the .NET JIT is getting better all the time, especially with the recent release of RyuJIT. We can also look forward to Profile-Guided Optimization in .NET – https://devblogs.microsoft.com/dotnet/conversation-about-pgo/

That all said, just as with the initial prototype of Nu that proved the workability of functional game development, I will assert this much:

There’s only one way to prove that Nu’s idealized combination of functional and data-oriented programming can work for modern AAA games — and that is to try it and see.

Double Cone Design

A Highly Effective Approach for Designing Functional Programs

Nowadays, I design my programs with an approach that I call ‘Double Cone Design’. Inspired by a rendering of an elegant mathematical construct called a ‘double cone’ intersected by a plane segment, it colorfully illustrates my overall approach to software design –

The bottom cone, data abstraction, represents the bottom-up ‘architectural’ view of my programs. Except for primitive types like Vector2’s, nearly every type in my program is implemented as a data abstraction as described in SICP here. I also gave a detailed presentation on how to successfully architect F# programs by recursively applying data abstraction to your domain’s types.

The top cone, symbolics, represents the top-down, ‘representational’ view of my programs. Nearly every type in my program can be represented as a generic ‘Symbol’ type, manipulated symbolically, serialized to disk, and so on. Here’s the encoding of the Symbol type in F#.

Most importantly, it gives us the ability to implement a top-level scripting language for our program, as well as the ability to add domain-specific languages as we need them, almost entirely for free. An example of the latter is the special effect system of my purely functional game engine, Nu –

Special effects are composed at runtime with our symbolic expression language (plus syntax highlighting) at the bottom.

The plane section, purity / efficiency, represents the spectrum from purity to efficiency along which I choose where to operate depending on what I’m creating. Think of moving along this spectrum by sliding the double cone either left or right.

How do we decide where to move on this spectrum? When programmability is paramount, such as when encoding game logic, I scoot closer to the purity side of the spectrum, building almost everything out of immutable data structures. When performance is paramount, such as when writing a renderer or physics engine, I build most things out of mutable data structures and arrays. When connecting the two worlds, I use very special techniques like message queuing and mutation-caching (e.g. Prime.KeyedCache and Prime.MutantCache) to encapsulate highly-efficient mutable abstractions below the immutable APIs.

Efficient functional programs have layers.
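The mutation-caching technique can be sketched as follows. The names and shapes here are mine, not Prime’s actual KeyedCache / MutantCache APIs: a function that is referentially transparent to callers, but short-circuits through a private mutable cache keyed on the reference identity of its immutable input –

```fsharp
// a toy world record standing in for an engine's immutable simulation state
type World = { Tick : int64; Entities : Map<string, int> }

// private mutable cache, invisible to callers of the function below
let mutable cachedWorld : World option = None
let mutable cachedTotal = 0

// looks pure from the outside: same World in, same total out
let totalEntityHealth (world : World) =
    match cachedWorld with
    | Some cached when obj.ReferenceEquals (cached, world) -> cachedTotal
    | _ ->
        // cache miss: do the real work, then remember it for this exact World
        let total = world.Entities |> Map.fold (fun acc _ health -> acc + health) 0
        cachedWorld <- Some world
        cachedTotal <- total
        total

let world = { Tick = 0L; Entities = Map.ofList ["Elmario", 100; "Rock", 50] }
printfn "%d" (totalEntityHealth world) // computes the fold: 150
printfn "%d" (totalEntityHealth world) // cache hit: 150 without refolding
```

Because the cached key is compared by reference identity, a new World produced by an update simply misses the cache and recomputes, so callers never observe stale results.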

Not represented by the above image, but utilized often enough by me to be of note in this discussion, is sub-typing as an extensibility technique. .NET-style interfaces on F# records and DUs allow users of our APIs to write self-contained components that can be plugged in to customize behavior where needed. Of course, component-style plug-ins like these are less composable than purely combinator-based APIs, but they are more widely applicable.

Since I’m a functional programmer, I sometimes get push-back from my peers against the use of sub-typing via interfaces for being ‘too OOP’. But what I emphasize is that sub-typing is not OOP — it is in fact an orthogonal concept, and one that is unfortunately conflated with OOP in the minds of most programmers merely due to its historical association. You can think of sub-typing as simply adding a level of abstraction over an existing set of related data abstractions to form an abstraction over data abstractions. We don’t always need this level of abstraction, but it is essential for building things like plug-ins in F#.
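Here is a minimal, self-contained illustration of this idea (a toy of my own, not Nu’s actual plug-in API): a .NET interface implemented by plain F# records, where the consuming code sees only the abstraction over data abstractions –

```fsharp
// the abstraction over data abstractions
type IDispatcher =
    abstract Describe : unit -> string

// plain immutable records, each plugging in its own behavior
type SparkDispatcher =
    { Intensity : single }
    interface IDispatcher with
        member this.Describe () = sprintf "spark at intensity %.1f" this.Intensity

type SmokeDispatcher =
    { Density : single }
    interface IDispatcher with
        member this.Describe () = sprintf "smoke at density %.1f" this.Density

// consuming code depends only on the interface, not the concrete records
let describeAll (dispatchers : IDispatcher list) =
    dispatchers |> List.map (fun dispatcher -> dispatcher.Describe ())

let descriptions =
    describeAll
        [{ Intensity = 2.0f } :> IDispatcher
         { Density = 0.5f } :> IDispatcher]
```

Note that no inheritance hierarchy or mutable state is involved; the interface merely groups otherwise unrelated data abstractions behind one contract.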

Implementations

Interestingly, I’ve implemented both F# and C++ libraries (open source with MIT licenses on both) that provide the basic types needed for the ‘double cone design’ approach. If you’re going for purity and programmability (the left-hand side of our plane), I suggest you try out the F# library, Prime, here. If you’re going for raw machine efficiency (the right-hand side), try out the C++ library, ax, here. If you don’t like either of those languages, the relevant portions of these libraries are portable to most other languages.