We were given some money from the Promobilia Foundation to look at how to bridge the gap between virtual and physical games, so as to make them accessible to people with physical impairments.
This is a modified version of a post first published on Meeple Like Us.
You can read more of my writing over at the Meeple Like Us blog.
---
Anyone who’s subscribed to our regular Patron series ‘Gaming in Gothenburg’ has been treated, if I may use that word, to updates on the work I’ve been doing at Chalmers. It’s a breakdown of the grants for which I have submitted applications, and the projects I really want to do if anyone will provide the funding to make them happen. I hope it’s an interesting behind-the-scenes look at how much effort in the academic system has to be expended on long-odds funding calls. Perhaps one in ten grant applications receives money, but the nine that didn’t still took a lot of time to produce.
One small application I made came back with a thumbs up a few months ago though, awarding me 200,000 SEK (approximately $20,000) for a project to create accessible dexterity games. This was from the Promobilia Foundation, an organisation aimed at creating technical aids that help people with physical disabilities lead a more active life.
Twenty thousand dollars may sound like a big chunk of money to have available, but in real terms it’s not. It pays for 10% of my time in 2021, with enough left over to buy the equipment needed to make the project happen. A lot of the initial parts of the project will be speculative, so the exact equipment needed is still an open question. That’s why we call it research, baby.
First of all, let’s talk about the research question. Every grant application needs one, and its job is to set out the terms of the answer you’re offering to provide. I’m not going to quote from the document or anything – these are dry texts full of academic legalese. Essentially though my research question is ‘How do we actually make it possible for people with disabilities to play a dexterity game?’
We’ve been doing Meeple Like Us for a long time now. We’ve looked at hundreds of games. And we have yet to find a dexterity game that could be considered accessible for people with physical accessibility issues. It’s understandable really – as we’ve often said, sometimes inaccessibility is where the fun comes from. I don’t expect to ever find a dexterity game that excels in this category, although I remain committed to the search. What I do have though is an idea for how we might begin to explore solutions to that accessibility problem.
Accessibility in its purest form is really driven in large part by nothing more complicated than translation. Take one form of input or output, and convert it into another form of input or output. If someone can’t visually interpret text, then translate it into audio. If someone cannot click a button on a controller, translate it into a switch controller that can accept the movement they can do. If they can’t write, accept spoken word. Braille is the conversion of text to touch. It’s basically a rewiring of the I/O model used in day to day life.
For most activities this isn’t a problem. To print a document, it doesn’t matter if you click a menu icon or shout a magic spell at your Alexa. It’s a lossless translation. It doesn’t matter if you use stairs or a ramp to access the second floor of a building. Nothing about the building changes. It’s why accessibility in most domains is relatively straightforward to prescribe, even if it’s difficult to motivate people to do it. Most accessibility research these days is aimed at either edge-case scenarios or optimising the effectiveness of compensation. There’s no serious question mark, day to day, as to how we make an accessible society.
It’s different for games though, because the nature of the interaction is often part of the fun. Rock Band wouldn’t be remotely as enjoyable if it didn’t come with plastic guitars. Picking a lock by pressing a button is less immersive than gently rotating the thumb-sticks to find the sweet spot between the rake and the pins. Translation in this kind of scenario risks the sacrifice of verisimilitude – that ineffable quality that makes things feel realistic. In video games, accessibility is a more challenging domain at least in part because of this problem. However, video games also have something board games don’t – a consistency of interface devices. We know how people interact with 99% of the games out there, and we can maximise the effectiveness of what we do by focusing our efforts there.
I had a discussion a while ago with a colleague where I outlined the compensatory levels of video game accessibility. She said ‘Oh, great – can you give me a link to the paper on that’ and I realised I didn’t have one. I’m not sure then if this is just a little system my subconscious mind rigged up for me or if someone has modelled it more formally and I just forgot. Answers on postcards please. In essence you have what I’m terming a ‘compensatory stack’ made up of various levels at which you can aim an accessibility compensation:
Physical peripheral level. Changing something in the monitor, or a controller, or in the communication between a system and its external parts. Compensations here are extremely high impact but also very low accuracy. They take a sledgehammer to a problem. For example, your monitor probably has a way to change the scaling of everything that appears on screen. Sure, that makes all the text easier to read, but I bet it causes problems everywhere else.
System level. Platform based compensations that apply to everything running on that platform. Console level control remapping, for example. System wide text-to-speech settings. Whatever you’d find in the Xbox settings menu, or whatever Steam preferences might let you choose. As long as they are honoured by every game running on the platform, then this is a compensation that can be very effective. However, it’s still a broad-based intervention because it can never take into account game context. A platform may offer a generic colour-blind filter for example but it can’t individually mould each game’s palette.
Game level. Specific accessibility settings in game that tightly map developer intention to accessibility solution. These have the lowest breadth of impact in the accessibility ecosystem, but are also where the most effective and accurate compensations exist.
You could argue there are in-between layers and meta-layers and psychological layers and so on, but if you want to make your video game more accessible, you’ll find the solution is almost certainly in one of those three layers or in the interface between them.
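To make the stack a little more concrete, here’s a minimal sketch of how it might be represented as data. Everything in it – the names, the fields, the example entries – is mine and purely illustrative; as I said above, I don’t know of a formal model for this.

```python
from dataclasses import dataclass
from enum import Enum, auto

class StackLevel(Enum):
    """The three levels of the compensatory stack described above."""
    PERIPHERAL = auto()  # monitor scaling, controller hardware: high impact, low accuracy
    SYSTEM = auto()      # platform-wide settings: remapping, text-to-speech, colour filters
    GAME = auto()        # in-game options: narrowest impact, highest precision

@dataclass
class Compensation:
    name: str
    level: StackLevel
    breadth: str   # how much of the ecosystem the fix touches
    accuracy: str  # how well it can fit an individual game's context

# One illustrative entry per level:
stack = [
    Compensation("Monitor scaling", StackLevel.PERIPHERAL, "everything on screen", "low"),
    Compensation("Console control remapping", StackLevel.SYSTEM, "every game on the platform", "medium"),
    Compensation("Per-game colour palette options", StackLevel.GAME, "one game", "high"),
]
```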
So, what’s the corresponding model for board games?
Bad news – there isn’t one. If there was, it might look like this:
Component level. Accessible money, tactile dice, individually shaped meeples.
Mechanism level. Best practices for the individual systems used within a game.
Game level. Individual accessibility fixes for an individual game context.
This looks credible though, right?
The problem is – there is no interface to a board game. There is no platform upon which a board game runs except the human mind. They may share a design vocabulary, common components, and often even rule systems. That’s not enough though. The way Sagrada uses dice is massively different to the way Escape: The Curse of the Temple uses them. The way Lords of Vegas uses money is different to the way it’s used in Merchants and Marauders. The cubes of Village don’t work the same way as the cubes in Scythe. The similarities are surface level only. Some improvements in components are possible, but there’s no real way that I can see that ‘a set of accessible components’ could be produced that covers all possible use cases. At best you could do subsets for certain impairments, and have them work for certain subsets of games.
There’s a similar issue for mechanisms. They may share features, but their accessibility problems live in the subtle intersections between them, where no generic fix can reach. A real-time deck builder is a very different beast from any game that has one of those components in isolation. Again, there are best practices (and I should get around to publishing them properly at some point) but no universal framework onto which you can hang them.
Every game you ever buy is a unique artefact comprising its own component and mechanism layers. The comparison would be if every single time you bought a game from your local retailer you got a box containing a completely unique controller and a new operating system.
That leaves the game-based approach, which is the one we’ve been exploring for the past few years. The problem there is that all you ever fix is one individual game. You maybe fix it very well, but it doesn’t solve the other problems that exist elsewhere in the board game ecosystem. You might inspire others to address the issues in their own games, but you never fix them directly.
What does all of this have to do with Promobilia though?
Well, imagine if there was a way to start addressing the issue of accessibility in physical games at a higher level of impact. Imagine there was a way you could take a whole family of games and solve the problem at the root. Imagine if you could actually create a whole compensatory stack where none currently exists.
That’s what I’m planning to do in the next year.
This actually stems from a project I supervised back at Robert Gordon University, where my project student Hayley Reed was building a physically accessible hybrid version of the game Ice Cool. Degree projects rarely get as far as anyone would like – they’re usually very ambitious and time is always against them. It was a cool project though and it contained within it a seed that could grow into something more encompassing.
If a lot of accessibility is an act of translation, much of the rest of it is found in the concept of ‘optionality’. The simplest, most effective way to make something accessible is to offer translations and let the user, or player, decide on which combination of translation works best for them.
So, let’s look at a game like Ice Cool.
For those that don’t know, it’s a simple dexterity game of flicking a penguin around a slippery board with the intention of either passing underneath doors or hitting other penguins. It’s a lot of fun, but while in our teardown we said it was our most accessible physical dexterity game yet, that didn’t translate into it actually being accessible. It’s the best of a problematic bunch. The main problems for someone with physical impairments are:
Being able to physically flick a penguin at all
Being able to move hand, arm and body in a way that lets you precisely position a penguin
Being able to accurately deliver a particular amount of force at a particular part of the penguin
Being able to move around the board to identify positions from which you can access penguins
It’s a tall order for anyone in, say, a wheelchair. It’s a tall order for someone with even minor tremors or stiffness/paralysis of their digits. It could be physically painful, or simply difficult.
So why not translate that physical inconvenience into something else? Like, for example, an app on a mobile phone?
That instantly solves some of the problems. There’s no need to physically flick anything, and no need to move around the board. All you’d need to do is interact with the app to indicate direction and force. And even that could be offered at several levels, letting the player decide exactly how it is to be translated. Some examples (a couple of which are sketched in code after this list):
Flicking a finger across the screen, with a force dictated by speed and a direction indicated by angle. (This is the approach we investigated with Hayley’s project)
Hitting a quick-moving direction indicator when it rotates to the right direction, and then another to indicate force. This is kind of like you’d see in a certain flavour of golf video game.
Pulling an invisible ‘elastic band’ and releasing it to indicate direction and force.
Hold to set direction, hold to set force.
Shouting at your phone to do a virtual Fus-Ro-Dah
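Whichever scheme a player picks, the game underneath needs the same two values out of it: a direction and a force. That’s what makes the optionality workable. Here’s a minimal sketch of two of the schemes above as interchangeable translations – the coordinate conventions and tuning constants are assumptions I’ve made up for illustration, not anything we’ve settled on:

```python
import math

def flick_scheme(start, end, duration_s, max_force=1.0):
    """Swipe gesture: direction from the angle of the swipe,
    force from its speed. start/end are (x, y) screen points."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.atan2(dy, dx)                    # radians
    speed = math.hypot(dx, dy) / max(duration_s, 1e-6)
    force = min(speed / 3000.0, 1.0) * max_force  # 3000 px/s as full strength: a tunable guess
    return angle, force

def elastic_band_scheme(anchor, drag, max_pull_px=300, max_force=1.0):
    """Pull-back gesture: direction is opposite the drag,
    force proportional to how far the band is stretched."""
    dx, dy = drag[0] - anchor[0], drag[1] - anchor[1]
    angle = math.atan2(-dy, -dx)                  # fire away from the pull
    force = min(math.hypot(dx, dy) / max_pull_px, 1.0) * max_force
    return angle, force

# Every scheme yields the same (angle, force) pair, so the player
# just picks whichever input suits them best.
schemes = {"flick": flick_scheme, "elastic": elastic_band_scheme}
```

Because both functions return the same pair, swapping schemes is just a matter of the player picking a different entry from that dictionary – the rest of the game never needs to know how the values were produced.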
There are so many different options here, all of which would work differently for different people. Maybe we’ll look at using the accelerometer in a mobile device, or layering in some augmented reality plane detection. For a while I was thinking about whether we should be looking at biosensors – letting people remotely move their pieces like they were Jedis using the force. Maybe we’ll still do that, but honestly there are already an awful lot of options without buying those kind of devices.
This one handheld translation scheme would work for a lot of dexterity games. It could even work for things like Pool or Snooker. Games that are definitely going to be evaluated during the project are Klask, Crokinole, and Flick ‘Em Up. Other games will be considered on a case by case basis. The one I’m most invested in exploring though is Crokinole, because how often does someone pay for you to install a Crokinole board in your office?
‘Hey, you’ve missed something’, you might be saying at this point. And you’re right!
The real trick and challenge here is not translating motion and force into an app interaction. It’s translating back – taking the force and direction information and reflecting it in the game. This is where the majority of the work is going to go, because it will need a blend of technologies all working together in novel ways. The quick summary version is ‘I’m going to model them in Unity, and automatically populate the Unity version with the pieces from the physical version’. For that, I’m going to evaluate several techniques. The two that are most likely to work (the second of which is sketched after this list) are:
Some combination of Wi-Fi, Bluetooth, RFID, lasers and signal triangulation algorithms
Image recognition of board game states
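For the image-recognition route, this is the kind of building block involved – assuming an overhead camera, reasonably colour-distinct pieces, and OpenCV standing in for whatever library the project eventually settles on. It’s a sketch of the idea, not the project’s actual pipeline:

```python
import cv2
import numpy as np

def find_pieces(frame_bgr, lower_hsv, upper_hsv, min_area=50):
    """Locate game pieces of a given colour in an overhead camera frame.
    Returns (x, y) pixel centroids, which a calibration step would then
    map into Unity scene coordinates."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)  # isolate the piece colour
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centres = []
    for c in contours:
        if cv2.contourArea(c) < min_area:          # ignore specks of noise
            continue
        m = cv2.moments(c)
        centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centres

# e.g. red discs, with HSV bounds that would need tuning to the actual lighting:
# find_pieces(frame, np.array([0, 120, 70]), np.array([10, 255, 255]))
```

Marker-based tracking (ArUco tags under the pieces, say) would be a more robust alternative wherever stickers on components are acceptable.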
Eventually I’d like to have the two work together seamlessly, so that when you flick a piece in the app it’s moved with magnets to where it should go on the physical game. You watch the physics play out on your device, or on a television screen, and then the game orients itself appropriately. That’ll need some follow-on funding, which may be forthcoming if the first version of the project shows promise.
The idea behind all of this is that you won’t need to play a digital version of a game. People with and without physical accessibility issues will be able to get around the same table and play the same game, with broadly the same effect. And, once the techniques are worked out for one game, they should be transferable to the context of any other game in that family. All I, or a publisher, would need to do would be to put together the simplest of all possible scenes in Unity and upload them to a website somewhere. Perhaps it even becomes a general service tool – not a Tabletop Simulator but a Tabletop Integrator.
As I say, this is a relatively small grant – at least in comparison to the rest of the funding I’ve put in for over 2020. It therefore has relatively small ambitions. It’s not going to change the world, but it will be a proof of concept for something that could have genuine wide-ranging applicability as an accessibility support tool. And if it works, and works well, who knows where it could end up going.
So, thank you to the Promobilia Foundation for being cool and approving the funds for this weird and interesting project. I promise that I will do my level best to give you something to be happy about at the end.
If you’re a publisher that would like to send your dexterity game for consideration in the project, drop me a mail at [email protected]. I can’t guarantee that your game would be a good candidate or be the focus of any work, but I’ll certainly acknowledge your contribution in any literature published.