
Procedural Sound Design


When designing sound for games, as opposed to other forms of media such as film or television, there are a number of technical problems to solve which are not inherent to linear media. The principal problem is that in interactive media a sound may be repeated many times, in many different situations. This repetition can be a deciding factor in player immersion, especially when a sample is re-triggered again and again in exactly the same way.


In an effort to solve this issue, sound designers and audio programmers alike have taken a systemic approach to sound reproduction in games, now known as procedural sound design.


Definition


Procedural approaches to game audio sit on a spectrum from procedural audio to procedural sound design. As Stevens and Raybould (2015, p. 59) state: “Procedural sound design is about sound design as a system, an algorithm, or a procedure that re-arranges, combines, or manipulates sound assets so they might: produce a) a greater variety of outcomes (variation or non-repetitive design) b) be more responsive to interaction (parameterization)”.

In practical terms, this practice is about the manipulation of samples at runtime by the game engine using logical systems, often created with engine tools such as Blueprints in Unreal Engine 4. It is related to, but distinct from, procedural audio, which broadly speaking is concerned with the generation of audio at runtime, usually through synthesis, and is not the topic of this article.


Categorising Sounds


When discussing game audio, the need for categorisation arises. In much of the available literature, sound designers use the same system of categorisation as the film and television world. Sound is broken down into two main categories: diegetic and non-diegetic, that is, sound inside the game world (heard by in-game characters) and sound outside the game world (heard only by the player).


This broad terminology is not without its difficulties when describing sound and interactivity. As Karen Collins (2013, p. 8) suggests, “one difficulty with defining interactivity is that a single media object or text may be fluid in its degrees of interactivity and may afford different degrees of interactivity at different times. This fluidity suggests that there are a variety of different types of interactivity that take place with media such as video games”. Due to the interactive nature and fluidity of game media, it is possible for a single audio object to be described as both diegetic and non-diegetic at the same time or at different times within the same game.


Sounds heard only by the player are often not subjected to the same procedural systems as their diegetic counterparts. This is because these sounds often play a notification role and are designed to be consistent, acting as a warning to the player in certain situations. However, game music can be considered non-diegetic, and dynamic music systems often have a certain amount of procedural sound design applied to them, as different tracks and stingers are selected by the audio engine in open-ended and changing scenarios.
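As a simple illustration of this kind of selection logic, here is a minimal, engine-agnostic sketch in Python. The state labels and track names are hypothetical placeholders rather than assets from any actual game, and a real system would live inside the engine or audio middleware, but the decision logic is broadly the same.

```python
import random

# Hypothetical music cue lists keyed by game state; file names are placeholders.
MUSIC_SETS = {
    "tension": ["tension_01.ogg", "tension_02.ogg", "tension_03.ogg"],
    "combat":  ["combat_01.ogg", "combat_02.ogg", "combat_03.ogg"],
}

def pick_cue(state, last_cue=None):
    """Pick a music cue for the current state, avoiding an immediate repeat."""
    options = [cue for cue in MUSIC_SETS[state] if cue != last_cue]
    return random.choice(options)

# The same scenario can be scored differently on every playthrough.
tension_cue = pick_cue("tension")
print("Scenario begins, play:", tension_cue)
print("Player discovered, switch to:", pick_cue("combat"))
```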


In the video example below, a player is embarking on the Combat Zone quest in Fallout 4. During the introduction sequence to the set piece, the music plays a tension score, giving the player a sense of foreboding as the scenario unfolds in front of them. This tension cue could be one of a number of pieces chosen by the system at random for this scenario. Once the player has entered the main arena, both diegetic (cries from non-player characters) and non-diegetic (music cues) sounds are used to alert the player to the situation and to how it changes in real time. The music shifts into a battle score as the player is discovered, again accompanied by further information to the player (war cries from the NPCs). As you can see, combining diegetic and non-diegetic audio is a powerful technique employed by sound designers looking to create an exciting and immersive experience.



However, if every scenario the player encounters simply repeated the same set of music, dialogue and sound effect cues ad nauseam, the immersion would be broken, and the player would likely experience fatigue and boredom. So how do sound designers tackle this problem?



Why Procedural Sound Design?


One of the primary reasons for applying procedural sound design to game assets is variation. If a sound designer needed to create variation in the sound of an overhead power cable rattling, one method would be to create a large repository of audio files from recordings of power cables for the game to call upon. This would not be a very good solution to the problem, as all of these audio files would need to be loaded into RAM or streamed directly from the hard disk, placing a large demand upon the game engine system as a whole (Stevens and Raybould 2015, p. 60).


Procedural sound design circumvents this problem by creating a large number of possible variations from a small number of audio assets: a sound is split into its constituent components, which are then recombined at runtime. This is done using logic programmed into the game engine, often via visual programming such as Blueprints in UE4 or audio middleware such as Wwise. With this method the sound designer is able to get many subtle variations of the original audio, as the pieces combine in random orders and subtle variations in pitch, volume and timing are applied.
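The sketch below illustrates the idea in plain Python rather than Blueprints or middleware. The layer and file names are hypothetical, and the resulting "events" are only printed rather than played, but the structure mirrors what such a system does: pick one sample per layer, then apply small random offsets to pitch, volume and timing before handing the result to the audio engine.

```python
import random

# Hypothetical component layers for a single sound effect; file names are placeholders.
LAYERS = {
    "impact": ["cable_impact_01.wav", "cable_impact_02.wav", "cable_impact_03.wav"],
    "rattle": ["cable_rattle_01.wav", "cable_rattle_02.wav"],
    "tail":   ["cable_tail_01.wav", "cable_tail_02.wav", "cable_tail_03.wav"],
}

def build_variation():
    """Assemble one playback event by recombining the component layers.

    Each layer gets a randomly chosen sample plus small random offsets to
    pitch (semitones), volume (linear gain) and start delay (seconds) --
    the parameters an engine-side system would pass to its voice player.
    """
    return [
        {
            "layer": layer,
            "sample": random.choice(samples),
            "pitch": round(random.uniform(-1.0, 1.0), 2),   # +/- 1 semitone
            "volume": round(random.uniform(0.8, 1.0), 2),
            "delay": round(random.uniform(0.0, 0.05), 3),   # up to 50 ms
        }
        for layer, samples in LAYERS.items()
    ]

# Every call produces a slightly different combination of the same assets.
for part in build_variation():
    print(part)
```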


In this next video example, as the player cycles through the various options on the Pip-Boy, you can hear many subtle variations in the switches and radio static as they select between channels. These subtle variations allow the player to interact with the object in a way that feels realistic and non-fatiguing, and can easily be achieved with procedural methods.





In the picture and audio example below are a number of different metallic rattling sounds taken from recordings: a hospital trolley being pushed through a corridor, a number of metallic objects being rifled through in a drawer, and rope whipping sounds. They have been split into different groups. To each group, a very short file of silence has been added so the system may select it, adding further variation to the pattern of sounds triggered. The groups then have randomisation applied to sample selection without repetition, ensuring that the same sample cannot be played twice in a row. Modulation is then applied by allowing the system to select from a range of volumes and pitches, adding further subtle variation to the samples. These random variations are then combined and played back on an endless loop.


A UE4 Blueprint example of a procedural sound design system

https://soundcloud.com/rmaudiodesign/metal-rattles
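For readers who do not use Blueprints, the selection logic in the graph above can be sketched in plain Python. The group and file names below are hypothetical stand-ins for the recordings described, and the loop simply prints the playback events a real system would dispatch to the audio engine, but it follows the same steps: silence entries in each group, random selection without repetition, and randomised pitch and volume per trigger.

```python
import random

# Hypothetical sample groups mirroring the recordings described above, each
# padded with a short silence file so the system can also choose to play nothing.
GROUPS = {
    "trolley": ["trolley_01.wav", "trolley_02.wav", "trolley_03.wav", "silence_short.wav"],
    "drawer":  ["drawer_01.wav", "drawer_02.wav", "drawer_03.wav", "silence_short.wav"],
    "rope":    ["rope_whip_01.wav", "rope_whip_02.wav", "silence_short.wav"],
}

def random_no_repeat(samples):
    """Yield samples at random, never choosing the same one twice in a row."""
    last = None
    while True:
        choice = random.choice([s for s in samples if s != last])
        last = choice
        yield choice

def rattle_system(steps):
    """Emit playback events with randomised pitch and volume for each group."""
    pickers = {name: random_no_repeat(samples) for name, samples in GROUPS.items()}
    for _ in range(steps):          # a game engine would run this endlessly
        for name, picker in pickers.items():
            yield {
                "group": name,
                "sample": next(picker),
                "pitch": round(random.uniform(0.95, 1.05), 3),   # playback-rate multiplier
                "volume": round(random.uniform(0.7, 1.0), 2),
            }

for event in rattle_system(steps=3):
    print(event)
```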


This means that the sample pool can be significantly smaller, as the system combines the constituent parts in widely varying ways. This not only affects the number of variations possible from a limited source pool but also the sound designer's workflow, as they no longer need to design as many sounds for the same asset (Stevens and Andersen 2016). Another, less effective way of handling this would be to create a very long loop of audio and play it on repeat. As stated before, this would not be as effective a solution, because the loop would need to be very long indeed to offer the same amount of variation as the procedural system, and it would also run the risk of being noticed by the player as a loop, breaking immersion in the game world.


This article has hopefully given you some insight into the inherent problems around the repetition of sound in interactive audio, as well as how procedural sound design can be used as a solution to a number of the problems all game sound designers face.



Bibliography


Stevens, Richard and Raybould, Dave. Game Audio Implementation: A Practical Guide Using the Unreal Engine (Kindle Locations 1547-1549). CRC Press, Kindle Edition, 2015.


Collins, Karen. Playing with Sound: A Theory of Interacting with Sound and Music in Video Games. MIT Press, 22 February 2013. eISBN-13: 9780262312295.


Stevens, Richard and Andersen, Asbjoern. “Why procedural game sound design is so useful.” A Sound Effect, 2016. https://www.asoundeffect.com/procedural-game-sound-design/


Videos


Fallout 4 Quest: Combat Zone (No Commentary) https://www.youtube.com/watch?v=b5J3-LY_7fg


Fallout4 Pip Boy 2000 (No Commentary) https://www.youtube.com/watch?v=YJwtpbMxYPg


Games


Fallout 4
