Original FM Synth and Wavefolder DSP code implemented as Unreal Engine Metasound nodes, all packaged in an Unreal Engine plugin. There's also a Metasound asset (the three-voice FM synth described later in this README) in the Content/ folder that uses the custom Metasound nodes.
Later in the README you'll find examples of the Metasound nodes integrated into an Unreal game project.
I created and implemented two custom Metasound nodes: an FM Synth Tone Generator and a Nonlinear Wavefolder/Saturator. The TestNode is a simple hard-coded gain node I used as a template for the other nodes.
Header declarations for the custom nodes are located in Source/MetaNodes/Private/ and the cpp implementations are in Source/MetaNodes/Public/.
Look for the Execute() method implementation in each class to see the DSP algorithms applied to incoming audio buffers.
The FM Synth node has a single carrier and single modulator oscillator. At moderate param values the tones stay harmonic and diatonically tuned, but some inharmonicity is possible at extreme values. In other words, there are sensible parameter constraints, but the classic wacky bell/metallic FM sounds are still possible. Apply short envelope times and modulate the params for some great inharmonic percussion.
Params
- Frequency
- Modulator Ratio
- Carrier Ratio
- Modulation Index (modAmp/modFreq)
- Modulation Envelope
- Amplitude Envelope
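Conceptually, the per-sample update driven by those parameters looks something like the sketch below. This is an illustration of single-carrier/single-modulator FM, not the plugin's actual Execute() body; the function and variable names are mine.

```cpp
#include <cmath>
#include <vector>

// Minimal FM sketch (illustrative, not the plugin's actual Execute()).
// carrierFreq = Frequency * CarrierRatio
// modFreq     = Frequency * ModulatorRatio
// modAmp      = ModulationIndex * modFreq   (index = modAmp / modFreq)
std::vector<float> RenderFM(float frequency, float carrierRatio,
                            float modulatorRatio, float modIndex,
                            float sampleRate, int numSamples)
{
    const float twoPi = 6.28318530718f;
    const float carrierFreq = frequency * carrierRatio;
    const float modFreq = frequency * modulatorRatio;
    const float modAmp = modIndex * modFreq;

    float carrierPhase = 0.0f;
    float modPhase = 0.0f;

    std::vector<float> out(numSamples);
    for (int i = 0; i < numSamples; ++i)
    {
        // The modulator's output shifts the carrier's instantaneous frequency.
        const float mod = modAmp * std::sin(modPhase);
        out[i] = std::sin(carrierPhase);

        carrierPhase += twoPi * (carrierFreq + mod) / sampleRate;
        modPhase += twoPi * modFreq / sampleRate;

        // Keep phases bounded to avoid float precision drift.
        if (carrierPhase > twoPi) carrierPhase -= twoPi;
        if (modPhase > twoPi) modPhase -= twoPi;
    }
    return out;
}
```

With integer-ish ratios the sidebands land on harmonics of the fundamental; pushing the ratios or index to extreme values is what produces the inharmonic bell/metallic tones.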
Special thanks to Eli Fieldsteel for his lucid explanation of FM synthesis principles and parameters.
The Wavefolder node adds harmonics to incoming audio by folding waveforms over themselves around the floating-point audio bounds (-1.0, 1.0). It's particularly nice on bass sounds. There's also a tan-derived saturation factor with a feedback component, for extra drive.
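The folding idea can be sketched like this. This is a generic wavefolder, not the node's actual DSP; I've used tanh as a stand-in for the saturation stage and invented the struct and member names for illustration.

```cpp
#include <cmath>

// Illustrative wavefolder sketch (not the plugin's actual implementation).
// Drives the input past the [-1, 1] bounds, folds the excursion back
// inside, then saturates, with a one-sample feedback term for extra drive.
struct Wavefolder
{
    float Depth = 1.0f;        // pre-gain pushing the signal into the folds
    float FeedbackDrive = 0.0f;
    float PrevSample = 0.0f;   // feedback state

    float Process(float in)
    {
        float x = in * Depth + PrevSample * FeedbackDrive;

        // Reflect any excursion beyond the bounds back into range.
        while (x > 1.0f || x < -1.0f)
        {
            if (x > 1.0f)  x = 2.0f - x;
            if (x < -1.0f) x = -2.0f - x;
        }

        // Saturation stage; tanh (used here as a stand-in) keeps the
        // output strictly inside (-1, 1).
        const float out = std::tanh(x);
        PrevSample = out;
        return out;
    }
};
```

Each fold reflects the waveform's peaks back downward, which is what injects the extra harmonics.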
I expected the algorithm to produce some aliasing, as is typical of nonlinear DSP. To my surprise there was no audible aliasing, even at extreme frequencies. I started implementing oversampling but ultimately decided the extra CPU cycles weren't justifiable in this case.
Params
- Depth
- Frequency
- Feedback Drive
Special thanks to Jatin Chowdhury. His CCRMA publication and Medium article pointed me in the right direction(s) here.
With those Metasound nodes and their custom DSP complete, I created a small demo project in Unreal to test them out.
I started by building a three-voice synth with the FM Tone Generator. Each voice has an amplitude envelope, a mod envelope, and the custom FM tone generator itself. Both envelopes are controlled by a single AD envelope node upstream.
Then I applied my custom Wavefolder node to the bass voice.
And fed each voice a series of randomized MIDI notes.
Next I implemented some Blueprint logic (a combo of Anim Notifies and Event Dispatchers) to trigger the FM Synth with each of the player character mesh's footsteps, and placed a capsule asset over the center of the level: this suspended, chrome-marshmallow-looking fellow.
I fed the distance between the player character and the chrome marshmallow to a Low Pass Filter that muffles the middle and upper synth voices. The nearer the player character, the brighter the sound. All of the player character's Blueprint logic looks like this.
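The distance-to-brightness mapping is done in Blueprint, but the idea reduces to something like this hypothetical helper (the function name, cutoff range, and linear mapping are all my own illustration, not values from the project):

```cpp
// Hypothetical sketch: map player-to-capsule distance to a low-pass
// cutoff, so proximity produces a brighter (less muffled) sound.
float DistanceToCutoffHz(float Distance, float MaxDistance,
                         float MinCutoffHz = 300.0f,
                         float MaxCutoffHz = 12000.0f)
{
    // Normalize: 0 at the capsule, 1 at MaxDistance (clamped).
    float t = Distance / MaxDistance;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;

    // Invert so smaller distances map to higher cutoffs.
    return MaxCutoffHz - t * (MaxCutoffHz - MinCutoffHz);
}
```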
I recorded a short piano improvisation to harmonically contextualize the synth sounds and break up the monotony of the footstep rhythm. The piano wav is triggered on BeginPlay in the level Blueprint.
The final result looks and sounds something like this.
ue-metasound-demo.mp4
Attn: Be sure to unmute the video above. GitHub mutes embeds by default.
The player can activate time dilation (slowmo) with a middle mouse click to vary their step frequency.
There's an opportunity for optimization here in the modPhaseInc calculation. If the contributing variables (the parts of the mod osc frequency component) haven't been updated between buffer loops, there's no reason to recalculate modPhaseInc on every iteration. Those values could be cached and checked against incoming param updates. I'll implement that optimization in the future; for this first version I just wanted to stand up something functional.
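The caching idea could look something like this (a sketch with hypothetical names, not code from the repo):

```cpp
// Sketch of the proposed optimization: recompute modPhaseInc only when
// a contributing parameter actually changed since the last buffer.
// (Sample rate is assumed fixed for the node's lifetime.)
struct ModPhaseCache
{
    float CachedFrequency = -1.0f;  // sentinel forces first computation
    float CachedModRatio = -1.0f;
    float CachedModPhaseInc = 0.0f;

    float GetModPhaseInc(float frequency, float modRatio, float sampleRate)
    {
        if (frequency != CachedFrequency || modRatio != CachedModRatio)
        {
            CachedFrequency = frequency;
            CachedModRatio = modRatio;
            CachedModPhaseInc =
                6.28318530718f * frequency * modRatio / sampleRate;
        }
        return CachedModPhaseInc;
    }
};
```

Since params typically change at block rate (or slower) while the inner loop runs at sample rate, the comparison is cheap relative to the multiply-divide it skips.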
The carrier and modulator oscillators share much of the same logic, so it makes sense to abstract out an Osc class. That would simplify the FMGenerator class and separate concerns more cleanly. It would also make the FM logic easier to expand if I want to introduce more carriers and modulators.
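A minimal version of that abstraction might look like this (hypothetical; this class doesn't exist in the repo yet, and the interface is just one plausible shape):

```cpp
#include <cmath>

// Sketch of the proposed Osc abstraction: both carrier and modulator
// become instances of one phase-accumulating sine oscillator.
class Osc
{
public:
    void SetSampleRate(float InSampleRate) { SampleRate = InSampleRate; }
    void SetFrequency(float InFrequency) { Frequency = InFrequency; }

    // Advance one sample. PhaseOffset lets a modulator's scaled output
    // phase-modulate a carrier instance.
    float Next(float PhaseOffset = 0.0f)
    {
        const float Out = std::sin(Phase + PhaseOffset);
        Phase += TwoPi * Frequency / SampleRate;
        if (Phase > TwoPi) Phase -= TwoPi;
        return Out;
    }

private:
    static constexpr float TwoPi = 6.28318530718f;
    float SampleRate = 48000.0f;
    float Frequency = 440.0f;
    float Phase = 0.0f;
};
```

An FMGenerator would then own two (or more) Osc instances and feed one's output into the other's PhaseOffset, which makes adding extra carriers or modulators a matter of composition rather than duplicated phase logic.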