Converting 24PPQ Midi Sync in Java/Processing

I would be the first person to say that, for the most part, MIDI is perfectly acceptable as an interface between musical devices, and it has survived for as long as it has because of how dead simple it is. MIDI is still plenty fast, and in terms of interoperability, has yet to be bested. However, MIDI does have its shortcomings, and while helping John Keston over at AudioCookbook with his Gestural Music Sequencer, I ran into a big one.

Way back in the day, I can only assume before Mu-ziq and BT, MIDI clock was implemented at a rate of 24 pulses per quarter note. This means that when you press play on your sequencer, the devices that are synced via this clock hear 24 pulses for every quarter note, and as a result can play back in time along with these pulses. After 12 pulses pass, you would move by an eighth note; 6 pulses, a sixteenth note; and 3 pulses, a 32nd note. You should see that as soon as I try to drop to 64th note resolution, I'm in trouble. How do I wait a pulse and a half if all that I'm relying on is the pulse to tell me when to move forward? I'm sure that while scoring the soundtrack for Legend, Tangerine Dream were perfectly happy being able to sequence 32nd notes, but how am I supposed to start an Aphex Twin cover band without 128th note stuttering beats and retrigger effects?

There are ways around this, of course: you can use the native time code in your sequencer or application and just use SMPTE frame-based timing to keep it all in sync, but if you're writing a simple step sequencer to interface with some master sequencer, this is a huge hassle. What's a guy to do? Convert it!

Conceptually, the steps to do this are simple:

1. Intercept pulses

2. Subtract the first pulse's timestamp from the second's to get the gap between them (in milliseconds)

3. Divide this gap by the quotient of your target resolution divided by 24 (to get n milliseconds)

4. Start a thread and send out a pulse to your sequencer every n milliseconds

So, if I want to convert 24 PPQ to 96 PPQ, I divide 96/24 to get 4. This means that for every pulse I get from the master sequencer, I should generate 4 pulses for my local sequencer. Now, look at the time of the first pulse, and subtract it from the time of the second pulse. Let's pretend I get something like 100 milliseconds. So at this point, while the master sequencer will be sending me pulses every 100 milliseconds, I should generate pulses every 25 milliseconds. Then just repeat this for every pulse I receive, so that if the tempo changes (a ritardando or accelerando), the amount of time between my generated pulses shrinks or grows along with it. Seems easy enough, but how to put it into practice?
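The steps above can be sketched in Java. This is a minimal illustration of the math only, not the actual RWMidi integration; the class and method names here are hypothetical:

```java
// Hypothetical sketch of the 24 PPQ conversion math described above.
public class SyncConverter {
    static final int SOURCE_PPQ = 24; // MIDI clock is fixed at 24 PPQ

    private final int multiplier;       // pulses to emit per incoming pulse
    private long lastPulseMillis = -1;  // time of the previous pulse, -1 if none yet

    public SyncConverter(int targetPpq) {
        this.multiplier = targetPpq / SOURCE_PPQ; // e.g. 96 / 24 = 4
    }

    // Called once per incoming 24 PPQ pulse. Returns the interval (in ms)
    // at which the converted pulses should go out, or -1 on the very
    // first pulse, when no gap can be measured yet.
    public long onPulse(long nowMillis) {
        long interval = -1;
        if (lastPulseMillis >= 0) {
            long gap = nowMillis - lastPulseMillis; // e.g. 100 ms between pulses
            interval = gap / multiplier;            // e.g. 100 / 4 = 25 ms
        }
        lastPulseMillis = nowMillis;
        return interval;
    }
}
```

Because the interval is recomputed from every fresh pair of pulses, a tempo change on the master is picked up one pulse later.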

I worked on a standalone project in Java first. Wired through RWMidi, I passed the pulses from the event handler in my main Java class to the sync converter, then, using reflection mapping, passed the newly generated pulses back into another event handler in my main class. I decided that this approach sucked for a few reasons. For one, there was the performance hit I took doing the reflection mapping twice (once in RWMidi, then again in my sync converter). Also, this required me to have two event handlers to catch the pulses in my main class: one for the originating pulses, and another for the newly generated pulses. Thinking in terms of an end user, who would likely not want to go through all of this trouble just to convert some pulses, I decided it would be better to jam the sync converter right in line with the MidiInput class of RWMidi. Since I'm no stranger to modifying that library (I enabled sync in it a few months ago), I figured Wesen wouldn't mind some additional modifications to his work to create this sync converter.

At first, I assumed the best approach would be to create a separate thread that handles the entire timing routine, a thread that would just stay in sync with the incoming pulses from the external sequencer. This approach turned out to be silly after 5 minutes. When you do the math, at 120 BPM the MIDI timing pulses are roughly 21 milliseconds apart. Asking a thread to perform even a simple algorithm like the one proposed above, while making sure to send a sync pulse on arrival from the external reference along with the calibrated "in-between" pulses (every 5 milliseconds, in this case), was asking too much. I tried it anyway, even though the math wasn't working out, and sure enough, a significant amount of drift was occurring between the external and internal sequencers.

So no dice on a replacement time reference. Perhaps I could let the original pulse fall through to keep my internal sequencer in sync, and create a timer/thread to handle the "in-between" pulses? This approach was on the right course, but failed for a few reasons. I couldn't get the separate thread to wake up quickly enough to send the pulse (or pulses), so it would fall behind the external sync every eight or nine pulses. The result was drift. The timer was closer, but the overhead of creating a new thread per timer every 21 milliseconds (at 120 BPM) was just not cutting it; the result was still a bit of drift.

Frustrated, I did what I always do in these situations and went to sleep. It was clear that I was losing track of the number one priority of the clock: keeping that internal device in lock step with the external device. These in-between pulses are technically superfluous, and won't even be accounted for unless you drop below a note resolution of 32nd note triplets. I started doing the math in my head. If a 1/4 note occurs every 500 milliseconds (120 BPM), then a 64th note arrives only every 31 milliseconds. Would a listener be able to distinguish if a 64th note were off by a few milliseconds one way or the other at this speed? What is the threshold where individual notes just sound like pulses in an oscillation? I speculated that this threshold is the minimum amount of space I would need between pulses; after that, it would just be a matter of sending the in-between pulses as quickly as possible.
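For reference, that back-of-the-envelope arithmetic works out as follows. This is just a sanity check on the numbers above, not part of the converter itself:

```java
// Quick check of the clock arithmetic at a given tempo.
public class ClockMath {
    // Milliseconds per quarter note: 60,000 ms per minute / beats per minute.
    static double quarterNoteMillis(double bpm) {
        return 60000.0 / bpm;
    }

    // Milliseconds per MIDI clock pulse (24 pulses per quarter note).
    static double pulseMillis(double bpm) {
        return quarterNoteMillis(bpm) / 24.0;
    }

    // Milliseconds per 64th note (16 of them fit in a quarter note).
    static double sixtyFourthMillis(double bpm) {
        return quarterNoteMillis(bpm) / 16.0;
    }
}
```

At 120 BPM this gives a 500 ms quarter note, clock pulses roughly 21 ms apart, and a 64th note roughly every 31 ms, matching the figures above.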

In order to find out exactly what that threshold was, I ran a sleep test directly on the MidiInput class. I determined my sleep time using the same formula from above, and slept the entire MidiInput thread for the determined amount. My assumption was that my sleep times would be shorter than the interval at which any quantized MIDI message could possibly arrive, so sleeping would be safe, especially if the MidiInput were only being used for sync messages. As I increased the tempo, I found that the threshold was around 5 milliseconds; past that, you couldn't sleep the thread for a short enough period of time to meet the 64th note requirement. Once this 5 millisecond threshold is crossed, the thread doesn't bother sleeping before sending the extra pulses, it just blasts them on to the receiver of the messages. I also discovered that this method of sleeping the thread was working just fine, and there was no reason to implement another method.
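That sleep-or-blast behavior can be sketched roughly as follows. The names here are hypothetical (the real logic sits inside RWMidi's MidiInput class), and this is only an illustration of the idea under the assumptions described above:

```java
// Hypothetical sketch: forward the incoming pulse immediately, then
// either sleep between the extra pulses or, below a ~5 ms interval,
// fire them back to back.
public class PulseSpreader {
    static final long SLEEP_THRESHOLD_MS = 5;

    interface PulseSink { void pulse(); }

    // Emit `multiplier` pulses spread over `gapMillis`, the first one
    // right away so the receiver never falls behind the external clock.
    static void spread(PulseSink sink, int multiplier, long gapMillis)
            throws InterruptedException {
        long interval = gapMillis / multiplier;
        sink.pulse(); // the original pulse falls straight through
        for (int i = 1; i < multiplier; i++) {
            if (interval >= SLEEP_THRESHOLD_MS) {
                Thread.sleep(interval);
            }
            // below the threshold, just blast the remaining pulses out
            sink.pulse();
        }
    }
}
```

Letting the original pulse through untouched is what keeps the internal sequencer locked to the external one; only the superfluous in-between pulses absorb any timing slop.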

A few tests and mistakes on my part later, and Keston was up and running with synced 64th notes on the GMS. Keep an eye on his space to watch his progress. The next release of GOL Sequencer will also include 64th (and possibly 128th) note capability.