Mark Wherry on 20 years of building tech for Hans Zimmer
Photos provided by Mark Wherry
Most people are interviewed for a job by their boss; Mark Wherry did the opposite: he interviewed his future boss, Hans Zimmer, for an article. Hans then invited Mark to fly to LA and work at what is now Remote Control Productions as a Cubase expert. "This week is exactly 20 years to the day since I started working here," the now Director of Music Technology says humbly.

Since then, Mark has done everything, from creating fake taiko samples with Hans and preparing orchestra sessions to writing album liner notes and working with spatial audio technologies like Dolby Atmos. He still remembers his first project, though. "We were working on the first Pirates of the Caribbean film and it needed to be rescored within five weeks," Wherry explains, noting that Zimmer was interested in using Nuendo specifically around the same time. "I ended up recording the cello overdubs with Nuendo 2, which is funny now given how recognizable the Jack Sparrow theme became." In between his other duties, Mark also collected notes on the software's various features, at one point writing an 8,000-word email to Steinberg to share Hans' feedback.
Still, he calls that film an exercise in discovering what the company was capable of. The project that followed, The Last Samurai, began with the idea of instrument design. "It ended up being just Hans and myself in the studio at 4am, with him triggering taiko drum sounds with a modular synth and manually creating different velocity layers," Mark recounts. "He had one hand on the filter cut-off frequency and the other holding a Marlboro Light, and it was exactly the studio experience you'd always dreamed about! We spent weeks recording [them], using what was then unbelievably expensive digital outboard gear like Sony's convolution reverb, and just doing sound design."

The scoring process itself has largely remained the same today. With around two or three months to work on a film and think it through, Zimmer often begins with sound design, recording a new vocabulary of sounds to kick off the composition process: "Dunkirk, for instance, was all about ticking clocks and little noises; with Interstellar, it was working with church organs."

Wherry's job, then, is to support Zimmer's vision from a technical standpoint, and this often begins with creating MIDI plug-ins for Cubase. "Since Hans comes from that modular world with step sequencers, he'll often ask: 'Wouldn't it be fun to have a MIDI plug-in that does this?'" Mark shares. However, supporting the creative vision doesn't stop at developing plug-ins.

Early on, whilst working on Batman Begins in 2005, Mark developed a networked MIDI system to overcome the limitations of the solutions that were commercially available at the time. "Back then, we were using Emagic's Unitor eight-port MIDI interfaces in a 64-port configuration," he explains. "Since we were moving two studios to England to work on that project, it made logistical sense to find software-based alternatives where possible. So, I wrote a Windows driver and then crossed my fingers it would work!"

From the very beginning, Mark wanted to bring more server-oriented technology into what was essentially a personal computer-driven environment. "In the early 2000s, people were using individual computers as samplers and it was a very old-fashioned way of building a studio, connecting everything with MIDI and audio cables," he tells me. "At that time, you'd be rebooting the computer every couple of hours, usually due to a crash, compared to the server world where you might be required to do that every few months. So there was a big chasm in the reliability and the general infrastructure between the two domains."
From there, Wherry developed the first 64-bit, multi-core sampler engine to be used on a feature film soundtrack in early 2007, before commercial samplers like Kontakt could support a 64-bit memory address space. By 2008, it had matured into a much more generalized sampler. "When we started on The Dark Knight in 2008, we ordered 100 Dell servers to support all of the composers and the technology we needed," he says, laughing at how crazy this idea seems now. "We had our own audio-over-Ethernet system I had developed, both for flexibility and to reduce the number of physical audio outputs required, which was partly informed by the fact that every sampler voice across the whole system was mixed into a quad configuration."
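The significance of that 64-bit jump is easy to quantify: a 32-bit process can address at most 2^32 bytes (4 GiB), a hard ceiling on how much sample data a sampler engine can keep in memory at once. A rough back-of-the-envelope sketch, using illustrative figures rather than numbers from the article:

```python
# How much audio fits inside a 32-bit address space? (illustrative figures)
ADDR_32 = 2**32          # 4 GiB: the ceiling for a 32-bit sampler process
ADDR_64 = 2**64          # effectively unlimited for sample libraries

BYTES_PER_SAMPLE = 3     # 24-bit audio
SAMPLE_RATE = 48_000     # samples per second
CHANNELS = 2             # stereo

bytes_per_second = BYTES_PER_SAMPLE * SAMPLE_RATE * CHANNELS  # 288,000 B/s
seconds_in_32bit = ADDR_32 / bytes_per_second

print(f"32-bit ceiling: {ADDR_32 / 2**30:.0f} GiB")
print(f"≈ {seconds_in_32bit / 3600:.1f} hours of 24-bit/48 kHz stereo audio")
```

Real samplers stream most data from disk rather than holding it all in RAM, so the limit manifests indirectly, but with orchestral libraries spanning thousands of velocity layers, a few hours' worth of addressable sample memory runs out quickly.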

While all of this technology seemed exciting back in the day, in Wherry's words, the landscape right now is pretty damn boring. "There's more and more stuff, but I don't see the conceptual nature of it changing," he explains. "There are more plug-ins and sample libraries on the market than ever, [yet] the number of hosts and actual programs you use to create music has more or less stayed the same."

What he sees now reminds him a lot of the early 2000s, since personal computers have once again begun to dominate the technology used by professional musicians. Although he finds the advances in accelerated computing interesting, he doesn't see much point in using them merely to rehash old ideas. As Mark says, "using a GPU simply to offload the same old audio processing algorithms sounds a lot like the DSP accelerator cards that early digital audio workstations relied on in the 1980s." Long term, however, he believes companies like Apple that are embracing domain acceleration in hardware will change the landscape considerably.
"All of these programs – Pro Tools, Logic, Cubase – have existed for decades. None of them are exciting anymore in the way they once were!"
Another big technology shift is AI. Mark, for his part, is watching Microsoft closely and its notion of co-pilots. "There's always been the possibility of using algorithmic composition to seed musical ideas, but to have someone beside you to help guide your process is interesting," he says. For RCP, which has its own large datasets of orchestral content and libraries, the first application that comes to Mark's mind is analyzing that data with artificial intelligence and exploring machine learning as a form of data compression. Beyond that, RCP could also build prediction models for how sound can be played back.
"Creative people always see technology as the villain because the human ego naturally wants to make itself the center of creativity. The musicians who will succeed in the future will be those who accept a more Copernican view of collaboration."
At the same time, Mark is a big fan of spatial audio and has done a lot of work with Dolby Atmos, including mixing the soundtrack album for No Time to Die in the format. Still, it all circles back to the commercial nature of distribution formats and, the way he sees it, the reality that there isn't really a market for spatial audio on the music side. "It's one of those great disappointments since there's no financial incentive. From a record label's point of view, a stream is a stream is a stream, and it doesn't matter if it's spatial or not," Wherry elucidates. "There isn't much commercial value right now because, sadly, not enough listeners are going to notice and care in terms of a standalone experience."

As we continue talking about music technology innovation, he mentions that a number of ideas being explored now stem back to the 1980s. Back when NeXT and the Lucasfilm Computer Division were dominating computer music research, for instance, one researcher had the idea of creating a tool where composers could prototype the way a score sounds on a computer and hear it all played back digitally. "If you go back to those research papers, there were a lot of ideas that were ahead of their time. The idea of prototyping scores in film production is now commonplace, of course, but other concepts were just left on the table," the music technology director opines. There is a lot of innovation to come – musicians just have to be more open-minded about it.