The figure above shows the phase responses brought together between sub and main. The crossover is not steep and therefore the phase responses overlap over a wide range.
The combined response of a 2-way xover
Alignment and Design of Professional Sound Systems
This week I spent two days at the Disney Animation Studios in Burbank, California. This is the place where major animation films are created. Unless you have been living under the Rock of Gibraltar you know what these films are. Mermaids, Beauties, Beasts, Princesses and more come to life in this space in the form of hand-drawn and digitized artwork. It is a fascinating place, nestled in a neighborhood full of the studios of Warner Brothers, Universal and other major players. This studio is quite different from the traditional movie lots in that a great volume of material is generated from a small amount of real estate.
The animation audio product is similar to – and different from – its counterpart in the world of flesh-and-blood actors in front of cameras. The recording process for animation dialog can be much more carefully controlled, since there is no need for on-site microphones with all of their challenges with noise and synching. The final stages – the mixing of audio and its inevitable translation into the cinema and home environment – face the same challenges, whether the originals are animated or live-action. The cinema challenge is about standards of reproduction. The media leaves the studios and is reproduced in a new space – cinemas, homes and whatever else. The creative designers – audio and video – must have faith that their work is accurately represented out in its public form.
This is a very different world from our live audio perspective. A live show has no requirement to adhere its reproduction to an ongoing standard. If the guy mixing ZZ Top thinks that he wants some more 400 Hz, then who is to argue with him? The 80 mic lines coming INTO the mixing console are not a finished product to be shipped to the listeners. The tom drum mic may have severe leakage issues from the snare. Reproducing it “accurately” could be an idiotic idea. The finished product in live sound is inherently – and continuously – a closed loop of self-calibration. The mix engineer constantly readjusts the system like the hydrodynamic stabilizers that continuously keep a cruise ship upright.
Where a standard is applicable in live sound is between the mix position and the audience – and that is where the worlds of live sound and cinema sound meet. In live sound, the self-calibrated mix location meets the audience at the same time, just beyond the security fence. In studio world, the self-calibrated mix position meets the audience in another room, at another time. Creativity is king in the mixing space, but objectivity is the only hope for accurately translating that creative undertaking to our listeners – whether it be live or Memorex.
Standards of reproduction
The cinema world has long adhered to standard practices in order for its media to be accurately represented. The audio standards were quite lax historically, but have made great strides in the last 30 years with standards, verification and testing brought to the industry through THX, Dolby and others. It is not my intent to provide a history of cinema sound here – suffice to say that, unlike live sound, we can measure a sound system in a room and have a target response that we can see on an analyzer. The reason is that – unlike live sound – the media presented to the speaker system IS the finished product and therefore can be objectively verified. A reproduction that is more accurate – closer to the standard – is objectively superior to one that is less accurate. If there is a peak at 400 Hz in a live sound system, the mix engineer can – and will – modify the mix to reduce 400 Hz until the offending frequencies are tamed. If a cinema playback system is left with such a peak, it will be left there for all to hear. If the speaker in the recording room is left with such a peak, the inverse will occur. There is no feedback loop between the recording studio playback monitors and the house speakers. This loop can only be closed by adherence to a standard in both locations.
A simple case in point is subwoofer level. Live engineers consider this a continuously variable creative option. In cinema world the sub level MUST be set objectively or major points of impact will be either over- or under-emphasized.
The SIM3 Analyzer
The folks at Disney Animation have added a SIM3 System to their inventory of acoustic analysis tools. This is an excellent tool for the job of calibration for studios and large spaces – and for verification that such spaces match. My purpose over these two days was to train a group of engineers there to operate the analyzer and to open their perspectives up to seeing how measurable physical acoustics affects their work and its translation. The addition of SIM3 opens up a lot of doors for cinema sound. The adherence to standards can stand to be greatly improved by the use of a complex, high-resolution FFT analyzer such as SIM.
In the next part I will describe some of the interesting things that came up during our two days there.
Here is a photo of the Disney Animation studios from their web site. An interesting note is that the building was built by McCarthy Construction Company. This was my family’s company (I am 5th generation) and I was expected to grow up and join it. Instead I did no such thing – blame it on rock and roll. But either way, I guess I would have ended up here!
OK. The saga continues below…
What is the best way to phase align our subwoofers to the mains? There is a hint in the way the question was phrased. I didn’t say time align (and it is not because I am afraid of the copyright police). I say phase align because that is precisely what we will do. Simply put, you can’t time align a subwoofer to the mains. Why? Because your subwoofers are stretched over time – the highest frequencies in your subwoofer can easily be 10-20 ms ahead of the lowest frequencies. Whatever delay time you choose leaves you with a pair of unsettling realities: (a) you are only aligning the timing for a limited (I repeat LIMITED) frequency range, and (b) you are only aligning the timing for a limited (I repeat LIMITED) geographical range of the room. So the first thing we need to come to grips with is the fact that our solution is by no means a global one. There are two decisions to make: what frequency range do we want to optimize for this limited partnership, and at what location.
Let’s begin with the frequency range. What makes the most sense to you? 30 Hz (where the subs are soloists), 100 Hz (where the mains and subs share the load) or 300 Hz (where the mains are soloists)? This should be obvious. It should be just as obvious that since we have a moving target in time, there is not one timing that can fit all.
Analogy: a 100-car freight train crosses the road in front of you. What time did the train cross the road? The answer spans 5 minutes, depending on whether you count the engine, the middle of the train, or the end. Such it is with the question: when does the subwoofer arrive? (The same is true for: when does the main arrive?) How do we couple two time-stretched systems together? In this case it is pretty simple. We will couple the subwoofer train right behind the mains. The rear of the mains is 100 Hz and the front of the subs is the same. We will run the systems in series. The critical element is to link them at 100 Hz. (I am using 100 Hz as an example – this can, and will, vary depending upon your particular system.)
The procedure is simple: measure them both individually, view the phase and adjust the delay until they match. You have to figure out who is first and then delay the leader to meet the late speaker. This will depend upon your speaker and mic placement. I say this is simple – but in reality, it is quite difficult to see the phase response down here. Reflections corrupt the data – it is a real challenge. Nonetheless, it can be done. It’s just a pain.
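The idea can be sketched numerically. This is a minimal illustration, not a field procedure: the two impulse responses, the 48 kHz sample rate and the 100 Hz crossover point are all invented stand-ins for real (much messier) measurements.

```python
import numpy as np

fs = 48_000          # sample rate (Hz) -- assumed for the sketch
f_xover = 100.0      # example crossover frequency from the text

# Hypothetical measured responses: a "main" arriving at 2 ms and a
# "sub" arriving at 6 ms (stand-ins for real, reflection-corrupted data).
n = 8192
main_ir = np.zeros(n); main_ir[int(0.002 * fs)] = 1.0
sub_ir = np.zeros(n); sub_ir[int(0.006 * fs)] = 1.0

def phase_at(ir, f):
    """Wrapped phase (radians) of the response at the bin nearest f."""
    H = np.fft.rfft(ir)
    freqs = np.fft.rfftfreq(len(ir), d=1 / fs)
    return np.angle(H[np.argmin(np.abs(freqs - f))])

# Phase difference at the crossover, wrapped to +/-180 deg, converted to
# time.  Note the inherent ambiguity: a single-frequency answer is only
# good modulo one cycle (10 ms at 100 Hz) -- watching the phase TRACE
# over frequency is what tells you which cycle is the right one.
dphi = phase_at(main_ir, f_xover) - phase_at(sub_ir, f_xover)
dphi = (dphi + np.pi) % (2 * np.pi) - np.pi
delay_ms = dphi / (2 * np.pi * f_xover) * 1000
print(f"delay the leader by ~{abs(delay_ms):.1f} ms to match at {f_xover} Hz")
```

Here the main leads, so it gets the delay; with the 2 ms / 6 ms arrivals above the answer comes out near 4 ms.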
When I get a moment I will post up some pics to show a sub phase-align in the field.
Wouldn’t it be nice if there was a simpler method? Like using the impulse response to get a nice simple answer directly in milliseconds, instead of having to watch the fuzzy phase trace. It is absolutely true that the impulse response method is easier. In my next post I will explain why the easy way lacks sufficient accuracy for me to ever use with a client.
****************** Part II *****************************************
FFT measurement questions and answers
The first thing to understand about an impulse response is that it is a hypothetical construct. This could, to some extent, also be said about our phase and amplitude measurements, but it is much more apparent – and relevant – with an impulse response.
The response on our analyzer is always an answer to a question. The amplitude response answers the question: what would the level over frequency be if we put in a signal that was flat over frequency? This is not hard to get our heads around. If we actually put in a flat signal (pink noise) we would see the response directly in a single channel. If not, we can use two channels and see the same thing as a transfer function. This makes it a hypothetical question – what would the response be with a flat signal – even if we use something like music.
Same story with phase, but this gets more complex. Seen any excitation signals with a flat amplitude AND phase response? You won’t find that in your pink noise. Pink noise achieves its flat amplitude response only by averaging over time. Random works for amplitude – but random phase – yikes – this will not get us any firm answers. In the case of phase we need to go to the transfer function hypothetical to get an answer – the phase response AS IF we sent in a signal with flat phase. Still the answer is clear: this is what the system under test will do to the phase response over frequency.
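Here is a minimal sketch of that hypothetical in code, assuming a numpy environment and a made-up system under test (a pure 1 ms delay). Dividing the output spectrum by the input spectrum gives the answer "as if" the excitation were flat in amplitude and phase, even though it was random:

```python
import numpy as np

fs = 48_000
x = np.random.default_rng(0).standard_normal(65536)  # noise-like excitation

# Hypothetical system under test: a pure 1 ms delay (48 samples at 48 kHz).
y = np.roll(x, 48)

# Transfer function: divide the output spectrum by the input spectrum.
# The random excitation spectrum cancels out of the ratio.
H = np.fft.rfft(y) / np.fft.rfft(x)

mag_db = 20 * np.log10(np.abs(H))  # ~0 dB everywhere: level is unchanged
phase = np.angle(H)                # a linear phase ramp: pure time delay
```

(np.roll makes the delay circular, which matches the FFT's periodic math exactly; a real analyzer would window and average many time records instead.)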
The impulse response display on our FFT analyzer answers this question: what would be the amplitude vs. time response of the system under test IF the input signal was a “perfect” impulse? OK, so what is a perfect impulse? A waveform with flat amplitude AND phase. That can’t be the pink noise described earlier, because pink noise has random phase. So what is it? A single cycle of every frequency, all beginning at the same time. Ready, set, GO, and all frequencies make a single round trip and stop. They all start together, the highest frequency finishes first, and the lowest finishes last. If you looked at this on an oscilloscope (amplitude vs. time) you would see the waveform rise vertically from a flat horizontal line, go to its peak and then return back to where it started.
IF the “perfect” impulse is perfectly reproduced it will rise and fall as a single straight line. The width of the line (in time) will relate to the HF limits of the system. The greater the HF extension, the thinner the impulse. As the HF range diminishes, the shortest round trip takes more time, and as a result the width of the impulse response thickens as the rise and fall reflect the longer timing. A system with a flat phase response has a single perfect rise and fall in its impulse response and a VERY important thing can be said about it: a single value of time can be attributed to it. The train arrives at 12:00 pm. All of it.
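That "flat amplitude AND phase" property of the perfect impulse is easy to verify numerically (a small numpy check of my own, not anything from the figures):

```python
import numpy as np

impulse = np.zeros(1024)
impulse[0] = 1.0  # the "perfect" impulse: all frequencies start together at t = 0

H = np.fft.rfft(impulse)
# Flat amplitude and flat (zero) phase at every frequency:
print(np.allclose(np.abs(H), 1.0), np.allclose(np.angle(H), 0.0))  # True True
```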
The impulse response on the FFT analyzer is not an oscilloscope. We do not have to put in a perfect impulse. We will use a second-generation transfer function, the inverse Fourier transform (IFT), which is derived from the transfer function amplitude and phase responses. This is the answer to the hypothetical question: what would the amplitude vs. time response be IF the system were excited by a perfect impulse?
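As a sketch of that second-generation computation (same assumed numpy setup; the 2 ms delay "system" is hypothetical): the analyzer never needs a real impulse, it computes the IR from the measured transfer function.

```python
import numpy as np

fs = 48_000
x = np.random.default_rng(1).standard_normal(16384)  # excitation (e.g. noise)
y = np.roll(x, 96)                                   # hypothetical DUT: a 2 ms delay

H = np.fft.rfft(y) / np.fft.rfft(x)  # transfer function (amplitude + phase)
ir = np.fft.irfft(H)                 # inverse transform -> impulse response

peak_ms = np.argmax(np.abs(ir)) / fs * 1000
print(peak_ms)  # 2.0 -- the "as if we fed it a perfect impulse" answer
```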
If the system under test does not reproduce the signal in time at all frequencies, then the impulse response shape will be modified. Any system that does NOT have a flat amplitude and phase response will see its impulse response begin to be misshapen. Stretching and ringing, undershoot and overshoot will appear around the vertical peak. Once we are resigned to a non-flat phase response we must come to grips with the fact that a single time value can NOT describe the system. The system is stretched. The time is stretched. The impulse is stretched.
This is where the FFT impulse response can be misleading. We can easily see a high point on the impulse response, even one that is highly stretched. Our eyes are naturally drawn to the peak – and most FFT analyzers will automatically have their cursors track the peak – and lead us to a simple answer like 22.4 ms for something that is stretched 10 ms either side of that. And here is where we can really get into trouble: we can nudge the analyzer around to get a variety of answers to the same question (e.g. the same speaker) by deciding how we want to filter time and frequency: ALL OF WHICH ARE POTENTIALLY MISLEADING BECAUSE NO SINGLE TIME VALUE CAN DESCRIBE A STRETCHED FUNCTION.
Did I mention that all speakers (as currently known to me) are time stretched? So this means something pretty important. The simplistic single number derived from an impulse response cannot be used to describe ANY speaker known (to me) – especially a subwoofer.
Does a stretched impulse response tell you what frequencies are leading, and by how much? Good luck. You would have a better chance decoding a German Enigma machine than divining the timing response over frequency out of the impulse. This brings us back to the heart of the problem with our original mission: we are trying to link the low frequencies of the main speaker (100 Hz) to the high frequencies of the subwoofer (100 Hz). The peaks of these two respective impulse responses are in totally different worlds. They are both strongly prejudiced toward the HF ranges of their particular devices, which means the readings are likely to be the timings of 10 kHz and 100 Hz respectively.
Simple answers for complex functions. Not so good. That’s it for the moment. Next I will describe some of the different ways that impulse responses can be manipulated to give different answers and when and where the impulse response can provide an accurate means of setting delays.
********************** Part III ***********************************************
The linear basis of the impulse response
Those of us using the modern FFT analyzers that are purpose-built for pro (and amateur) audio have been spoiled. We have grown so accustomed to looking at a 24- or 48-point-per-octave frequency response display that we forget that this is NOT derived from logarithmic math. The FFT analyzer can only compute the frequency response in linear form. The quasi-log display we see is a patchwork of 8 or so linear computations put together into one (almost) seamless picture. Underlying this is the fact that the composite picture is made up of a sequence of DIFFERENT time record lengths. Bear in mind that the editing room floor of our FFT analyzer is littered with unused portions of frequency data. We have clipped and saved only about half the frequency response data from any of the individual time records.
How does this apply to the impulse response? In a very big way. The impulse response is derived from the transfer function frequency response (amplitude and phase). It is a 2nd-generation product of the linear math. The IR is computed from a single frequency response – from a single time record – which means it comes from LINEAR frequency response data. The inverse Fourier transform (IFT) cannot be derived from the dissected and combined slices we use for the frequency response. The IR cannot contain equal amounts of data taken from a 640 ms, 320 ms, 160 ms… and so on down to 5 ms time record to derive its response. Think it through… there is a time axis on the graph. It has to come from a single time event.
The IR we see comes from a single LINEAR transform. The importance is this: linear data favors the HF response. If you have 1000 data points, 500 of them are in the top octave, 250 in the next one down and so on. This means that our IR peak – where the “official” time will be found – is weighted in favor of the highest octave. If you have a leading tweeter, the IR will find it ahead of the pack (in time and level). The mids and lows will appear as lumpy foothills behind (to the right of) the Matterhorn peak. If you have a lagging tweeter, the IR will show the lumpy foothills ahead of the peak (to the left), but the peak will still be the highest point. Our peak-finding function will still be drawn to the same point – the peak.
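The lopsided distribution is easy to see by counting the bins of a linear FFT in each octave (a small numpy illustration; the 48 kHz rate and 2048-point record are arbitrary choices of mine):

```python
import numpy as np

fs, n = 48_000, 2048                  # one linear FFT time record -- assumed sizes
freqs = np.fft.rfftfreq(n, d=1 / fs)  # evenly spaced bins from 0 to 24 kHz

# Count the bins that land in each octave: the top octave hogs half of them.
edges = [187.5, 375, 750, 1500, 3000, 6000, 12000, 24000]
for lo, hi in zip(edges[:-1], edges[1:]):
    count = int(np.sum((freqs >= lo) & (freqs < hi)))
    print(f"{lo:8.1f} - {hi:8.1f} Hz: {count:4d} bins")
# The counts double with every octave upward: 8, 16, 32, ... 256, 512.
```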
Now consider a comparison of arrival between two speakers. If they both extend out to 16 kHz (mains and delays) then the prejudice of the IR in favor of the HF response evens out. If we find the arrival time for both we can lock them together. Their response will be in phase at 16 kHz and remain in phase as we go down (TO THE EXTENT THAT THE TWO SPEAKER MODELS ARE PHASE MATCHED). This is a PARALLEL operation. 10 kHz is linked to 10 kHz, 1k to 1k and 100 to 100 for as long as they share their range. If the speakers are compatible, one size fits all and the limitations of the IR are even on both sides of the equation. If they are not compatible over frequency, we will need to see the PHASE response to see where they diverge, and solutions must be enacted from that viewpoint – more on that later.
Now back to the subs…
It should be clear now that the linear favoritism over frequency will NOT play out evenly in joining a sub to a main speaker. This is also true of aligning a woofer and tweeter in a two-way box. This problem holds for ANY spectral crossover tuning. Linear frequency math does not have a fair and balanced perspective over frequency. If you are looking at devices with different ranges they are subject to this distortion. The location of the peak found in our IR is subject to the linear focus. If the main speaker is flat the peak will be found where there are more data points: the top end – 4 to 16 kHz. All other frequency ranges will appear RELATIVE (leading or lagging) to this range. If you have a speaker that is similar to 100% of the speakers I have measured in the last 26 years, then one thing is certain: the response at 100 Hz is SUBSTANTIALLY behind the response we just found at 8 kHz.
The sub is NOT flat (duh!!) so there is a tradeoff game that goes on in the analyzer. As we lose energy (frequency rising) we gain data points (linear acquisition), so the most likely place the peak will be found is in the upper areas of the subwoofer range and/or somewhat beyond, before it has been too steeply attenuated. If you have a subwoofer that is similar to 100% of the speakers I have measured in the last 26 years, then one thing is certain: the response at 30 Hz is SUBSTANTIALLY behind the response we just found at its upper region.
One of the reasons I have heard given for using the IR values alone to tune spectral crossovers (subs+mains, or woofer+tweeter) is that the IR gives us “the bulk of the energy” for each driver, and that aligning “bulk of energy 1 + bulk of energy 2 = maximum bulk of energy.” Sounds good in text. But it does NOT work that way. You are making a series connection at a specific frequency range, not a parallel connection (where bulk might apply). Furthermore, the bulk formula is flawed anyway – because the linear frequency nature of the IR means that the two “bulks” are weighted differently.
********************** Part IV ******************
There are a variety of ways to compute an impulse response on an FFT analyzer. All of them have an effect on the shape of the response, how high the peak goes, and where (in time) the peak is found. Without going hard into the math we can look at the most decisive parameters.
VERY SIMPLIFIED IR Computation Features
1) The length of time included after time zero (the direct sound), in seconds, milliseconds etc.: This differs from the actual time record captured, since there is positive and negative time around time zero – but the math there is not important. In the end we have a span of time included in the computation. This puts an end-stop on our display – we can’t see a 200 ms reflection if we have only 100 ms of data after the direct sound. We could, however, choose to display less than the full amount of data we have. The visual may be a cropped version of the computation, or it could be the full length. The capture time also limits how low we can go in frequency. We can’t see 30 Hz if we only have 10 ms of data. Most IR measurements have the option of large amounts of time, so getting low frequencies included will not be a big issue. The fact that the frequency response is LINEAR means that the frequency weighting favors the HF – no matter how long or short our capture is.
2) Time increments/FFT resolution/sample frequency: How finely do we slice the response in time? The finer the slices, the more detail we will see. More slices = higher frequencies. If we slice it into 0.02 ms increments (50 kHz sample rate) we can see up to 25 kHz. If we slice at lower sample rates, the frequency range goes down. The same speaker, measured over the same amount of time, with different sample rates/time increments will include different frequency ranges – and therefore MOST LIKELY will have its impulse peak centered at a different time. This is important. The speaker did not change, but our conclusions about it did. This is a non-issue if we are comparing two speakers that each cover the same range – they would both have the same shift applied to them. But if we have one speaker with a full HF range and one without, the playing field just got tilted. If one speaker really has no HF, and the other one does – but it is filtered by the analyzer – then we cannot assume that synchronizing the two peaks will put the speakers in phase.
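Here is a hedged illustration of that tilt, using a synthetic "speaker" in numpy/scipy. The numbers are invented, and the low-pass filter merely stands in for the reduced frequency range of a lower sample rate:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48_000
t = np.arange(4800) / fs                       # 100 ms of "impulse response"

# Synthetic speaker: a sharp HF arrival at 1 ms plus a slower, lower
# LF bump centered at 5 ms (all values are made up for illustration).
ir = np.zeros_like(t)
ir[48] = 1.0
ir += 0.5 * np.exp(-(((t - 0.005) / 0.001) ** 2))

peak_full_ms = np.argmax(np.abs(ir)) / fs * 1000   # full range: 1 ms (the HF spike)

# The "same speaker" with its top end removed by computation:
b, a = butter(4, 500 / (fs / 2))                   # keep only below ~500 Hz
ir_low = filtfilt(b, a, ir)
peak_low_ms = np.argmax(np.abs(ir_low)) / fs * 1000  # now ~5 ms (the LF bump)

print(peak_full_ms, peak_low_ms)  # the speaker didn't change; our answer did
```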
3) Vertical scale: Linear/Log: The uncultured version of the IR is linear in time, frequency and level. This means that things that go negative will peak downward while positive movement goes upward. Polarity (and its inversion) can be seen. The downside of this is that linear vertical scaling translates very poorly visually for seeing the details of the IR such as late arrivals, reflections, etc. Worse yet is trying to discern level differences in linear. The Y axis does not read in dB. It reads as a ratio and this has to be converted. Upward peaks have a positive value and downward ones a negative value. The strength of an echo can be computed from the ratio of the direct level to the echo level – and converted by the 20 log formula into dB. Where it gets strange is when you try to compare positive-going direct sound to a negative-going reflection.
The log version is obtained by the Hilbert transform and shows the vertical scale in dB. But the downside is that there isn’t a downside. Pun intended. What I mean is that the negative side of the impulse is folded over with the positive and these are combined into a single log value. This can now be displayed in dB since everything is going one way. This has various names: energy-time curve (ETC), among others. The visual display is blind to polarity, but I am told by Sam Berkow that the cursor in SMAART shows whether the energy is positive or negative – even though it all displays positive.
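A sketch of that fold-over in code, using scipy's analytic-signal route to the envelope. The 1 ms direct sound and the polarity-inverted reflection 12 dB down are invented for the demo:

```python
import numpy as np
from scipy.signal import hilbert

fs = 48_000
ir = np.zeros(2400)
ir[48] = 1.0     # direct sound at 1 ms, positive polarity
ir[480] = -0.25  # reflection at 10 ms, polarity INVERTED, 12 dB down

# Energy-time curve: magnitude of the analytic signal, displayed in dB.
etc_db = 20 * np.log10(np.abs(hilbert(ir)) + 1e-12)

# The direct sound reads ~0 dB and the reflection ~-12 dB, which is easy
# to compare -- but the minus sign (the inverted polarity) is gone.
print(round(etc_db[48], 1), round(etc_db[480], 1))
```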
So once again we are back to the same place. If you are going to use the impulse response alone (I say you, because it will not be me) to align speakers in different frequency ranges, you are prone to computational items that will affect the HF and LF sides of the equation differently. One technique I have seen advocated is to push the sample frequency down so low that the upper regions of the HF speaker are filtered out. The idea is this: if the crossover is 100 Hz, then drop the resolution of the analyzer down to filter out the region above 100 Hz in the HF speaker. Then we will see the impulse response at 100 Hz of BOTH speakers – and VOILA, we have our simple answer. BUT – one impulse response (the HF) has had the device filtered by computation; the other (the LF) is filtered by an actual filter. We have a merger of the VIRTUAL – a computationally created phase shift and frequency response filtering (which we don’t hear) – with an actual: the filter response of the crossover. It is possible that the value from the impulse will give the correct reading so that the crossover is actually in phase – possible, not probable – but we won’t know until we measure the phase, which is the whole point of this exercise.
Simply put: why bother with a step-saving solution (crossover alignment by IR) if it is so prone to error that you have to do the second step (crossover alignment by phase) anyway? If a step is to be skipped it is the IR – not the phase.
A hot topic of discussion is cardioid subwoofers, so this will be the place to get that topic going. At the moment I will use this as a test to see if I can upload a few graphics from the book here. The first will be some pics that describe the behaviour of end-fire arrays.
The above figure shows the timing chain for a set of four speakers in the end-fire configuration. The basic principle is a game of acoustic “leap frog” where the rearmost speakers jump sequentially over the front unit. The timing is set up so that all four speakers are synchronised at the front of the 1st (the rightmost in the figure) speaker. This “in phase” situation causes the signal to sum additively in the forward direction. The phase angles shown at the right for 3 different frequency ranges are color coded to reflect their position on the phase cycle (green = +/- 90 deg, yellow = 90-120 deg and red = >120 deg). The key item here is NOT the exact phase angle, but rather the amount of AGREEMENT in phase between the 4 speakers. In this case 31 Hz shows perfect agreement at 98 degrees so the addition will be strong. 63 Hz shows 4 speakers all synch’d at 198 deg and will achieve the same effect.
Meanwhile in the rearward direction (shown on the left) the timing chain reveals four speakers out of time as they move over the rearmost speaker. The 4 elements are all at different times, spread over 17.4 ms. The phase responses also fall apart – ranging from 1/6 cycle (65 deg at 31 Hz) to 2.16 cycles (125 Hz) and all sorts of values in between. These disparities in phase cause the amplitude response to sum very poorly in the rearward direction – the intended result of the design. In sync at the front, scrambled at the rear. The side directions fall somewhere in between, and the end result is shown in the 3 polar plots at the bottom of the chart.
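The leap-frog timing can be sketched numerically. The 1 m uniform spacing here is my assumption (chosen because it lands near the ~17.4 ms rear spread in the figure); the figure's element-by-element phase values come from the actual geometry, so this uniform sketch will not match them exactly:

```python
import numpy as np

c = 343.0                 # speed of sound, m/s
d = 1.0                   # cabinet spacing in meters -- an assumption chosen
                          # to land near the ~17.4 ms rear spread in the figure
x = np.arange(4) * d      # positions: 0 = rearmost cabinet, 3d = frontmost

delay = x / c             # delay the forward cabinets so all sync at the front

# Relative arrival times for a distant listener in front of / behind the
# array (common path lengths dropped; only the differences matter):
front = delay + (x.max() - x) / c   # extra travel for the rear cabinets
rear = delay + x / c                # extra travel for the front cabinets

spread_front_ms = (front.max() - front.min()) * 1000   # 0.0: perfectly in sync
spread_rear_ms = (rear.max() - rear.min()) * 1000      # ~17.5 ms: scrambled

# Phase disagreement at the rear, in cycles, grows with frequency:
for f in (31.0, 63.0, 125.0):
    print(f"{f:5.0f} Hz: {spread_rear_ms / 1000 * f:.2f} cycles of spread at the rear")
```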
This figure shows an alternate spacing/timing configuration. Instead of a constant spacing (as the upper version shows) with a consistent delay timing, this config has a log spacing – and log-adjusted timing. The leap-frog game is still played the same at the front – everybody in sync at the front cabinet – but things play out somewhat differently at the rear. The difference is small but illustrates the options we have available to us.
This 3rd figure has a different twist to it. In this case the intent is NOT to have perfect synchronicity at the front of the array. Instead the timing sequence is set up so that they are slightly off – such that there is about a 90 degree spread at 125 Hz, 45 degree spread at 63 Hz etc. This creates a less than perfect addition at front/center – but causes a better addition at the front corners. The result is a flattened front and an overall triangular shape. This configuration was shown to me by Mitchell Hart way down in Australia.
On November 20th and 21st I went to LDI and conducted seminars on (guess what) sound system design and optimization. These were part of a Cirque du Soleil sponsored education program and each session was 2 hours. I find it difficult to cram that subject into a 32-hour seminar, so compressing it down to 2 is just too much fun. Nonetheless we did have two good sessions – the best was the second one, where I was joined by Paul Garrity, Matt Ezold (both from Aurbach and Associates) and Bob Barbagallo from Solotech. All of us have been involved in a large number of the Cirque productions such as the Beatles, Zumanity, Zaia and others. This gave us an opportunity to share our perspectives on how these projects come together. Paul and Matt described their role in translating the artistic vision into a stage, flyspace, lifts and a room, while Bob B described the process of taking the macro version down to the minute details of wire terminations etc. I described my role in taking the hundreds of speakers and making them work together to create even coverage over the space. It was an informative day for me, getting a chance to sit back and see what goes on before I am ever brought into the picture.