P&E Wing Recommendations for Archival Media

by Rob Schlette

“How to Archive Multitrack DAW Recordings” discussed how the contents of an archive (the data itself) can be consolidated or flattened to ensure that the digital audio remains playable for as long as the audio file format is a supported technology. Another critical consideration for an archive is the storage medium, or container, that is used.

There are two important goals when selecting archival media:

  1. Each medium should be a standardized, widely supported archival technology. This will ensure that the hardware and software that facilitate playback will be maintained well into the future. It also ensures that once the technology is flagged for obsolescence, there will be a standardized migration strategy.
  2. All of the typical practices related to redundant storage should be followed. These include providing copies of the archival data on multiple media, stored in different locations. Redundancy in count, container type, and location is required to ensure data safety.

These basic redundancy practices are described in detail in “Best Practices for Backing Up Your Data”.
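
As a rough sketch of what those practices look like at the file level, the following copies an archival file to several destinations and verifies each copy against a checksum of the original. The paths and the use of SHA-256 checksums are illustrative assumptions, not something the cited recommendations prescribe.

```python
# Minimal sketch: copy one archival file to several destinations and
# verify each copy against a SHA-256 checksum of the original.
# The paths below are hypothetical placeholders.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate(source: Path, destinations: list[Path]) -> None:
    """Copy `source` into each destination directory and confirm integrity."""
    reference = sha256_of(source)
    for dest_dir in destinations:
        dest_dir.mkdir(parents=True, exist_ok=True)
        copy_path = dest_dir / source.name
        shutil.copy2(source, copy_path)
        if sha256_of(copy_path) != reference:
            raise RuntimeError(f"Checksum mismatch for {copy_path}")
        print(f"Verified copy: {copy_path}")

# Hypothetical usage: one master file, three separate storage locations.
replicate(
    Path("masters/my_song_final_mix.wav"),
    [Path("/Volumes/ArchiveA"), Path("/Volumes/ArchiveB"), Path("offsite_staging")],
)
```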

Rather than share a lot of generalities about archival storage media, let’s look at the music industry standard protocol for providing archival materials, “Recommendations for Delivery of Recorded Music”, from the Producers and Engineers Wing of the Recording Academy.

Archival Media Choices

The P&E Wing recommendations specify that each data set should be provided on three separate archival media. The specific container choices are presented in two tables.

The first table lists “Primary Master Delivery Media” choices based upon the multitrack format (e.g. DAW, hard disk recorder, or ATR). The P&E Wing recommendations require that all master recordings be archived to the specified ‘Table 1’ medium. In addition to long-term storage, these common playback media also facilitate short-term re-use.

The hard drive specification is a significant example of how these recommendations serve to cull the wide variety of media choices and variations. HDD choices are limited to spinning disks in FW 400/800, USB 2.0/3.0, or Thunderbolt enclosures. As of the last revision in 2013, “the Deliverables Committee [was continuing] to evaluate various technologies such as Solid State Drives (SSD) and Flash memory. Until more longevity studies are published, the committee does not consider them to be archival in nature.”

Could other technologies work? Sure; but if every workable choice becomes a long-term option, there is very little chance that the hardware and software that facilitate playback will be maintained for the long term. There would be even less chance of a standardized migration strategy.

A second table of “Transitional Master Backup Storage Media” is also provided. These include technologies like LTO data storage tape and optical media choices. The P&E Wing recommendations specify that each archival data set be provided on two different media from ‘Table 2’. This provides for very thorough protection against the failure or obsolescence of any single storage technology.

The P&E Wing recommendations also take into account that some form of Archival/Storage Application is necessary for effectively using many common archival media, particularly data storage tape. The recommendations narrow the choices to TOLIS Group’s BRU, EMC/Dantz’s Retrospect, and Unix tar.

As part of considering Archival/Storage Applications, the P&E Wing stresses the importance of the manufacturer’s commitment to support the application source code over the long term. Specifically, will the archives that are dependent upon these companies’ software technologies survive beyond the useful life of the companies themselves?

The only two ways to have this important assurance are to use an open-source solution, or to choose a manufacturer who is willing to put their source code in escrow with a non-commercial third party, like the Library of Congress. Check out the “Recommendations for Delivery of Recorded Music” for a detailed breakdown of manufacturers.
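
Of those three, Unix tar is the fully open, vendor-neutral option. Purely as an illustration (the folder and archive names below are made up, and this is not a substitute for the formats and procedures the recommendations actually specify), here is a minimal sketch that packs a session folder into a tar archive and lists it back as a sanity check, using Python’s standard tarfile module:

```python
# Minimal sketch of the "Unix tar" option: pack a session folder into a
# tar archive, then read it back and list its members as a sanity check.
# The folder and archive names are hypothetical.
import tarfile
from pathlib import Path

session_dir = Path("MySong_Masters")        # hypothetical archival data set
archive_path = Path("MySong_Masters.tar")

# Create an uncompressed tar archive (the most widely supported form).
with tarfile.open(archive_path, mode="w") as tar:
    tar.add(session_dir, arcname=session_dir.name)

# Re-open the archive and list its contents.
with tarfile.open(archive_path, mode="r") as tar:
    for member in tar.getmembers():
        print(f"{member.size:>12}  {member.name}")
```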

Standards Set the Path

Industry standards such as the one discussed here are important for delivering a viable archival data set. They do not extend into the equally important long-term stewardship of the archival media: HDDs, analog tapes, and data storage tapes all carry their own use and replacement considerations over time. That said, a thorough study of the “Recommendations for Delivery of Recorded Music” will get you started turning in master recordings that stand a chance of being around long after you are.


Audio Perception and ABX Listening Tests

by Rob Schlette

Audio production requires a lot of decision making. In fact, most of audio production is decision making.

Unfortunately, pro audio marketing nonsense has devalued sensory feedback, and created elaborate preconceptions about how certain things 'sound'. We are routinely exposed to the most outrageous qualitative claims that have never been proven (or even suggested) with a systematic listening test.

We owe it to ourselves and to our clients to make good choices with our ears. Let’s take a look at how anyone with a basic DAW setup might be able to go about conducting a listening test of their own.

Ground Rules

  1. If a claim or question includes phrases like, “sounds better” or “can hear the difference”, the most direct way to prove it (or dismiss it) is a listening test.
  2. A listening test is useless if the listener can see what he or she is listening to (e.g. selections labeled MP3 and CD, or any visible waveforms). Our eyes will betray our ears, and reinforce preconceptions.
  3. The results of a test aren’t results if they can’t be repeated.

All of these types of premises are debated in online communities, but we have to draw some practical boundaries. We only care about answering the question, "can you hear the difference between these two things?"

ABX Testing

An ABX listening test takes two audio samples (A and B) and provides a method for determining whether a listener can distinguish them. During the test, the listener is asked to identify whether each of a series of playback examples (X) is sample A or sample B. Most test cycles run about ten trials. The listener’s score is typically quantified as the percentage of correctly identified X’s. If you factor in chance, you can calculate how reliably the listener is actually telling A from B.

Presumably if you identify X correctly close to 100% of the time, you can hear a difference. If your scores keep landing in the 50% range, or vary widely across multiple tests, the suggestion is that you’re not hearing a reliable difference between the two audio examples. If any of the above is true across reasonably well-selected groups of listeners, then you're really starting to learn something.
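
To put a number on “factoring in chance”: with a 50% chance of guessing each X correctly, the likelihood of a given score arising from luck alone is a simple binomial tail sum. Here is a minimal sketch in plain Python (no ABX software assumed):

```python
# Probability of scoring `correct` or better out of `trials` ABX trials
# by pure guessing (each trial is a 50/50 coin flip).
from math import comb

def p_value_by_chance(correct: int, trials: int) -> float:
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"9/10 correct by chance: {p_value_by_chance(9, 10):.3f}")  # ~0.011
print(f"6/10 correct by chance: {p_value_by_chance(6, 10):.3f}")  # ~0.377
```

Nine out of ten is hard to write off as guessing (about a 1% likelihood by chance); six out of ten happens by luck more than a third of the time.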

Keep in mind:

  • If you can't hear a difference between two things, it doesn't necessarily mean you're tin-eared or dim. You might have just freed yourself from a myth.
  • ABX is not about minutia or 'detecting small differences'. Those things tend to fall completely away in an ABX test. That's part of the point.
  • If the results of your test embarrass you, you might be about to learn something.

ABX Testers

There are several software ABX apps available. I use Takashi Jogataki’s (free) ABXTester all the time. I highly recommend it for Mac users. QSC made a fairly famous hardware ABX Comparator until 2004.

If you're interested in ABX listening tests for digital audio codecs, the Sonnox Fraunhofer Pro Codec includes an excellent built-in ABXer.

Application Example

Lots of software companies offer different codecs for creating compressed audio formats like MP3 and AAC. There are a lot of reasons to prefer one over another, including user interface, cost, and brand association. To keep myself honest, I’ll typically download a demo of a new codec and ABX it against my current preference.

I’ll bounce the same audio source twice – once with each codec product set to identical digital audio precisions. Absolutely nothing else about the two bounces can be different, or the test is pointless. If I’m really being honest, I get someone else to load up the examples into the ABX app so I don’t know which is which.

After one round of ABX testing (repeatedly identifying X as either A or B based on what I’m hearing) I observe my success rate. I’ll usually repeat the test 3 to 5 times, maybe using different monitors (i.e. limited bandwidth versus full-range monitors). If the results suggest any ability to hear the difference (especially if I'm preferring the new codec), I’ll usually repeat all of the above with a wide variety of playback samples from different musical genres.
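
For the curious, one round of that procedure boils down to something like the following command-line sketch. The play() function is a hypothetical placeholder (real playback depends on your setup, and the files’ identities have to stay hidden from the listener), and the file names are only examples:

```python
# Minimal sketch of one ABX round: X is randomly A or B on each trial,
# the listener answers "a" or "b", and the success rate is reported.
import random

def play(label: str, path: str) -> None:
    # Hypothetical placeholder: a real tester would play the file blind.
    print(f"[playing {label}]")

def abx_round(file_a: str, file_b: str, trials: int = 10) -> float:
    correct = 0
    for i in range(1, trials + 1):
        x_is_a = random.choice([True, False])
        play("A", file_a)
        play("B", file_b)
        play("X", file_a if x_is_a else file_b)
        answer = input(f"Trial {i}: is X sample A or B? [a/b] ").strip().lower()
        if (answer == "a") == x_is_a:
            correct += 1
    print(f"Identified X correctly in {correct}/{trials} trials ({correct / trials:.0%}).")
    return correct / trials

# Hypothetical usage with two codec bounces of the same source:
# abx_round("bounce_codec_current.wav", "bounce_codec_new.wav")
```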

This process isn’t objective or blind enough to qualify as a truly scientific test, but it goes a long way toward eliminating a lot of self-deception and marketing fog.

Cautions

The most important step in the ABX testing process is defining a test that has a single variable. If there is more than one thing different between A and B, you’re not really going to learn anything useful.

For example, a question like, “does mic pre A sound different than mic pre B?” is complicated. First, you have to consider the wide range of variables between two successive performance examples. Even after eliminating those with a mic splitter (or a playback example), you would still need to account for the gain staging of the two mic pres by devising a standard for establishing ‘equal gain’ between the two signal chains (e.g. acoustic test noise metered at a reference level at the mic pre outputs).
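
As one hedged illustration of checking ‘equal gain’, you could capture the same test noise through each pre and compare the RMS levels of the two recordings. This sketch assumes 16-bit mono WAV captures with hypothetical file names, and uses only Python’s standard library:

```python
# Measure the RMS level (in dBFS) of a 16-bit mono WAV capture from each
# mic pre and report the gain offset between them. File names are hypothetical.
import math
import struct
import wave

def rms_dbfs(path: str) -> float:
    with wave.open(path, "rb") as wav:
        assert wav.getsampwidth() == 2 and wav.getnchannels() == 1
        frames = wav.readframes(wav.getnframes())
    samples = struct.unpack(f"<{len(frames) // 2}h", frames)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / 32768.0)

level_a = rms_dbfs("testnoise_pre_a.wav")  # hypothetical capture through pre A
level_b = rms_dbfs("testnoise_pre_b.wav")  # hypothetical capture through pre B
print(f"Pre A: {level_a:.2f} dBFS   Pre B: {level_b:.2f} dBFS")
print(f"Offset to match: {level_a - level_b:+.2f} dB")
```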

A question like, “can I hear the difference between a 96kHz digital recording and one sampled at 44.1kHz?” would require you to have an acoustic or analog test source (a digitally derived source would be irrelevant). With an acoustic source you would need two identical converters feeding two different DAW setups with nothing different between them but sample rate. You would also need to bounce both examples at the same digital audio precision in order to conduct the ABX test, which would redefine the question as, “can I hear the difference between a 96kHz digital recording and one sampled at 44.1kHz once they’re both at 44.1kHz?”

Obviously, the simpler the question, the simpler the test. Once you get used to thinking through the variables that affect perception, marketing claims will begin to inspire the question, “how would you test that?” The answer will either inspire some new exploration of your own, or instantly expose the silliness that often lies just under the surface of pro audio marketing.

ABX testing is just one way of attempting to determine unbiased answers to questions of audio perception. Other methods like null testing might be better for particular scenarios, as long as the test is well-conceived. There are some popular examples of tremendously silly null testing on YouTube, but you’ll be smart enough to consider a single variable at a time.