Recorded Music Revenue Survived 2020
2020 wasn't fun for anyone, the music world included. That said, the Year-End 2020 RIAA Revenue Statistics are out, and the news isn't so bad for recorded music revenues. While live music suffered in the extreme last year, recorded music revenues in the U.S. grew 9.2% to $12.15 billion worth of estimated retail value. That makes 2020 the 5th consecutive year of growth for the industry. Here are some of the highlights from the report:
What were the growth areas in 2020?
- Digital distribution continued to grow, making up 90% of shipments in 2020
- Streaming revenue increased 13.4% in 2020, while digital downloads tumbled 18%
- The most valuable growth segment was paid streaming subscriptions, up in number by 25% and up in revenue by 14.6%
What's up with physical media?
- Physical media continue to decline in the mass consumer market, shipping 17.4% fewer units in 2020 and dipping 0.5% in revenue. Physical media made up just 9% of total revenues in 2020
- Vinyl is a bright spot in a small segment, now comprising the majority of physical media revenue for the first time since 1986. Vinyl shipped 23.6% more units in 2020, for revenue growth of about 29%
In 2021, as the music industry tries to heal from the loss of live performance revenue, hopefully we can see this growth in consumer demand and resulting revenue as a cause for hope. People love music, and they are increasingly willing to pay for access to great records. You can check out the full report here.
Information Security for Media Producers
System continuity means business continuity. Keep your clients happy and your invoices rolling out by covering as many free and inexpensive information security controls as you can. Your high-value data is a ransomware target, but you can defend against the majority of attacks by taking advantage of simple, low-cost resources like these.
Access Controls
- define a password policy based on current recommendations
- implement 2-factor authentication on important accounts like email, banking, and shared storage
- deploy a password manager to generate and save long random passwords, and eliminate password reuse (a quick sketch of what 'long random' means follows this list)
- implement a policy of least privilege, especially in OS user accounts
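As a minimal illustration of the 'long random password' idea, here is a sketch using Python's standard secrets module. A password manager handles both the generating and the saving; this only shows what 'long and random' looks like.

```python
# A minimal sketch of a 'long random password', using Python's standard
# secrets module (cryptographically secure randomness). A password
# manager does this, plus the saving, for you.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 24) -> str:
    """Generate a random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # unique per account -- no reuse
```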
Vulnerability Management
- inventory and patch your systems
- deploy a free vulnerability scanner like Nessus, MBSA, or Qualys FreeScan, and address major issues
Change Management
- deploy a free endpoint malware scanner like Sophos Home, Malwarebytes, or HouseCall
Data Recovery
- deploy automated backup tools like Crashplan or Backblaze
- make a plan for data recovery
Network Security
- implement simple network segmentation to keep systems in defendable networks
- implement strong wifi security
Additional Resources
- SANS work from home recommendations
- understanding data value
- NIST Incident Handling Guide
Why the Apple Digital Masters Program Matters
Editorial Note
Mastered for iTunes is now Apple Digital Masters. On 08/07/2019 Apple Music announced the launch of Apple Digital Masters, a rebranding initiative that combines all of Apple's Mastered for iTunes collections into one catalog. The iTunes® brand is finished, but the technical standard, and its justification, remain the same.
Apple notes that the majority of top releases on Apple Music as of 2019 are Apple Digital Masters, specifically about 75% of the Top 100 in the U.S. and 71% of the Top 100 globally. The details in this 2013 article remain as valid as ever. Enjoy.
Why Mastered for iTunes Matters
by Rob Schlette
This article explains the Mastered for iTunes program, and why it’s an important benchmark for digital music distribution.
Masters v Premasters
Like nearly every other digital music distribution channel, iTunes doesn’t really want a master; they want a premaster. The so-called ‘mastering’ process is actually two related, but operationally discrete sub-processes:
- Premastering, which encompasses all of the aesthetic decisions related to preparing a set of mixes for an audience. Premastering answers any questions related to what the record is going to sound like.
- Mastering, which is the process of creating the delivery media that will allow the audience to access the material. In the case of iTunes, this includes encoding an Apple AAC file and adding the appropriate metadata.
It doesn’t matter if you’re a major label dealing directly with iTunes or a self-released artist using an aggregator; iTunes wants a specific digital audio premaster so that they can do the encoding (i.e. mastering) in-house.
So what do they want, and how do you know that your master is going to turn out the way you want it to? That’s precisely what the Mastered for iTunes program is all about.
Codecs and Compensated Premasters
The goal of any digital audio codec is to take a high-resolution or CD-quality digital audio file, and generate an encoded file that retains as much of the subjective audio quality as possible, but with a reduced file size. Regardless of anyone’s opinion of any particular codec, if your music is going to be sold on iTunes, it’s going to be encoded by Apple using their variable bit rate 256kbps AAC encoding format.
This introduces an important issue. Apple AAC is not lossless. This means the sound of the master may be discernibly different from the premaster.
The key to managing this off-site encoding issue is to reference mixes and premasters through a round-trip codec plugin that provides real-time auditioning of the Apple AAC process. Apple has provided the AURoundTripAAC Audio Unit for that very purpose. In addition, the Sonnox Fraunhofer Pro-Codec provides the same facility. The purpose of these tools is not to create encoded masters, but to audition them using the very same technology that will be used when they are encoded off-site.
If your workstation lacks the technical specs to run these real-time tools, Apple has also provided the afconvert utility. This command-line tool facilitates off-line encoding using the same codec technology as the tools described above.
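For illustration, here is a hedged sketch of driving afconvert from Python to batch-audition a folder of premasters. The flags follow Apple's published examples for 256kbps VBR AAC, but afconvert options vary by macOS version, so verify against `afconvert -h`; the folder name is hypothetical.

```python
# A hedged sketch: batch-encode WAV premasters to AAC with Apple's
# afconvert (macOS only), to audition what off-site encoding will do.
# Flags follow Apple's published examples; verify with `afconvert -h`.
import pathlib
import subprocess

def audition_aac(premaster: pathlib.Path) -> pathlib.Path:
    """Encode one WAV premaster to 256 kbps VBR AAC for auditioning."""
    encoded = premaster.with_suffix(".m4a")
    subprocess.run(
        ["afconvert",
         "-f", "m4af",    # M4A container
         "-d", "aac",     # AAC data format
         "-b", "256000",  # 256 kbps target bit rate
         "-s", "2",       # constrained VBR strategy
         "-q", "127",     # highest codec quality
         str(premaster), str(encoded)],
        check=True,
    )
    return encoded

for wav in sorted(pathlib.Path("premasters").glob("*.wav")):  # hypothetical folder
    print("encoded:", audition_aac(wav))
```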
Best Practices and Deliverables
The Apple AAC codec performs measurably better with high-resolution digital audio input than with CD-quality input. A cursory review of the technology behind variable bit rate encoding should confirm this. ABX listening tests can do the same.
Apple’s Best Practices for Mastering for iTunes lays out the recommended WAV file deliverables:
- “An ideal master will have 24-bit, 96kHz resolution.”
- “Don’t upsample files to a higher resolution than their original format.”
To reiterate, a CD premaster is an inferior input to the iTunes delivery system. 24-bit WAV files with sample rates from 44.1kHz-192kHz are recommended.
Additionally, Apple recommends that these digital audio files include “a small amount of headroom (roughly 1dBFS).”
When D/A converters generate a continuous waveform using digital audio data, there are analog levels greater than the maximum peak sample level. When oversampling is used during playback, this issue can be compounded. It is common practice to leave some amount of headroom to avoid inter-sample clipping, even for un-encoded masters.
Many mixers and mastering engineers employ reconstruction metering to manage inter-sample audio levels. Apple has also provided afclip, which is a command-line tool for checking both sample and 4x oversampled peaks. afclip provides both quantitative and GUI outputs for locating clips.
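To make the inter-sample peak idea concrete, here is a minimal sketch (assuming numpy and scipy) that estimates reconstructed peaks by 4x oversampling, the same basic approach afclip and reconstruction meters take. It is an illustration, not a certified true-peak meter.

```python
# Estimate inter-sample peaks by oversampling 4x and measuring the
# reconstructed waveform. Illustrative only; not a certified meter.
import numpy as np
from scipy.signal import resample_poly

def oversampled_peak_dbfs(samples: np.ndarray) -> float:
    """4x-oversampled peak level in dBFS for a float signal in [-1, 1]."""
    upsampled = resample_poly(samples, up=4, down=1)
    return 20 * np.log10(np.max(np.abs(upsampled)))

# A full-scale 11.025 kHz sine at 44.1 kHz lands its samples off-peak,
# so the reconstructed peak is ~3 dB hotter than the sample peak.
sr = 44100
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 11025 * t + np.pi / 4)
print(f"sample peak:      {20 * np.log10(np.max(np.abs(signal))):+.2f} dBFS")
print(f"oversampled peak: {oversampled_peak_dbfs(signal):+.2f} dBFS")
```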
Bottom line: Apple is requesting 24-bit WAV premasters with native sample rates and approximately 1dB of peak headroom.
Why Mastered for iTunes Matters
CD-DA is not the primary music delivery medium anymore. The 2016 RIAA Music Industry Shipment Stats showed that, “digitally distributed formats comprised 75.5% of the total US market by dollar value in 2016.” What is the sense in continuing (intentionally or not) to master strictly for CD? Ignoring a concerted, thoughtful approach to delivering music for digital distribution ignores the statistical bulk of the listening audience.
The Mastered for iTunes Best Practices aren’t the least bit obscure or esoteric. In fact, they’re a good de facto standard for any digital premaster delivery. The loudest argument for ignoring the Mastered for iTunes program is that streaming music sources have overtaken digital download outlets as the primary source of music (digital or otherwise). That observation seems to make a better case for coordinated premaster delivery standards than against.
The basic premise behind Mastered for iTunes is that AAC is not CD-DA, so best-quality results require unique deliverables. This has been true (and well-accepted) for vinyl, cassette, broadcast radio, etc. The technology required for listenable streaming music certainly requires a similar (if not more complex) consideration.
Mastered for iTunes is a simple standard for providing predictable high-quality results for one of music’s most popular retail distribution channels. Beyond that, it is an excellent chance for the pro audio community to get in the good habit of carefully considering and accommodating digital music distribution.
A Guide to Mastering for Digital Distribution
by Rob Schlette
Mastering for digital distribution isn’t really mastering at all; it’s premastering. This article will help you know what to ask for from your mastering engineer.
The so-called ‘mastering’ process is actually two related, but operationally discrete sub-processes:
- Premastering, which encompasses all of the aesthetic decisions related to preparing a set of mixes for an audience. Premastering answers any questions related to what the project is going to sound like.
- Mastering, which is the process of creating the delivery media that will allow the audience to access the material. Mastering includes the creation of digital delivery media from the previously separate digital audio and consumer metadata resources.
Mastering for digital distribution isn’t really mastering, because the vast majority of digital distribution channels don’t want encoded deliverables like MP3, AAC, etc. They have their own internal processes, and in some cases proprietary tools, for creating (mastering) encoded formats themselves.
Digital music outlets like iTunes and Spotify, and the aggregators that connect independent musicians with them, want digital audio premasters. The table shows a selection of some of the most widely used digital distribution channels, and the premaster materials they accept.
| Channel | MFiT | CD-Quality WAV | FLAC | MP3 (320 kbps) |
|---|---|---|---|---|
| iTunes | Yes | Yes | No | No |
| Bandcamp | Yes | Yes | Yes | No |
| CD Baby | Yes | Yes | No | No |
| Tunecore | No | Yes | No | No |
| Reverb Nation | No | Yes | No | Yes |
| LANDR Distribution | No | Yes | No | No |
‘Mastering’ for iTunes
In 2012 much ado was made over Apple’s Mastered for iTunes (MFiT) program, but most of the commentary failed to address anything that a producer or self-produced musician might actually need to know. Mastered for iTunes is two things:
- A detailed specification for providing iTunes or an aggregator with digital audio premasters (not masters). iTunes encodes the masters.
- A suite of simple tools that assist in the creation of those premasters. These include a simple utility that allows you to create AAC masters to audition.
What makes MFiT unique is that Apple is asking for high-resolution digital audio assets. Specifically, 24-bit wav files at the mix master source sample rate. Any specialized, professional mastering engineer working today should be familiar with these specifications. You can read about them in detail in Why Mastered for iTunes Matters.
Producers should be sure to ask for MFiT premasters from their mastering engineer.
CD Quality as a Rule
As you can see for yourself, the predominant request from digital music distributors is for ‘CD quality’ digital audio premasters. That is, 16-bit wav files with a sample rate of 44.1kHz.
Obtaining these premasters is easy, especially if the project in question is being released on CD. In fact, these are the same digital audio assets that your mastering engineer will use to prepare the CD replication master.
Be sure to ask for these as a discrete file set, as opposed to ripping them from a CD later.
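For the curious, here is a rough sketch of the two steps involved in deriving a 16-bit/44.1kHz premaster from a high-resolution mix: sample rate conversion, then dither. It assumes the third-party soundfile library plus scipy, the file names are hypothetical, and a mastering engineer's dedicated SRC and dither tools will do this job far better.

```python
# A rough sketch of deriving a 'CD quality' premaster from a 24/96 mix:
# (1) sample rate conversion, (2) TPDF dither to 16-bit.
# Assumes soundfile + scipy; file names are hypothetical.
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

audio, sr = sf.read("mix_master_24_96.wav")
assert sr == 96000
# Step 1: 96000 -> 44100 (exact ratio 147/320)
converted = resample_poly(audio, up=147, down=320, axis=0)
# Step 2: TPDF dither scaled to the 16-bit least-significant bit
lsb = 1.0 / 2 ** 15
dither = (np.random.uniform(-0.5, 0.5, converted.shape)
          + np.random.uniform(-0.5, 0.5, converted.shape)) * lsb
sf.write("premaster_16_44.wav", converted + dither, 44100, subtype="PCM_16")
```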
An Example
Let’s say you’re planning on releasing your project with a limited CD release, and widespread digital distribution. This is a very common scenario. So, what will you need from your mastering engineer?
- A Red Book CD master in the form of either a hard copy CD-DA, or a DDP file set. This will be delivered to your CD replicator.
- A folder of the CD-quality premasters. These wav files will be delivered to the majority of digital music distribution channels – those that, unlike iTunes, don’t accept high-resolution assets.
- A folder of Mastered for iTunes premasters. These high-resolution wav files will be delivered to iTunes, and any other digital music distribution channels that have adopted the MFiT spec (or quietly emulated it).
It’s not hard to navigate the delivery specs for online music distribution, especially if you’re working with a specialized, professional mastering engineer. Be aware that there are, of course, additional specs and requirements for non-audio deliverables like metadata and album art. All of these are available online, and simplified by using an aggregator.
An Introduction to Disc Description Protocol (DDP)
by Rob Schlette
By the time a recording project is written, produced, and mixed, nobody really wants to consider a detail as deceptively small as how you will convey your CD master from mastering to replication. Nonetheless, there are a number of reasons to think twice about burning a CD-ROM and mailing it to your replicator.
These reasons include the cost and fragility of the materials, and the debatably relevant, but indisputably real, margin of error that optical media introduce. In addition, the time and expense of shipping CD reference discs disconnects the otherwise limitless convenience of producing music across global boundaries using the Internet.
The only contemporary alternative is one of the least known, most under-utilized tools in the independent music production world. The Disc Description Protocol™ (DDP) is a mandatory delivery method for many of the world’s biggest record labels. This article will help you understand why, and get you introduced to DDP.
Background
Doug Carson and Associates (DCA, Inc.) developed DDP to provide replication facilities with a “consistent and complete description of the input media” used to manufacture CDs and DVDs. In other words, a DDP file set is data that completely describes a CD or DVD. From this data your master can be replicated without transmission error.
DDP Anatomy
A DDP 2.0 file set for a Red Book Audio CD will typically contain five components:
- Files labeled DDPID, DDPMS, and DDPPQ (or SUBCODES) all describe and organize the disc
- A file that contains the audio data for the disc
- A file that contains the CD Text information for the disc
The specific naming conventions of the audio and CD Text files are not defined by the DDP standard, so they will vary based upon the source application.
DDP Advantages
DDP facilitates direct upload to clients and/or replication facilities, so CD materials costs, shipping time, and the vulnerabilities of optical media are all eliminated. When paired with checksum data, DDP upload is a verifiably error-free delivery method.
Checksum schemes like MD5 compute a practically unique fingerprint from a file’s contents. When the recipient recomputes the checksum and it matches the one supplied, the integrity of the file copy is verified.
For more information on using Apple’s built-in checksum tools, check out this tutorial.
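As a minimal sketch of how the verification works, the following uses Python's standard hashlib to checksum every file in a DDP file set (the folder name is hypothetical); the recipient runs the same routine and compares digests before replication.

```python
# Compute MD5 digests for every file in a DDP file set. The recipient
# recomputes these and compares them to the sender's list; a match
# verifies the integrity of the copy. Folder name is hypothetical.
import hashlib
import pathlib

def md5_of(path: pathlib.Path) -> str:
    """MD5 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

for f in sorted(pathlib.Path("ddp_fileset").iterdir()):
    print(f"{md5_of(f)}  {f.name}")
```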
Tools
There are numerous affordable DDP player applications available that allow you to audition a DDP image and burn Red Book reference CDs. Some, like Audiofile Engineering’s Triumph DDP Mobile, even let you use your iOS device to audition your CD program.
Many mastering facilities will provide complimentary DDP player applications for clients. Since DDP is a licensed standard, it doesn’t matter which applications you and your mastering engineer use, or whether they’re the same.
Delivery
Not every CD replication facility is willing or able to accept DDP from their clients. Assuming that DDP file sets are accompanied by checksum information, there is no technical or cost-based justification for this policy. You might consider using ‘DDP-friendly’ as a pass/fail test of whether a CD replicator is reputable or up-to-date. Incidentally, ‘DDP-friendly’ should not involve any additional fees for using DDP.
If you’re using DDP to deliver your master to replication or duplication, upload is the best delivery method. When that’s not possible, you should ask your mastering facility to provide the DDP file set on a well-labeled DVD-ROM or USB flash drive. Some less experienced (or maybe less literate) CD duplicators have been known to take a CD-ROM and haphazardly produce a large run of discs containing the DDP image itself, as opposed to the disc it describes. You’ll want to avoid this.
All of the above is only relevant for making CDs. If you’re choosing to distribute your music online-only, none of this will apply to you.
A Guide to ISRC
by Rob Schlette
As audio professionals, we’re charged with providing our clients with the most viable master recordings that current technology allows. In the last fifteen years that viability has come to depend more and more on metadata.
Both the marketplace and the consumer interface have evolved to make metadata as important as the audio media itself. Indeed, our meticulously crafted recordings could scarcely be bought, sold, stolen, searched, sorted, found, or collected without relevant metadata.
With the RIAA reporting 71% of 2015 shipments being distributed digitally, and many hundreds of thousands more independently released recordings being offered online, it is nothing less than essential that our master recordings include metadata that completely identifies the recording and its stakeholders.
History
It may be slightly reductive, but certainly not inaccurate, to say that consumer media was hardware based until iTunes. With a hardware delivery method like the CD or an LP, the master can be conceptually divided into two parts:
- Contents: the audio content delivered to the audience
- Carrier: the media object that stores the contents
In a hardware delivery system, the carrier would typically include a tool like a UPC/EAN ‘barcode’ which would be used as a point of sale tool for tracking sales, managing inventories, and identifying sound recording ownership.
Since the contents could only be accessed through the carrier, it wasn’t necessary to identify each individual track. Since record sales were retail, the point-of-purchase nature of UPC/EAN worked very well.
The Problem
Today we all realize that the carrier has largely evaporated, leaving the job of identifying digital contents to metadata (data about data, encoded in the files or file system). We also recognize that digital distribution is a global track-based system, as opposed to the brick and mortar album sales tracked by UPC/EAN databases.
A standardized approach to identifying individual digital audio assets through metadata allows us to:
- Track online sales
- Collect statutory royalties for online streaming
- Facilitate publishing and placement
- Provide complete and accurate ownership information
The Solution
The International Standard Recording Code (ISRC) is a metadata facility developed and maintained by ISO Technical Committee 46, Subcommittee 9. As detailed on the US ISRC website, “each ISRC is a unique and permanent identifier for a specific recording, independent of the format on which it appears (CD, audio file, etc.) or the rights holders involved.”
The system is straightforward. Each ISRC is a unique 12-character alphanumeric string (e.g. US-S1Z-09-00001), whose segments provide the following information (a parsing sketch follows this list):
- Country Code designated by the ISRC agency to indicate where the copyright owner has registered the ISRC. This does not limit the global distribution of the recording. It simply facilitates administration and tracking;
- Registrant Code is a unique identifier for each copyright owner who wishes to register recordings using the ISRC system. Each label entity or independent artist is allocated a single Registrant Code by the ISRC agency, intended to be used for every recording registered;
- Year reference is used for subdividing registrant catalogs chronologically. The year is intended to reflect when the recording was registered, not when it was made.
- A 5-digit Designation code allocated by the Registrant to identify a unique recording or track. This allows up to 100,000 ISRCs per year, per Registrant.
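As promised above, here is a minimal sketch of splitting an ISRC into those four segments with a regular expression, accepting both hyphenated and bare forms.

```python
# Split an ISRC into its four segments: country code, registrant code,
# year reference, and designation code. Accepts hyphenated or bare input.
import re

ISRC_PATTERN = re.compile(
    r"^(?P<country>[A-Z]{2})-?"
    r"(?P<registrant>[A-Z0-9]{3})-?"
    r"(?P<year>\d{2})-?"
    r"(?P<designation>\d{5})$"
)

def parse_isrc(code: str) -> dict:
    """Return the segments of an ISRC, or raise if malformed."""
    match = ISRC_PATTERN.match(code.upper())
    if not match:
        raise ValueError(f"not a valid ISRC: {code!r}")
    return match.groupdict()

print(parse_isrc("US-S1Z-09-00001"))
# {'country': 'US', 'registrant': 'S1Z', 'year': '09', 'designation': '00001'}
```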
A copyright owner (independent artist or record label) pays a one-time fee to set up a Registrant code with US ISRC. From there, the Registrant assigns unique year and designator codes to each track that they release.
The assigned ISRC numbers should be cataloged, and supplied to the mastering engineer to be included/encoded in the master recordings. As the digital recordings are distributed, retail and streaming sites will request the codes in the submission process.
Who’s On Board (and Who’s Not)?
iTunes needs no introduction. iTunes says ISRCs “are required metadata for all audio delivered to iTunes.”
SoundExchange is a non-profit performance rights organization entrusted by the Library of Congress as the sole agency for collecting and distributing digital performance royalties, “on behalf of featured recording artists, master rights owners (like record labels), and independent artists who record and own their masters.” SoundExchange uses, and encourages artists to use, ISRCs.
Spotify, Grooveshark, and Pandora all use ISRC as an organizational tool, and as part of the search facilities in their user interfaces. The list is long and varied. Check out your favorite music site, and they’re probably using ISRC as well.
The biggest detractors and opponents to ISRC are digital music distributors with their own proprietary identification systems. TuneCore’s founder, President, and CEO Jeff Price is well known for being a vocal detractor of ISRC. Many such arguments tend to focus on the limited application of ISRC in the business functions of online music retailers.
Adopters and advocates concede that ISRC is far from perfect, and less than fully implemented. ISRC catalogs, for example, are still not centrally managed. SoundExchange notes, “Unfortunately, despite the name, UPCs and ISRCs are neither universal nor international.” They are also quick to note, though, that ISRC, in combination with artist name and title information, is essential metadata.
Why ISRC Matters
ISRC is the only standardized, non-proprietary system available to identify individual digital recordings in a global market. It may not yet be fully realized, but the facility is there. Like all standards, the more it is implemented, the better that implementation gets. A standard becomes accepted through repeated use.
ISRC has the unique ability to enhance and streamline user interfaces at the same time it facilitates digital commerce. This is the confluence that has marked every other pivotal tool in the evolution of digital media.
An independent artist can pay a one-time fee to set up a Registrant code with US ISRC, and begin taking advantage of this powerful (and required) tool. There is no other cost involved in using ISRC, unless an artist chooses to be late and lazy, and use a fee-based middleman to register individual recordings.
ISRC matters because it works, nobody owns it, and it’s virtually free for all to use.
3 Things Every Producer Should Know About Metadata
by Rob Schlette
To successfully deliver music releases to a client and their listening audience, it is important to supply the media tools collectively known as metadata. This is the term we use to describe any secondary data that is used to describe or organize the digital media itself (i.e. the audio data).
Most metadata is medium-specific – meaning that the delivery method defines what the relevant metadata is, and how it will be delivered. Here are a few of the most frequently important points.
Compact Discs: CD-Text vs. Online Databases
CD-Text is an Interactive Text Transmission System that was added to the Red Book CD-DA standard in 1996. Using CD-Text, information like Artist Name, Album Title, and Song Title is stored in the lead-in of a Red Book CD. CD players and CD-ROM drives that support CD-Text can display that information during playback.
CD-Text does not supply metadata to digital media applications like iTunes, WMP, or ripper apps because the metadata is not embedded in the audio files themselves. Rather, these types of applications have to use an online database like the Gracenote CDDB to access a hosted set of track-specific metadata.
Regardless of whether or not you include CD-Text with your CD master, CD database information must be submitted separately. This is typically handled by your record label or your digital music aggregator (i.e. TuneCore, CD Baby, etc).
Digital Delivery: Embedded Metadata
Every digital audio file format has standardized metadata ‘tags’, or addressed locations in the file where specific information about the file is stored. Some of these metadata tags self-populate (like creation date), but others must be entered manually. Each audio file format is different, so knowing the specifics of the audio file format you've chosen is important.
Audio file formats developed as consumer delivery media will typically have dozens of user-populated metadata fields (tags) that are presented in the GUI of an app like iTunes or Spotify. The ID3 tags standardized for MP3 files are an obvious example.
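As an illustration, here is a minimal sketch of writing ID3 tags with the third-party mutagen library (pip install mutagen). The file name is hypothetical, and the file must already carry an ID3 header; EasyID3 maps friendly names onto the underlying frames (ISRC lands in the TSRC frame).

```python
# Write embedded ID3 metadata to an MP3 with mutagen (third-party).
# The file name is hypothetical and must already carry an ID3 tag.
from mutagen.easyid3 import EasyID3

tags = EasyID3("01_song_title.mp3")
tags["artist"] = "Artist Name"
tags["album"] = "Album Title"
tags["title"] = "Song Title"
tags["isrc"] = "USS1Z0900001"  # stored in the ID3 TSRC frame
tags.save()
```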
Other audio file formats, like Broadcast Wave, were developed as production tools, so their standardized metadata facilitate Media Asset Management as opposed to consumer GUI. This can be an important part of choosing your delivery media, so be sure to do your research or ask a mastering engineer.
Metadata and Mastering
The mastering process will conclude with the creation of the file sets that are delivered for CD replication and/or online distribution. These masters should include (or be accompanied by) complete metadata to avoid confusing or incomplete results over the recording’s useful life.
It is as important to bring metadata into the mastering process as it is to bring mixes. Here are some points to be certain to cover before your mastering session:
- Have all of your song titles solidified and spelled exactly as they are copyrighted;
- Consider using International Standardized Recording Codes (ISRC). If you’re not already an ISRC registrant, be sure to register in advance so that you can provide ISRC information to your mastering engineer. Deal directly with usisrc.org to avoid being over-charged;
- Take the time to make a spreadsheet of additional metadata tags (and their values) that you want to be included in your master media.
In an era when music listeners are as interested in finding, sharing, and referring to music as they are actually listening to it, producers should be sure to provide them with the metadata tools they need to stay interested.
Best Practices for Backing Up Your Data
by Rob Schlette
What Is a Backup?
A backup is a working safety copy of your production data. The goal of a systematic approach to backups is to keep data loss from stopping or significantly delaying your work. If properly implemented, a backup system will contain current production data for all in-progress projects as of the conclusion of the most recent session.
The importance of data backup cannot be overstated. What you may think of as ‘your data’ is someone else’s proprietary master recordings, not to mention their art. Preventing data loss will protect those valuable assets and preserve your professional integrity.
Best practices for backups are all about redundancy. In particular, this article describes three particular types of redundancy that are critical elements in maintaining working safety copies.
Redundancy #1: Multiple Copies
This point may seem obvious, but a systematic approach to backups should facilitate at least two complete, secure copies of the project data. Before you laugh, note the word ‘secure’. I’d encourage anyone who is concerned about the safety and integrity of client data to think of ‘secure’ copies as ‘non-public’ copies.
For example, leaving client data on local hard drives in commercial studio facilities is not secure. Anyone who needs more drive space can delete the data. Anyone who is bored or curious enough can open it or copy it.
Additionally, shared folders and dropboxes that aren’t password protected are essentially public places. A network is a community in a very real way. Proprietary data should be secured behind closed doors.
Redundancy #2: Multiple Locations
The two secure copies of your client data should be stored in two different locations. This practice will prevent the ugliest of the long list of data loss events from affecting both of your copies.
Fire, flood, theft, and power surges can affect an entire facility at once. Nobody wants to consider that (myself included). Unfortunately there is a long list of stories that can be recounted in which something really bad happened to all of the “redundant” media.
The easiest way to facilitate multiple locations is to take advantage of the inherently remote ‘location’ that cloud storage provides. Just be sure to actively log in and copy your data at the end of each session. Most media applications suggest that automation tools like Time Machine be disabled to optimize system performance, so don’t assume an automated backup is running for you.
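As a bare-bones sketch of the end-of-session habit, the following standard-library routine copies a session folder to a local drive and a cloud-synced folder, date-stamped. All paths are hypothetical, and a real routine would also verify checksums.

```python
# End-of-session copy to two destinations: one local, one cloud-synced.
# Standard library only; all paths are hypothetical.
import datetime
import pathlib
import shutil

session = pathlib.Path("~/Sessions/client_project").expanduser()
stamp = datetime.date.today().isoformat()
destinations = [
    pathlib.Path("/Volumes/LocalBackup"),      # copy #1: local drive
    pathlib.Path("~/CloudSync").expanduser(),  # copy #2: cloud-synced folder
]
for dest in destinations:
    target = dest / f"{session.name}_{stamp}"
    shutil.copytree(session, target)
    print("copied to", target)
```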
Redundancy #3: Multiple Storage Technologies
There are two ways that you can provide technological redundancy within your backup regimen. The first is by choosing different storage media for your two different copies. This redundancy can eliminate the risks introduced by unforeseen factors like defective optical media and infant mortality in spinning hard drives.
Some simple examples of technological redundancy in the primary backup technology could include:
- Copy #1 copied to network attached storage; copy #2 burned to DVD-ROM, or
- Copy #1 copied by the facility’s AIT library; copy #2 on your removable Firewire drive
This difference doesn’t have to be perfect or dramatic. The key is to avoid high-risk scenarios like two identical drives from the same manufacturer, or multiple DVDs.
The other type of technological redundancy is the secondary technology. This includes any hardware or software necessary to write or read data to/from the primary storage medium. Common examples include:
- Automated archival applications like Retrospect or PresStor
- Optical media drives
- Tape drives
As a rule of thumb, it’s a good idea to rely on as little intermediate technology as possible when backing up client data. Each additional mechanism can represent a barrier to future retrieval. When secondary technology is needed, make it unique to one copy.
Good Habits Make It Easy
The difficult thing about unexpected crises is the whole ‘unexpected’ part. There’s no way to sit at the end of a session and know whether this is going to be one of the times you’ll end up using a backup to restore client data (and avert a larger crisis). The only option is to get in the habit of a sufficiently redundant, systematic backup routine.
I’ve found that there are a number of 5 to 10 minute odd jobs that I can accomplish around the studio while my data copies to secure cloud and local storage. For that matter, having a little email time before barreling on to the next thing can be pretty luxurious as well.
The Limits of Backups
While backups can save the day during the production cycle, they’re not particularly useful once the project is over and the masters are ready for delivery. Since backups are working safety copies of your production data, they’re still completely reliant on specific versions of production technology, like DAW and plugin applications.
There are detailed, standardized practices for archives which are discussed in An Introduction to Archiving Music Recordings. For reference, check out the Recommendations for Delivery of Recorded Music Projects published by the Producers and Engineers Wing of The Recording Academy.
An Introduction to Archiving Music Recordings
by Rob Schlette
Archiving may be the least exciting thing that happens in a recording studio, but none of the fun parts of music production have much point if the recordings they produce can’t be played back over time. Understanding and practicing systematic archival basics is a necessity, whether you’re in it for the art, or to make a living (or some combination of the two).
Archiving a recording should be distinguished from backing up your data as follows:
- A backup is a working safety copy of your production data. The goal of a systematic approach to backups is to keep data loss from stopping or significantly delaying your work in-progress. Backups contain current production data for in-progress projects as of the conclusion of the most recent session.
- An archive is long-term asset storage. The goal of a systematic approach to archives is to be able to reproduce the various (finished) master recordings associated with a project throughout its useful life, i.e. copyright term.
Why Not Use Your Last Backup as an Archive?
Paradoxical as it sounds, technology is the stuff that’s standing in the way of your master recording playing back over the long term. Specifically, the sort of production data that we backup after every session is heavily dependent on two different groups of technology:
- Primary technology like the DAW session/project files, the audio file format, and the hard drive used to store the recording;
- Secondary technology like the OS version that facilitates your DAW version, all of your plugins, and the hardware capable of hosting all of the above.
None of these may seem like imminent dangers, but just 5 or 6 years down the line there will likely be several technological barriers to playing back your DAW session in its current state, on the current storage device.
What Should You Archive?
The music production process usually produces a minimum of 3 sets of master recordings:
- Multitrack Masters – the complete edited multitrack recordings that fed the mixing process, e.g. Pro Tools sessions, 2-inch 24-track tapes, etc.;
- Mix Masters – the various 2-tracks and bounces that fed the mastering process, e.g. stereo audio files, ½-inch 2-tracks, etc.;
- Replication/Distribution Masters – the media that came out of the mastering process to facilitate distribution, e.g. CD replication master(s), file sets for upload, etc.
Within a systematic approach to archiving, each of these sets of master recordings needs to be addressed in terms of:
- ‘Contents’ that are dependent only on industry-standard primary technology, and independent of any secondary technology;
- ‘Container(s)’ (storage media) that are industry-standard archival media, intended to be used as part of a redundant storage regimen.
Defining ‘Industry Standard’
Media professionals depend on a lot of specialized technology and common practice, so there are a lot of niche standards organizations within the world for music, television, and film. Nonetheless, all of them are producing media that needs to endure throughout copyright term, so archival standards are a universal consideration.
In the music world The Recording Academy (a.k.a. Grammy.org) has a membership division called the Producers and Engineers Wing dedicated to forming, promoting, and tending to guidelines and recommendations specific to music recording. The P&E Wing’s “Recommendations for Delivery of Recorded Music Projects” establishes a comprehensive protocol of music deliverables, “in the interest of all parties involved to make [master recordings] accessible for both the short term and the long term.”
In their own words, The Association for Recorded Sound Collections (ARSC) “is a nonprofit organization dedicated to the preservation and study of sound recordings, in all genres of music and speech, in all formats, and from all periods.” To those ends, their technical committee has developed and published their own document of recommendations for the “Preservation of Archival Sound Recordings”.
Some other detailed resources include:
- The U.S. Library of Congress’ Digital Preservation site;
- The American Library Association’s overview of "Metadata Standards and Guidelines Relevant To Digital Audio";
- Open Archive eXchange Format (AXF)
Who Has the Answers?
All of the resources listed above (and many others) have excellent, time-tested processes and specifications for durable archives. Some of them may conflict or be incomplete, but any move toward standardization within a working group is a good thing.
None of us can claim the sort of foresight necessary to devise an archival scheme free of threat from technological obsolescence, but standard practices allow us to hedge our bets together. If we all carefully choose a small handful of archival technologies, we have the opportunity to have a coordinated approach to migration once those standardized choices (inevitably) begin to fail over time.
How to Archive Multitrack DAW Recordings
by Rob Schlette
Multitrack DAW recordings are dependent on a complex system of primary and secondary technologies. As discussed in An Introduction to Archiving Music Recordings, each of these technologies represents an obstacle to the long-term viability of a multitrack archive.
Simply put, if the various software and hardware products you’re using today aren’t going to be around in their current versions for the useful life of the sound recordings (i.e. the copyright term), the archived recording must be prepared to weather that obsolescence. The goal of preparing multitrack DAW data for archive is to minimize the layers of technology necessary to completely reconstruct the master recording in the future.
This article introduces some basic techniques for creating both Consolidated and Flat multitracks for archival purposes.
What is a Consolidated Multitrack?
A Consolidated Multitrack is a digital audio fileset that completely expresses the EDL information from a multitrack master recording. Specifically:
- Each DAW track is expressed as a single, continuous Broadcast Wave file (BWF);
- All of the consolidated audio files share the same start times and durations;
- All of the consolidated audio files share the same digital audio precisions, i.e. sample rate and bit depth;
- All of the consolidated audio files share the same descriptive naming convention, e.g. trackname_songtitle_artistname.wav.
If all of the above specifications are met, a folder containing the consolidated audio files could be used to perfectly reconstruct the multitrack recording as far into the future as the Broadcast Wave file format remains viable.
Since the Broadcast Wave file is a widely accepted standard file format for media producers, its long-term viability (and eventual uniform migration) is virtually guaranteed.
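A quick way to verify the specifications above is to compare the properties of every file in the folder. Here is a hedged sketch assuming the third-party soundfile library; the folder name is hypothetical.

```python
# Sanity-check a consolidated multitrack: every file should share one
# sample rate, bit depth (subtype), and length. Assumes soundfile.
import pathlib
import soundfile as sf

files = sorted(pathlib.Path("consolidated_multitrack").glob("*.wav"))
infos = [sf.info(f) for f in files]

checks = {
    "sample rates": {i.samplerate for i in infos},
    "bit depths": {i.subtype for i in infos},   # e.g. 'PCM_24'
    "lengths (frames)": {i.frames for i in infos},
}
for name, values in checks.items():
    status = "OK" if len(values) == 1 else "MISMATCH"
    print(f"{name}: {sorted(values)} -> {status}")
```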
Creating a Consolidated Multitrack
- From your last active session/project file, ‘Save As’ to create a discrete file from which you will create a Consolidated Multitrack.
- Hide or delete any auxiliary signal path to simplify the working environment.
- If additional Takes or Playlists are to be included in the Consolidated Multitrack, create new tracks to allow all of the source audio to be simultaneously visible/accessible.
- Using session boundaries, location markers, or some other timeline tool, establish a repeatable global timeline selection that includes all audio from the earliest drop-in to beyond the longest running audio file.
- Once your global selection is made, use the Consolidate or Merge functions to create a single continuous audio file that expresses the EDL information for each track.
- Carefully, consistently label all of the newly consolidated audio files to reflect enough information that they could completely identify themselves by name, e.g. bassamp_take2_ohbabybaby_jimmysingsalot.wav
Once the above steps have been followed, a choice has to be made about how to present these consolidated audio files as a discrete multitrack recording for archive:
Minimally, a folder should be created to contain all of the associated audio files and metadata (like screen shots, rtf files containing session notes, credits, etc.). The folder should follow the same naming convention as the consolidated audio files. This method works fine, but will always require the multitrack to be reconstructed in a DAW for playback.
Alternately, a facility like Pro Tools’ ‘Save Session Copy’ could be used to create a new, independent playback session for only the archival material. Using this method one would need to be careful to remove any non-archival audio and metadata from the source session before saving the copy. This approach would facilitate more convenient short-term use of the archive, but doesn’t actually provide any additional content.
What is a Flat Multitrack?
A Flat Multitrack is a digital audio fileset that completely expresses the EDL information from a multitrack master recording, but also expresses some subset of DAW metadata. What metadata is ‘flattened’ into the archive is up to you, your client, or contractual obligations, but it could include:
- Plug-in processing like amp simulation, ‘printed’ effects from auxiliary channels, or automated processing;
- Automation data, like the fader rides on a lead vocal track;
- Bounced submixes that would otherwise require reconstructing both complex routing and plugin processing.
It is critically important to note that a Flat Multitrack should never be archived instead of a Consolidated Multitrack, but only in addition. The Consolidated Multitrack is the master recording; the Flat Multitrack (when applicable) is an extension of that master.
Creating a Flat Multitrack
Once a Consolidated Multitrack has been created, a Flat Multitrack can be created by repeating the process with a few additional steps:
- From your last active session/project file, ‘Save As’ to create a discrete file from which you will create a Flat Multitrack.
- Hide or delete all auxiliary signal path and metadata that is not going to be flattened.
- If additional Takes or Playlists are to be included in the Flat Multitrack, create new tracks to allow all of the source audio to be simultaneously visible/accessible.
- To flatten real-time processes like automation, time-based effects, or submixing, bounce/re-record the appropriate track outputs to new tracks, and remove the source tracks from the session. Note what metadata has been flattened.
- Flatten additional metadata by processing audio files with offline versions of real-time plug-ins. Note what metadata has been flattened.
- Make a global timeline selection, and use the Consolidate or Merge functions to create a single continuous audio file that expresses the EDL information for each track (including whatever metadata has been flattened into them).
- Carefully, consistently label all of the newly consolidated audio files to reflect enough information that they could completely identify themselves by name, e.g. bassamp_take2_flatcompression_ohbabybaby_jimmysingsalot.wav
Since it is unlikely that every track within a DAW project will have metadata worth flattening, there will likely be some tracks that remain in their consolidated form. I would caution that it would be both redundant and confusing to include these audio files in a Flat Multitrack archive.
Preferably, an additional folder of flattened audio files can be clearly labeled and organized with the Consolidated Multitrack data. Future users can then reconstruct the Consolidated archive, and opt-in to any of the available, clearly labeled flat content.
Contents Versus Carrier
It should be noted that this tutorial only addresses the form of the contents of a multitrack archive. The question of how to effectively store this information is an entirely additional matter. You can read about archival media in P&E Wing Recommendations for Archival Media.