Chris Rutkowski, Editor of the Music Business and Industry component of Symposium, conducted this Q & A session with Blair Liikala, Director of the University of North Texas Department of Recording Services, in July 2013. A graduate of Michigan State University with a degree in Telecommunications, Mr. Liikala engineered at the Interlochen Center for the Arts, The Banff Centre (Canada), and in Colorado, while also working for Apple, before assuming his current position at UNT. While primarily a service department, Recording Services has begun to offer classes, starting with Intro to Audio and Recording Technology, taught by Mr. Liikala.
Chris Rutkowski: Please give us an overview of the UNT Department of Recording Services.
Blair Liikala: The University of North Texas Recording Services Department records major ensembles and faculty and student recitals with two full-time professional staff and about six student workers, covering roughly 500 events a year.
The college's physical layout of multiple performance halls in just two buildings allows for one larger control room connected to each performance hall. In the main building the control room supports one video recording and editing area and several audio editing workstations, all tied to a central patch bay for easy signal routing between halls. During busy recital months the control room can host up to four simultaneous recordings at varying track counts.
Winspear Hall, the flagship concert hall about two blocks away, has dedicated audio and video control rooms for the roughly 20 events a semester there, shared only with the four opera performances each semester. The control suites are in the adjacent building, connected over fiber.
CR: Can you give us a brief description of your recording gear?
BL: The department holds about 80 microphones for installed hanging rigs, high-track-count jazz concerts, critical classical recording, and specialized opera work. Two halls are capable of 56 channels split with the front-of-house mixer, though typically we run only 32. For hanging systems we have a Decca Tree, a Schoeps surround sphere, and ORTF rigs. The primary preamps are Grace Design 802s with analog-to-digital conversion, but we also carry Millennia and Forssell Technologies units. Recording and post-production revolve around the digital audio workstations Logic Studio and, to a lesser degree, Pro Tools.
CR: In addition to the excellent recordings your department is noted for, you’ve also developed one of the most expansive live-streaming programs. Can you tell us something about that enterprise?
BL: While audio recording is fundamental to the department, video recording and live streaming have become UNT's trademark. Beginning in 2009, the department repurposed an aging DVD production studio for live streaming, upgrading to HD and expanding in 2011. The flagship hall supports a six-camera PTZ (pan/tilt/zoom) live stream and recording run by just one operator, and a smaller recital hall has three cameras. Live streams are compressed using broadcast-quality encoders and distributed through a combination of internal systems and external services for optimal delivery to as many devices as possible, while also supporting the website's functionality and features. The goal is a high-quality experience at a fraction of the cost.
Beginning in the fall of 2012 the department also began shooting intermission spots and some photography. The goal is to slowly build a library of footage, interviews, and content that can be quickly assembled into intermission spots and short features with a small footprint.
CR: Do you archive the live-streamed video?
BL: All UNT recordings are posted only online. The UNT YouTube channel carries mostly clips, with some full-length concerts. All ensemble video from the last three years is available to UNT students for on-demand streaming with metadata and chapter markers. Audio and video downloads are accessible through a UNT-only portal where users can subscribe to various tags, ensembles, RSS feeds, and podcast feeds that send a notification, usually within the hour, when a concert is available to listen to and download. The system is also designed with levels of access, a storefront and credit-card system for future developments, and some paid downloads. The final resting place of all recordings is the university library. All of these tools were developed and deployed by staff without a web-development background.
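The podcast-style subscription feeds described above can be illustrated with a minimal sketch: generating an RSS 2.0 feed with audio enclosures for newly posted concert recordings. The feed titles, URLs, and ensemble names below are hypothetical examples, not UNT's actual system.

```python
# Minimal sketch of a podcast-style RSS feed for new concert recordings.
# All URLs, titles, and ensemble names are hypothetical, not UNT's
# actual feed structure.
import xml.etree.ElementTree as ET
from email.utils import format_datetime
from datetime import datetime, timezone

def build_feed(title, link, episodes):
    """Build an RSS 2.0 feed; each episode is (title, audio_url, published)."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = "Concert recordings"
    for ep_title, audio_url, published in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep_title
        # The enclosure element carries the downloadable audio file,
        # which is what makes the feed podcast-compatible.
        ET.SubElement(item, "enclosure", url=audio_url,
                      type="audio/mpeg", length="0")
        ET.SubElement(item, "pubDate").text = format_datetime(published)
    return ET.tostring(rss, encoding="unicode")

feed = build_feed(
    "Wind Symphony (example)",
    "https://example.edu/recordings",
    [("Fall Concert", "https://example.edu/audio/fall.mp3",
      datetime(2013, 7, 1, tzinfo=timezone.utc))],
)
print(feed[:30])
```

A podcast client polling this feed would surface the new item shortly after it is published, matching the "usually within the hour" notification behavior.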
Audio is usually available within the hour after a concert, video by the end of the evening. The department has shot six intermission features and plans to slowly increase that count. Yearly live-stream viewership usually hovers between 80,000 and 100,000, while on-demand activity reaches about 1,500 hours.
CR: Can you provide some detail on the signal flow of both your audio and video recording?
BL: The Winspear control rooms have a minimal but standard workflow for producing HD video recordings and live streams. The cameras continue to be motorized pan-tilt-zoom models, chosen for their quiet operation, small footprint, security when permanently installed, and the low overhead of not needing to hire trained camera operators. It is a tradeoff, however, since image quality can suffer and panning can look mechanical. During shoots the operator is trained in techniques that avoid exposing those weaknesses.
Signal is transported via HD-SDI, mostly over coax with some fiber, about 600 feet to the adjacent building's control room along with the camera-control line. Once in the control room, all input and output devices are connected through a single network-enabled router. This single point allows individual sources, such as cameras or computers, to be split and sent to specific locations like backstage or the offices. Routing can be done in the studio with physical buttons or through a website accessible from anywhere in the world with an Internet connection.
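The single-point routing described above behaves like a crosspoint matrix: any source can be patched to any destination, and each destination carries exactly one feed at a time. A minimal sketch of that idea, with hypothetical source and destination names:

```python
# Sketch of a video router's crosspoint matrix: each destination (output)
# is patched to at most one source (input). Source and destination names
# are hypothetical illustrations.
class VideoRouter:
    def __init__(self, sources, destinations):
        self.sources = set(sources)
        # Each destination starts unpatched (None).
        self.destinations = {d: None for d in destinations}

    def route(self, source, destination):
        """Patch a source to a destination, replacing any prior patch."""
        if source not in self.sources:
            raise ValueError(f"unknown source: {source}")
        if destination not in self.destinations:
            raise ValueError(f"unknown destination: {destination}")
        self.destinations[destination] = source

    def feed_for(self, destination):
        """Return the source currently feeding a destination."""
        return self.destinations[destination]

router = VideoRouter(
    sources=["camera-1", "camera-2", "program"],
    destinations=["backstage", "lobby", "office"],
)
router.route("program", "backstage")
router.route("camera-1", "office")
print(router.feed_for("backstage"))  # -> program
```

Whether the patch change comes from a physical button panel or a web page, both front ends would ultimately issue the same route operation against the one router.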
A single operator runs both the switcher and camera controls while a second operator, usually staff, runs audio and watches the live-stream chat and other statistics. The current system only allows one camera to move at a time, so shots tend to be static. To stay minimal and keep costs low, graphics are not normally added, and only for large events is there a score reader/producer in the room. Features like piece tracking and intermission videos were added only recently, implemented in a way that keeps technical overhead as light as possible.
After being embedded with live-mixed audio, the program material is distributed to the other control rooms, offices, and the performance hall's backstage, green room, and lobby. Some paths remain digital; others are converted back to analog composite for cost efficiency. The program is also sent to a backup solid-state recorder for archiving and, finally, to the streaming encoder.
The live hardware encoder compresses the program material, discarding almost 80 percent of the video data, into multiple video streams at different bitrates (quality levels). Those are then sent to a content distribution network that the viewer connects to. The player on the viewer's device then decides, based on a variety of conditions, which quality level it can play back, re-evaluating every few seconds. Because the compression phase is so processing-intensive, the encoder uses graphics processors for parallel encoding. The encoder also records a high-resolution master file for editing and reprocessing into on-demand and download versions.
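The player-side quality decision can be sketched as a simple heuristic: from a ladder of encoded renditions, pick the highest bitrate the measured bandwidth can sustain with some headroom. The bitrate ladder below is an illustrative assumption, not the encoder's actual settings.

```python
# Sketch of adaptive-bitrate selection: choose the highest rendition whose
# bitrate fits within the measured bandwidth, with headroom. The ladder
# values are illustrative, not the actual encoder configuration.
RENDITIONS = [  # (name, video bitrate in kbit/s), lowest to highest
    ("240p", 400),
    ("480p", 1200),
    ("720p", 2500),
    ("1080p", 5000),
]

def pick_rendition(measured_kbps, headroom=0.8):
    """Return the best rendition sustainable at ~80% of measured bandwidth."""
    budget = measured_kbps * headroom
    best = RENDITIONS[0]  # always fall back to the lowest quality
    for name, bitrate in RENDITIONS:
        if bitrate <= budget:
            best = (name, bitrate)
    return best[0]

print(pick_rendition(4000))  # 4000 * 0.8 = 3200 kbit/s budget -> 720p
print(pick_rendition(300))   # below every rung -> lowest rung, 240p
```

A real player re-runs a decision like this every few seconds as its bandwidth estimate changes, stepping up or down the ladder to avoid stalling, which matches the re-evaluation behavior described above.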
CR: Any final comments?
BL: In all of our workflows we strive to create the most efficient process that can produce high-quality products while scaling to the required quantity without exhausting our limited resources. Ideas start off as experiments using as much existing software, hardware, and personnel as possible. If they gain traction, they are moved to a larger deployment, say from a single ensemble to all ensembles, with increased hardware support and increased training. The final step is retiring the project once it becomes too expensive, falls out of usefulness, or (as is usually the case) is replaced with a new system with less overhead and easier use. We experiment often, recklessly at times, but only a few experiments continue on to mainstream use.