It’s time to look at the complete schedule of sessions, events and tours available for the 141st Audio Engineering Society Convention in Los Angeles. Navigating the largest audio conference of the year is a challenge for any visitor, and with this year’s new timetable and exhibits opening on day one of the convention, a well-planned schedule is essential. Everything takes place Thursday, September 29 through Sunday, October 2, 2016 at the Los Angeles Convention Center.
All Access registration nets you the full convention experience, while the gear exposition and special events are available to all attendees, including those registering for the FREE Exhibits-Plus badge option (apply promo code AES141NOW at checkout).
AES Los Angeles Convention Calendar of Events
The Audio Engineering Society has unveiled the detailed calendar of events for the 141st AES International Convention. With a full range of events and presentations spanning all segments of the audio industry, the AES Los Angeles Convention offers four full days of information and inspiration, presented by industry leaders in science and practice, with attendance packages suited to every level of audio professional, student or enthusiast.
From the latest research and technology presentations to the largest pro audio gear exhibition and showcase of the year, with many of the greatest ears in audio in attendance, the AES Los Angeles Convention is the place to truly immerse yourself in audio. The preliminary calendar is available at www.aes.org/events/141/program/.
The AES Los Angeles Technical Program and Events calendar, both online and in the free AES Events app, provides the clearest overview of the wide array of events covering Broadcast and Streaming Media, Recording and Production, Game Audio, Live Sound, Networked Audio, Product Design, Sound for Picture and much more. Sessions are laid out according to areas of interest, with varied program material featuring Tutorials, Workshops, Research Paper Presentations and Engineering Briefs, as well as a host of special guest speakers, expert panel presentations and face-to-face networking opportunities. Additional Student and Career events, Historical sessions, Tech Tours, Standards meetings and other standard AES Convention programs will also take place.
Keynotes and Lectures
Legendary instrument designer and “Father of MIDI” Dave Smith will give the convention’s Richard C. Heyser Memorial Lecture on September 29 at 6:00 pm at the Los Angeles Convention Center. The presentation, titled “Synthesizers: From Analog to Digital to Software to Analog,” will explore the ongoing evolution of instrument design and synthesis and the 50-year history of the synthesizer and its impact on music and audio. The Heyser Lecture is part of the Special Events schedule, open to all convention attendees.
At the 141st AES International Convention’s Opening Ceremonies on Thursday, September 29 (12:30 – 2:00 pm), keynote speaker and composer Ron Jones will explore the human side of audio’s rapid technological evolution in his talk, “Remember the Human Receptor on the Road to the Future.”
As technological advancement continues to accelerate at an exponential rate, a key question looms ever closer: what about the human factor? Technology has enabled remarkable advances in audio with unexplored wonders yet to come – but what does this mean for listeners? Ron Jones, owner of Stanwood, WA’s SkyMuse Studios and Ron Jones Productions, is the GRAMMY- and Emmy-nominated composer for Star Trek: The Next Generation, Family Guy, Superman and more, with over 40,000 compositions to his credit. Jones has conducted his music with the London Philharmonic, is a member of the Pacific Northwest chapter of the AES and founded the non-profit Academy of Scoring Arts in Los Angeles.
Jones’ vision of audio’s future is clear: “I welcome the new directions where audio production is heading, but caution everyone in this rapidly changing time to not lose sight of what all this technology is for: to engage people’s emotions using the art and craft of music.”
The human brain processes audio very differently from visual stimuli: sound connects deeply with the brain’s neural network and its electro-chemical transmitters. Without a firm understanding of the “human receptor” – the listener – new technologies and inventions won’t achieve their desired effect with people, with content creators and manufacturers, or with the markets for these products. And with quantum computing coming in the not-so-distant future, the computing bandwidth of the human brain will be left in the dust. “However, common sense and a solid regard for how the ‘human receptor’ works will serve as a compass and a wonderful guide to where the man-machine relationship is headed as the audio industry rockets forward,” Jones notes. As a composer, he says, his job is not to tell the story – that’s for the picture, the characters and the dialog – but to engage people’s emotions using music. “I know with a large degree of accuracy how to push emotional buttons in sync with the picture and the story.”
Sound for Picture Track
Audio today is increasingly associated with film or video, whether cinema, TV, the Internet, streaming broadcasts or mobile. The 141st International Audio Engineering Society Convention’s Sound for Picture Track events will feature some of the industry’s most recognized experts, who will take an in-depth look at recording, mixing and producing audio/video content in an audio industry that’s evolving to accommodate the audience’s ever-changing viewing preferences.
“This year’s 141st AES Sound for Picture Track will feature a remarkably talented lineup of presenters,” says AES Los Angeles Sound for Picture Chair Brian McCarty. “Among them they’ve garnered more than six OSCARs and 15 nominations for Best Sound, seven EMMY wins and 35 nominations, five BAFTA awards, a Golden Globe, two GRAMMYs and much more – and they’ll be sharing their knowledge and experience with us first-hand.”
Production sound is the primary method of capturing the dialog for film and TV, and Friday, September 30’s “Production Sound: the Sound Professionals Responsible for Telling the Story” will explain the specific and unique methods and equipment for this work, which is often done in the field. This always-popular seminar will feature top professionals including Brian McCarty (Coral Sea Studios), Jeff Wexler (JW Sound), Devendra Cleary (DC Audio and Music, Inc.), Peter Kurland and Matthew Nicolay, who will discuss their methods for dialog capture.
Like all areas of film and TV production, the craft of music scoring is undergoing significant changes to meet ever-growing demand. Friday’s “Music Scoring for Film and TV: How It Was, Where It Is, Where It’s Going” session will look at the process of preparing and recording music for film and TV. Moderator Brian McCarty will be joined by Leslie Ann Jones of Skywalker Sound, Jason LaRocca of La-Rocc-A-Fella, Inc., and the team of composer Simon Franglen and music editor Jim Henrikson, who supported the late composer James Horner on some of the biggest blockbusters of all time, including the Oscar-winning Titanic and Avatar, along with dozens of other top productions.
Nothing’s as important as hearing the dialog, yet there have been growing reports that audiences are increasingly unhappy with what they’re hearing – or not hearing – in the cinema and at home. “Dialog Intelligibility: the Challenge of Recording the Words So the Audience Can Understand Them” will round out the Friday sessions with acoustician Peter Mapp, BBC Principal Technologist Simon Tuff, sound editor Marla McGuire and others in an examination of the issues involved from the microphone through final mix, and what can be done to overcome them.
Saturday, October 1, kicks off with “World-Class Film and TV Sound Design,” where a panel of experts including Brian McCarty, Lon Bender of Soundelux and sound editor Karen Baker Landers will address the critical importance of sound design in bringing the director’s vision to the audience. Saturday’s world-class theme will continue with “World Class Sound Mixers Discuss Their Craft,” featuring McCarty, Bender and Bob Bronow (The Bronow Group and Audio Cocktail). Once the dialog, music and sound effects have been prepared, the dubbing mixers for film and TV have the final impact on the productions – and, as the panelists will discuss, the complexity of the task and the skills required have increased dramatically over the years.
Immersive sound formats are growing in use, but the challenges of realistic and engaging immersive sound field design can be at odds with the fast-paced production environments of film and TV. On Sunday, October 2, “Immersive Sound Design with Particle Systems” will feature Nuno Fonseca (ESTG/Polytechnic Institute of Leiria and Sound Particles), Will Files (sound designer and re-recording mixer at Skywalker Sound) and sound designers Jason Jennings and Mark Mangini in a discussion of the ways they craft immersive sound fields and their use of specialty tools like particle systems.
Networked Audio Track
Audio and video delivery continues to make the move from traditional cabling to content delivery over networks, in particular Ethernet, LAN (Local Area Network) or IP-based WAN (Wide Area Network). At the 141st International Audio Engineering Society Convention (at the Los Angeles Convention Center, September 29 – October 2, 2016), the Networked Audio Track, supported by the AES Technical Committee on Networked Audio Systems, will explore the latest developments in the methods, protocols and applications of networked audio.
The AES67 standard is a cornerstone of audio-over-IP interoperability, and as such it will be the topic of no fewer than four seminars. “AES Discovery,” the first of three to be presented on Friday, September 30, will be moderated by Aidan Williams of Audinate (the company behind Dante); it will examine the pros and cons of the multiple device discovery methods allowed by AES67 and highlight the importance of establishing a reliable A/V industry standard. “Rolling Out AES67 into Real-World Applications” will provide tips and real-world examples for bringing AES67 AoIP (audio over IP) networking to installations, and examine the applicability of AES67 to network requirements in general.
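As a rough illustration of what device discovery can look like in practice, the minimal sketch below – an illustrative assumption, not material from the convention program – listens for SAP/SDP announcements (RFC 2974), one of the mechanisms commonly used to advertise AES67 streams, and prints any session descriptions it hears on the local network.

```python
# Minimal sketch (illustrative only): listen for SAP/SDP announcements,
# one of the discovery mechanisms commonly used to advertise AES67 streams.
import socket
import struct

SAP_GROUP = "239.255.255.255"   # well-known SAP multicast group (RFC 2974)
SAP_PORT = 9875

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SAP_PORT))

# Join the SAP multicast group on the default interface.
mreq = struct.pack("4sl", socket.inet_aton(SAP_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print("Listening for SAP/SDP stream announcements...")
while True:
    packet, sender = sock.recvfrom(4096)
    # The SAP header precedes the SDP body; an SDP session description
    # always begins with "v=0", so locate it and print what follows.
    start = packet.find(b"v=0")
    if start != -1:
        sdp = packet[start:].decode("utf-8", errors="replace")
        print(f"Announcement from {sender[0]}:\n{sdp}\n")
```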
The “AES67 and the Audio Industry” panel, to be presented by QSC’s Rich Zwiebel, will discuss the many audio networking standards available today and the issues involved in either adopting a single platform or linking disparate network pools together. AES67 promises to solve this dilemma by providing a common interchange format. The final AES67-related seminar, “AES Interoperability Testing – the Plugfest Report” (Sunday, October 2), will feature AVA Networks’ Kevin Gross explaining the roles of different network technologies and other considerations.
IT convergence is upon the industry in a major way, but many in A/V are unfamiliar with the details of audio networking from the perspective of an IT manager. Friday, September 30’s “Understanding Audio Capabilities and Bandwidth in Mixed-Use Networks” session will dispel some commonly held misconceptions about audio bandwidth requirements and the capabilities of modern switched networks. Later on Friday, “University of North Texas College of Music – Recording with Networked Audio” will look at the recent installation of a Dante network at one of the nation’s largest music colleges.
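To make the bandwidth question concrete, here is a back-of-the-envelope estimate under assumed typical AES67 parameters (48 kHz / 24-bit linear PCM, 1 ms packet time, an eight-channel stream); the figures are illustrative assumptions, not numbers drawn from the session itself.

```python
# Back-of-the-envelope AES67 stream bandwidth estimate
# (assumed parameters: 48 kHz / 24-bit PCM, 1 ms packet time, 8 channels).
SAMPLE_RATE = 48_000          # samples per second, per channel
BYTES_PER_SAMPLE = 3          # 24-bit linear PCM (L24)
CHANNELS = 8
PACKET_TIME_S = 0.001         # 1 ms of audio per RTP packet

samples_per_packet = int(SAMPLE_RATE * PACKET_TIME_S)               # 48
payload_bytes = samples_per_packet * BYTES_PER_SAMPLE * CHANNELS    # 1152

# Per-packet header overhead: RTP (12) + UDP (8) + IPv4 (20) + Ethernet (18)
OVERHEAD_BYTES = 12 + 8 + 20 + 18
packets_per_second = 1 / PACKET_TIME_S                              # 1000

audio_mbps = payload_bytes * packets_per_second * 8 / 1e6
wire_mbps = (payload_bytes + OVERHEAD_BYTES) * packets_per_second * 8 / 1e6

print(f"Audio payload : {audio_mbps:.2f} Mbit/s")   # ~9.2 Mbit/s
print(f"On the wire   : {wire_mbps:.2f} Mbit/s")    # ~9.7 Mbit/s per stream
```

Even several such streams occupy only a small fraction of a gigabit link, which puts common worries about audio bandwidth on modern switched networks into perspective.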
The Internet of Things (IoT) is a hot topic, and on Saturday, October 1, “The Internet of Media Things for Installed A/V, Recording and Live Events” will feature Greg Schlechter of Intel pointing out how the next natural step for networked A/V is in becoming part of the IoT. Saturday will also include a must-see session: “Optimizing Audio Networks,” presented by Patrick Killianey of Yamaha Professional Audio. He’ll cover the key methods and technologies for optimizing low latency and high bandwidth for audio networks. The day will close with Rich Zwiebel of QSC and Terry Holton of Yamaha moderating an industry panel in “An Overview of AES67,” exploring the remarkable advances and remaining challenges in making media networks from different manufacturers truly interoperable.
The 141st AES Networked Audio Track events will conclude on Sunday, October 2, beginning with a Jeff Berryman-led session on the AES67 companion networked control standard, AES70. The final session is “Who Owns the Audio Network, IT or AV?” In an audio network the line between the audio and IT trades can be blurred, and this seminar by Patrick Killianey will address the specific ways in which both professions should be involved when specifying and maintaining a network infrastructure.
“Networked media delivery has literally become intertwined into every aspect of live, installed, broadcast, studio and mobile audio/video,” said Bob Lee, AES Networked Audio Track Chair. “Even longtime—you could say even traditionalist—holdouts against putting audio onto data networks are being won over by the immense flexibility and capability that current media networking technology offers. Being savvy in networked audio is becoming increasingly mandatory for audio pros, and the 141st AES will examine the very latest developments in the field.”
Broadcast and Streaming Media Track
If there’s one thing that’s constant about broadcast and streaming media, it’s constant change – and the 141st AES International Convention will keep attendees on top of the latest industry developments and trends with its Broadcast and Streaming Media Track events. These sessions, professionally organized by Track Chair David Bialik, will offer in-depth panel discussions and presentations from some of the most influential names in the industry over the four days of the AES Los Angeles Convention, September 29 – October 2, 2016, at the Los Angeles Convention Center.
Thursday, September 29, will kick off with “Immersive Audio Absorbing Radio and TV Audiences in 2016 and Beyond,” led by John Storyk of Walters-Storyk Design Group, on the technical and acoustical challenges of upgrading existing broadcast studios to handle immersive audio. Subsequent panels will examine the physical and psychological effects of listener fatigue and what can be done to reduce it, and the unprecedented capabilities of immersive and object-oriented audio in customizing the home listening experience. In the spirit of SMPTE’s 100th anniversary, the day will end with a historical look back at key developments in audio technologies for broadcast and cinema.
Friday, September 30’s “Audio Considerations for 4K and 8K Television” seminar will look at the evolution of 4K and 8K UHD broadcasting as it becomes increasingly prevalent in events like the Super Bowl, the Masters and the Summer Games; the session will also cover ATSC 3.0 and Super Hi-Vision experimental transmitters. The subsequent Engineering Brief lecture covers a host of broadcast and production topics, from a new, more accurate proposed film leader format to Dysonics’ Rondo360 spatial audio post-production toolkit and techniques for mixing hip-hop with distortion.
Friday’s sessions will also include an interview with industry legend Bob Orban, creator of the Orban Stereo Synthesizer, Optimod FM audio processor and other game-changing studio hardware, as well as an “Audio Considerations for Over-the-Top Television” (OTT) presentation, which will examine the latest advancements in online content delivery.
Saturday, October 1, will focus on practical considerations for broadcast and streaming, starting with “Designing, Building and Maintaining a Radio Performance Space,” where CBS Radio’s Tracy Teagarden will talk about equipping such a facility when faced with limited resources. The day’s second session addresses IP in the broadcast world, where Steve Lampen of Belden will look at the realities of implementing copper, fiber-optic and wireless connections and hardware for IP audio applications. “Considerations for Podcast Audio” is sure to be a popular session to end the day, with experts from American Public Media, Love + Radio and others discussing the evolving craft of sound design for podcast audiences.
On Sunday, October 2, a “Grease: Live – the Mixer’s Perspective” special event will discuss the recent Fox television special “Grease: Live” – one of the most exciting and challenging events in contemporary broadcast production. Moderator Mark King and his panel will share their methods and techniques for mixing a live TV show where there’s no chance for a retake.
Changing Face of Television Audio Production and Delivery
This year’s DTVAG Forum, presented jointly by the Audio Engineering Society and the DTV Audio Group (DTVAG), takes place Saturday, October 1, from 1:30 to 6:00 pm. Part of the 141st Convention Special Events program (open to all attendees), the presentation, titled “The Changing Face of Television Audio: Objects, Immersivity, and Personalization,” will take an in-depth look at a variety of new and exciting developments, and the issues involved with common content production and delivery methods.
With the explosion in streamed-content delivery to fixed and mobile devices accelerating the adoption of advanced audio services for television and broadcast, new possibilities in immersive sound, enhanced personalization and improved bandwidth efficiency have emerged. Cinema-quality immersive soundtracks are now starting to show up on popular streaming platforms, at the same time that VR is driving interest and innovation in personalization and virtualized surround sound on mobile devices. The forum will examine how Hollywood is streamlining object-based workflows for episodic production and managing the loudness and consistency issues created by the outdated, format- and dynamic-range-limited encoding workflows still in use.
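As a small, concrete illustration of the loudness problem, the sketch below – a hypothetical example with assumed platform targets, not material from the forum – computes the static gain offset needed to move the same mix between a typical broadcast loudness target and a louder target commonly used for mobile and streaming delivery; in practice the louder target usually also calls for additional dynamic range control.

```python
# Hypothetical example: static gain needed to hit different platform
# loudness targets (assumed values; actual specs vary by platform).
PLATFORM_TARGETS_LUFS = {
    "broadcast_atsc_a85": -24.0,   # assumed typical US broadcast target
    "streaming_mobile": -16.0,     # assumed typical mobile/streaming target
}

def normalization_gain_db(measured_lufs: float, platform: str) -> float:
    """Gain in dB to bring a mix measured at `measured_lufs` to the target."""
    return PLATFORM_TARGETS_LUFS[platform] - measured_lufs

# Example: a theatrical-style mix measuring -27 LUFS integrated loudness.
measured = -27.0
for platform in PLATFORM_TARGETS_LUFS:
    print(f"{platform}: apply {normalization_gain_db(measured, platform):+.1f} dB")
# Note: a static gain says nothing about dynamic range; the louder mobile
# target generally also requires dynamic range control to avoid clipping
# loud passages once the overall level is raised.
```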
Discussion topics will include:
- The Impact of VR on Immersivity and Personalization in Television
- VR is the ultimate personalized immersive experience. How will technologies and trends driven by VR re-calibrate our thinking about television sound?
- Evolving Tools for Object Audio Post Production
- How do theatrical workflows and tools get faster and leaner for the demands of premium episodic TV?
- Advanced Authoring Tools: Live Audio Production
- As the ability to deliver advanced audio expands, can live production fill the pipe?
- Challenges and Opportunities for Live Production Deliverables
- Can adaptive rendering address the persistent challenge of format compatibility? Are we ready to get on board?
- The Challenges of Loudness Management in Multi-Platform Streamed Content Delivery
- Can a line still be drawn between fixed and mobile or desktop streaming? How do content preparation and audio encoding processes need to catch up?
Roger Charlesworth, Executive Director, DTV Audio Group, remarks, “The impact of streaming is upending the entire television business, and audio is benefiting. The migration from traditional broadcasting to an IP stream-based model is accelerating the uptake of advanced encoding solutions with sophisticated audio services. This is good news, but expect turbulence along the way.”
Game Audio Track Events
Game Audio is one of the most technologically advanced and fastest-growing segments of the audio field. The AES Game Audio Track sessions make sure attendees are right in the middle of the action at this year’s 141st AES International Convention. “If you want to learn about the latest developments and opportunities in game audio, the 141st AES Convention will be a ‘can’t miss’ destination,” says Steve Martz, 141st AES Game Audio Track Chair. “Our sessions and expert presenters will bring attendees not just up to date, but ahead of the curve in this ever-evolving industry segment.”
AES 141st’s Game Audio Track gets out of the gate Thursday, September 29, with “Tales of Audio from the Third Dimension!,” presented by Microsoft’s Scott Selfon, which will offer both a technical and creative primer on topics including dynamic simulation of position, distance, interaction with game geometry, environmental reverberation, and more. “Designing, Planning and Creating a Dynamic Music System” will follow, as Sound Librarian’s Stephan Schütze explains the complex process of creating dynamic video game music, where the music interacts with the game action to dramatically enhance the player’s experience. Thursday will conclude with “Dialogue Recording Workflow,” a look at the technical and artistic challenges of dialogue for video games, which may have as many as 10 to 100 times more spoken lines than a movie.
Friday, September 30’s “Impact of Immersive Audio for Today’s Games” will feature New Audio Technologies’ Tom Ammermann in an examination of current strategies for headphone virtualization and immersive home theater environments. Additional seminars will include a look at VR game audio design for the Sony Computer Entertainment America game Bound in the presentation “Adapting Traditional Game Audio for VR Experiences: A ‘Bound’ Post-Mortem” by Daniel Birczynski, and Steve Martz moderating “VR Audio Renderer Panel – Process and Discussion,” which will help game makers understand the spatializer rendering process and work more easily within the format.
On Saturday, October 1, “Game Audio Education – Get Smart! How and Where to Get the Training You Need” will enlighten attendees on this hot topic. What are some of the latest training and degree programs? How can I start a game audio program if I’m a teacher? What about learning on your own? Moderator Scott Looney of the Academy of Art University and entrepreneurs from top educational institutions will cover these topics and more. Steve Horowitz of the Game Audio Institute and Nickelodeon Digital will lead a “Careers in Game Audio – Understanding the Business of Games” seminar, in which game studio professionals will discuss what people need to know when looking for work in the gaming industry.
Furthermore, as part of the AES Los Angeles special events on Thursday, September 29, the “Implementation & Mixing for VR Games as Both Art & Science” event will gather experts from PyraMind Studios, Zero Latency VR, Technicolor and Sony to explore the variable aesthetics at play and discuss some of the latest platform, middleware and plug-in developments being used to achieve realism in today’s games.
A newly announced AES Technical Tour will take attendees to Sony Computer Entertainment America’s state-of-the-art audio facilities, located within Santa Monica Studios, the game studio behind the hit God of War franchise. The walkthrough tour includes an explanation of tools, process and pipeline, along with samples of the studio’s audio work in recent releases. Attendees will need to sign a non-disclosure agreement to enter the studio.
Audio for Virtual and Augmented Reality
The inaugural AES International Conference on Audio for Virtual and Augmented Reality, to be held September 30 and October 1, 2016, will feature some of the most cutting-edge content developers, researchers and manufacturers currently working in audio for VR / AR, including Magic Leap, BBC, Dolby, Sennheiser, Qualcomm, NASA, Fraunhofer, DTS and many more. Presentations at the conference, co-located with the 141st AES Convention at the Los Angeles Convention Center’s West Hall, will appeal to a wide audience, from those already working in audio for VR / AR to anyone interested in entering this exciting field. The companion technology showcase will feature displays and demonstrations of the latest offerings from leading manufacturers in VR / AR audio.
“This industry is moving so fast that it’s been really exhilarating to watch it evolve. Audio takes on a new role when it comes to VR / AR content as sound is now part of the experience, not just an aid in conveying story,” states Linda Gedemer, conference co-chair and CTO of Source Sound VR. Gedemer further explains: “We can hear in 360º, unlike our visual field, which is obviously limited to what is seen directly in front of us. Because of this, audio now takes a leading role for content creators by aiding them to direct their audience on where to look next; it is central to guiding the audience through the story.”
The conference program has been carefully developed to provide a comprehensive overview of VR / AR creative processes, applications, workflows and product developments. Presentations will cover a variety of technical and practical aspects of audio for VR / AR, ranging from papers on the latest research developments to workshops and tutorials demonstrating cutting-edge production tools. The opening keynote will be given by Philip Lelyveld, who runs the Virtual Reality / Augmented Reality Initiative at the University of Southern California’s Entertainment Technology Center, a think tank within USC’s School of Cinematic Arts. The closing keynote will be given by George Sanger, ‘Grand Vizier of Noise’ for Magic Leap; in his role as Audio Director, Sanger guides the Sonic Arts team at Magic Leap, a startup that is reimagining reality and inventing the future using digital lightfields.
The two-day program of technical papers, workshops and tutorials, along with a manufacturers’ technology showcase, will highlight challenges as well as creative and technical solutions for delivering spatial audio in virtual and augmented reality media. The conference is timely, as virtual and augmented reality are among the fastest-growing sectors of the entertainment audio market.
This must-attend conference will be held in a recently remodeled 300-seat theater and companion seminar room at the LA Convention Center. Conference registrants can also attend the 141st AES Convention’s companion exhibition, along with select educational sessions and special events, free of charge via the Exhibits-Plus badge included in their conference fees.
Technical Tour Schedule
Los Angeles is a world center of audio production, and attendees with some free time during the 141st AES International Convention will have the unique opportunity to visit some of the most storied audio-related locales in the area via the full Technical Tour schedule. Destinations include Sony Computer Entertainment America, Dolby's Umlang Theatre Atmos Post Production, 20th Century Fox, Universal Studios Hollywood’s WaterWorld Attraction, Sunset Sound, Capitol Studios, the Dolby Theatre and other legendary LA locales, several of which are first-time visits for the AES Technical Tour events.
The AES 141st International Convention full Technical Tour schedule is available here: www.aes.org/events/141/tours/