Can Audio Quality Criteria Be Classified by Machines Using Artificial Intelligence?

March 15, 2021, 00:30
The IEEE Broadcast Technology Society and the Society of Motion Picture and Television Engineers (SMPTE) are promoting a webcast on March 24 to discuss the use of machine learning and artificial intelligence for audio quality classification. The topic will be addressed by musician Tyler Morris, a graduate student in Electrical Engineering at Tufts University and a graduate of WPI's Electrical and Computer Engineering program, and Dr. Karen Panetta, Dean of Graduate Education for the Tufts School of Engineering, co-authors of a paper on the subject.
 

The field of audio engineering has long relied on the finely tuned ears of experts to determine proper microphone placement and thereby maximize audio quality. As home studio applications become increasingly popular, the need for a consumer solution is more apparent than ever.

Meanwhile, the fields of machine learning and artificial intelligence have in recent years tackled many diverse problems that were previously thought to be nearly impossible. With the recent worldwide pandemic increasing the demand for quality home audio applications, the authors of the paper set out to investigate whether non-human ears can classify and quantify an audio recording’s sound as “good” or “bad”.

While machine learning and artificial intelligence have been used before in some audio applications, they have not previously been used in the manner the authors propose: to quantify audio recording quality. In the paper, the authors discuss similar research that has been conducted, describe their development of a unique approach to the problem, analyze experiments performed to extract features from the collected data, compare and contrast multiple methods of classifying test data and, finally, discuss future applications of this research that extend to many fields.
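As an illustration only (the announcement does not describe the paper's actual features or classifiers), a pipeline of this kind is often prototyped by summarizing each recording with spectral features such as MFCCs and training a conventional classifier on expert-provided “good”/“bad” labels. The sketch below assumes the librosa and scikit-learn libraries and a hypothetical load_labeled_recordings() helper that supplies file paths and labels; it is a baseline sketch, not the authors' method.

# Illustrative sketch, not the authors' implementation: assumes MFCC summary
# statistics as features and a random-forest classifier, both common baselines.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(path, n_mfcc=13):
    # Load the recording and summarize it by the mean and standard deviation
    # of its MFCCs across time, giving one fixed-length vector per file.
    signal, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical helper: returns audio file paths and expert labels
# (1 = "good" recording, 0 = "bad" recording).
paths, labels = load_labeled_recordings()

X = np.vstack([extract_features(p) for p in paths])
y = np.array(labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

Swapping the random forest for a support vector machine or a small neural network, or the MFCC summaries for other features, follows the same compare-and-contrast framing of classification methods described above.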

The research will be presented by Tyler Morris of the Tufts Graduate School of Electrical Engineering. Morris is a 22-year-old engineer from Boston, Massachusetts, who is completing his Master's in Electrical Engineering at Tufts in Spring 2021 and received his Bachelor's degree in Electrical and Computer Engineering from WPI. At Tufts, Morris has been working closely with Dean Panetta to combine the field of machine learning with his passion, audio engineering.

Also a musician, Tyler Morris has released four internationally acclaimed albums, with the most recent, “Living in The Shadows”, debuting at #3 on the Billboard and iTunes Blues Charts. Tyler Morris has performed with music industry notables including Sammy Hagar, Steve Vai, Yngwie Malmsteen, and many others.

His passion for engineering, music and audio has driven numerous innovations. Morris has designed audio effects for the likes of Joe Bonamassa, Conan O’Brien, Brad Whitford (Aerosmith), Elliot Easton (The Cars), Brian May (Queen), Warren Haynes and others. Tyler Morris Designs (TMD) has just released its consumer line of pedals, and Morris is an endorsed artist with Gibson Guitars, Marshall Amps, and Fishman.

Co-author Dr. Karen Panetta is Dean of Graduate Education for the School of Engineering at Tufts University, Professor of Electrical & Computer Engineering, and Adjunct Professor of Computer Science and Mechanical Engineering.
www.smpte.org/
www.ieee.org

IEEE BTS Webcast: Classification of Audio Quality Using Machine Learning and Artificial Intelligence
Wed, Mar 24, 2021 5:00 PM - 6:00 PM GMT
Registration available here
 