Keynote Talk: Saturday the 17th, 10:10-11:10

Music Cultures Opened up by Music Technologies

Music technologies have opened up various music cultures. For example, musical instruments such as the guitar, the piano, and sound synthesizers were state-of-the-art music technologies when they were invented and have had a huge influence on music cultures. Since singing synthesis software such as Hatsune Miku, based on VOCALOID, began attracting attention in 2007, the world's first culture in which people actively enjoy songs featuring synthesized singing voices as the main vocals has emerged in Japan. Singing synthesis is thus breaking down the long-held view that listening to a non-human singing voice is worthless. In fact, live concerts featuring Hatsune Miku's synthesized singing have been successful not only in several cities in Japan but also in Taipei, Los Angeles, New York, Singapore, Hong Kong, Jakarta, and elsewhere. This is a feat that could not have been imagined before.

In the future, further advances in music technologies will give birth to new cultures of music creation and appreciation. Technologies for automatic music creation might begin breaking down the view that listening to a non-human composition is worthless. Music understanding technologies not only augment listeners' abilities to appreciate music but might also enable computers to serve as an audience for human performances.

This keynote talk will demonstrate several practical systems: VocaListener, which can synthesize natural singing voices by analyzing and imitating human singing; VocaWatcher, which can generate realistic facial motions for a humanoid robot; Songle (http://songle.jp), which has analyzed more than 840,000 songs on music- and video-sharing services and facilitates both deeper understanding of music and music-synchronized control of robot dancers; and Songrium (http://songrium.jp), which allows users to explore music while seeing and utilizing various relations among more than 690,000 music video clips on video-sharing services.

[Figure: VocaWatcher]

[Figure: Songle]

Dr. Masataka Goto
(National Institute of Advanced Industrial Science and Technology (AIST), Japan)


Masataka Goto received the Doctor of Engineering degree from Waseda University in 1998. He is currently a Prime Senior Researcher and the Leader of the Media Interaction Group at the National Institute of Advanced Industrial Science and Technology (AIST), Japan. In 1992 he was one of the first to start working on automatic music understanding and has since been at the forefront of research in music technologies and music interfaces based on those technologies. Over the past 23 years he has published more than 220 papers in refereed journals and international conference proceedings and has received 40 awards, including several best paper awards, best presentation awards, the Tenth Japan Academy Medal, the Tenth JSPS PRIZE, and the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology (Young Scientists' Prize). He has served as a committee member of over 90 scientific societies and conferences, including serving as the General Chair of the 10th and 15th International Society for Music Information Retrieval Conferences (ISMIR 2009 and ISMIR 2014). In 2011, as the Research Director, he began a 5-year research project on music technologies (the OngaCREST Project) funded by the Japan Science and Technology Agency (CREST, JST).