Person-Centered Multimedia Computing: A New Paradigm Inspired by Assistive and Rehabilitative Applications
Dr. Sethuraman Panchanathan
Abstract: Human-centered multimedia computing (HCMC) focuses on tight engagement of humans in the design, development and deployment of multimedia solutions. Today's multimedia technologies largely cater to the needs of the "able" population, resulting in HCMC solutions that mostly serve that community. Individuals with disabilities, however, have specific requirements that necessitate a personalized, adaptive approach to smart multimedia computing. In addition, individuals with disabilities have largely been absent from the design process and have had to adapt themselves (often unsuccessfully) to the solutions available. To address this challenge, we recently introduced the concept of person-centered multimedia computing (PCMC), where the emphasis is on understanding the individual user's needs, expectations and adaptations in order to design, develop and deploy effective and smart multimedia solutions. In this talk, PCMC will be discussed from two application viewpoints: (i) social assistive aids that enrich the interaction experience of individuals with visual impairments, and (ii) cyber-physical systems for stroke rehabilitation. Both applications embody person-centeredness as their underlying methodology. Our research demonstrates not only the significant potential of person-centered multimedia solutions to enrich the lives of individuals with disabilities, but also the criticality of a person-centered approach in addressing complex smart multimedia challenges in real-world solution design.
Executive Vice President, ASU Knowledge Enterprise Development
Chief Research and Innovation Officer
Director, Center for Cognitive Ubiquitous Computing
Foundation Chair in Computing and Informatics
Panchanathan was the founding director of the School of Computing and Informatics and was instrumental in founding the Biomedical Informatics Department at ASU. He also served as the chair of the Computer Science and Engineering Department. He founded the Center for Cognitive Ubiquitous Computing (CUbiC) at ASU. CUbiC's flagship project, iCARE, for individuals who are blind and visually impaired, won the Governor's Innovator of the Year-Academia Award in November 2004. In 2014, Panchanathan was appointed by President Barack Obama to the U.S. National Science Board (NSB), where he chairs the Committee on Strategy. He was appointed by former U.S. Secretary of Commerce Penny Pritzker to the National Advisory Council on Innovation and Entrepreneurship (NACIE). Panchanathan is a Fellow of the National Academy of Inventors (NAI), the American Association for the Advancement of Science (AAAS) and the Canadian Academy of Engineering. He is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the Society of Photo-Optical Instrumentation Engineers (SPIE). He is currently serving as the Chair of the Council on Research within the Association of Public and Land-grant Universities. Panchanathan was the editor-in-chief of IEEE Multimedia Magazine and is an editor or associate editor of many other journals and transactions. His research interests are in the areas of human-centered multimedia computing, haptic user interfaces, person-centered tools and ubiquitous computing technologies for enhancing the quality of life for individuals with disabilities, machine learning for multimedia applications, medical image processing, and media processor designs. Panchanathan has published over 440 papers in refereed journals and conferences and has mentored over 100 graduate students, post-docs, research engineers and research scientists who occupy leading positions in academia and industry.
Towards Smart Signal Processing in HDR Video Pipeline
Guan-Ming Su, Dolby Labs, USA
Abstract: With advances in hardware technologies such as cameras and displays, we have arrived at the High Dynamic Range (HDR) video era; HDR-capable devices and content are already available for purchase and subscription in the current market. HDR video, however, exhibits characteristics different from conventional video owing to the wide range of luminance it covers, and it requires smart signal processing to fully utilize the pipeline for optimal perceptual performance. In this talk, we tackle several issues along the HDR video pipeline. More specifically, we first discuss the HDR signal format and point out the major differences and challenges relative to conventional video. Then, at the content processing stage, we present an efficient technique that transfers image statistics so the current legacy pipeline can be reused. At the content distribution stage, we address how to transmit a single bitstream that supports displays of differing capabilities in a progressive manner.
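To give a concrete sense of how an HDR signal format differs from conventional gamma-coded video, the sketch below implements the SMPTE ST 2084 "PQ" transfer functions, a widely used HDR format. The abstract does not name a specific format, so PQ here is an illustrative assumption, not a description of the talk's method.

```python
# Minimal sketch of the SMPTE ST 2084 "PQ" (perceptual quantizer) transfer
# functions, shown as one example of an HDR signal format (illustrative
# assumption; the talk abstract does not name a specific format).
# PQ maps absolute luminance in [0, 10000] cd/m^2 to a [0, 1] code value,
# allocating code words perceptually rather than via a fixed SDR gamma.

M1 = 2610 / 16384        # ST 2084 constants
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance_nits: float) -> float:
    """Linear luminance (cd/m^2) -> PQ code value in [0, 1]."""
    y = max(min(luminance_nits / 10000.0, 1.0), 0.0)
    num = C1 + C2 * y ** M1
    den = 1.0 + C3 * y ** M1
    return (num / den) ** M2

def pq_decode(code_value: float) -> float:
    """PQ code value in [0, 1] -> linear luminance (cd/m^2)."""
    e = code_value ** (1.0 / M2)
    num = max(e - C1, 0.0)
    den = C2 - C3 * e
    return 10000.0 * (num / den) ** (1.0 / M1)
```

For instance, 100 cd/m^2 (a typical SDR peak white) encodes to a PQ code value of only about 0.51, leaving the upper half of the code range for highlights up to 10000 cd/m^2; this is one reason HDR content cannot simply be pushed through a conventional gamma-based pipeline.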
Guan-Ming Su is with Dolby Labs, Sunnyvale, CA, USA. He is the inventor of 80+ U.S. patents and pending applications, and co-author of 3D Visual Communications (John Wiley & Sons, 2013). He served as an associate editor of the Journal of Communications, an associate editor of APSIPA Transactions on Signal and Information Processing, and director of the review board and R-Letter of the IEEE Multimedia Communications Technical Committee. He has also served as Technical Program Track Co-Chair of ICCCN 2011, Theme Chair of ICME 2013, TPC Co-Chair of ICNC 2013, TPC Chair of ICNC 2014, Demo Chair of SMC 2014, General Chair of ICNC 2015, Area Co-Chair for Multimedia Applications of ISM 2015, Demo Co-Chair of ISM 2016, Industrial Program Co-Chair of IEEE BigMM 2017, Industrial Expo Chair of ACMMM 2017, and TPC Co-Chair of IEEE MIPR 2019. He served as chair of the APSIPA Industrial Publication Committee from 2014 to 2017 and has been VP of APSIPA Industrial Relations and Development since 2018. He is a Senior Member of the IEEE. He obtained his Ph.D. from the University of Maryland, College Park.