9:00 - 9:50


9:50 - 10:00

Opening Remarks

10:00 - 11:00
Invited talk 1

Takeo Igarashi (Univ. of Tokyo, CG+HCI)

"Design Tools in the Age of Personal Fabrication"

Computational fabrication devices make the production of high-quality artifacts accessible to everyone. However, designing these artifacts still requires skill and experience, making it difficult for individuals to design customized artifacts that satisfy their specific needs. To address this problem, our group has been developing various easy-to-use tools that let individuals design their own original artifacts such as toys, clothing, musical instruments, and furniture. This talk will introduce some of these tools with live demonstrations.

11:00 - 12:00
Invited talk 2

Stefano Baldassi (Meta Company, VR/AR)

"Science, Product and the Building of the Natural Machine"

The ultimate wearable, gesture-controlled augmented reality (AR) technology is one that delivers seamless interactions with both the digital and the physical environment. This is what I call a "Natural Machine". For this to happen, several domains within the greater area of Cognitive Neuroscience need to be integrated into Product Engineering. In my talk, I will highlight some key milestones of the scientific and professional path I have followed over the last four years, which took me from a perception research lab to leading a team of researchers delivering one of the world’s most advanced AR technologies. With a long road ahead toward better and better products, I will show how Science, Product Engineering, and Human-Centered Design Thinking need to converge in order to deliver the best possible user experience and performance when interacting with digital tools.

12:00 - 12:30
Spotlight talk

CT students

12:30 - 14:00

Poster session + Lunch

14:00 - 15:00
Invited talk 3

Masataka Goto (AIST, Music Technology)

"Intelligent Music Interfaces"

Automatic music-understanding technologies (automatic analysis of music signals) make possible the creation of intelligent music interfaces that enrich music experiences and open up new ways of listening to music. In the past, it was common to listen to music in a somewhat passive manner; in the future, people will be able to enjoy music in a more active manner by using music technologies. Listening to music through active interactions is called active music listening. In this talk, I first introduce active music listening interfaces, demonstrating how end users can benefit from music-understanding technologies based on signal processing and/or machine learning. By analyzing music structure (chorus sections), for example, the SmartMusicKIOSK interface enables people to jump directly to their favorite part of a song (skipping other parts) while viewing a visual representation of the song's structure. I then introduce our recent efforts to deploy such research-level music interfaces as web services open to the public. These services augment people's understanding of music, enable music-synchronized control of computer-graphics animation and robots, and provide various bird's-eye views of a large music collection. In the future, further advances in music-understanding technologies and the music interfaces based on them will make interaction between people and music even more active and enriching.

15:00 - 16:00
Invited talk 4

Edward Shanken (UC Santa Cruz, New Media Art)

"INVENTING THE FUTURE: Collaborative Research at the Intersections of Science, Engineering, Art, and Design"

An analysis of historic artist-engineer collaborations at Bell Labs and Philips Corporation provides the basis for considering the potentials and challenges of contemporary hybrid labs and transdisciplinary research. On a philosophical level, if the fruits of experimental research are not strictly art, science, or engineering, what exactly are they? What new knowledge do they produce or enable? What is their function in the world? On a practical level, the future sustainability of such research depends on answering these questions, because the labs themselves, like the careers of artists and scholars whose work fuses disciplines, will be prematurely curtailed if their contributions are not recognised and rewarded. As an integral part of their mission, labs must develop rigorous criteria for evaluating and documenting the processes and products of the transdisciplinary collaborations they facilitate. They must develop compelling rationales for the importance of such research as an engine for innovation – innovation not just as an immediately marketable commodity but as constituting more subtle and perhaps more insidious and profound shifts in the conception and construction of knowledge and society. Labs must also play a pivotal role in cultivating broader public recognition of the cultural value of research at the intersections of art, science, and engineering and in helping to make resources and expertise more widely distributed.

16:00 - 16:30
Spotlight talk

CT students

16:30 - 18:00
Poster session

CT students

18:30 - 22:00
CT Party

Dinner + Music Performance + DJ Party

(International Center W2-1)

Speaker Information

Takeo Igarashi

Takeo Igarashi is a professor in the Computer Science department at the University of Tokyo. He received his PhD from the Department of Information Engineering at the University of Tokyo in 2000. His research interest is user interfaces in general, with a current focus on interaction techniques for 3D graphics. He is known as the inventor of the sketch-based modeling system Teddy, and received the Significant New Researcher Award at SIGGRAPH 2006.


Stefano Baldassi

Stefano Baldassi is the Senior Director of Analytics and Neuroscience at Meta. His team’s research at Meta integrates data across all aspects of R&D, Engineering, Product, and Marketing. A former visual neuroscientist at top academic institutions worldwide, he is one of the thought leaders behind Meta’s Spatial Interface Design Guidelines.

Masataka Goto

Masataka Goto received his Doctor of Engineering degree from Waseda University in 1998. He is currently a Prime Senior Researcher at the National Institute of Advanced Industrial Science and Technology (AIST). In 1992 he was one of the first to start working on automatic music understanding, and he has since been at the forefront of research on music technologies and music interfaces based on them. In 2016, as Research Director, he began a five-year research project on music technologies (the OngaACCEL Project), funded by the Japan Science and Technology Agency (JST ACCEL).


Edward A. Shanken

Edward A. Shanken writes and teaches about the entwinement of art, science, and technology, with a focus on interdisciplinary practices involving new media. He is an Associate Professor at UC Santa Cruz, where he has served as Director of the Digital Arts and New Media (DANM) MFA program. Recent publications include essays on art and software, art historiography, land art, investigatory art, sound art and ecology, and bridging the gap between new media and contemporary art. His critically praised survey, Art and Electronic Media (Phaidon Press, 2009), has been expanded with an extensive multimedia companion.


CT Student Posters

Session #1

[1-1] Motion Sensor-Based Approach for Automatic Jump Photos Presentation

Sanzhar Bakhtiyarov (CML Lab, Prof. Jean-Charles Bazin)

[1-2] Projective Motion Correction with Contact Optimization

Sukwon Lee (Motion Lab, Prof. Sung-hee Lee)

[1-3] Aura Mesh: Motion Retargeting to Preserve the Spatial Relationships between Skinned Characters

Taeil Jin (Motion Lab, Prof. Sung-hee Lee)

[1-4] Spline Interface for Intuitive Skinning Weight Editing

Seungbae Bang (Motion Lab, Prof. Sung-hee Lee)

[1-5] Tele-Avatar: The Effect of Virtual Avatar Representation on User’s Co-presence in an Augmented Reality-based Remote Collaboration System

Hyung-il Kim (UVR Lab, Prof. Woontack Woo)

[1-6] Deep estimation of natural illumination from a single RGB-D image

Jinwoo Park (UVR Lab, Prof. Woontack Woo)

[1-7] Hybrid 3D Hand Articulations Tracking Guided by Classification and Search Space Adaptation

Gapyong Park (UVR Lab, Prof. Woontack Woo)

[1-8] Interactive Shadow Removal Using Conditional Generative Adversarial Network

Kwanggun Seo (VML, Prof. Junyong Noh)

[1-9] Chameleon: Adaptable Remote Controller Based on the 3D Pointing Interface

Hyunggoog Seo (VML, Prof. Junyong Noh)

Session #2

[2-1] ‘Dwelling Depression’ Analysis Based on Correlation of Elderly Depression and Dwelling Satisfaction

Ye-Won Lee (ET Lab, Prof. Sungju Woo)

[2-2] The effect of DJs’ social network on Music Popularity

Hyeung Seok Wi (Social Computing Lab, Prof. Won-jae Lee)

[2-3] Recommendation System Using Case Based Reasoning for IoT-based Smart Home Intelligent Agent

Minkyu Choi (IBD Lab, Prof. Ji-Hyun Lee)

[2-4] The analysis of Ethical Decision Making of Autonomous Vehicles based on Human Moral Reasoning

Jimin Rhim (IBD Lab, Prof. Ji-Hyun Lee)

[2-5] Singing Expression Transfer from One Voice to Another for a Given Song

Sangeon Yong (MAC Lab, Prof. Juhan Nam)

[2-6] Representation Learning of Music Using Artist Labels

Jongpil Lee (MAC Lab, Prof. Juhan Nam)

[2-7] A Multi-Segment Similarity Analysis and Visualization for Melodic Similarity

Saebyul Park (MAC Lab, Prof. Juhan Nam)

[2-8] Measuring quality of tie in sports match from the changes of in-game dynamics

Gyuhyeon Jeon (The Q group, Prof. Juyong Park)

[2-9] On Quantifying Career Success of Joseon Dynasty’s Yangban

Donghyeok Choi (The Q group, Prof. Juyong Park)

[2-10] Bias Correction for Social Media Data Using the Big Five Personality Traits Extracted from User-generated Text

Jinah Kwak (Social Computing Lab, Prof. Won-jae Lee)

[2-11] Button++: Designing Risk-aware Smart Buttons

Eunji Park (Interactive Media Lab, Prof. Byungjoo Lee)


Organizers

Juhan Nam (KAIST CT, Co-Chair)
Jaehong Ahn (KAIST CT, Co-Chair)
Sunghee Lee (KAIST CT, Program Committee)
Byungjoo Lee (KAIST CT, Program Committee)
Jeongmi Lee (KAIST CT, Program Committee)