On Mon March 09, 2026

Speaker

Leonidas J. Guibas


Title

The Space Between The Images -- Visual Learning From Relations


Abstract

In understanding or generating images or videos, visual relations play a fundamental role, reflecting basic principles that underlie the physical world. These can range from symmetries and repetitions, to groupings and compositional structures, to invariance and equivariance under various transformations, to cycle consistency. Today, almost all supervision we provide to our models is first-order and value-driven, such as specifying desired color or semantic class for pixels, expected depth for 3D points, etc. Yet the most important visual structure is encoded in binary or multiway relations between these values --- reflecting compositional scene hierarchies or respect for geometric or physical laws. In this talk we examine a number of ways that second-order, relational supervision can be provided, from being baked into the model to contrastive learning, to consistency losses for RL. We show that relation-awareness can vastly reduce the amount of training data needed and lead to superior performance across multiple applications, including classification, segmentation, reconstruction, and VQA.


Bio

Leonidas Guibas is the Paul Pigott Professor of Computer Science at Stanford University and a Principal Scientist at Google DeepMind. He has worked in numerous areas of computer science, such as geometric algorithms, 3D computer vision and geometric deep learning, computer graphics, robotics, discrete mathematics, and biocomputation. Dr. Guibas has been elected to the US National Academy of Engineering, the US National Academy of Sciences, the American Academy of Arts and Sciences, and the SIGGRAPH Academy. He is an ACM Fellow and an IEEE Fellow, and has won the ACM-AAAI Allen Newell Award, the ICCV Helmholtz Prize, and SIGGRAPH's Steven Anson Coons Award.


Language

English (Offline)

On Mon March 16, 2026

Speaker

Sooyon Cho (조수연)


Title

Patching the Vulnerabilities of the Human System with 'Law': How to Protect Your Hard-Earned Money (인간 시스템의 취약점, '법'으로 패치하라: 소중한 내 돈 지키는 법)


Abstract

Just as the networks we use every day run on sophisticated protocols, modern society is built on a vast, invisible blueprint called 'law.' Yet many people find the law vague and difficult, remain indifferent to it, and are shocked only after being drawn into an unexpected dispute or falling victim to fraud. This lecture introduces, in an easy and engaging way, the practical legal fundamentals everyone should know, and explores through real cases how to protect our hard-earned money. It will also look ahead to how your major's skills could one day become core tools for criminal investigation and tracing illicit funds.


Bio

Professor Sooyon Cho graduated from the Department of Industrial Design at KAIST and the College of Law at Korea University, and passed the 45th National Judicial Examination. After serving as a judge for 17 years, concluding a court career as a presiding judge at the Cheongju District Court, Professor Cho has been teaching civil litigation practice at the Law School of Hankuk University of Foreign Studies since 2025.


Language

Korean (Offline)

On Mon March 23, 2026

Speaker

Dooyoung Jung (정두영)


Title

Emotional Care Using AI Conversational Agents (AI 대화형 에이전트를 활용한 정서적 돌봄)


Abstract

TBD


Bio

TBD


Language

Korean (Offline)

On Mon March 30, 2026

Speaker

Kazuhiro Nakadai


Title

Robot Audition in the Wild: Toward an Inclusive Society


Abstract

Robot Audition is a concept originally proposed by Nakadai and colleagues to enable robots to perceive and understand complex acoustic scenes in real-world environments where noise, reverberation, and multiple sound sources coexist. In this invited talk, I revisit the development of robot audition from the perspective of “in the wild” sensing, highlighting how auditory and multimodal perception must evolve when robots operate beyond controlled laboratory settings. The talk begins by introducing the core technologies of robot audition, including sound source localization and separation, and by discussing the fundamental challenges that arise when these techniques are deployed in real environments. Building on this foundation, I present a series of research efforts that extend robot audition toward real-world applications, such as locating humans using sound in search-and-rescue scenarios, inferring environmental and surface properties from acoustic signals, and analyzing bird songs for ecological monitoring in outdoor environments. I also discuss how the same technical principles can be extended toward human-centered interaction, particularly sign-language-based human–robot interaction, where communication relies on non-verbal and multimodal signals rather than speech alone. Although these research topics address different application domains, they are unified by a common technical direction: expanding the signals, agents, and environments that intelligent systems are designed to perceive and reason about. By framing robot audition as a foundation for multimodal perception and interaction in the wild, this talk presents a technical pathway through which such expansions can lead toward the realization of an inclusive society, enabling intelligent systems to engage not only with diverse humans, but also with challenging environments and even non-human entities.


Bio

Kazuhiro Nakadai received a B.E. in electrical engineering in 1993, an M.E. in information engineering in 1995, and a Ph.D. in electrical engineering in 2003 from the University of Tokyo. He worked at Nippon Telegraph and Telephone as a system engineer from 1995 to 1999, at the Kitano Symbiotic Systems Project, ERATO, JST as a researcher from 1999 to 2003, and at Honda Research Institute Japan, Co., Ltd. as a principal scientist from 2003 to 2022. Currently, he is a professor at the Department of Systems and Control Engineering, School of Engineering, Institute of Science Tokyo (formerly Tokyo Institute of Technology). He concurrently served as a visiting associate professor at Tokyo Institute of Technology from 2006 to 2010, a visiting professor from 2011 to 2017, and a specially appointed professor from 2017 to 2022. He also held a concurrent position as a guest professor at Waseda University from 2011 to 2018. His research interests include artificial intelligence, robotics, signal processing, computational auditory scene analysis, multimodal integration, and robot audition. He has served as an executive board member for the Japanese Society for Artificial Intelligence (JSAI) from 2015 to 2016 and from 2024 to 2025, and for the Robotics Society of Japan (RSJ) from 2017 to 2018. He is recognized as a Fellow of both the IEEE and RSJ.


Language

English (Offline)

On Mon April 06, 2026

Speaker

Seong Joon Oh (오성준)


Title

Deploying General AI in the Private World


Abstract

TBD


Bio

TBD


Language

English (Offline)

On Mon April 13, 2026

Speaker

Tae-Ho Kim (김태호)


Title

Trends in Inference Optimization and Lightweighting for Sustainable AI (지속 가능한 AI를 위한 추론 최적화·경량화 기술 트렌드)


Abstract

TBD


Bio

TBD


Language

Korean (Offline)

On Mon April 27, 2026

Speaker

Hwalsuk Lee (이활석)


Title

The Current and Future of the AI B2B Market


Abstract

TBD


Bio

TBD


Language

Korean (Offline)

On Mon May 04, 2026

Speaker

Minki Hhan (한민기)


Title

Cryptography in Quantum World


Abstract

TBD


Bio

TBD


Language

English (Offline)

On Mon May 11, 2026

Speaker

Insu Yun (윤인수)


Title

The Dark and Bright Side of AI in Cybersecurity


Abstract

TBD


Bio

TBD


Language

English (Offline)

On Mon May 18, 2026

Speaker

Jean Oh


Title

Creative Physical AI


Abstract

TBD


Bio

TBD


Language

English (Offline)

On Mon June 01, 2026

Speaker

Ziwei Liu


Title

From Multimodal Generative Models to Dynamic World Modeling


Abstract

TBD


Bio

TBD


Language

English (TBD)

On Mon June 08, 2026

Speaker

Jung-hee Ryu (류중희)


Title

The Era of Physical AI: Challenges in East Asia (Physical AI 시대, 동아시아의 도전)


Abstract

TBD


Bio

TBD


Language

Korean (Offline)