Sanghun Jung

I am an M.S. candidate at KAIST AI, advised by Prof. Jaegul Choo.

At KAIST, I have worked on domain generalization and out-of-distribution detection for semantic segmentation. Before starting my Master's program at KAIST, I worked at BearRobotics as a Robotics Software Engineer for two years, on projects including camera extrinsic calibration, odometry calibration, a safe velocity controller, and automated simulation testing infrastructure.

Email  /  CV  /  LinkedIn  /  Google Scholar  /  Github

profile photo
Research

I'm interested in computer vision, robotics, and autonomous driving. Much of my research focuses on computer vision for autonomous driving, particularly semantic segmentation.

SML Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation
Sanghun Jung*, Jungsoo Lee*, Daehoon Gwak, Sungha Choi, and Jaegul Choo
ICCV, 2021   (Oral Presentation)
arXiv / code

A simple yet effective approach for out-of-distribution detection in semantic segmentation. Based on the observation that max logits have class-specific ranges, we standardize them with the statistics of their predicted classes, which makes the scores directly comparable across classes.
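
A minimal NumPy sketch of the idea (illustrative only, not the released code; the per-class statistics are assumed to be precomputed on training samples):

import numpy as np

def standardized_max_logit(logits, class_mean, class_std):
    """Anomaly score from per-class standardized max logits.

    logits:     (C, H, W) segmentation logits for one image
    class_mean: (C,) mean of max logits per class, precomputed on training data
    class_std:  (C,) std of max logits per class, precomputed on training data
    Returns an (H, W) map; lower scores indicate likely out-of-distribution pixels.
    """
    pred = logits.argmax(axis=0)       # predicted class per pixel
    max_logit = logits.max(axis=0)     # max logit per pixel
    # Standardize each pixel with the statistics of its predicted class,
    # making the scores comparable across classes.
    return (max_logit - class_mean[pred]) / class_std[pred]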

RobustNet RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening
Sungha Choi*, Sanghun Jung*, Huiwon Yun, Joanne Taery Kim, and Jaegul Choo
CVPR, 2021   (Oral Presentation)
arXiv / code

Enhancing the generalization performance of deep neural networks is crucial for safety-critical applications such as autonomous driving. To this end, we propose a novel Instance Selective Whitening Loss that disentangles the domain-specific style and domain-invariant content encoded in the higher-order statistics (i.e., feature covariance) of the feature representations, and selectively suppresses only the style-related components.
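
A rough sketch of the kind of loss this implies (a simplification for illustration, not the paper's exact formulation; the selection mask is assumed to come from comparing covariances under photometric augmentation):

import numpy as np

def instance_selective_whitening_loss(feat, mask):
    """Suppress selected entries of the instance feature covariance.

    feat: (C, H, W) feature map of one image
    mask: (C, C) binary matrix marking covariance entries treated as style
          (assumed to be derived from the covariance-selection step)
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)
    x = x - x.mean(axis=1, keepdims=True)   # zero-mean per channel
    cov = (x @ x.T) / (h * w)               # instance covariance matrix (C, C)
    # L1-penalize only the selected (style-related) covariance entries.
    return np.abs(cov * mask).sum() / max(mask.sum(), 1.0)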

Visualizing for the Non-Visual Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization
Jinho Choi, Sanghun Jung, Deokgun Park, Jaegul Choo, and Niklas Elmqvist
EuroVis, 2019

The majority of visualizations on the web are still stored as raster images, making them inaccessible to visually impaired users. We propose a deep-neural-network-based approach that automatically recognizes the key elements of a visualization and recovers the original data it conveys. We then use the extracted information to read the visualization aloud to visually impaired users through a screen reader built as a Chrome extension.


Patents
BearRobotics Method, System, and Non-Transitory Computer-Readable Recording Medium for Controlling a Robot
Sanghun Jung, Henry L. Leinhos, Fangwei Lee, and Ina Liu
US Patent, In Progress

BearRobotics Method, System, and Non-Transitory Computer-Readable Recording Medium for Controlling Movement of a Robot
Bryant L. Pong, Henry L. Leinhos, and Sanghun Jung
US Patent, In Progress

Experience
DAVIAN Lab, KAIST AI GPU Server Maintainer | DAVIAN Lab., KAIST AI
Seongnam, South Korea | Aug. 2020 ~

Currently managing the Linux GPU servers of the DAVIAN Laboratory. We built a server monitoring and auto-reporting system using Linux shell scripts and Python.
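
As an illustration of the kind of polling such a system relies on (a hypothetical sketch, not the lab's actual scripts), one can query nvidia-smi periodically and forward the summary:

import subprocess

def gpu_summary():
    """Return a one-line-per-GPU summary of utilization and memory usage."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = []
    for line in out.strip().splitlines():
        idx, util, used, total = [f.strip() for f in line.split(",")]
        rows.append(f"GPU {idx}: {util}% util, {used}/{total} MiB")
    return "\n".join(rows)

if __name__ == "__main__":
    # A cron job could run this and forward the summary by mail or a chat webhook.
    print(gpu_summary())
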
BearRobotics Robotics Software Engineer | BearRobotics Korea, Inc.
Seoul, South Korea | Apr. 2019 ~ Jul. 2020
  • Safe velocity controller
  • Automated simulation testing infrastructure
BearRobotics Robotics Software Engineer Intern | BearRobotics, Inc.
Redwood City, CA, US | Jul. 2018 ~ Mar. 2019
  • Depth camera extrinsic calibration
  • Odometry testing framework

Invited Talk
Hyundai Research Hyundai Motor Group AI Research Seminar
  • RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening

This website is adapted from Jon Barron's template. source code