Sanghun Jung


I am a second-year PhD student at the University of Washington, where I am advised by Byron Boots. I am interested in visual learning for robotics, bridging the gap between visual perception and robot planning/control. My recent research covers effective transfer learning for LiDAR segmentation and self-supervised traversability prediction from visual cues.

Mail: shjung13 [at] cs [dot] washington [dot] edu

CV  /  LinkedIn  /  Google Scholar  /  GitHub


Research
DARPA Robotic Autonomy in Complex Environments with Resiliency (RACER)
University of Washington, 2023
V-STRONG: Visual Self-Supervised Traversability Learning for Off-road Navigation
Sanghun Jung, JoonHo Lee, Xiangyun Meng, Byron Boots, and Alexander Lambert
ICRA, 2024  
arXiv
LiDAR-UDA: Self-ensembling Through Time for Unsupervised LiDAR Domain Adaptation
Amirreza Shaban*, JoonHo Lee*, Sanghun Jung*, Xiangyun Meng, and Byron Boots
ICCV, 2023   Oral Presentation (<1.8%)
arXiv / code
CAFA: Class-aware Feature Alignment for Test-time Adaptation
Sanghun Jung, Jungsoo Lee, Nanhee Kim, Amirreza Shaban, Byron Boots, and Jaegul Choo
ICCV, 2023  
arXiv
DebiasBench: Benchmark for Fair Comparison of Debiasing in Image Classification
Jungsoo Lee, Juyoung Lee, Sanghun Jung, and Jaegul Choo
Preprint, 2023  
arXiv
CG-NeRF: Conditional Generative Neural Radiance Fields
Kyungmin Jo*, Gyumin Shim*, Sanghun Jung, Soyoung Yang, and Jaegul Choo
WACV, 2023  
arXiv
3D-GIF: 3D-Controllable Object Generation via Implicit Factorized Representations with Unposed 2D Images
Minsoo Lee, Chaeyeon Chung, Hojun Cho, Minjung Kim, Sanghun Jung, Minhyuk Sung, and Jaegul Choo
Preprint, 2022  
arXiv
Standardized Max Logits: A Simple yet Effective Approach for Identifying Unexpected Road Obstacles in Urban-Scene Segmentation
Sanghun Jung*, Jungsoo Lee*, Daehoon Gwak, Sungha Choi, and Jaegul Choo
ICCV, 2021   Oral Presentation (<3.0%)
arXiv / code

RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening
Sungha Choi*, Sanghun Jung*, Huiwon Yun, Joanne Taery Kim, and Jaegul Choo
CVPR, 2021   Oral Presentation (<4.1%)
arXiv / code

Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization
Jinho Choi, Sanghun Jung, Deokgun Park, Jaegul Choo, and Niklas Elmqvist
EuroVis, 2019


Work Experience
Applied Scientist Intern | Amazon Lab126
Bellevue, WA | Jun. 2024 ~ Sep. 2024
  • Visual affordance learning
  • Robot manipulation
  • Vision-language models / large language models
Robotics Engineer | BearRobotics Korea, Inc.
Seoul, South Korea | Apr. 2019 ~ Jul. 2020
  • Safe velocity controller
  • Odometry and localization testing
  • Automated simulation testing infrastructure
Robotics Engineering Intern | BearRobotics, Inc.
Redwood City, CA, US | Jul. 2018 ~ Mar. 2019
  • Depth camera extrinsic calibration

Invited Talks
2nd Pre-Training for Robot Learning Workshop @ CoRL 2023
  • Gave a spotlight talk on image-based traversability learning for off-road navigation
KAIST AI Conference
  • Presented my recent paper "Standardized Max Logits" during a poster session
Naver AI LAB
  • Guest speaker for corporate seminar (RobustNet)
Hyundai Motor Group AI Research Seminar
  • Guest speaker for corporate seminar (RobustNet)


This website is adapted from Jon Barron's template.