Cheng-Yu (Benjamin) Chiang
I recently graduated from my master's program in Electrical and Computer Engineering at Carnegie Mellon University.
I received my B.S. in Electrical Engineering from the University of California, San Diego in 2023.
I worked on leveraging large-scale data for artificial intelligence.
During my internships, I worked closely with research and product teams to build scalable data pipelines,
resolve data bottlenecks, train machine learning models, and present insights through interactive, user-friendly visualizations.
My passion lies at the intersection of artificial intelligence and electrical engineering, with a focus on developing intelligent, scalable, and deployable solutions that leverage both software and hardware for real-world impact.
Email /
GitHub /
Google Scholar /
LinkedIn
Map It Anywhere (MIA): Empowering Bird's Eye View Mapping using Large-scale Public Data
Cherie Ho, Jiaye Zou, Omar Alama, Sai Mitheran Jagadesh Kumar, Benjamin Chiang, Taneesh Gupta, Chen Wang, Nikhil Keetha, Katia Sycara, Sebastian Scherer
NeurIPS, 2024
arXiv
/
code
/
website
Map It Anywhere (MIA) is a data engine that leverages Mapillary and OpenStreetMap to create a 1.2-million-pair dataset for Bird’s Eye View (BEV) map prediction, enabling diverse and scalable training data. Models trained on MIA’s dataset achieved 35% better zero-shot performance than existing baselines, demonstrating its potential to improve autonomous navigation.
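The core pairing idea, roughly: take a geo-tagged first-person image and rasterize the nearby OpenStreetMap road network into a camera-centered BEV grid. The Python sketch below illustrates this with osmnx; it is not from the MIA codebase, and the names fpv_to_bev_pair, GRID_M, and RES_M are placeholders of my own.

```python
import numpy as np
import osmnx as ox

GRID_M = 64.0   # BEV map side length in meters (assumed)
RES_M = 0.5     # meters per pixel (assumed)

def fpv_to_bev_pair(lat: float, lon: float) -> np.ndarray:
    """Return a binary BEV road mask centered on a geo-tagged FPV image."""
    size = int(GRID_M / RES_M)
    bev = np.zeros((size, size), dtype=np.uint8)
    # Pull the local drivable road graph from OpenStreetMap around the camera.
    graph = ox.graph_from_point((lat, lon), dist=GRID_M, network_type="drive")
    edges = ox.graph_to_gdfs(graph, nodes=False)
    # Mark every road-segment vertex in the camera-centered grid.
    for geom in edges.geometry:
        for x_lon, y_lat in geom.coords:
            # Crude equirectangular degrees-to-meters conversion near the camera.
            dx = (x_lon - lon) * 111_320 * np.cos(np.radians(lat))
            dy = (y_lat - lat) * 111_320
            col = int(size / 2 + dx / RES_M)
            row = int(size / 2 - dy / RES_M)
            if 0 <= row < size and 0 <= col < size:
                bev[row, col] = 1
    return bev
```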
GenStreet: Augmenting Street View Generation with Geo-Referenced Data
2024-12
code
/
website
This project generates realistic First-Person View (FPV) street-view images from segmentation masks and natural-language inputs by fine-tuning ControlNet, leveraging Bird's Eye View (BEV) maps and zero-shot learning with Llama 3.2 for structural and contextual accuracy. The approach improves realism, structural alignment, and feature accuracy, with applications in robotic simulations, urban planning, and interior design.
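For a sense of the generation step, the minimal sketch below runs a ControlNet-conditioned Stable Diffusion pipeline on a segmentation mask plus a text prompt via Hugging Face diffusers. It uses the public lllyasviel/sd-controlnet-seg and runwayml/stable-diffusion-v1-5 checkpoints and a placeholder mask filename as stand-ins for the project's fine-tuned ControlNet and BEV-derived masks.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Public checkpoints as stand-ins for the project's fine-tuned ControlNet.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A color-coded segmentation mask (here assumed to be derived from a BEV map).
seg_mask = Image.open("street_segmentation.png").convert("RGB")

# The natural-language prompt and the mask jointly condition the FPV street view.
image = pipe(
    prompt="a sunny residential street with parked cars, photorealistic",
    image=seg_mask,
    num_inference_steps=30,
).images[0]
image.save("generated_fpv_street.png")
```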
Reward Multiverse: A Comprehensive Framework for Diverse Reward Models in Image Generation
2023-12
code
/
website
This framework customizes text-to-image diffusion models using reinforcement learning and self-supervised reward models to align outputs with specific visual attributes such as snow, pixelation, and day-night cycle. It introduces a sliding-scale modification strength for fine-grained control and unlocks new possibilities for image synthesis and editing, with applications in image editing, style transfer, and data augmentation.
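The reward-model idea, sketched below under my own assumptions (AttributeRewardModel and add_synthetic_snow are illustrative names, not the project's code): apply a synthetic effect such as snow at a known strength, train a small CNN to regress that strength, and use the regressor as a differentiable reward when fine-tuning the diffusion model with reinforcement learning.

```python
import torch
import torch.nn as nn

class AttributeRewardModel(nn.Module):
    """Small CNN that regresses the strength of one visual attribute in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.net(images).squeeze(-1)

def add_synthetic_snow(images: torch.Tensor, strength: torch.Tensor) -> torch.Tensor:
    """Self-supervision: overlay sparse white speckles scaled by a known strength."""
    speckles = (torch.rand_like(images) > 0.98).float()
    return torch.clamp(images + strength.view(-1, 1, 1, 1) * speckles, 0.0, 1.0)

# Training step sketch: the model learns to recover the known modification strength,
# yielding a differentiable "how snowy is this image" score usable as an RL reward.
model = AttributeRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.rand(8, 3, 64, 64)   # stand-in for clean training images
strength = torch.rand(8)            # sampled modification strengths (the labels)
loss = nn.functional.mse_loss(model(add_synthetic_snow(images, strength)), strength)
loss.backward()
optimizer.step()
```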