Samsung Electronics | AI & HPC Specialist
15+ years of experience in Artificial Intelligence and HPC research and development as a technical leader and individual contributor. Throughout my career, I have delivered cutting-edge technology solutions across industries while leading world-class teams. My work has resulted in numerous commercialized solutions, with a strong record of patent filings and publications at reputable conferences. My research interests include personalized learning, generative AI, augmented reality (AR), parallel computing, and computer vision systems. I combine technical expertise with leadership to drive innovation from research to real-world applications.
I bridge the gap between academic research and commercialized, on-device technology. My expertise combines deep technical mastery of parallel computing and computer vision with executive leadership of large-scale, distributed research teams.
As Head of Multimedia AI at Samsung Electronics, I lead a team of researchers while remaining deeply involved in hands-on technical work. My role combines technical leadership with strategic direction, including project proposals, patent writing, execution mapping, and MLOps platform development. I continue to contribute as a staff engineer on key projects, ensuring that our research translates into real-world applications.
Image Processing & AI Techniques
Top-tier Conferences (CVPR, ICASSP, etc.)
Leading Distributed Teams of Researchers + Engineers
A journey through technical evolution and leadership
Leading a group of researchers on GenAI & omni-LLM agents.
Focus on commercialization of flagship features.
Core computer vision R&D.
HPC & Heterogeneous Computing.
Early innovation and mobile prototyping.
Core areas of technical expertise
Intelligent restoration and enhancement of visual content for mobile devices.
Shadow Removal: LP-IOANET (ICASSP 2023) - Efficient high resolution document shadow removal
Object Erasure: Shadow/Object eraser (2020) - Magic Eraser-style features with mask-based pyramid networks
Relighting: Portrait relighting through inverse rendering techniques
Neural rendering and spatial computing for immersive experiences.
GSta: Efficient training scheme with siestaed Gaussians for monocular 3D scene reconstruction (In Submission)
CheapNVS: Real-time on-device narrow-baseline novel view synthesis (ICASSP 2025)
Trick-GS: A balanced bag of tricks for efficient Gaussian splatting (ICASSP 2025)
NeRF Exploration: Finding Waldo in 3D space
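At the core of the Gaussian splatting work above is front-to-back alpha compositing of depth-sorted splats. The sketch below illustrates that single step in plain NumPy; the function name, shapes, and early-termination threshold are illustrative, not taken from the papers listed here, which cover the full differentiable pipeline.

```python
import numpy as np

def composite_gaussians(colors, alphas, depths):
    """Front-to-back alpha compositing of depth-sorted splats (illustrative).

    colors: (N, 3) RGB per splat; alphas: (N,) opacity after the 2D
    Gaussian falloff; depths: (N,) camera-space depth for one pixel.
    """
    order = np.argsort(depths)          # sort splats near-to-far
    out = np.zeros(3)
    transmittance = 1.0                 # light not yet absorbed
    for i in order:
        out += transmittance * alphas[i] * colors[i]
        transmittance *= 1.0 - alphas[i]
        if transmittance < 1e-4:        # early termination once opaque
            break
    return out
```

Efficiency tricks of the kind explored in Trick-GS largely target this loop: fewer, better-placed Gaussians mean fewer compositing terms per pixel.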
Temporal consistency and real-time video processing.
MobileVOS: Video object segmentation using knowledge distillation (CVPR 2023)
TrickVOS: Contrastive learning meets knowledge distillation (2023)
Applications: Video bokeh, color point effects
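Knowledge distillation, used in the MobileVOS and TrickVOS work above, trains a small on-device student to match a large teacher. A minimal NumPy sketch of the standard blended loss follows; the function name, temperature, and mixing weight are illustrative defaults, not the papers' exact formulation.

```python
import numpy as np

def softmax(x, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = x / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-label KL to the teacher."""
    n = len(labels)
    p = softmax(student_logits)
    hard = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    # KL(teacher || student) at temperature T, rescaled by T^2
    ps = softmax(student_logits, T)
    pt = softmax(teacher_logits, T)
    soft = (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(axis=1).mean() * T * T
    return alpha * hard + (1 - alpha) * soft
```

The same loss applies per pixel for segmentation; the student only needs the teacher's logits at training time, so nothing extra ships on device.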
MLOps infrastructure and privacy-preserving AI.
Federated Learning: 5+ patents on FL applied to computer vision
MLOps: DevOps, CI/CD, Docker, GitHub Actions implementation
Privacy: Training models without exchanging user data
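The privacy property above comes from aggregating model updates instead of data. A minimal sketch of federated averaging (FedAvg-style weighted aggregation) in NumPy follows; the function name and data layout are illustrative, not from the patents referenced here.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side weighted average of client model parameters (illustrative).

    client_weights: one list of ndarrays (layer parameters) per client.
    client_sizes: number of local training samples per client.
    Only these parameters leave each device; raw user data never does.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=float)
        for weights, n in zip(client_weights, client_sizes):
            acc += (n / total) * weights[layer]   # weight by local data size
        averaged.append(acc)
    return averaged
```

Each round, the server broadcasts the averaged model back to clients for further local training; the CI/CD and MLOps tooling above automates exactly this train-aggregate-redeploy loop.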
The visual gallery of research output
In Submission. Anil Armagan, Albert Saà Garriga, Bruno Manganelli, Kyuwon Kim, M. Kerim Yucel. Samsung R&D Institute UK.
Optimizing Gaussian Splatting for mobile architectures. Presented at ICASSP 2025.
Redefining mobile rendering: Real-time novel view synthesis on edge devices. Presented at ICASSP 2025.
Enterprise-grade document digitization via deep learning. Presented at ICASSP 2023.
Real-time Video Object Segmentation using Knowledge Distillation. CVPR 2023.
Contrastive learning meets knowledge distillation. 2023.
Efficient exploration of NeRF scene spaces. 2024.
Shared structures for encoder-decoder flow. ICASSP 2025.
Efficient depth estimation for mobile devices with sparse supervision. CVPRW 2020.
Realistic bokeh rendering using adaptive mask-based pyramid networks. ICASSP 2023.
20+ Patents secured in Image Understanding, Device Personalization, and Federated Learning.
Explore our latest research innovations and breakthroughs
Discover our optimization techniques for efficient Gaussian Splatting, enabling real-time 3D scene reconstruction on mobile devices.
Learn about our approach to real-time novel view synthesis that runs efficiently on mobile devices, enabling immersive experiences.
Explore our innovative approach to efficiently navigate and explore NeRF scene spaces, making 3D scene understanding more accessible.
Dive into our document shadow removal technique that enables high-quality digitization of documents with complex lighting conditions.
Discover how we combine contrastive learning with knowledge distillation to achieve real-time video object segmentation on mobile devices.
Learn about our collection of optimization techniques that significantly improve the performance of video object segmentation models.