My Research Philosophy and Interests

Tags: research philosophy, generative models, geometric deep learning

Philosophy

I believe the most impactful AI research comes from understanding why algorithms work, not just that they work. My approach is rooted in mathematical theory — probability, differential geometry, and algebraic structure — and directed toward designing algorithms that are fundamentally more scalable and efficient.

The history of AI shows that algorithmic breakthroughs consistently deliver step-function gains that rival or exceed hardware scaling alone. The transition from GANs to diffusion models, from recurrent networks to attention, from dense to sparse computation — each was driven by a deeper mathematical understanding of the problem structure. I aim to be on the producing side of such transitions: extracting the right mathematical insight, and turning it into algorithms that push the efficiency-capability frontier.

I care about efficiency, but my angle is algorithmic, not systems engineering. I’m not writing CUDA kernels; I’m asking why flow matching needs fewer steps than diffusion, under what conditions sparsity preserves performance, and what structural properties make a generative process inherently cheaper. The efficiency I pursue lives in the math, not in the hardware — though I believe the two will increasingly need to speak the same language.
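To make the step-count intuition concrete, here is a toy numerical sketch of my own (not tied to any specific model): a constant-velocity straight path between two points is integrated exactly by a single Euler step, while a curved path with the same endpoints accrues discretization error that only shrinks as steps are added.

```python
import numpy as np

def euler_integrate(v, x0, n_steps):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with n_steps Euler steps."""
    x, t = x0, 0.0
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        x = x + dt * v(x, t)
        t += dt
    return x

x0, x1 = 0.0, 1.0

# Straight (rectified-flow-style) path: constant velocity, exact in ONE step.
straight = lambda x, t: x1 - x0
err_straight = abs(euler_integrate(straight, x0, 1) - x1)

# Curved path x(t) = sin(pi*t/2): same endpoints, but time-varying velocity,
# so coarse Euler discretization overshoots.
curved = lambda x, t: (np.pi / 2) * np.cos(np.pi * t / 2)
err_curved_1 = abs(euler_integrate(curved, x0, 1) - x1)
err_curved_50 = abs(euler_integrate(curved, x0, 50) - x1)
```

The straight path lands on the target exactly with one step; the curved path needs many steps to get close. This is the geometric core of why straighter transport maps allow fewer sampling steps.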

Ultimately, I want my research to matter beyond academia. My long-term goal is to build a startup that changes how the world works, and I see deep technical insight as the most durable competitive advantage in AI.

Research Interests

Generative Modeling with Diffusion and Flow-Based Models

My primary research focus. I study the mathematical foundations of continuous-time generative models — score-based models, denoising diffusion, flow matching, rectified flow, and consistency models — and work on extending these frameworks. I’m particularly interested in bridging continuous and discrete generative processes, and in understanding what makes certain generative procedures fundamentally more sample-efficient than others.
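As a concrete anchor for the flow-matching piece of this, here is a minimal sketch of how the regression target is formed under the linear (rectified-flow-style) interpolation; the function name and toy data are my own illustration, not any particular codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_targets(x0, x1, t):
    """Linear interpolation between noise x0 and data x1, with its velocity.

    The network would be trained to regress v_target given (x_t, t).
    """
    t = t.reshape(-1, 1)                  # broadcast time over feature dim
    x_t = (1.0 - t) * x0 + t * x1         # point on the straight path
    v_target = x1 - x0                    # constant velocity along that path
    return x_t, v_target

# toy batch: 4 samples in 2-D
x0 = rng.standard_normal((4, 2))          # noise
x1 = rng.standard_normal((4, 2)) + 3.0    # "data"
t = rng.uniform(0.0, 1.0, size=4)
x_t, v = flow_matching_targets(x0, x1, t)
```

The appeal is that the training signal is a simple regression on a tractable target, with no simulation of the forward process at training time.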

World Models

I’m drawn to world models as the backbone of AI systems that interact with the physical world. My interest lies in building world models on top of diffusion and flow-matching frameworks, with a focus on video-based world simulation. I believe this direction is more tractable and empirically grounded than latent prediction approaches, and it connects directly to downstream applications in robotics and autonomous systems.

Efficient Algorithms for Deep Learning

A cross-cutting theme in my work. Rather than optimizing implementations, I focus on the algorithmic and mathematical structures that make models inherently more efficient — fewer parameters, fewer steps, lower complexity, better scaling laws. This includes studying sparsity, low-rank structure, and compression from a theoretical angle: not as engineering tricks, but as consequences of the right mathematical formulation.
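One small, self-contained illustration of the "structure, not tricks" point — a generic truncated-SVD sketch, not a specific method of mine: when a weight matrix is close to low rank, factorizing it cuts the parameter count substantially at negligible reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_factorize(W, r):
    """Truncated-SVD factorization W ~= A @ B with inner rank r."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]       # shape (m, r)
    B = Vt[:r, :]              # shape (r, n)
    return A, B

# a weight matrix that is nearly rank-8, plus small noise
m, n, true_r = 64, 64, 8
W = rng.standard_normal((m, true_r)) @ rng.standard_normal((true_r, n))
W += 0.01 * rng.standard_normal((m, n))

A, B = low_rank_factorize(W, r=8)
params_full = m * n                      # 4096 parameters
params_lr = A.size + B.size              # 1024 parameters
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

Whether the compressed model preserves performance depends on whether the low-rank structure is actually present — which is exactly the kind of condition I want to characterize theoretically rather than discover by trial and error.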

Geometric Deep Learning

An area I worked in actively during my time at KAIST VLLab. I have experience with graph neural networks, equivariance, and symmetry-aware modeling. This background informs my broader interest in building generative models that respect geometric structure — for molecular generation, 3D modeling, and physics-informed learning, where symmetries are not optional but essential.
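As a tiny illustration of what "respecting geometric structure" means operationally — a toy mean-aggregation layer of my own, not VLLab code — a message-passing layer commutes with node permutations: relabeling the graph relabels the output features in exactly the same way.

```python
import numpy as np

def gnn_layer(A, X, W):
    """One mean-aggregation message-passing layer: relu(D^-1 A X W)."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)  # avoid divide-by-zero
    return np.maximum((A @ X / deg) @ W, 0.0)

rng = np.random.default_rng(0)
n, d = 5, 3
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.maximum(A, A.T)                   # undirected adjacency
X = rng.standard_normal((n, d))          # node features
W = rng.standard_normal((d, d))          # shared weights

# permutation equivariance: f(P A P^T, P X) == P f(A, X)
P = np.eye(n)[rng.permutation(n)]
lhs = gnn_layer(P @ A @ P.T, P @ X, W)
rhs = P @ gnn_layer(A, X, W)
```

The two sides agree by construction, because the layer has no dependence on node ordering — the same design principle that makes equivariant generative models sensible for molecules and 3D data.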