About Me

I am a Master’s student in Computer Science at Georgia Tech, focusing on multimodal intelligence. I study how AI systems learn from language, vision, and physical sensor signals, and how that learning translates into systems that hold up in real deployment settings.

My focus developed through hands-on experience: optimizing large multimodal generation models on GPUs, designing retrieval and evaluation pipelines, studying identity and reasoning in language models, and modeling high-resolution environmental signals. These efforts are not separate tracks; they are different viewpoints on one core aim: understanding how to build AI systems that generalize across modalities.

I value research that is rigorous, measurable, and usable in practice. I enjoy taking ideas beyond theory and making them hold up against real constraints and real data.

I am currently exploring opportunities in applied multimodal research, machine learning engineering, and roles where strong engineering meets advanced modeling. I am also open to software engineering and data science positions where this foundation can create impact.