A selection of ongoing and past research projects.

Intent-Guided Tabular Explainability (ASU)

Developing an agent-driven framework that combines domain rubrics with multimodal reasoning to generate expert-aligned table explanations and flag structural or semantic issues in tables.


Chart Edit Evaluation Using Agentic Assessment (ASU)

Building a systematic agent-based evaluation pipeline for chart edits, assessing the correctness, semantic fidelity, and visual consistency of edited charts.


Detecting & Mitigating Misleading Charts with MLLM Agents (ASU)

Investigating how multimodal large language models (MLLMs) misinterpret charts, and building an agentic system that detects, explains, and corrects misleading visualizations.


Uncertainty Quantification of MLLM-as-a-Judge (ASU)

Studying the reliability of multimodal LLM evaluators across pointwise, pairwise, and batch evaluation settings, with a focus on uncertainty estimation, calibration, and consistency in automated judgment.


Localized Concept Erasure in Generative Models (ASU)

Developing methods for fine-grained concept erasure in diffusion and flow-matching models, addressing limitations of prior approaches in multi-instance scenarios, and designing evaluation metrics that distinguish true unlearning from concealment at multiple levels of granularity.