Investigating the "Complexity Gap" in modern AI. Currently architecting deterministic financial systems at AWS FinTech and researching Information Theory for reliable machine learning.
My work sits at the intersection of high-stakes industrial engineering and theoretical AI. At AWS, I lead Scenario Modeling for global cost allocation, where the margin for error is zero.
This experience, combined with my time as a Teaching Assistant for Professor Reza Rajati at USC, has driven me to research how we can strip away algorithmic noise to find mathematical ground truth.
My goal is to develop AI systems that are observable, verifiable, and human-aligned — systems that don't just predict, but collaborate with human discovery.
Why can we model millions of AWS billing scenarios with deterministic precision, yet still fail to build recommendation engines that genuinely align with human intent rather than echoing past patterns?
Let's discuss the intersection of deterministic systems, information theory, and human-aligned machine learning. I document my thinking on Medium.