About
I am a Staff ML Engineer and AI Architect who builds production systems that balance safety, measurable lift, and time-to-market. My scope spans ads marketplaces, personalization, fraud/abuse defenses, and GenAI safety rails. I combine architecture design with delivery coaching so teams can ship faster without trading off observability or governance.
Today I lead applied ML initiatives across DSP components and GenAI workflows, defining contracts between data, models, and runtime services. I focus on control planes, rollback playbooks, and evaluation loops that keep systems resilient under traffic spikes and changing incentives.
Previously, I standardized ML architecture patterns and deployment guardrails for AB InBev and other enterprise teams, introducing CI/CD/CT for models, goldens-based regression testing, and capacity-aware rollouts. I stay hands-on with modeling: uplift experiments for ads, GNNs for recommendations, and evaluators for LLM-based copilots.
I hold a Ph.D. in Mechanical Engineering (Mechatronics) from UNICAMP and an EMBA in Strategic Leadership. I speak and write about production ML, causal measurement, and platform thinking to help teams align narrative, constraints, and outcomes.
Proof modules
- Talks: Workshops on production ML safety nets, GenAI evaluation, and causal ads measurement. Recent sessions cover rollback runbooks, fast-fail experimentation, and human-in-the-loop design. See talks.
- OSS and writing: Playbooks and templates for measurement plans, ML design docs, and experiment dashboards. Check the blog and portfolio for live examples.
- Coaching: Mentoring for teams on platform APIs, schema contracts, and runbook design so that ML services stay reliable at scale.
See the long-form background
Read the full journey, including research and early robotics work, on the background page.
Continue the conversation
Need a sounding board for ML, GenAI, or measurement decisions? Reach out, or follow along as new playbooks are published.
