I'm a principal applied AI scientist at OCI / Oracle. I work on multimodal GenAI, test-time reasoning, and alignment when I can. My AGI timeline is now short-to-medium.
Outside of ML research, I sometimes work on creative applications of AI for art generation. While large-scale text-to-image models are impressive, I prefer to use them in limited contexts and create my own algorithms instead. Some of my work here can be seen in the Gallery.
I completed my PhD at Oregon State University in 2021, advised by Dr. Fuxin Li. I was previously at Intel Labs, HRL Laboratories, and Oregon State University.
News
- LieCraft (LLMs deceive you more than you think) was accepted to AAAI-25
- LMM steering for alignment was accepted to ICCV-25, along with other papers at CVPR and elsewhere
- I moved to Oracle/OCI to work on frontier models
- We had three workshop submissions accepted to NeurIPS 2024
- I'm moving (back) to Intel, but it's Labs this time.
- I accepted a research scientist position at HRL Laboratories in Malibu, CA.
- Our paper that proposes a new method for identifying covariate shift in image data was accepted to IEEE VIS 2021
- I successfully defended my dissertation on uncertainty quantification in deep learning with implicit distributions over neural networks
Selected Publications
LieCraft: A Multi-Agent Framework for Evaluating Deceptive Capabilities in Language Models
Neale Ratzlaff, et al.
We present a multi-agent framework for evaluating the deceptive capabilities of language models through strategic gameplay.
AAAI 2025
Steering Large Language Models to Evaluate and Amplify Creativity
Matthew Lyle Olson*, Neale Ratzlaff*, Musashi Hinck, Shao-yen Tseng, Vasudev Lal
NeurIPS 2024 (CreativeGenAI Workshop) Spotlight Talk
(*): Equal Contribution
Contrastive activation-based steering can induce more creative story generation in Llama3-8B. We also construct a creativity estimator whose ratings of text creativity align with both human judges and 70B-scale models.
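A minimal sketch of the contrastive activation steering idea (not the paper's exact implementation): compute a steering vector as the difference of hidden activations between a "creative" and a "plain" prompt, then add it to the residual stream at one layer during generation. The model name, layer index, prompts, and strength below are illustrative assumptions.

```python
# Sketch of contrastive activation steering (illustrative; placeholders throughout).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"  # assumption: any decoder-only LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

LAYER = 14   # which residual-stream layer to steer (hyperparameter)
ALPHA = 4.0  # steering strength (hyperparameter)

def last_token_hidden(prompt: str) -> torch.Tensor:
    """Hidden state of the final prompt token at LAYER."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1]

# Steering vector = activation difference between contrastive prompts.
v = last_token_hidden("Write a wildly imaginative story.") \
    - last_token_hidden("Write a plain, conventional story.")

def hook(module, inputs, output):
    # Add the scaled steering vector to every position's residual stream.
    h = output[0] if isinstance(output, tuple) else output
    h = h + ALPHA * v.to(h.dtype)
    return (h,) + output[1:] if isinstance(output, tuple) else h

handle = model.model.layers[LAYER].register_forward_hook(hook)
ids = tok("Tell me a story about a lighthouse.", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=100)[0]))
handle.remove()  # restore unsteered behavior
```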
Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations
Neale Ratzlaff, Matthew Lyle Olson, Musashi Hinck, Shao-yen Tseng, Vasudev Lal, and Phillip Howard
NeurIPS 2024 (SafeGenAI Workshop)
We perform model steering on large vision-language models like LLaVA 1.5 and find that we can significantly reduce the model's propensity to mention protected attributes when describing images.
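The ablation step can be sketched in a few lines: given a direction in activation space associated with a protected attribute (in practice estimated from contrastive examples; here a random stand-in), project that direction out of the hidden states at inference time. Layer choice and how the direction is found are assumptions, not the paper's exact recipe.

```python
# Sketch of ablating one attribute direction from hidden states (projection ablation).
import torch

def ablate_direction(hidden: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """Remove the component of `hidden` along unit direction `d`."""
    d = d / d.norm()
    return hidden - (hidden @ d).unsqueeze(-1) * d

def make_hook(d: torch.Tensor):
    """Forward hook for a decoder layer: orthogonalize every token against `d`."""
    def hook(module, inputs, output):
        h = output[0] if isinstance(output, tuple) else output
        h = ablate_direction(h, d.to(h.dtype))
        return (h,) + output[1:] if isinstance(output, tuple) else h
    return hook

if __name__ == "__main__":
    d = torch.randn(16)            # stand-in attribute direction
    x = torch.randn(2, 5, 16)      # (batch, tokens, hidden)
    y = ablate_direction(x, d)
    print((y @ (d / d.norm())).abs().max())  # ~0: component removed
```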
Navigating Neural Fields with Vision-Language Models
Neale Ratzlaff, Phillip Howard, and Vasudev Lal
NeurIPS 2024 (CreativeGenAI Workshop) art submission
Art generation via implicit neural fields, curated by prompting VLMs.
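For context, an implicit neural field here means a coordinate-based network: an MLP that maps (x, y) positions to RGB values, rendered by evaluating it over a pixel grid. This toy sketch is illustrative only; the architecture and activations are assumptions, not the submitted artwork's code.

```python
# Toy implicit neural field: an MLP mapping (x, y) coordinates to RGB.
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    def __init__(self, hidden: int = 64, layers: int = 3):
        super().__init__()
        dims = [2] + [hidden] * layers + [3]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.Tanh())
        self.net = nn.Sequential(*blocks)

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(xy))  # RGB in [0, 1]

# Render a 256x256 image by evaluating the field on a coordinate grid.
xs = torch.linspace(-1, 1, 256)
grid = torch.stack(torch.meshgrid(xs, xs, indexing="xy"), dim=-1).reshape(-1, 2)
img = ImplicitField()(grid).reshape(256, 256, 3)
```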
Theses
Uncertainty in Deep Learning with Implicit Neural Networks
PhD dissertation, Computer Science, Oregon State University (2021)