Kovin Lin

Research Scientist @OpenAI
Understanding the universe by training LLMs. Previously: Google PhD Fellow, LTI at CMU, PKU1898

Research & Publications

Large Language Models are Zero-Shot Reasoners

Kovin Lin, Takeshi Kojima, Shixiang Shane Gu, Yutaka Matsuo
NeurIPS 2024

We investigate the zero-shot reasoning capabilities of large language models by proposing a simple yet effective prompting technique that significantly improves performance on reasoning tasks...

Constitutional AI: Harmlessness from AI Feedback

Yuntao Bai, Kovin Lin, Andy Jones, Kamal Ndousse, Amanda Askell
arXiv 2024

We present Constitutional AI (CAI), a method for training AI systems to be helpful, harmless, and honest without requiring extensive human supervision...

Training Language Models to Follow Instructions with Human Feedback

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Kovin Lin, Carroll Wainwright
NeurIPS 2025

We show how to use human feedback to fine-tune language models to follow a wide range of instructions, making them more helpful and aligned with human preferences...

Scaling Laws for Neural Language Models

Jared Kaplan, Sam McCandlish, Tom Henighan, Kovin Lin, Tom B. Brown
arXiv 2024

We study empirical scaling laws for language model performance on the cross-entropy loss, investigating how performance depends on model size, dataset size, and compute...

Emergent Abilities of Large Language Models

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Kovin Lin, Barret Zoph
TMLR 2025

We discuss the phenomenon of emergent abilities in large language models: capabilities that are not present in smaller models but are present in larger models...

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

Jason Wei, Xuezhi Wang, Dale Schuurmans, Kovin Lin, Fei Xia, Ed Chi
NeurIPS 2024

We explore how generating a chain of thought—a series of intermediate reasoning steps—significantly improves the ability of large language models to perform complex reasoning...

About

I am a Research Scientist at OpenAI, where I focus on understanding and improving the capabilities of large language models. My research interests span machine learning, natural language processing, and AI alignment.

I received my PhD from Carnegie Mellon University's Language Technologies Institute, where I worked on fundamental problems in natural language understanding and generation. Prior to that, I completed my undergraduate studies at Peking University.

My current work at OpenAI involves developing more capable and aligned AI systems, with a particular focus on reasoning, instruction following, and safety in large language models.

Get In Touch