Kovin Lin
Research & Publications
Large Language Models are Zero-Shot Reasoners
We investigate the zero-shot reasoning capabilities of large language models by proposing a simple yet effective prompting technique that significantly improves performance on reasoning tasks...
Constitutional AI: Harmlessness from AI Feedback
We present Constitutional AI (CAI), a method for training AI systems to be helpful, harmless, and honest without requiring extensive human supervision...
Training Language Models to Follow Instructions with Human Feedback
We show how to use human feedback to fine-tune language models to follow a wide range of instructions, making them more helpful and aligned with human preferences...
Scaling Laws for Neural Language Models
We study empirical scaling laws for language model performance on the cross-entropy loss, investigating how performance depends on model size, dataset size, and compute...
Emergent Abilities of Large Language Models
We discuss the phenomenon of emergent abilities in large language models: capabilities that are not present in smaller models but emerge in larger models...
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
We explore how generating a chain of thought—a series of intermediate reasoning steps—significantly improves the ability of large language models to perform complex reasoning...
About
I am a Research Scientist at OpenAI, where I focus on understanding and improving the capabilities of large language models. My research interests span machine learning, natural language processing, and AI alignment.
I received my PhD from Carnegie Mellon University's Language Technologies Institute, where I worked on fundamental problems in natural language understanding and generation. Prior to that, I completed my undergraduate studies at Peking University.
My current work at OpenAI involves developing more capable and aligned AI systems, with a particular focus on reasoning, instruction following, and safety considerations in large language models.