Hi, I'm Pete Pittawat Taveekitworachai
I study how language models behave in practice, from prompting and context engineering to post-training and evaluation.
My recent work spans open reasoning models, medical reasoning systems, evaluation tooling, and the engineering patterns that make model behavior easier to inspect and improve.
Current focus
Reasoning models, post-training, evaluation, and inference-time strategies for systems that need to be useful beyond demos.
Research systems
Selected projects across reasoning models, evaluation tooling, healthcare, and developer-facing prototypes.
Working notes
Essays and technical notes on prompting, agentic workflows, model behavior, and software practice.
Talks and workshops
Conference talks, invited sessions, and practical walkthroughs on reasoning models and AI engineering.
A researcher-builder working across models and systems
I am Pittawat Taveekitworachai, a research scientist focused on making language models more reliable, steerable, and useful in real settings.
My work sits between research and implementation: building reasoning models, designing evaluations, studying post-training behavior, and turning findings into tools, prototypes, and public explanations.
Current research themes
The threads connecting my publications, open models, evaluation work, and collaborations.
- Behavior shaping: Studying how model behavior changes under context engineering, prompting, supervised fine-tuning, and reinforcement fine-tuning.
- Reasoning models: Building and analyzing open reasoning models, including Thai and medical settings where reliability matters.
- Evaluation and reliability: Designing evaluations for structured outputs, failure analysis, robustness, and practical deployment risk.
- Applied systems: Turning research into usable tools, experiments, and prototypes for real teams and domain experts.
Recent ways this work shows up
A mix of papers, public communication, open models, and tooling informed by current projects and collaborations.
- Publications and writing: Publishing on prompting, reasoning, and evaluation, while writing essays that connect research questions to practical engineering decisions.
- Talks and workshops: Sharing lessons from reasoning models, context engineering, and applied AI systems through conferences, meetups, and research events.
- Open models and tooling: Contributing to open reasoning models, benchmarks, and evaluation tools that other teams can study, test, and extend.
What I publish, ship, and share
Writing, research, and talks that document how model behavior changes under real constraints.
- Blog posts (120+): Essays, technical notes, and field observations on language models, agent workflows, and software practice. Read the blog →
- Publications (39+): Papers on prompting, reasoning, evaluation, game-related LLM work, and applied AI systems. View publications →
- Talks (28+): Conference talks, invited lectures, and workshops on reasoning models, evaluation, and AI engineering. See talks →
Recent notes from the research and engineering loop
New essays on model behavior, evaluation, agentic workflows, and the practical details that show up while building.
- AI · English edition · Field notes
The Common Language of Knowledge: When AI Becomes the Lingua Franca of Understanding
AI is transcending the boundaries of linguistics, forging a "common language" that unites disparate disciplines as we enter a new era where questions matter more than answers.
- AI · English edition · Field notes
The Silicon Mind: When AI Forces Us to Question Who We Are
AI promises a more efficient future for companies, but its endgame is often substitution rather than assistance. Once our work, voice, and behavior can be copied, we are pushed into a deeper question: what, exactly, makes a person a person?