Research × Engineering × Design

Hi, I'm Pete Pittawat Taveekitworachai

I study how language models behave in practice, from prompting and context engineering to post-training and evaluation.

My recent work spans open reasoning models, medical reasoning systems, evaluation tooling, and the engineering patterns that make model behavior easier to inspect and improve.

Current focus

Reasoning models, post-training, evaluation, and inference-time strategies for systems that need to be useful beyond demos.

Research systems

Selected projects across reasoning models, evaluation tooling, healthcare, and developer-facing prototypes.

Explore projects

Working notes

Essays and technical notes on prompting, agentic workflows, model behavior, and software practice.

Read the blog

Talks and workshops

Conference talks, invited sessions, and practical walkthroughs on reasoning models and AI engineering.

Browse talks
Research in practice

A researcher-builder working across models and systems

I am Pittawat Taveekitworachai, a research scientist focused on making language models more reliable, steerable, and useful in real settings.

My work sits between research and implementation: building reasoning models, designing evaluations, studying post-training behavior, and turning findings into tools, prototypes, and public explanations.

Behavior shaping

Studying how model behavior changes under context engineering, prompting, supervised fine-tuning, and reinforcement fine-tuning.

Reasoning models

Building and analyzing open reasoning models, including for Thai-language and medical settings where reliability matters.

Evaluation and reliability

Designing evaluations for structured outputs, failure analysis, robustness, and practical deployment risk.

Applied systems

Turning research into usable tools, experiments, and prototypes for real teams and domain experts.

Learn more about my background

Current research themes

Four threads connect my publications, open models, evaluation work, and collaborations: behavior shaping, reasoning models, evaluation and reliability, and applied systems.

Recent ways this work shows up

A mix of papers, public communication, open models, and tooling informed by current projects and collaborations.

  • Publications and writing

    Publishing on prompting, reasoning, and evaluation, while writing essays that connect research questions to practical engineering decisions.

  • Talks and workshops

    Sharing lessons from reasoning models, context engineering, and applied AI systems through conferences, meetups, and research events.

  • Open models and tooling

    Contributing to open reasoning models, benchmarks, and evaluation tools that other teams can study, test, and extend.

Selected output

What I publish, ship, and share

Writing, research, and talks that document how model behavior changes under real constraints.

Latest writing

Recent notes from the research and engineering loop

New essays on model behavior, evaluation, agentic workflows, and the practical details that show up while building.

Browse all articles