Handshake AI Research Intern, Winter 2025
Handshake
Location
San Francisco, CA
Employment Type
Full time
Location Type
On-site
Department
Handshake AI
Compensation
$12K – $15K per month
For cash compensation, we set standard ranges for all U.S.-based roles based on function, level, and geographic location, benchmarked against similar-stage growth companies. To comply with local legislation and to provide greater transparency to candidates, we share salary ranges on all job postings regardless of desired hiring location. Final offer amounts are determined by multiple factors, including geographic location and candidate experience and expertise, and may vary from the range listed above.
Why Handshake AI?
Handshake AI builds the data engines that power the next generation of large language models. Our research team works at the intersection of cutting-edge model post-training, rigorous evaluation, and data efficiency. Join us for a focused winter 2025 internship where your work can ship directly into our production stack and become a publishable research contribution. Internships start between December 1 and January 15.
Projects You Could Tackle
LLM Post-Training: Novel RLHF / GRPO pipelines, instruction-following refinements, reasoning-trace supervision.
LLM Evaluation: New multilingual, long-horizon, or domain-specific benchmarks; automatic vs. human preference studies; robustness diagnostics.
Data Efficiency: Active-learning loops, data value estimation, synthetic data generation, and low-resource fine-tuning strategies.
Each intern owns a scoped research project under the mentorship of a senior scientist, with the explicit goal of an arXiv-ready manuscript or top-tier conference submission by spring 2026.
Minimum Qualifications
Current PhD student in CS, ML, NLP, or a related field.
Publication track record at top venues (NeurIPS, ICML, ACL, EMNLP, ICLR, etc.).
Hands-on experience training and experimenting with LLMs (e.g., PyTorch, JAX, DeepSpeed, or other distributed training stacks).
Strong empirical rigor and a passion for open-ended AI questions.
Nice-to-Haves
Prior work on RLHF, evaluation tooling, or data selection methods.
Contributions to open-source LLM frameworks.
Public speaking or teaching experience (we often host internal reading groups).