V2EX  ›  酷工作

US Silicon Valley AI unicorn hiring an LLM Pre-Training Researcher (based in Singapore)

  sophiayao · 13 hours 25 minutes ago · 80 views
    A top-five US AI unicorn has set up an R&D center in Singapore and is looking for an LLM Pre-Training Researcher (based in Singapore). Compensation is benchmarked against Silicon Valley levels, and you would join a top-tier technical team.

    About the Role

    Join a leading AI research company at the forefront of large language model development. As an LLM Pre-Training Researcher, you will shape the future of foundation models by working across the entire model lifecycle — from large-scale pre-training to post-training alignment. This role offers the rare opportunity to operate at the cutting edge of scaling laws, reasoning, and alignment, directly influencing how next-generation language models learn and behave in real-world applications.

    Key Responsibilities

    • Architect and scale large autoregressive language models, designing improved pre-training objectives to enhance reasoning and knowledge retention
    • Develop mid-training strategies including continued pre-training, domain adaptation, curriculum learning, and synthetic data integration
    • Advance post-training techniques such as instruction tuning, preference optimization, reinforcement learning, and inference-time compute scaling
    • Curate and construct massive, high-quality text corpora for pre-training while designing synthetic data pipelines for reasoning and structured problem solving
    • Train frontier-scale language models across large GPU clusters, optimizing distributed training and memory efficiency
    • Build infrastructure for large-scale experimentation, ablations, and reproducibility to support scalable deployment
    • Define evaluation frameworks for language intelligence including multi-step reasoning, coding, knowledge grounding, and agentic behavior
    • Track capability development across training phases and close the loop between evaluation signals and model improvements
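    For context on the first responsibility above: the baseline pre-training objective for autoregressive language models is next-token prediction via cross-entropy. Below is a minimal PyTorch sketch of that objective; the shapes, names, and the toy random inputs are purely illustrative and are not taken from the posting.

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Standard autoregressive pre-training loss.

    logits: (batch, seq_len, vocab) model outputs
    tokens: (batch, seq_len) input token ids
    Position t's logits are trained to predict token t+1.
    """
    # Drop the last position's logits and the first token, then flatten.
    pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
    target = tokens[:, 1:].reshape(-1)
    return F.cross_entropy(pred, target)

# Toy example: random "model outputs" over a vocabulary of 8 tokens.
torch.manual_seed(0)
logits = torch.randn(2, 5, 8)            # batch=2, seq_len=5, vocab=8
tokens = torch.randint(0, 8, (2, 5))     # random token ids
loss = next_token_loss(logits, tokens)   # scalar loss, roughly ln(8) for random logits
```

    Improved pre-training objectives of the kind the role mentions are typically variations on this loss (e.g. reweighting tokens, changing the prediction target, or mixing in auxiliary objectives) rather than replacements for it.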

    Requirements

    • Strong foundation in machine learning and large language models with deep understanding of autoregressive transformers
    • Hands-on experience with PyTorch and distributed training at scale in both research and production environments
    • Experience with pre-training large models and post-training techniques such as instruction tuning, RLHF, or preference optimization
    • Experience training frontier-scale language models from scratch (a plus)
    • Research contributions in scaling laws, reasoning, alignment, or inference-time compute (a plus)
    • Expertise in long-context modeling or structured reasoning systems (a plus)

    Interested candidates, please contact me on WeChat: sophia_liu611
    or by email: [email protected]