Yu Bowen, former post-training lead for Alibaba's Qwen (Qianwen) model, has reportedly joined ByteDance's Seed team, having announced his departure on the same day as Lin Junyang.


(Source: Jiemian News)

On March 12, an insider revealed that Yu Bowen, former post-training lead for the Qwen (Tongyi Qianwen) large model at Alibaba's Tongyi Laboratory, has joined ByteDance as head of post-training for the Seed team's visual model and multimodal interaction group.

Jiemian News reporters reached out to ByteDance for confirmation; as of press time, no response had been received.

The Seed team is a core department of ByteDance's AI research and development. It is currently led by Dr. Wu Yonghui, who previously served as Vice President of Research at Google DeepMind, where he participated in the development of the Gemini large model. Wu officially joined ByteDance in February 2025 to take over the Seed team, reporting directly to ByteDance CEO Liang Rubo.

The Seed department's research spans large language models (LLMs), speech, vision, world models, AI infrastructure, and next-generation AI interaction. Its Doubao large model has been deployed in more than 50 scenarios. The team focuses on breakthroughs in multimodal technology and has iterated and released core models including the Seed 2.0 series of foundation models, the Seedance 2.0 video generation model, and the Seed3D 1.0 3D generation model.

Public information shows that Yu Bowen graduated from the Institute of Information Engineering at the Chinese Academy of Sciences, specializing in natural language processing and information extraction. He has published multiple papers at international conferences such as ACL and EMNLP, and his academic record earned him the Chinese Academy of Sciences President's Award.

In 2022, Yu Bowen joined DAMO Academy through Alibaba's "Alibaba Star" campus recruitment program. He quickly became a core member of the Qwen team, leading development of the Qwen series of chat models and gaining deep experience in large-model post-training and multimodal alignment.

Yu Bowen’s departure is closely related to recent organizational adjustments within Alibaba’s Tongyi Qwen team.

It is reported that in March, Alibaba's Tongyi Laboratory initiated an organizational restructuring, splitting the originally vertically integrated Qwen team into several parallel horizontal modules, including pre-training, post-training, text, and multimodal divisions. The Qwen team's performance assessment and scope of authority were also adjusted.

On March 4, Lin Junyang, the technical lead of the Tongyi Qianwen large model, announced on social media that he had stepped down from his role.

Following Lin Junyang’s departure, several core members also announced their resignations, including Yu Bowen, the post-training lead for Qwen, and Kaixin Li, a key contributor to Qwen 3.5/VL/Coder. Previously, Hui Bin, head of Qwen Code, had left and joined Meta in January.

On March 5, Alibaba CEO Wu Yongming confirmed in an internal email that he had approved Lin Junyang's resignation, and that Tongyi Laboratory would be led by Zhou Jingren to ensure a smooth transition in model development and business operations. The company reaffirmed its commitment to open-source models, pledged continued and increased investment in AI R&D and talent recruitment, and established a "Foundation Model Support Group" composed of Wu Yongming, Zhou Jingren, and Fan Yu to coordinate group resources in support of foundation model development.

Amid personnel changes in Alibaba’s Tongyi Qianwen team, Google DeepMind executives have also publicly reached out to recruit talent. Omar Sanseviero, a senior leader at Google DeepMind, posted on social media inviting the Qianwen team: “If you’re looking for a new place to build excellent models and contribute to the open model ecosystem, contact us! We have many exciting plans on our roadmap, and there’s still a lot of work to do in the future.”
