Data last updated: 2026-04-28 22:08 (UTC+8)
As of 2026-04-28 22:08, Ralph Lauren Corp (RL) is priced at $0, with a total market cap of $22.46B, a P/E ratio of 18.17, and a dividend yield of 0.98%. Today, the stock price fluctuated between $0 and $0. The current price is 0.00% above the day's low and 0.00% below the day's high, with a trading volume of 313.64K. Over the past 52 weeks, RL has traded between $0 and $0, and the current price is 0.00% away from the 52-week high.
Ralph Lauren Corp (RL) Latest News
Perplexity Discloses Web Search Agent Post-Training Method; Qwen3.5-Based Model Outperforms GPT-5.4 on Accuracy and Cost
Gate News, April 23 — Perplexity's research team published a technical article detailing its post-training methodology for web search agents. The approach uses two open-source Qwen3.5 models (Qwen3.5-122B-A10B and Qwen3.5-397B-A17B) and employs a two-stage pipeline: supervised fine-tuning (SFT) to establish instruction-following and language consistency, followed by online reinforcement learning (RL) to optimize search accuracy and tool-use efficiency. The RL phase leverages the GRPO algorithm with two data sources: a proprietary multi-hop verifiable question-answer dataset constructed from internal seed queries requiring 2–4 hops of reasoning with multi-solver verification, and rubric-based general conversation data that converts deployment requirements into objectively checkable atomic conditions to prevent SFT behavior degradation. Reward design employs gated aggregation—preference scores only contribute when baseline correctness is achieved (question-answer match or all rubric criteria met), preventing high preference signals from masking factual errors. Efficiency penalties use within-group anchoring, applying smooth penalties to tool calls and generation length exceeding the baseline of correct answers in the same group. Evaluation shows Qwen3.5-397B-SFT-RL achieves best-in-class performance across search benchmarks. On FRAMES, it reaches 57.3% accuracy with a single tool call, outperforming GPT-5.4 by 5.7 percentage points and Claude Sonnet 4.6 by 4.7 percentage points. Under moderate budget (four tool calls), it achieves 73.9% accuracy at $0.02 per query, compared to GPT-5.4's 67.8% accuracy at $0.085 per query and Sonnet 4.6's 62.4% accuracy at $0.153 per query. Cost figures are based on each provider's public API pricing and exclude caching optimizations.
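The gated aggregation and within-group efficiency penalty described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not Perplexity's implementation: the Rollout fields, the weights (pref_weight, tool_cost, token_cost), and the choice of the per-group minimum cost among correct rollouts as the anchor are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class Rollout:
    correct: bool        # question-answer match, or all rubric criteria met
    preference: float    # preference score in [0, 1]
    tool_calls: int      # number of search tool invocations
    tokens: int          # generation length

def gated_reward(group: list[Rollout],
                 pref_weight: float = 0.3,
                 tool_cost: float = 0.05,
                 token_cost: float = 1e-4) -> list[float]:
    """Per-rollout rewards for one GRPO group (illustrative shape)."""
    # Within-group anchor: cheapest cost among the *correct* rollouts only.
    correct = [r for r in group if r.correct]
    if correct:
        base_calls = min(r.tool_calls for r in correct)
        base_tokens = min(r.tokens for r in correct)
    rewards = []
    for r in group:
        if not r.correct:
            # Gate: preference never contributes without baseline correctness,
            # so a fluent-but-wrong answer cannot outscore a correct one.
            rewards.append(0.0)
            continue
        reward = 1.0 + pref_weight * r.preference
        # Smooth penalty only on cost *exceeding* the in-group baseline.
        reward -= tool_cost * max(0, r.tool_calls - base_calls)
        reward -= token_cost * max(0, r.tokens - base_tokens)
        rewards.append(reward)
    return rewards
```

With this shape, an incorrect rollout with a high preference score still receives zero reward, while a correct but verbose one is only penalized for the cost above the cheapest correct rollout in its group.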
2026-03-21 00:19 Cursor Officially Confirms Kimi K2.5 as Its Base Model; Moonshot AI: An Authorized Commercial Partnership
Gate News, March 21. According to 1M AI News monitoring, Moonshot AI's official account @Kimi_Moonshot posted congratulations on Cursor's release of Composer 2 and clarified that Cursor accesses Kimi K2.5 through the RL and inference platform hosted by Fireworks AI, under an authorized commercial partnership. Cursor co-founder Aman Sanger and VP of Developer Education Lee Robinson subsequently confirmed the base model publicly and disclosed technical details. Sanger said the team ran perplexity evaluations on multiple base models and Kimi K2.5 "proved to be the strongest"; the team then applied continued pretraining plus high-compute reinforcement learning at 4x scale, deployed through Fireworks AI's inference and RL sampler. Robinson added that roughly one quarter of the compute in the final model comes from the base model, with the remaining three quarters from Cursor's own training. Both acknowledged that omitting the Kimi base from the launch blog post "was a mistake" and said the next model release will credit its base model upfront. Earlier, Elon Musk had replied "Yeah, it's Kimi 2.5" under a related thread, further amplifying the discussion.
2026-03-20 09:47 Cursor Composer 2 Alleged to Use Kimi K2.5; Moonshot AI Accuses Cursor of Violating Its License
Gate News, March 20. According to 1M AI News monitoring, developer @fynnso discovered while debugging Cursor API requests that Composer 2's actual model ID is kimi-k2p5-rl-0317-s515-fast, which reads literally as "Kimi K2.5 + RL". Moonshot AI's pretraining lead Du Yulun promptly tweeted that, after testing Composer 2's tokenizer, the team found it "exactly identical to our Kimi tokenizer" and that "it is almost certain this is our model with further post-training", directly tagging Cursor co-founder Michael Truell and asking "why don't you respect our license, and why haven't you paid anything". When Cursor released Composer 2 on March 19, it attributed the performance gains to "continued pretraining on a base model for the first time, combined with reinforcement learning", without mentioning Kimi K2.5. Kimi K2.5 is released under a modified MIT license that explicitly requires commercial products with more than 100 million monthly active users or more than $20 million in monthly revenue to display "Kimi K2.5" prominently in the user interface. Given Cursor's $29.3 billion valuation and its paid user base, the monthly-revenue threshold is almost certainly met. As of press time, Cursor had not publicly responded.
2026-02-12 14:21 Gradient Releases Distributed Reinforcement Learning Framework Echo-2, Plans RLaaS Platform Logits
Foresight News. Distributed AI lab Gradient has released Echo-2, a distributed reinforcement learning framework aimed at removing training-efficiency barriers in AI research. By decoupling the Learner and the Actor at the architecture level, the framework aims to lower the post-training cost of large models; according to official figures, it reduces the post-training cost of a 30B model from $4,500 to $425. Echo-2 uses compute-storage separation for asynchronous training (async RL), supporting the offloading of sampling compute to unreliable GPU instances and to heterogeneous GPUs via Parallax. Combined with bounded staleness, fault-tolerant instance scheduling, and the in-house Lattica communication protocol, the framework improves training efficiency while preserving model accuracy. In addition, Gradient plans to launch Logits, an RLaaS (reinforcement learning as a service) platform, which is now open for sign-ups from students and researchers.
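The Learner/Actor decoupling with bounded staleness that Echo-2 describes can be illustrated with a toy sketch. This is not Gradient's API: the queue-based hand-off, the MAX_STALENESS value, and the version-tagging scheme are illustrative assumptions about how async RL typically bounds off-policy drift.

```python
import queue
import threading

MAX_STALENESS = 2   # bounded staleness: rollouts sampled from a policy more
                    # than 2 versions behind the learner are discarded

def actor(rollouts: queue.Queue, policy_version: list, stop: threading.Event):
    """Decoupled actor: samples rollouts tagged with the policy version used.

    Runs on its own (possibly unreliable) worker; a bounded queue would apply
    natural backpressure when the learner falls behind.
    """
    while not stop.is_set():
        v = policy_version[0]
        rollouts.put((v, f"trajectory@v{v}"))  # stand-in for real sampling

def learner(rollouts: queue.Queue, policy_version: list, steps: int):
    """Learner consumes rollouts asynchronously, enforcing bounded staleness."""
    used, dropped = 0, 0
    while used < steps:
        version, traj = rollouts.get()
        if policy_version[0] - version > MAX_STALENESS:
            dropped += 1          # too stale: would bias the gradient
            continue
        used += 1                 # stand-in for a gradient update
        policy_version[0] += 1    # publish the new policy version
    return used, dropped
```

Because the actor never blocks on the learner's update step, sampling can run on cheap or preemptible hardware; the staleness bound is what keeps the resulting updates close enough to on-policy.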
2026-01-02 09:15 Mechanism Capital Partner: Embodied-AI Data Volume to Grow 100x in 2026
PANews, January 2. Mechanism Capital partner Andrew Kang wrote on X that in 2025 the robotics field solved long-standing model-architecture and training challenges and made major progress in data-collection techniques, data-quality understanding, and data recipes, giving AI companies the confidence to finally invest in large-scale data collection. Companies such as Figure, Dyna, and PI have used innovations in reinforcement learning (RL) to reach success rates above 99% across a range of real-world scenarios. Advances in memory have also broken through the "memory wall": NVIDIA's ReMEmber uses memory-based navigation, Titans and MIRAS enable test-time memory, and better vision-language models (VLMs) give vision-language-action (VLA) models stronger spatial understanding, alongside data-labeling and processing pipelines that sharply increase throughput. In 2025 the market got its first taste of what data scale brings: zero-shot capability mapping, visual force sensitivity, and general physical reasoning. In 2026, embodied-AI data volume will grow 100x.