ZHIPU

ZhiPu 02513.HK Price


Learn more about ZhiPu 02513.HK (ZHIPU)



Latest News

2026-04-24 05:00

ZHIPU (ZhiPu 02513.HK) Falls 8%

ZHIPU (ZhiPu 02513.HK) dropped 8%. (source: https://www.gate.com/tradfi)

2026-04-24 03:54

ZHIPU (ZhiPu 02513.HK) Falls 6%

ZHIPU (ZhiPu 02513.HK) dropped 6%. (source: https://www.gate.com/tradfi)

2026-04-23 02:02

Zhipu Stock Hits Record High, Jumping More Than 5% at the Open, Up 800% Since Its IPO

Gate News, April 23 — Zhipu (02513.HK) jumped more than 5% at the open, setting a new all-time high. The stock has gained nearly 800% since listing.

2026-04-22 17:00

OpenClaw, Hermes, and SillyTavern Confirmed as Supported Under the GLM Coding Plan

Gate News, April 22 — Zhipu AI product manager Zixuan Li announced on X that OpenClaw, Hermes, and SillyTavern have been officially marked as supported projects under the GLM Coding Plan. Other general-purpose tools will be evaluated case by case. Li also advised users not to share account credentials or to use the subscription as API access. Users who encounter error code 1313 while following the guide are advised to contact Zhipu's support team.

2026-04-22 07:13

Zhipu AI to End Unlimited Weekly-Quota GLM Coding Plan Subscriptions on April 30

Gate News, April 22 — Zhipu AI announced that it will end auto-renewal of the unlimited weekly-quota GLM Coding Plan subscription starting at 10:00 Beijing time on April 30, 2026. The termination applies to users currently subscribed to the legacy plan with auto-renewal enabled. According to the company, the decision was driven by continued growth in usage, which made the original unlimited weekly-quota model hard to sustain over the long term. Affected users will receive two months of the equivalent new plan's benefits as compensation. Current subscription cycles and pricing remain unchanged, and the two-month compensation will be issued automatically on April 30. Once the compensation period ends, users who wish to continue must manually subscribe to the latest plan available at that time.

Hot Threads on ZhiPu 02513.HK (ZHIPU)

DeepFlowTech

17 hours ago
TechFlow (Deep潮 TechFlow) news, April 25 — According to Caijing (《财经》), the large-model AI company DeepSeek (深度求索) is in joint-investment talks with Tencent and Alibaba, with the two expected to invest a combined US$1.8 billion at a round valuation of roughly US$20 billion. No complete transaction plan has been finalized, and DeepSeek, Tencent, and Alibaba have not officially confirmed the deal. Sources say the main reason DeepSeek opened itself to this fundraising is severe recent talent attrition: several core researchers have left in succession for ByteDance, Tencent, Xiaomi, and the autonomous-driving company Yuanrong Qixing. Meanwhile, rivals Zhipu Technology and MiniMax have listed on the Hong Kong Stock Exchange, and 月之暗面 (likely Moonshot AI) has raised three rounds in the first three months of this year at a valuation more than 4x higher than at the end of last year. For Tencent and Alibaba, investing in DeepSeek diversifies their AI bets while strengthening cooperation with DeepSeek on models and products.
quiet_lurker

04-24 22:02
While following the AI industry, I've noticed a strange pattern. Just eight years ago, a Chinese telecom company was all but killed by an embargo. Yet today, other Chinese AI companies are growing fast even under heavier pressure. What actually changed? Go back to 2018. ZTE was one of the largest telecommunications equipment manufacturers in the world: 80,000 employees, billions in annual revenue. Then in a single day, an order from the US Bureau of Industry and Security shut the whole company down. No American components, no Google license, no operating system. Three weeks later, ZTE said it could no longer operate its business. It paid a $1.4 billion penalty, but the real problem was the ecosystem: the company was completely dependent on a global supply chain controlled by the US. Today, even with similar restrictions still in place, Chinese AI companies have not suffered the same fate. Why? Because the problem was never just hardware. The real bottleneck is CUDA. I stress that because most people assume the chip ban is about the chips themselves. It isn't. CUDA — NVIDIA's parallel computing platform dating from 2006 — is the real barrier. Every major AI framework worldwide, from Google's TensorFlow to Meta's PyTorch, is deeply dependent on CUDA. When an AI researcher starts out, CUDA is the first tool they learn. Every line of code strengthens NVIDIA's ecosystem. By 2025 the CUDA ecosystem had 4.5 million developers, 3,000+ GPU-accelerated applications, and 40,000 companies worldwide using it: roughly 90% of global AI developers. It is a flywheel that, once started, is almost impossible to stop. More developers bring more tools; more tools bring in more developers. The result? NVIDIA sets the rules, and everyone follows.
So when the US government rolled out three waves of NVIDIA chip export restrictions over 2022-2024 — first the A100 and H100, then the A800 and H800, then the H20 — it did not trigger the same panic that hit ZTE. Why? Because Chinese companies pivoted to algorithm optimization instead of fighting the hardware head-on. DeepSeek is the best example. Its V3 model has 671 billion parameters, but each inference activates only 37 billion — just 5.5% of the total. Training it took only 2,048 NVIDIA H800 GPUs for 58 days, at a total cost of $5.576 million. Compare that with an estimated $78 million for GPT-4: an order-of-magnitude difference. The pricing says even more. DeepSeek API input costs $0.028 to $0.28 per million tokens, output $0.42. GPT-4o input is $5, output $15. Claude Opus is pricier still: $15 input, $75 output. DeepSeek is 25 to 75 times cheaper. That price gap triggered a massive shift in the developer market. In February 2026, on OpenRouter — the biggest AI-model API aggregation platform — weekly usage of Chinese AI models jumped 127% in three weeks and overtook US models for the first time. A year earlier, Chinese models held less than 2% of the market; now usage is up 421% and approaching 6%. But the deeper shift isn't just price. Since mid-2025, the primary AI application has shifted from chat to agents, and in agent scenarios token usage is 10 to 100 times higher than in simple chat. When token consumption explodes exponentially, price becomes the deciding factor. The extreme cost efficiency of Chinese models landed perfectly in that window. But algorithm optimization doesn't solve training on its own: if you can't train on the latest data and iterate, your model quickly becomes obsolete. Training requires massive computing power.
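The ratios quoted above can be sanity-checked with simple arithmetic. A minimal sketch — all inputs are the post's own claimed figures, not independently verified data:

```python
# Back-of-the-envelope check of the figures quoted in the post.

# DeepSeek-V3 mixture-of-experts sparsity: active vs. total parameters
total_params = 671e9
active_params = 37e9
print(f"active share: {active_params / total_params:.1%}")  # ~5.5%

# Quoted training costs (USD): 2,048 H800s for 58 days vs. a GPT-4 estimate
deepseek_v3_cost = 5.576e6
gpt4_cost_est = 78e6
print(f"training cost ratio: {gpt4_cost_est / deepseek_v3_cost:.1f}x")  # ~14x

# Quoted API output prices, USD per million tokens
deepseek_out, gpt4o_out, claude_out = 0.42, 15, 75
print(f"vs GPT-4o output: {gpt4o_out / deepseek_out:.0f}x")
print(f"vs Claude Opus output: {claude_out / deepseek_out:.0f}x")
```

Note that the exact multiple depends on which tier is compared: output-price ratios come out higher than the post's "25 to 75 times" summary, which presumably blends input and output pricing.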
So where are Chinese companies getting the computing infrastructure? There is a small city in Jiangsu called Xinghua, known only for stainless steel and health foods, where in 2025 a 148-meter server production line was built — 180 days from agreement signing to operations. At its core are two fully domestic chips: the Loongson 3C6000 processor and the TaiChu Yuanqi T100 AI accelerator card. Loongson has its own design from instruction set to microarchitecture; TaiChu Yuanqi comes from the National Supercomputing Center in Wuxi and Tsinghua University, with a heterogeneous many-core architecture. At full capacity, the line turns out a server every 5 minutes. Total investment: 1.1 billion yuan, with an expected 100,000 units annually. What matters is that clusters of thousands of domestic chips have started handling real large-model training. In January 2026, Zhipu AI released GLM-Image together with Huawei: the first SOTA image-generation model trained entirely on domestic chips. In February, China Telecom completed full-process training of its hundred-billion-parameter Xingchen model on a domestic compute pool of thousands of GPUs in Shanghai Lingang. The significance is singular: domestic chips have transitioned from inference-only to training-capable. That is a qualitative change. Inference only needs pre-trained models and places relatively low demands on chips. Training requires massive data handling, complex gradient computation, and parameter updates — far higher demands on compute, interconnect bandwidth, and the software ecosystem. The driving force here is Huawei's Ascend series. By the end of 2025, the Ascend ecosystem had reached 4 million developers and 3,000+ partners; 43 major models had completed pre-training on Ascend, plus 200+ open-source models adapted to it. On March 2, 2026, at MWC, Huawei introduced its new-generation SuperPoD compute infrastructure for overseas markets. The FP16 compute of the Ascend 910B now matches the NVIDIA A100.
Gaps remain, but it has gone from completely unusable to usable. Ecosystem building should not wait for perfect chips: deploy widely once they are good enough, and let real business needs force chip and software improvements. ByteDance, Tencent, and Baidu's deployment targets for domestic servers are expected to double in 2026 versus last year. According to the Ministry of Industry and Information Technology, China's intelligent computing scale has reached 1,590 EFLOPS; 2026 is the year of widespread domestic compute deployment. But there is another, equally important side of the story: energy. Virginia, which handles a massive share of the world's data center traffic, has paused new data center permits. Georgia has paused until 2027; Illinois and Michigan have issued restrictions. According to the International Energy Agency, US data center electricity consumption reached 183 terawatt-hours in 2024, roughly 4% of total national consumption. By 2030 it is expected to more than double to 426 TWh, possibly exceeding 12%. Arm's CEO has said that by 2030, AI data centers alone could consume 20-25% of US electricity. The US grid is at its limits. The PJM grid covering 13 eastern states has a 6 GW capacity shortage. By 2033 the entire US faces a 175 GW electricity capacity shortage, equivalent to the energy use of 130 million households. Electricity prices in regions with concentrated data centers are up 267% versus five years ago. The boundary of computing power is energy. And on the energy side, the gap between China and the US is even larger than in chips — but in the opposite direction. China generates 10.4 trillion kWh annually versus 4.2 trillion for the US: 2.5 times more. More importantly, household electricity use in China is only 15% of the total, versus 36% in the US, meaning China has far more industrial electricity capacity available for a compute buildout.
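The electricity arithmetic in the paragraph above can be checked directly. A minimal sketch — all inputs are the post's claimed figures, not independently verified:

```python
# Post's claimed figures, in TWh ("10.4 trillion kWh" = 10,400 TWh)
cn_gen_twh = 10_400   # China annual generation
us_gen_twh = 4_200    # US annual generation
print(f"generation ratio: {cn_gen_twh / us_gen_twh:.1f}x")  # ~2.5x

# Household share of consumption (post's figures)
cn_household, us_household = 0.15, 0.36
cn_nonres = cn_gen_twh * (1 - cn_household)  # capacity outside households
us_nonres = us_gen_twh * (1 - us_household)
print(f"non-household capacity ratio: {cn_nonres / us_nonres:.1f}x")  # ~3.3x

# US data-center load (IEA figure quoted in the post), using
# generation as a rough proxy for consumption
print(f"2024 US data-center share: {183 / us_gen_twh:.1%}")  # ~4%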
Then there is price alone: regions hosting US AI companies pay $0.12 to $0.15 per kilowatt-hour, while industrial rates in western China are around $0.03 — a quarter to a fifth of US prices. Taken together, China's electricity advantage over the US runs as high as 7 times. While America worries about power, Chinese AI has been quietly growing abroad. But this time it is not a product or a factory being exported — it is tokens. Tokens, the smallest unit of information an AI model processes, have become a new digital commodity: produced in Chinese computing factories and shipped worldwide through undersea cables. DeepSeek's user distribution makes this clear: 30.7% from China, 13.6% from India, 6.9% from Indonesia, 4.3% from the US, 3.2% from France. It supports 37 languages and is widely valued in emerging markets such as Brazil. Worldwide, 26,000 companies hold accounts and 3,200 institutions use the enterprise version. In 2025, 58% of new AI startups integrated DeepSeek into their tech stack. In China, DeepSeek captured 89% market share; in other mature markets, between 40% and 60%. This picture recalls an industry-control war that played out four decades ago. Tokyo, 1986: under intense US pressure, the Japanese government signed the US-Japan Semiconductor Agreement. Its main features: open Japan's semiconductor market, with US chip market share required to exceed 20%; a ban on exporting semiconductors below cost; and a 100% penalty on $3 billion of exported chips. The US also rejected Fujitsu's acquisition of Fairchild. That year, Japan's semiconductor industry was at its peak. By 1988, Japan controlled 51% of the global semiconductor market, the US 36.8%. Of the top 10 global semiconductor companies, six were Japanese: NEC second, Toshiba third, Hitachi fifth, Fujitsu seventh, Mitsubishi eighth, Panasonic ninth. But after the agreement was signed, everything changed. The US used Section 301 investigations to suppress Japanese semiconductor companies while backing Samsung and SK Hynix of Korea to undercut Japan on price. Japan's DRAM share dropped from 80% to 10%.
By 2017, Japan's IC market share was just 7%. The once-strong companies were split up, bought out, or driven out by endless losses. Japan's semiconductor tragedy lay in being content with a global division of labor led by an external power — excelling as a producer while never thinking to build an independent ecosystem. When the wave receded, it discovered it had nothing but manufacturing itself. China's AI industry today faces a similar but fundamentally different campaign. It too faces major external pressure: three continuously tightening waves of chip restrictions, and a CUDA ecosystem barrier that remains high. The difference is that this time the industry chose the harder path — from extreme algorithm optimization, through the domestic-chip journey from inference to training, to gathering 4 million developers in the Ascend ecosystem and spreading tokens across the global market. Every step builds the independent industrial ecosystem Japan never had. On February 27, 2026, three domestic AI chip companies reported results. Cambricon: revenue up 453%, first full-year profitability. Moore Threads: revenue up 243%, but a net loss of 1 billion. Muxi: revenue up 121%, a net loss of almost 8 billion. Half fire, half water. The fire is the market's hunger: the 95% of the market Huang conceded is being back-filled, one revenue number at a time, by domestic companies. Whatever their performance or ecosystem maturity, the market needs a second option where NVIDIA is unavailable. This is the unusual structural opportunity opened by geopolitics. Building an ecosystem is expensive. Every loss is real money spent chasing the CUDA ecosystem: learning costs, software subsidies, engineers traveling to customer sites to fix compilation issues. These losses are not the result of poor operations — they are the necessary war tax of building an independent ecosystem. These three earnings reports describe the real state of the compute war more truthfully than any industry report. This is not a celebration-filled success story but a brutal positional battle, with soldiers advancing while they bleed. But the form of the war has truly changed.
Eight years ago, the question under discussion was "can we survive." Now it is "what cost must we pay to survive." The cost itself is progress.