"Better than waiting half a year": medical professionals support the launch of ChatGPT Health - ForkLog
Experts have supported the release of ChatGPT Health for health consultations despite the risks of hallucinations in neural networks. TechCrunch reports this.
Sina Bari, a practicing surgeon and head of AI at iMerit, described how one of his patients consulted ChatGPT about a drug and brought the chatbot's statistics to an appointment.
Dr. Bari checked the sources and found that the statistics came from an article on the drug's effects in a niche subgroup of people with tuberculosis. The data did not apply to his patient's clinical case.
Despite these inaccuracies, the doctor viewed the launch of ChatGPT Health positively. In his view, the service lets users discuss health issues in a more private setting.
Users can receive more personalized recommendations from ChatGPT Health by uploading medical records and syncing the app with Apple Health and MyFitnessPal. Such deep access to personal information has raised community concerns.
Over 230 million people discuss their health with ChatGPT every week. Many have stopped "Googling" their symptoms, turning to the chatbot as their source of information instead.
The Hallucination Problem
The main issue with chatbots remains "hallucinations," which is especially critical in healthcare. A Vectara study found that OpenAI's GPT-5 "hallucinates" more often than competing models from Google and Anthropic.
However, Stanford University medical professor Nigam Shah considers these concerns secondary. According to him, the real problem with the system is the difficulty in accessing doctors, not the risk of receiving incorrect advice from ChatGPT.
Administrative tasks can consume about half of a doctor's time, significantly reducing the number of appointments. Automating these processes would let specialists devote more attention to patients.
Dr. Shah leads a team at Stanford developing ChatEHR, software designed to help doctors work more efficiently with electronic medical records.
In January, Anthropic announced the release of Claude for Healthcare, a set of tools for healthcare providers and patients.