🚨 ChatGPT lies to you 27% of the time and you have no idea.


a lawyer trusted AI-generated legal citations that were completely fabricated. filed them in court. the judge found out. career over.
but here's what most people don't know..
Johns Hopkins researchers tested 1,200 prompts and found that how you prompt changes everything.
baseline prompting: 27.3% hallucination rate
generic instructions like "be accurate": 24.1%.. barely helps
now here's the fix:
just add "according to" before your question.
instead of: "what are the health benefits of magnesium?"
try: "according to peer-reviewed research, what are the health benefits of magnesium?"
hallucination rate drops to 7.2%.. that's a 20 percentage point reduction from one small change.
the source-attribution method works the same way.. also 7.2%.
the trick is simple.. when you force the AI to attribute its claims to something specific, it can't make stuff up as easily. it either finds the source or tells you it doesn't know.
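if you call the model through an API instead of the chat window, you can bake the prefix into every request. here's a minimal sketch, assuming the official openai python SDK and an OPENAI_API_KEY in your environment.. the grounded_ask helper and the model name are illustrative choices, not anything from the study:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grounded_ask(question: str, source: str = "peer-reviewed research") -> str:
    # prepend the attribution cue so the model grounds its answer in a source
    prompt = f"according to {source}, {question}"
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name; use whichever chat model you have
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(grounded_ask("what are the health benefits of magnesium?"))
```

swap the source string for whatever you want the answer anchored to.. official docs, a specific paper, a dataset.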
two words. 20 points fewer lies.
most people will keep prompting the lazy way. now you won't.