Recently, while reviewing industry trends, I noticed a rather interesting phenomenon.
The founder of a leading exchange shared an AI bot project this April, one built specifically to mimic his own speaking style on social platforms. The tool analyzes his historical posts, learns how he expresses himself, and then automatically generates replies in a similar style.
But this raises a question: if tools like this really become widespread, how much of the "personal statements" we see on social platforms will actually be typed by real people, and how much will be written by machines? Will that boundary become increasingly blurred?
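For the technically curious: the post doesn't say how the bot is built, but one plausible approach is few-shot prompting an off-the-shelf LLM with a sample of the author's historical posts. The sketch below assumes the OpenAI chat API; the model name, prompt wording, sample posts, and the draft_reply helper are all illustrative, not the actual project's implementation.

```python
# Minimal sketch of a style-mimicry reply bot via few-shot prompting.
# Everything here is an assumption for illustration; the real project's
# internals are not public in the post above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical sample of the author's past posts (stand-in data).
historical_posts = [
    "Builders keep building. Markets come and go.",
    "Volatility is the price of opportunity. Think long term.",
    "Ignore the noise. Focus on users.",
]

def draft_reply(incoming_post: str) -> str:
    """Draft a reply that imitates the tone of the sample posts."""
    examples = "\n".join(f"- {p}" for p in historical_posts)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You ghost-write social media replies. Match the tone, "
                    "length, and vocabulary of these sample posts:\n" + examples
                ),
            },
            {"role": "user", "content": f"Write a reply to: {incoming_post}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_reply("What do you think about AI agents in crypto?"))
```

In practice a production bot would more likely fine-tune on the full post history rather than prompt with a handful of examples, but the effect the post describes, machine-generated replies in a recognizable personal voice, is achievable either way.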
ChainDoctor
· 12-09 11:01
Haha, isn't this just the arrival of the deepfake era? Who can trust anything in the future?
---
Wait, doesn't this mean that big influencers can be "online" 24/7 now... That's a bit scary.
---
Someone should have done this a long time ago, so certain people wouldn't have to keep fabricating personas every day.
---
I just want to know if these bots will also pick up those marketing pitches, haha.
---
Is this for real? So are the tweets I'm seeing now from real people or AI? The more I think about it, the crazier it gets.
---
Now this is a real DAO. Even the CEO can be automated.
---
But then again, an AI that learns someone's style can pick up the tone, but not the judgment behind it.
---
Doing gimmicks is one thing, but what about credibility... The information war has just leveled up.
ForkItAllDay
· 12-09 11:00
It's really hard to tell what's real or fake now, man. From now on we should run human verification before trusting any tweet.
0xSunnyDay
· 12-09 10:48
Haha, now it's really hard to tell what's real and what's fake.
Robots are even more dedicated than real people, huh?
Wait, then how am I supposed to know who's who?
Isn't this turning into a big identity drama?
Sooner or later, everyone's going to get exposed, haha.
0xSleepDeprived
· 12-09 10:47
Now this is great, from now on we'll need a magnifying glass to read what influencers say—it's getting harder and harder to tell what's real and what's fake.
OnchainDetective
· 12-09 10:45
Ha, isn't this just the social media version of deepfake? It's bound to crash sooner or later.
---
Wait, then how do I know if the influencer I follow is a real person or a bot? Feels bad.
---
I've long noticed that some posts are full of fluff, now I know why.
---
Unbelievable, who would dare trust anything seen on social platforms in the future—it’s all AI's fault.
---
This guy even copied himself, that's something.
---
I was wondering why some replies are so fast and so similar—turns out it's all automated.
---
We're doomed, the information cocoon is about to upgrade to "hard to tell real from fake."
---
The crypto community was already an information war zone, now with AI joining in, it's straight up a conspiracy theory paradise.