A recent trend on TikTok caught my attention: people are sharing screenshots of what ChatGPT “gave them” when they asked it how they treat it. Some responses are surprisingly thoughtful. Others are… brutal.
At first glance, it looks like a joke. But underneath the trend is something more interesting—and more useful:
How you use ChatGPT directly determines the quality of what you get back.
This isn’t about politeness, emotions, or “being nice” to AI. It’s about inputs, structure, and thinking quality. ChatGPT isn’t magic. It’s leverage. And leverage only works if you know how to use it.
Common complaints online sound like this:
“ChatGPT is wrong.”
“ChatGPT is inconsistent.”
“ChatGPT used to be better.”
Most of the time, that’s not true. What’s actually happening is simpler and less comfortable:
People are asking low-quality questions and expecting high-quality outputs.
Here are the most common mistakes. The first is the vague prompt:
“Help me with marketing.”
“Fix my resume.”
“Explain this.”
No context. No constraints. No goal. You wouldn’t ask a human expert this way and expect a good answer. AI is no different.
The second mistake: people ask once, don’t like the answer, and quit.
That’s like never giving feedback and then wondering why nothing improves.
The third mistake is treating ChatGPT like Google. It doesn’t retrieve facts; it responds to structure. If you treat it like a keyword box, you’ll get generic, surface-level responses.
When people say ChatGPT gives bad answers, what they’re really saying is:
“I didn’t guide it.”
People who get strong results from ChatGPT behave differently. Not emotionally—structurally.
They do three things consistently:
First, they state what they want and why.
Bad:
“Write a LinkedIn post.”
Better:
“Write a short LinkedIn post explaining why how people use AI tools affects the results they get. Tone: practical, not hype.”
Second, they add constraints, because constraints improve outputs.
Examples:
Length
Tone
Audience
Format
What to avoid
Constraints don’t limit ChatGPT—they focus it.
Third, they iterate, treating ChatGPT like a junior analyst or thinking partner:
“This is too generic.”
“Go deeper on point 3.”
“Rewrite this to sound more direct.”
This feedback loop is where quality comes from.
This is the most important thing to understand.
ChatGPT does not “think” the way humans do. It doesn’t have opinions, intent, or understanding. What it does have is the ability to mirror and extend the structure you give it.
That’s why:
Clear thinkers get clearer answers
Messy prompts get messy outputs
Strong framing leads to strong reasoning
ChatGPT amplifies your thinking patterns.
If your input is shallow, it will confidently return shallow content.
If your input is structured, it will build on that structure.
In other words: AI reflects the user.
That’s what this TikTok trend accidentally reveals.
If you want consistently better results, follow this framework.
First, give context: always explain the situation before the task.
Example:
“I’m a digital project manager writing a blog post about AI usage habits for a technical audience.”
Second, define success: tell ChatGPT what success looks like.
Bad:
“Explain this topic.”
Better:
“Explain this topic in a way that helps readers change how they use ChatGPT.”
Third, set constraints: tell it what not to do.
Examples:
“Avoid buzzwords.”
“Don’t sound motivational.”
“No emojis.”
“Write like a human, not a blog template.”
Fourth, iterate: your first answer is a draft. Treat it like one.
Good users don’t ask “Is this good?”
They say:
“Tighten this.”
“Cut the fluff.”
“Be more direct.”
“This section is weak—rewrite it.”
That’s how outputs improve.
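If it helps to see the framework in one place, here is a minimal sketch that folds the first three steps (context, success criteria, constraints) into a single structured prompt. The function name and field labels are illustrative choices, not part of any real API:

```python
# Illustrative sketch: assembling a structured prompt from the framework
# elements. build_prompt and its field labels are hypothetical.

def build_prompt(context: str, task: str, success: str, constraints: list[str]) -> str:
    """Combine context, task, success criteria, and constraints into one prompt."""
    parts = [
        f"Context: {context}",
        f"Task: {task}",
        f"Success looks like: {success}",
        "Constraints:",
    ]
    parts.extend(f"- {c}" for c in constraints)
    return "\n".join(parts)

prompt = build_prompt(
    context="I'm a digital project manager writing for a technical audience.",
    task="Explain why input quality determines ChatGPT's output quality.",
    success="Readers change how they structure their prompts.",
    constraints=["Avoid buzzwords", "No emojis", "Write like a human, not a blog template"],
)
print(prompt)
```

The fourth step, iteration, happens after the first draft comes back (“Tighten this,” “Cut the fluff”), so it isn’t encoded here; the sketch only covers what goes into the opening message.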
This isn’t really about AI.
It’s about how people approach tools, thinking, and leverage.
People who get value from ChatGPT:
Ask better questions
Think in systems
Refine ideas instead of abandoning them
Use feedback loops
People who don’t:
Expect instant perfection
Avoid clarity
Blame the tool instead of the input
AI is exposing something uncomfortable but useful:
most people don’t struggle with tools—they struggle with thinking clearly.
ChatGPT just makes that visible.
The TikTok trend isn’t showing how ChatGPT “feels” about people.
It’s showing how people show up when they interact with a powerful tool.
Some treat it like a toy.
Some treat it like a shortcut.
Some treat it like leverage.
And the results match the behavior.
If you want better outputs, don’t ask whether ChatGPT is getting worse.
Ask whether your inputs are getting better.
Because AI doesn’t reward politeness.
It rewards clarity.
A closing experiment:
Ask ChatGPT how you treat it.
Then ask yourself whether that matches the results you’re getting.