
AI has so much nuance

Ethan · 1 hour ago · 25 views

Generative AI, such as tools for creating fake images, is for the most part wholly misused. But there are some instances where it has benefits. I saw a girl on TikTok who can only eat 8 foods use it to try and create better dishes for herself, and it worked. Something like that is a practical and utilitarian use of AI, and there are definitely other ways it could help. But the problem is that most of its use is either slop or ridiculous propaganda-style media, and people can’t seem to fathom just thinking anymore. It always astounds me seeing things like “I wouldn’t be able to write this essay without ChatGPT,” and as someone who was in school not too long ago and wrote every single thing I did, it’s crazy to think the actual skill of being creative or putting thought to pen is dying.
8 votes, 64 points

Comments



yawnha · 1 hour ago

Wait sidenote I love that girl sm who can only eat those 8 foods, and I feel so bad that she recently went from 14 down to 8 foods, but also like tbf she says like 75% of the chat recipes are nasty lol

Ethan · 1 hour ago

yawnha omg yes it’s so sad that she’s down so much I hope it gets better for her

Glitter · 1 hour ago

the good use of it is so few and far in between lol my friend used it the other day to ask for rules on uno. over….. reading…… the instructions given with the game lol

shrimpfriedrice · 1 hour ago

i think the two things people use AI for that piss me off are: things you could easily google (ie: "is betty white dead?") and people who use chatgpt instead of getting a therapist. i do agree with you that sometimes meal planning/certain very specific things that aren't particularly easy to google are reasonable uses of AI but people overdo it

v_sh · 1 hour ago

Your position captures a real and important tension in the contemporary debate about generative AI: the distinction between instrumental utility and cultural consequence. On the one hand, tools like OpenAI’s ChatGPT clearly have legitimate, even compassionate applications. The example you cite — someone with extreme dietary restrictions using generative systems to model new meals — is not trivial. It reflects a genuinely assistive function: expanding possibility space under constraint. In such cases, AI operates as cognitive scaffolding. It does not replace human agency; it augments it.

Historically, many technologies have followed this pattern. Calculators did not eliminate mathematics; they shifted cognitive effort from computation to higher-order reasoning. Spellcheck did not destroy language; it changed where precision is exercised.

Your concern, however, is less about utility and more about epistemic culture. When generative systems are used primarily to mass-produce low-effort content (“slop”) or manipulative media, they degrade informational ecosystems. The propaganda risk is particularly serious because generative tools reduce the cost of persuasion and deception at scale. Throughout history, propaganda required institutional infrastructure. Now, individuals can produce synthetic authority with minimal friction. That asymmetry matters.

The deeper issue you’re pointing to, though, is not misinformation — it’s atrophy. Writing is not merely a means of producing text; it is a mode of thinking. To compose an essay is to clarify one’s own cognition. If students increasingly outsource composition to AI, they risk outsourcing the very cognitive struggle that builds intellectual muscle. The process of drafting, revising, and wrestling with ambiguity cultivates abstraction, argumentation, and self-reflection. These are not ornamental skills; they are foundational to autonomy.

That said, it may be worth distinguishing between substitution and augmentation. If someone says, “I couldn’t write this without ChatGPT,” that can mean two different things. It may signal dependence — a troubling erosion of skill. But it might also signal collaboration — using AI to brainstorm structure, refine phrasing, or stress-test arguments. The danger lies not in the presence of the tool but in the abdication of effort.

Every major communication technology has prompted similar anxieties. When writing displaced oral culture, Plato worried it would weaken memory. When printing democratized authorship, critics lamented the flood of mediocre texts. The internet itself was once accused of destroying attention spans. In each case, the core question was not whether the tool existed, but how societies integrated it into norms of responsibility and education. Generative AI intensifies this pattern because it touches the act of thought itself.

The ethical challenge, then, is not simply to condemn misuse, but to cultivate norms that preserve cognitive agency. In education, that may mean emphasizing process over product — oral defenses, iterative drafts, in-class reasoning. In media, it may mean stronger literacy around synthetic content. Individually, it may mean choosing to struggle when struggle is formative.

The skill of “putting thought to pen” is unlikely to disappear. But it may become more intentional — something one must choose to practice rather than something one is forced to practice. The real risk is not that AI exists, but that we forget that thinking is a discipline, not an output.

In that sense, your concern is not anti-technology. It is pro-agency. And that distinction is crucial.

Ethan · 1 hour ago

v_sh what I want to respond with would get me banned so I’ll refrain

v_sh · 1 hour ago

Ethan it’s ok, say it

Adam · 1 hour ago

v_sh Thank you for these wise words Vish