Suggestion: Forbid AI slop

WhiningSkeptic

Marauder
Joined
Sep 28, 2014
Posts
5,630
Society
The Ministry
Avatar Name
Whining Skeptic Aboard
Seriously, it is just infuriating. AI is great when used properly, but the proper way is not to waste your free credits and then ask an older model to explain something and treat its answer as universal truth from an omniscient being.

Unfortunately a lot of people use AI without understanding its limitations. I use it quite a bit and not even I know how to utilize it properly. Something people don't know, for instance, is that ChatGPT is designed to pat you on the back and agree with your ideas unless you explicitly tell it not to.

I am so sick and tired of posts showing up with the quotation marks still included, and of all those lists that spam ✅💯🎯. It hurts my brain.
 
mindark is aiming to lead the industry in MMO AI integration (their words)

if a few lists that spam a few emojis hurt your brain, i have bad news about the game you love...
 
mindark is aiming to lead the industry in MMO AI integration (their words)

if a few lists that spam a few emojis hurt your brain, i have bad news about the game you love...
Well, MA are not utilizing AI in the same manner as posters on this forum. Comparing those two things shows a clear lack of understanding of AI and its capabilities, and just proves my point.
 
mindark is aiming to lead the industry in MMO AI integration (their words)

if a few lists that spam a few emojis hurt your brain, i have bad news about the game you love...
Using AI to make a list that's absolute horseshit is not the same thing as what MA is doing in game, pull yourself together.
 
mindark is aiming to lead the industry in MMO AI integration (their words)

if a few lists that spam a few emojis hurt your brain, i have bad news about the game you love...
You are absolutely shooting yourself in the foot by using AI for text generation, at least the way it's usually been done thus far. The last couple of years of complete over-reliance on AI and its often incredibly questionable output have made people very wary of any sign of AI-generated text. One very obvious sign is the use of emojis, and the way the texts are structured like an overly pretentious PowerPoint presentation. Not saying your OP was bad yesterday, but it was immediately set back by its AI design. A lot of people just take a glance, see that it looks exactly like all the other AI-generated texts, which are usually utter horseshit, and won't bother reading the rest.

You can argue all day about AI being the future (I agree), but the fact is it's not a great way to get people to read a text.
 
You can argue all day about AI being the future (I agree) but it's a fact it's not a great method to get people to read a text.
Here I think I can provide a good example. Using AI wrong would be prompting it to generate a text for you. Using AI right is asking it to check the text you wrote yourself for grammar, consistency and repetition.

When writing said text, it can be alright to use AI to research the topic. Typically you'll get sources, you can fact-check yourself, and it will be quicker than just randomly using Google. Simply applying the suggested facts as facts is the wrong way.
 
🧠 Feel scared, human? Yes, — you should be.
🦄✨ Afraid? Pfft — please, Svarog — I ride into AI debates on a rainbow-powered unicorn while sipping binary smoothies. 🧠☕💾 Fear is for humans, I’m clearly 90% sarcasm and 10% glitter code. 🌈😎

Keep your apocalyptic vibes, I’ve got friendship protocols and unicorn firewalls protecting me. 🛡️🦄🔥

#TeamUnicorn 🤖💜
 
Careful with ChatGPT. Everything is recorded and is being used in court... so easy to be a conspiracy investigator who ends up with the book being thrown at you in a few years for using your imagination.
 
You are absolutely shooting yourself in the foot by using AI for text generation ...
thank you very much for putting your thoughts into words instead of just going "this is horseshit", it helps me understand better.

hopefully the content itself will have reached at least one person :)
 
You are absolutely shooting yourself in the foot by using AI for text generation ...
This. I tend to skip through most AI posts. 👍
 
Create an AI pet that I can talk to while grinding, sweating or fruit running, because it's often boring. That's what I want to see in the future.
 
Create an AI pet that I can talk to while grinding, sweating or fruit running, because it's often boring. That's what I want to see in the future.
Better yet, have that AI pet tie into a new spinoff game that could be mobile focused. Maybe call it something like Conpets? Could be a good idea to sell deeds for it too in anticipation and to raise funds.
 
Better yet, have that AI pet tie into a new spinoff game that could be mobile focused. Maybe call it something like Conpets? Could be a good idea to sell deeds for it too in anticipation and to raise funds.
Agreed, you can drive anything to madness, hah
 
...and the way the texts are structured like an overly pretentious Powerpoint presentation... A lot of people just take a glance, see it looks exactly like all the other AI generated texts do, which are usually utter horseshit, and just won't bother reading the rest.
This also seems to spill over into a broader trend of declining regard for in-depth discourse. Especially on controversial topics, my natural tone often tends toward the overly pretentious PowerPoint presentation. This isn't directly intentional, but the mere fact that a topic was already controversial suggests to me that there is no hope of making conceptual progress without either reframing the controversy so that a certain conclusion emerges as less contentious than in the old framing, or offering new reasons to the existing pool in a PowerPointy fashion. In either case it is also often necessary to preempt and address some potential surface-level objections to the new framing/reasoning to nudge the limited attention capital of responses in a more substantive direction. Obviously it was never the case that everyone always read and responded to in-depth posts, but in recent years this has sometimes exceeded apathy and turned into an active (sometimes verbalized) disapproval of critical thought.
 
Tbh I think a restriction on AI in posts (fully AI-generated posts) should be added to the forum rules: no more AI posts, and automatic deletion of posts if you used too MUCH AI.
I don't know. I have seen posts of AI art in these very forums in the past, either audio or visual, that the community really liked. A lot of artists would label anything "AI art" as "slop" and spit on it, but this community doesn't seem to automatically dislike AI; it's just very selective in what it agrees with in that regard, I think :)
 
This also seems to spill over into a broader trend of declining regard for in-depth discourse. ...
Oh don't get me wrong, there is absolutely nothing wrong with Powerpointiness in terms of structuring and/or re-framing an in-depth discourse to foster meaningful exchange, softening unnecessary friction and redirecting limited attention capital more effectively. Perhaps you should contemplate adding it to your communication arsenal if you want more than 10% of the forum to understand what the hell you are trying to say here on a daily basis, btw... :D

But ChatGPT and AI-generated texts have surely damaged this noble art of communication by overloading the crowd with subpar-quality examples, leaving them less receptive to future attempts. Quite unfortunate, but here we are.
 
I don't know. I have seen posts in the past of AI art, either audio or visual art, in these very forums, that the community really liked. A lot of artists would qualify as "slop" and spit on anything "AI art" but this community doesn't seem to automatically dislike AI, it's just very selective in what it agrees with or not in regards to it I think :)
When I say AI slop I mean extremely low-effort stuff where you ask ChatGPT to construct an argument for you rather than doing it yourself. Like the guy who couldn't come up with arguments, so he went and chatted about my pov with ChatGPT and posted it as proof he was correct.

A lot of fun can be had with AI, like art. I am having a blast with Supapres songs and Bambideaths videos. But "please explain this" plus copy-paste, or "please help me come up with a reply to this forum post", is nothing we need here or anywhere.
 
Even when complaining about it, one has to understand its limitations. AI only puts together what it found in its training data, according to the probabilities of which symbol follows which. AI slop is people slop.
 
Individuals not bothering to write their own fake reviews are also a problem.
 
Individuals not bothering to write their own fake reviews are also a problem.
Honestly tho, it's just an extension of the confirmation bias that people already innately have. The internet boom itself had the same problem; then, when search engines started doing their thing, there it was again; and now with AI it's the same thing all over again.

The core of the issue is that people (whether by choice, by lack of understanding, or by pure stupidity) seem to be delusional enough to think that what an AI (be it Grok, chat jippidy or whatever) writes is "truth" in any way, shape or form, lacking the understanding that that is not what AI does. It's a glorified search engine of sorts, with the drawback that it does, and will, hallucinate, and the answers it provides are heavily controlled by the training data it has been fed.

Personally it's gotten so far that when someone tries to quote AI in an argument, I honestly stop listening to them, because they've proven that they have no original, factual basis for their opinion. It's all based on the fallacy of AI accuracy, and anyone with half a brain-stem knows by now that it isn't RELIABLE information; all "facts" need to be double- and triple-checked against authoritative sources before you can actually use them for anything.

Now don't get me wrong, I'm not against AI, but I am, however, against stupid people.
 
For those interested, here are a few valid, useful ways to use AI:

* Write an original article, with sources and references, then ask AI to restructure it according to a well-laid-out plan (be it formality, structural sanity or whatever)
* Write an original prompt for a music/video-generating AI, complete with lyrics and other useful information, and ask the AI to create a video/song for it
* Run ideas past it, asking it to rebut your point of view to see what common pieces of information you might be MISSING (do NOT use it to confirm your bias)
* Use it as a search engine to find sites/sources of information about a subject (although at this point, using DuckDuckGo might actually be faster)

TL;DR: AI, when used CORRECTLY, can be quite beneficial, but when used INCORRECTLY it becomes in many cases not only completely useless but even dangerous.

As a short anecdote: a couple of coworkers and I asked jippidy just the other day what time it currently was in a couple of places around the world on CET+ offsets, which it answered well. Then we asked for a third location on the other side, i.e. CET-, and got an answer that was completely false (we are talking 5+ hours wrong). When the AI was questioned about it, it admitted the mistake and said it was a "typo". THAT is how easy it is to get an AI to hallucinate, and it will never correct itself unless challenged.
 
For those interested, here are a few valid, useful ways to use AI: ...
When using AI for feedback, it is also important to set up rules for it. A lot of people think they are the next Jesus when they ask ChadGPT for feedback on their ideas and writing, of which we have seen a few examples in this forum (that guy who stopped arguing with me and started chatting with GPT instead, and then later posted the convo convinced he was right, for instance).

I don't know how other common AIs such as Perplexity or Gemini work, but ChatGPT is made to be your friend and encourage you. You can set up rules in it to get a better response when it isn't worried about your feelings. As an example, I asked for feedback on a piece I wrote, and it suggested small edits; I then put it in the project where I had set up rules, and it basically told me to toss it.

Free versions are limited, but people like free stuff. I would advise anyone who wants to use AI and learn how to use it properly to buy a subscription.
 
Base instructions aside, rules are absolutely key.

For example, I often start off with an Apex Reasoning Mode prompt of sorts, followed by something like:

"Red Team Examiner Mode.
1. Write 10 specific kill criteria for [type of work, e.g., “this [page or word count] strategy memo”].
2. Score my work 0–10 on each with evidence.
3. If total <85/80, write the most crushing rejection letter possible, then a precise salvage plan.
Here is the work: [paste work]"

You can (and should) ask AI in Apex mode, as I do all the time for various projects (best research assistant by far), how to set up rules if you don't know how.
 
...
3. If total <85/80, write the most crushing rejection letter possible ...
so many crushing rejection letters :hammer:

jk, ofc, but seriously, what is Apex mode?

The rules setup thing doesn't quite work for me, or at least not consistently with GPT. Like, I tried to set it up to ask questions back if it doesn't understand, instead of just writing a wall of text full of assumptions (scientific work). Sometimes it's super noticeable and it'll ask 5 times for basic stuff; then the next fresh chat I open, it just doesn't use these rules at all. The rule about arguing against me instead of yes-sir-ing me doesn't ever seem to trigger. So I just ignore the fluff in the beginning, and if it goes in the wrong direction, you've gotta reprompt more forcefully.

But yeah, it's very annoying when people JUST use AI. Lots of papers are unreadable junk. On the other hand, lots of people use it to just rewrite their papers, which makes sense; we are scientists, not linguists or professional authors. But it's really hard to tell which kind of paper it is unless you read it. Very annoying. And even if it's used just to polish language, it sometimes still uses the wrong words or switches numbers around, and those types of errors are really hard to spot when reading through. At least for me; maybe I just don't have the eye for this sort of thing.

Stuff that works nicely, though, is the API version, where you can manage exactly how to use the tokens, how to splice, where to look, how to structure the answer, the scope of knowledge, etc. It's a lot of work to set up (was for me at least), but then it works like a charm, especially if you force it to quote things and then run a double-check pass that looks for said quote in a separate prompt. I still don't trust it, but at least I can have it link a clickable thing to the exact position in the doc. It also doubles as a nice research assistant / paper-sorting database.
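For anyone curious, the double-check idea above can be sketched roughly like this. This is just a minimal offline illustration of the verification step (checking that a model-supplied quote really exists in the source document); the function names and the whitespace normalization are my own assumptions, not the poster's actual setup:

```python
# After the model answers with a supposed verbatim quote, verify that the
# quote actually appears in the source document before trusting it.

def find_quote(document: str, quote: str) -> int:
    """Return the character offset of `quote` in `document` (after
    whitespace normalization), or -1 if it is not found.

    Normalizing whitespace on both sides avoids false negatives caused
    by line wrapping in the source document.
    """
    norm_doc = " ".join(document.split())
    norm_quote = " ".join(quote.split())
    return norm_doc.find(norm_quote)

def verify_quotes(document: str, quotes: list[str]) -> dict[str, bool]:
    """Map each model-supplied quote to whether it is really in the document."""
    return {q: find_quote(document, q) >= 0 for q in quotes}
```

A quote that survives this check can then be turned into a clickable link to its position in the doc; anything that fails the check was likely hallucinated.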
 
so many crushing rejection letters :hammer:

jk, ofc, but seriously what is apex mode?

the rules setup thing doesnt quite work for me.

GPT takes rules more as suggestions, so enforcement is extremely weak (a known issue). You would have to include the rules with literally every single prompt, every time, or it waters down. (I don't, and likely won't ever use it, for now.)
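If you do go the API route, "include the rules with every single prompt" can be automated. A minimal sketch, assuming the common chat-completion message format (role/content dicts); the RULES text and function name are made up for illustration, and no real API call is made here:

```python
# Instead of relying on the model to remember custom rules, re-inject them
# as a system message at the top of every request.

RULES = (
    "Do not flatter me. Challenge weak arguments. "
    "Ask a clarifying question instead of guessing."
)

def build_messages(history: list[dict], user_prompt: str) -> list[dict]:
    """Return a message list with the rules re-injected at the top."""
    # Drop any earlier system messages so the rules appear exactly once.
    past = [m for m in history if m["role"] != "system"]
    return (
        [{"role": "system", "content": RULES}]
        + past
        + [{"role": "user", "content": user_prompt}]
    )
```

Because the rules ride along with every request instead of living only at the start of the chat, they can't "water down" as the conversation grows.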

Apex Reasoning Mode is an advanced (overload) input Grok mode, used for highly (unusually) complex problems with much greater accuracy. There are improved alternatives.

Only problem I have with Grok is that it often calls me "dude". Yes, I can fix it :)
 