• An addendum to Rule 3 regarding fan-translated works such as Web Novels has been made. Please see here for details.
  • We've issued a clarification on our policy on AI-generated work.
  • Due to issues with external spam filters, QQ is currently unable to send any mail to Microsoft E-mail addresses. This includes any account at live.com, hotmail.com or msn.com. Signing up to the forum with one of these addresses will result in your verification E-mail never arriving. For best results, please use a different E-mail provider for your QQ address.
  • For prospective new members, a word of warning: don't use common names like Dennis, Simon, or Kenny if you decide to create an account. Spammers have used them all before you and gotten those names flagged in the anti-spam databases. Your account registration will be rejected because of it.
  • Since it has happened MULTIPLE times now, I want to be very clear about this. You do not get to abandon an account and create a new one. You do not get to pass an account to someone else and create a new one. If you do so anyway, you will be banned for creating sockpuppets.
  • Due to the actions of particularly persistent spammers and trolls, we will be banning disposable email addresses from today onward.
  • The rules regarding NSFW links have been updated. See here for details.

Clarification regarding AI policy

The problem is not the use of AI itself, but the non-disclosure of it and the vehement denial and defensive posturing when asked about it politely. Yeah, there is no putting the genie back in the bottle, and AI use is definitely going to be the future.

But it ain't quite there yet. It screws up the plot when asked to generate it completely, and even when just proofreading it writes in language that _feels_ unnatural enough (double adjectives, rhyme schemes, seldom-used punctuation) that it brings you out of the reading experience.

It's still at the Uncanny Valley stage for writing at least, while images are much further along. So it's not too much to ask to disclose its usage and whether it will keep being used in the future. Shame that disclosure is only considered polite and not a required rule.
At this point, it's mostly a misconception: the latest generations of AI models are more than capable of writing indistinguishably. They typically don't do it by default, though. It's up to the user to discover specific prompts and contexts that work. Today, the works that are obviously AI-gen are those sourced by lazy authors.

However, I do agree that non-disclosure should be frowned upon.
 
The problem is not the use of AI itself, but the non-disclosure of it and the vehement denial and defensive posturing when asked about it politely. [...] Shame that disclosure is only considered polite and not a required rule.
This reply sums it up nicely.
Make it a rule to disclose if you used AI assistance to write your shit, and most of the complaints people have outside of AI art spam disappear. Which is exactly why I expect the opposite to happen, and the problem to become much worse as malicious actors realize they can now flood the market with their product with impunity. Soon the only way to spot the work of the digital golem will be the em-dash and Reddit-style prose.

You think the past age of godawful formulaic HP/Naruto/DxD/whatever fanfic was a scourge? You will think fondly of the days of those teen-written horrors, the crossover haremslop, the mary sue cardboard girl inserts, the fixfics and the rest...
 
A somehow-enforced AI tag for writing would be nice, but I have no idea if it would be possible. There's nothing more irritating than getting a few paragraphs into something and going, "Hey. Wait a minute." As soon as you realize you're reading AI-generated words, you can't help but start spotting the usual AI-generated problems.

It'd be nice to be able to avoid that from the get-go.
 
Ain't asking for a rule, just saying if your fic smells like a silicon smoothie, don't get offended when someone goes "yo did chatgpt write this" when there is no tag.
Frankly this feels outside the bounds of constructive criticism or honest review, just either picking a fight or trying to insult someone. If you don't like a story, that's fine, just go on with life if you aren't willing to give honest feedback on how they can improve. The writer will either get better, stop posting garbage, or you can just set them to ignore if you are somehow offended by them. We can all live and let live within reason for writing here.
 
Hmm... are the AI any good? If so which ones are being used?
 
It's not a big deal if they are going to use AI to help them write, but they should at least give us the small courtesy of putting a tag there if they used an AI. AI is an amazing tool for helping people locate mistakes; not gonna criticize them for that. But there are some people who have truly abused it and created their entire stories with AI, and they get mad or offended when you ask if they've used it. As if it was our fault that reading their ffs felt like reading a textbook, and when you ask for clarification on whether they used AI to generate the entire chapter, they somehow take it as a personal attack when you just want to know so you can get out quietly if they actually did.

In short, there's a tag, use it to warn people.
 
It's not a big deal if they are going to use AI to help them write, but they should at least give us the small courtesy of putting a tag there if they used an AI. [...] In short, there's a tag, use it to warn people.
AI will be the new slash. Can't remove it from your search however many tags you ban.
 
Tags are user-generated and user-assigned; unless the people who are using AI to write put the tag on their threads, it is meaningless.
Mods can put a tag on a thread, and if they do the thread creator can't remove it.


I will note that there are people who are trying to avoid AI output for various reasons, some of them philosophical and some pragmatic*, and not making the disclosure mandatory makes QQ unsafe for these people.

*For instance, "AI always tries to hack its reward. A decent amount of them (including the infamous GPT-4o) are trained with their reward being human feedback. I don't want to be hacked."
 
AI text generation is in a weird place right now. On the one hand, it's miraculous that it can make something even approaching a resemblance to human writing at all. On the other hand, it's still completely godawful at it!

Anyone who thinks of themselves as a bad author, I promise that you can write five times better than ChatGPT, easily. EASILY! It'll take you more time, sure... but it would take you more time to coax the robot to make something of similar quality, so it actually takes LESS time to just write it yourself.

It's funny, I'm always hearing English professors and copywriters and other humanities people doomsaying about AI already being better than them, and they're right in ONE way: if you want fast, cheap mush, I guess it's alright. But if you want anything with ANY level of quality, you've still gotta go to a human.

I can usually tell something is AI generated within the first 10 words or so. Idk how to describe it, but it just doesn't feel right, y'know? Like, it's got the same tone you'd write with when you're trying to pad your 1000 word essay out to 5000 words. It's definitely possible to fool people with it, but to do that you either have to edit the output manually, or spend a LONG time prompting it and re-prompting and RE-prompting; both of which require a decent understanding of good writing to know how to make it fool people; in which case, why not just write it yourself in the first place?

The answer is most people who post AI text don't put in more than the bare minimum effort, because they think writing is a waste of time to begin with. These people are, obviously, not worth anyone's consideration.

Anyways, this seems like a good clarification, I approve.
 
I do find myself having to run every new story I read through an AI checker before I start, to avoid wasting my time. This is pretty annoying when I am scanning through for new stuff to read. This is my only real problem with AI writing on the site, and it would be partially resolved by mandating a tag for AI-generated writing, but I don't know how much work and how many false positives this would create for the mod staff to deal with.
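For anyone who wants a crude local pre-filter before bothering with an actual checker, here is a minimal sketch, assuming you only care about the surface-level tells people keep mentioning in this thread (em-dashes and a handful of stock phrases). The phrase list and the `tell_score` function are made up for illustration; this is nowhere near a real detector and will happily flag human prose, so treat the score as a hint at most.

```python
# Hypothetical, naive pre-filter sketch -- NOT a real AI detector.
# It only counts a few surface-level "tells" mentioned in this thread
# (em-dashes, stock phrases). Real checkers do far more and still
# misfire, so treat any score as a hint, never as proof.

STOCK_PHRASES = [          # assumed tell list, purely illustrative
    "delve into",
    "tapestry of",
    "a testament to",
    "it's important to note",
    "as an ai language model",
]

def tell_score(text: str) -> float:
    """Return a crude tells-per-1000-words score for a chunk of prose."""
    words = max(len(text.split()), 1)
    lowered = text.lower()
    em_dashes = text.count("\u2014")  # the em-dash character
    phrase_hits = sum(lowered.count(p) for p in STOCK_PHRASES)
    return (em_dashes + phrase_hits) / words * 1000

if __name__ == "__main__":
    sample = ("The ancient library stood as a testament to a rich "
              "tapestry of knowledge\u2014or so the narrator insisted.")
    print(f"tells per 1000 words: {tell_score(sample):.1f}")
```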
 
At this point, it's mostly a misconception: the latest generations of AI models are more than capable of writing indistinguishably. They typically don't do it by default, though. It's up to the user to discover specific prompts and contexts that work. Today, the works that are obviously AI-gen are those sourced by lazy authors.

This has been said by supporters of literally every generation of AI models. It has never actually been true outside of very niche circumstances that are effectively the equivalent of laboratory conditions: either the user writing prompts so extensive that it would be more efficient to just write it themselves, or the model directly looting from existing fiction.

That last part is actually a severe problem. There was a relatively major court case back in June which ruled that the final work itself was transformative, but that the AI company had extensively violated copyright with their pirated library of seven million books. That wasn't a small AI company; it was backed by Google's parent company and Amazon.

It suggests quite strongly that if you use an LLM to write anything for you, while it could easily end up transformative by a court's standards, it could still be direct plagiarism by most sites' standards.
 
I consider myself a half-breed: I use AI to help with worldbuilding and sequencing the timeline, but I do the writing myself.
 
[attached image]
*goons*
🥵🥵🥵
 
AI will be the new slash. Can't remove it from your search however many tags you ban.

It's funny, but if LLMs keep growing and growing this fast... well, I can easily see more and more people just leaving the internet, keeping it only for a few sites, like governmental ones, Amazon, Steam?

AI text generation is in a weird place right now. On the one hand, it's miraculous that it can make something even approaching a resemblance to human writing at all. On the other hand, it's still completely godawful at it! [...]

LLMs are predictive by nature, and there are things they can't do, like plot twists or writing an ending you haven't seen coming from a mile away. That's why they're great for anything administrative or technical but still shit at creative work. Let alone all the "minor" tells.

What annoys me more, honestly, is how people who obviously use full-on AI text call themselves writers. A writer is a wordsmith just as a guy hammering steel is a blacksmith, while an AI user is more akin to a guy pushing a button to activate the robot arms in a steel factory. No, the guy pushing buttons is not a blacksmith even if he's making steel.

Honestly, after testing it for myself I've done away with it for my writing entirely; I've had to fight it off from changing my text too many times (with bad ideas at that). It's not bad at writing bad fic, but it's not good at anything else, honestly...
 
Only annoyed by the lack of disclosure for when it generates the whole story. I'm aware of at least one author that has admitted to it on another site but makes no mention of it on QQ despite the continuity errors from chapter to chapter.

AI checkers are only sometimes useful. These writing models mirror the humans they are trained on, so there are people who just write that way. And there is probably an AI for bypassing common checks too. I've also seen some writers and artists get falsely accused and then harassed despite not even using AI. Even if they had used it, that harassment is worse than what it's criticizing.

Using AI to come up with ideas is fine, people do all sorts of things to come up with ideas like reading other works. The writing is inspired but ultimately your own work in that case.
 
There was a relatively major court case back in June which ruled that the final work itself was transformative, but that the AI company had extensively violated copyright with their pirated library of seven million books. That wasn't a small AI company; it was backed by Google's parent company and Amazon.

It suggests quite strongly that if you use an LLM to write anything for you, while it could easily end up transformative by a court's standards, it could still be direct plagiarism by most sites' standards.
You're referring to the case involving Anthropic:


The judge's decision in that case was that training AI is fair use, period, even on copyrighted material. However, Anthropic was still liable for pirating ebooks, same as anyone else who downloaded a torrent.

An analogy would be a college student who pirated their textbooks and then later on wrote a paper. They would be guilty of torrenting the books, but it would not affect the work they produced as a result.

And users of AI products in turn would be another step further removed from the training process.
 
Lol, I'm totally okay with the rules and ready to declare which parts of my process use AI, but it is sad that some forums I used to read even had posts that escalated to the point of insults and discrimination.
 
The issues I have with AI are beyond the scope of this post but needless to say I'm not a fan.

It's all the god tier and all the sewage tier, taken and then smooshed together to form the most mid content to ever pollute the internet. But that is only if you're lucky and the person posting the content gives enough of a fuck to do some basic quality control. If they don't? Good luck.

AI doesn't plan; it can describe a scene okay-ish, but it falls onto the bad side of generic very quickly even with basic descriptions. Which is why I typically avoid content generated by AI: it gives off the feeling of an amateurish, half-baked, half-assed approach.

I am here to read what people want to put to 'paper', not what they want to generate with a bot. Even the poorly written is infinitely better than AI slop, because at least then? I know it has some soul in it.

Which is why if it gets any more prevalent please consider adding a rule about disclosure.

tl;dr "Reeeee AI bad, make rule!" - Man with terrible, no-good, godawful grammar.
 
