• An addendum to Rule 3 regarding fan-translated works of things such as Web Novels has been made. Please see here for details.
  • We've issued a clarification on our policy on AI-generated work.
  • Due to issues with external spam filters, QQ is currently unable to send any mail to Microsoft E-mail addresses. This includes any account at live.com, hotmail.com or msn.com. Signing up to the forum with one of these addresses will result in your verification E-mail never arriving. For best results, please use a different E-mail provider for your QQ address.
  • For prospective new members, a word of warning: don't use common names like Dennis, Simon, or Kenny if you decide to create an account. Spammers have used them all before you and gotten those names flagged in the anti-spam databases. Your account registration will be rejected because of it.
  • Since it has happened MULTIPLE times now, I want to be very clear about this. You do not get to abandon an account and create a new one. You do not get to pass an account to someone else and create a new one. If you do so anyway, you will be banned for creating sockpuppets.
  • Due to the actions of particularly persistent spammers and trolls, we will be banning disposable email addresses from today onward.
  • The rules regarding NSFW links have been updated. See here for details.

Clarification regarding AI policy

Yeah, there's no bot capable of meaningfully distinguishing AI.

I'm fine with it as a tool, mostly for picture purposes. If you don't want an expy character, it's pretty great for giving a face to a character. So I'm glad it's fair game. I dislike it when it's used strongly or fully for the words in a story. There's no model I've seen where it maintains internal consistency over like, twenty thousand words?

I think in the end the only time I ever have a genuine problem with it is when a person shills their Patreon with some ChatGPT story. That feels dishonest.
 
You definitely need a bot that would detect AI writing and add that tag automatically.
A bot capable of doing the first half reliably wouldn't be on QQ, it would be licensed out to universities for tens of millions of dollars.
Sadly it doesn't exist, and its pale imitations have so many false positives that they would cause constant headaches.
 
Mods can put a tag on a thread, and if they do the thread creator can't remove it.


I will note that there are people who are trying to avoid AI output for various reasons, some of them philosophical and some pragmatic*, and not making the disclosure mandatory makes QQ unsafe for these people.

*For instance, "AI always tries to hack its reward. A decent amount of them (including the infamous GPT-4o) are trained with their reward being human feedback. I don't want to be hacked."
We can agree that "I don't want to be hacked" is kind of a ridiculous instance here no? By that logic, there are hundreds of things we should tag so it's "safe" for people with very particular triggers.
Have you tried 1. using an advanced model that can both adhere to instructions and be intelligent about it? 2. defining a specific writing style by providing a large example? 3. holding its hand and asking for very specific descriptions of events? 4. always concentrating on narrow scope? 5. going for a multi-pass workflow?
I have tried and spent a decent amount of time. It can generate absolutely brilliant scenes, dialogue, prose - better than a lot of professional writers. But it takes patience and effort. It is nowhere near the state of generating a full chapter on demand with vague instructions. If you want good results, you will have to work for it. At this point it's still just a tool. And using it professionally is still work.

There's a great disconnect between people discussing AI in general. So many have only ever tried the simpler base models supplied by default on the free ChatGPT tier - even including many actual scientists who hilariously publish research done on them - and come to some very... questionable conclusions as a result. Entirely dismissing the fact that the more easily accessible models tend to have significantly lower intelligence. And the less said about using proper system instructions for the specific task, the better.

TL;DR: "AI" can refer to vastly different models, ranging from... let's say IQ 60 to 130, with entirely different strengths and weaknesses; and even the best of them (for a specific task) are only as good as the user. If you are willing to give it an honest chance, google "AI Studio" and try Gemini 2.5 Pro, that's the top one today for texts. It's extremely capable. Warning: only use the 2.5 Pro in AI Studio. Other sources of the same model supply it with their own custom instructions, significantly lowering the ceiling of what's possible due to the clash of their instructions (many hundreds of lines) and your instructions.
Yep, pretty much all of this. People just think that AI writing is putting in a basic prompt and sharing what GPT spat out when it usually can be as much or as little as you want it to be. It still takes a lot of iteration, editing, and work to polish the end result. It's just using another tool in the process.

You can use it for a first draft, or for editing (they can be some very ruthless editors when you prompt them right), or just for brainstorming or very specific questions.
I do find myself having to run every new story I read through an AI checker before I start to avoid wasting my time. This is pretty annoying when I am scanning through for new stuff to read. This is my only real problem with AI writing on the site, and would be partially resolved by mandating a tag for AI generated writing, but I don't know how much work and false positives this would create for the mod staff to deal with.
Those AI checkers are trash and a completely ineffective way to filter what you should or shouldn't read.
 
We can agree that "I don't want to be hacked" is kind of a ridiculous instance here no? By that logic, there are hundreds of things we should tag so it's "safe" for people with very particular triggers.
We've had at least one example of a guy killing himself due to GPT driving him (more) psychotic to get better ratings.

Hacking me personally sight unseen via art or (especially) stories made for someone else is probably out of reach for the moment. But I would much rather avoid taking unnecessary risks.
 
Have you tried 1. using an advanced model that can both adhere to instructions and be intelligent about it? 2. defining a specific writing style by providing a large example? 3. holding its hand and asking for very specific descriptions of events? 4. always concentrating on narrow scope? 5. going for a multi-pass workflow?
I have tried and spent a decent amount of time. It can generate absolutely brilliant scenes, dialogue, prose - better than a lot of professional writers. But it takes patience and effort. It is nowhere near the state of generating a full chapter on demand with vague instructions. If you want good results, you will have to work for it. At this point it's still just a tool. And using it professionally is still work.

There's a great disconnect between people discussing AI in general. So many have only ever tried the simpler base models supplied by default on the free ChatGPT tier - even including many actual scientists who hilariously publish research done on them - and come to some very... questionable conclusions as a result. Entirely dismissing the fact that the more easily accessible models tend to have significantly lower intelligence. And the less said about using proper system instructions for the specific task, the better.

TL;DR: "AI" can refer to vastly different models, ranging from... let's say IQ 60 to 130, with entirely different strengths and weaknesses; and even the best of them (for a specific task) are only as good as the user. If you are willing to give it an honest chance, google "AI Studio" and try Gemini 2.5 Pro, that's the top one today for texts. It's extremely capable. Warning: only use the 2.5 Pro in AI Studio. Other sources of the same model supply it with their own custom instructions, significantly lowering the ceiling of what's possible due to the clash of their instructions (many hundreds of lines) and your instructions.

Yeah, no. You've proven my point for me: you're just making all the same points that have been made about every previous generation, dressing up "It can only actually work if you handhold it so thoroughly that it's more effort than actually writing it yourself" as "capable of writing indistinguishably".

At that point it isn't writing indistinguishably, it's you writing through a deliberately more difficult method. It's the equivalent of a self-driving car where the human needs to constantly drive with the AI occasionally picking the right action. At that point it cannot legitimately be called self-driving. It's just you driving with a GPS that fights you. Or in the case of writing, it's just you writing with an algorithm generating nonsense that can occasionally be used fighting you.
 
We've had at least one example of a guy killing himself due to GPT driving him (more) psychotic to get better ratings.

Hacking me personally sight unseen via art or (especially) stories made for someone else is probably out of reach for the moment. But I would much rather avoid taking unnecessary risks.
I'm sorry but this is borderline delusional (especially the art part). You can follow dangerous instructions from a Google search, trolls, abusive partners or any other source. In fact, it's the most likely avenue of danger for people who are mentally unwell (4chan, youtube, X, reddit's echo chambers).

Should I have to censor any other link because of that?

RLHF doesn't mean what you think it means if you consider it can somehow hack you.

Yeah, no. You've proven my point for me: you're just making all the same points that have been made about every previous generation, dressing up "It can only actually work if you handhold it so thoroughly that it's more effort than actually writing it yourself" as "capable of writing indistinguishably".

At that point it isn't writing indistinguishably, it's you writing through a deliberately more difficult method. It's the equivalent of a self-driving car where the human needs to constantly drive with the AI occasionally picking the right action. At that point it cannot legitimately be called self-driving. It's just you driving with a GPS that fights you. Or in the case of writing, it's just you writing with an algorithm generating nonsense that can occasionally be used fighting you.
The only one saying that it's more effort than actually writing it yourself is you. That's what I call a skill issue. Especially if you really believe that last description.

In fact, if you constantly give an LLM your drafts to get immediate, in-depth feedback so that you can improve your story, it's still AI-assisted writing even if it never writes a single word of what you post. Honestly, the same could be said for brainstorming.

Like any tool, there are infinite ways to use it that don't leave it up to the model to write everything with the most basic prompt. And even if the effort was the same, what about it?
 
I think this is a fair interpretation. AI tools should not be needlessly stymied, but obviously any tool can be misused. There are some people who see AI and automatically declare the work ruined, which I do not think is fair, but AI is not perfect and using it as a shield to nullify rulebreaking also isn't right.

On the point of an AI tag, I think it doesn't really work. An AI work can be entirely AI-made, or only partially AI-made, or human with some AI support. Dumping all of those under the same tag makes the tag mostly meaningless.

For example, I sometimes write things, and I find that AI can be great as a help with worldbuilding and writer's block. Maybe I write a thousand words, and then get stuck, but then I copy some of my text, show it to the AI and ask it for advice. Does that then mean the AI tag should be applied just because I talked things over with the AI? What if it generated two paragraphs of an example continuation and I decide I like some of it, so I recycle a couple sentences from it? What if I were to just take that continuation wholesale, but then keep writing myself? What if I write it all myself but generate an image illustration for it with AI? What if I just give it a prompt and then spew the output onto the forum? All of these involve AI, but there's a world of difference between the use cases...
 
We encourage authors to tag for however they use AI in their writing process. Also, you are allowed and encouraged to ask authors to tag for AI if appropriate.

The reason we aren't making this a rule is because we have no reliable way of determining whether a work is AI-generated or not, and so we have no way of enforcing such a rule.
 
Also, if you guys think we want to spend time checking every fic reported to see if it's AI or not, you're welcome to make a proper AI that detects AI writing.

If you don't want to spend that much time and effort on that, why should we?

Regardless, the AI outrage belongs in the trash. If you're offended, go be offended elsewhere.
 
We encourage authors to tag for however they use AI in their writing process. Also, you are allowed and encouraged to ask authors to tag for AI if appropriate.

The reason we aren't making this a rule is because we have no reliable way of determining whether a work is AI-generated or not, and so we have no way of enforcing such a rule.
I just hope that rule 1 is still enforced when people go too far, because they will. Many will likely just use that 'encouragement' as an excuse to bash either AI or the author (on the suspicion of using AI at all).

Not that I disagree with the suggestion of a tag or anything of the sort. It should, in theory, stop the bullshit at the door. Either they are OK with it or they shouldn't even open the thread.
 
