I am not able to write a simple sci-fi story that has any conflict in it. Llama 3.1 censors any conflict and instead wants the story to be one of unity and cooperation.
Conflict is central to all stories. It is why we read them. And here is the kicker: I am having Llama 3.1 write the chapter outline and decide the plot, and then it refuses to write the story it decided to outline.
This is insane. Censorship has no place in the public sphere. It has no place in open source software.
"Do not write a story that is religious, political, harmful, violent, sexual, filthy, or in any way negative, sad, or provocative"
I've been trying to use ChatGPT for writing, and it's basically useless.
I'm using it in exactly the way one would hope it would be used: sometimes I don't like a paragraph or sentence, sometimes I want a different word and don't know which to choose, and so on. I use it less than once a week, and I take its output as a point of departure for my own writing style. (And not, for example, to write my paragraphs for me.)
ChatGPT is essentially useless as an aid to writing. Don't bother writing anything with religious intent (The Da Vinci Code), or anything harmful and violent (most spy novels, fictionalized WWII accounts, Cold War tales), or filthy (Madame Bovary), or negative, sad, or provocative (Vozzek, or just about any story that doesn't have a happy ending).
Nope, from here on in it's all unicorns and rainbows, because the AI won't help you with anything that even remotely resembles the basics of standard plot development. It mostly rejects my requests on the grounds that they fall outside its approved usage guidelines.
For fiction writing, it's worthless.