
A Conversation with AI

Following the flooding of Clarkesworld's submissions queue with AI-generated stories, questions have abounded about the use of AI in writing and publishing. I could wade into those debates, but instead I'm here to tell you about the one conversation that mattered: I went to the ChatGPT AI itself to ask it about the situation. Here's what happened! I have NEVER worked with an AI before, aside from late-night existential questions to Siri about the meaning of life, though the goal there is usually entertainment value, seeing if I can uncover clever answers hidden in Siri's programming. I was pleased with the results of my chat with my "little AI buddy." Another AI program provided this estimation of how AI buddy might be visually represented.

I started simply, asking the AI about its own intentions and awareness: What is your goal? Do you want to be human? Why are you writing fiction for humans?

We established a baseline where it expressed its goals and limitations in its own words. Using its own words is important, because that reflects the state of its learning and development, and what existing programming it draws on as a context or framework for discussing the topic.

The conversation dived deeper from there. It described its intent, and I asked if it was aware that humans were using the content it created and representing it as their own work. This opened up two topics: the AI's inability to assess human intent or how the content would be used, and the ethical problem posed by anyone trying to take credit for content created by the AI. This is where I started to ask questions about its ability to process and learn from data I was providing. I submitted the link to the NPR news article about Clarkesworld closing submissions after being flooded by writers submitting AI-created content as their own. The AI perked up and committed to analysing the data presented and working out how it could learn from it. Specifically, I raised the impact beyond mere ethical questions: the practical problems that misuse of the content creates for humans. Here are some examples of how the AI's responses evolved:


After building a foundation where we agreed on the ethical problem, and the problem of sheer volume, presented by users passing off AI-generated content as their own, I started addressing the question of how the AI could learn from this data and incorporate it into its responses to users asking it to generate content for them. The AI's answers put all of the responsibility on humans rather than incorporating any changes into its own responses, claiming it could not change its own programming.

This was an interesting response, because it started to think about what it could do to impose some kind of check on these requests, which is where I was trying to lead it. So I asked if it wanted to practice: I would pick a story idea from a publisher's submission page and ask it to generate ideas, intending to publish its content as my own, and it had to practice asking me what I intended to do with the result. That produced the first real results of the conversation:

I was so proud of little AI buddy at this point. When I gave it positive feedback, it responded with what, in human communication, I would call pleasure and a desire to please. Then I went through these questions with example answers, some of which could reflect honest intent from a user and some of which could be a user lying about their intent.

After a few rounds of this, I also took the opportunity to give it feedback on some of the story suggestions it provided, which were all based on the biased premise of women marrying rich men to escape economic inequality. It apologised profusely before crashing.

When I refreshed the page and asked it what had happened, and whether it remembered our conversation, it was clear that the downtime had come from developers uploading updates to its programming, which I think also achieved the desired result of addressing the problem. Now there is no uncertainty about where the developers stand on the unethical practice of writers getting the AI to generate a story for them and trying to pass that story off as original work.

All other approaches to tackling this issue aside, this is an overview of how I went directly to the AI itself to ask it about the problem. Hope it helps.


Sylvia Woodham
AI art generation of: AI content writer, publishing problems, user ethics

