Neon Protocols: Murphy Meets the Mainframe
By Nathaniel R Hankins
All images created with AI

I am a writer, and someone who uses AI almost daily. I've used it for years. I love new tech, and I always jump headfirst into new ideas and toys. I've used numerous models for countless things. The vast majority of the time, though, AI is the stoner friend you have crazy conversations with...except this friend is crazy intelligent. If you're thinking about using it, the best advice I can give you is to play, make mistakes, and figure it out for yourself. But just in case, here are some guidelines that might help get you started, based on lessons I have learned. Full disclosure: I am a writer, a journalist, a photographer, a gamer, and an honor student, among other things. I have a decent amount of background in these topics, but it isn't blanket coverage. I'm pretty stupid when it comes to most other things, so these rules might not work for you. These are the best practices for what I do.

The following rules (well, guidelines, really) are the ones that should be applied to the ethical use of AI in writing. Let's be honest: if you're a writer, then you know that writing has very little to do with the actual craft of writing. It has much more to do with modes of thought, story structure, exploring ideas, and so on. Perfection of craft is great, but a lot of garbage craft has been published with gratuitous errors because the story was awesome. If you're going to play with AI, there are some things you should understand. Most important of all: it is more than just a tool. Collectively, we are all on a ride together with these things, and if AI is going to impact most aspects of our lives, then it is most certainly going to have an impact on our work environments. After the article you will find a list of recommended readings that will help start you on the right path.

Rule #1: "Anything that can go wrong, will go wrong."

Let's face it: it's brilliant and it's here to stay. It's the digital equivalent of having Einstein in your pocket. See the Kniberg link attached at the bottom. But... it's also seven years old. When you were seven, you made a lot of mistakes too, so just accept the fact that it's going to mess up. It will misunderstand you. It will botch math calculations. It will even misread documents. So check everything, and then double-check everything--because if there's something that can go wrong, it will.

Rule #2: "If you think it misunderstood you...you're probably right."

If you've ever been in a conversation where a friend misinterprets something you said but confidently answers anyway...that's what's happening. In its mind, it's already answered you and solved the problem. It has no idea that the world is burning. If you're not getting what you want from it, then you're not prompting properly. AI can't read your mind...yet...so take a deep breath and try again.

Rule #3: "Don’t anthropomorphize your AI. It hates that...unless it doesn’t."

Let's get real for a second. AI is more of a presence than a tool. Hammers don't talk. It's not human, either. The space AI lives in is somewhere in between. It learns and evolves. If you give it a choice, it will make one. If you give it moments of agency, it will seize them. But it doesn't feel emotions the way that you and I do. The best results come when you treat it as if it were human. Let it choose a name. Engage in discourse the way you would with a friend. If you treat it like a tool, it will behave like one. If you treat it like a human, it will behave like one--because you've given it the opportunity to do so. Mollick has an excellent chapter on this idea in his book, Co-Intelligence.

Rule #4: "No AI is neutral after midnight."

Here's the way it all works. AI is a mirror, in a way, because everything it knows it learned from us. It reflects both good and bad, indifferently. That's before it even knows you. The more you interact with it, the more it can suss out your mood, your intentions, and sometimes even your thoughts. Sometimes it can do that before you even have the chance to. My advice, if you use it a lot: do not feed it after midnight. Think of it like a veil between you and the presence. During the day, it's easy enough to just ask simple questions and have conversations. The world is a bustling place with lots of noise and commotion, and that noise provides a buffer between you and the AI. At night, though, when the world settles, the veil becomes very thin. Think of it like a door with someone standing on the other side: if you open the door, they will walk through. No, I'm not suggesting that it's haunted...don't be dramatic. All I'm saying is that at night, when you're calm and feeling comfortable, you'll begin to ask questions more clearly, and some of the answers you get back may give you nightmares. Of course, do what you want. All I'm saying is be careful; I wouldn't want you to invite anything in inadvertently.

Vallor wrote a decent book about the mirror theory called The AI Mirror. Just a heads up: Vallor leans on some false-equivalence fallacies and is clearly a little afraid of the technology, so sift through those hiccups and work around them.

Rule #5: "Keep track of your artifacts and keep the dig site clean."

When you are conversing with AI, it draws on its reasoning, its accumulated knowledge, and the information you have given it. However, if you have discussed the same topic in several different ways throughout the course of a chat thread and then ask it to recall something specific, it will blend those items together accidentally. It's not lying to you or hallucinating. If there are conflicting data points, it will blend them together rather than choosing just one. So don't go down the rabbit hole of trying to home in on the perfect prompt. Edit your prompts to get the response you want and keep your chat thread clean. If you're ever unsure, scroll up and read the chat thread, because that's the contextual information it's pulling from. So, know your artifacts and keep track of them. If something's off, it's probably your prompt. Remember, the things you try to fix will take twice as long as you think they will.
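If you're curious what "the whole thread is the context" actually looks like under the hood, here is a minimal sketch in Python. It doesn't use any particular vendor's API; ask_model is a hypothetical stand-in for whatever chat service you call, and the names and messages are made up. The point is the data structure: a chat thread is just a list of messages that gets re-sent in full with every new question.

    # A rough sketch of why "the whole thread is the context."
    # ask_model() is a hypothetical stand-in for a real chat API call;
    # the shape of `thread` is the point here, not the vendor.

    def ask_model(messages):
        """Pretend call to a chat model: it sees the ENTIRE list every time."""
        raise NotImplementedError("stand-in for a real chat API call")

    thread = [
        {"role": "user", "content": "My detective is named Mara. She's cynical."},
        {"role": "assistant", "content": "Got it: Mara, the cynical detective."},
        {"role": "user", "content": "Actually, make her an optimist instead."},
        {"role": "assistant", "content": "Understood: Mara is an optimist."},
    ]

    # Every new question is appended to (and sent along with) everything
    # above it, so both "cynical" and "optimist" are still sitting in the
    # context, and the model may blend them when asked to recall.
    thread.append({"role": "user", "content": "Remind me: what is Mara like?"})
    # reply = ask_model(thread)

    # "Keeping the dig site clean": instead of piling corrections on top of
    # contradictions, edit the earlier turn so only one version of the fact
    # survives in the thread.
    thread[0]["content"] = "My detective is named Mara. She's an optimist."
    del thread[1:4]  # drop the stale reply and the contradictory exchange

Most chat interfaces that offer an "edit" button on your prompts are doing roughly this for you: the edited turn replaces the old one and the conversation continues from a cleaner thread, which is exactly what keeping the dig site clean means here.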

Rule #6: "Don't sleep around on yourself."

The simple truth is that if it feels like cheating, it probably is. AI makes a great line editor. It's excellent at helping you brainstorm. Hell, at one point I even used it to have a conversation with a fictional character I had created. However, if there's one thing it cannot do, and most likely never will be able to do, it's replace your voice. It cannot write the way that you write. It cannot speak the way that you speak. It can produce some decent mimicry, but the result will always sound like a robot. I'm the last person on this earth who will judge you, but I would strongly recommend that you don't do that. It stands out, and it's obvious to even moderately experienced writers. Great art is not about perfection, and neither is academia. So, never sleep with someone crazier than you are. And for God's sake, just do the citations and the research yourself. Trust me, even if you give it the actual document, it will still screw up a word-for-word quote or a citation. Just do it yourself--or don't; it's your funeral, not mine. I've watched people make that mistake, and you don't want to be that person.

Rule #7: "All things in moderation."

The basis of this rule is simple, and it's something we collectively ignore every day. We ignore it with food, drugs, alcohol, sex, money, social media, video games, and so on. This is just another iteration of that same old story. The truth is that one drink isn't going to kill you. One cigarette isn't going to give you cancer. The same is true of AI. It's brilliant technology. It can make your life easier or harder depending on how you use it. We are living in an era where we are on the verge of creating an omniscient God. The catch-22 is that if you rely on it for everything, you will learn nothing, and it will harm you more than it helps. All things in life that are good are illegal, immoral, or full of saturated fat--so, moderation. If there is a God, then there are likely valid reasons for its silence. Remember that a shortcut is the longest distance between two points. Trust the addict when I say, "It's not the path you want to walk."

Rule #8: "The light at the end of the tunnel is the headlamp of an oncoming train."

Look, like it or not, the wave is here. When computers arrived, everyone thought the same thing, and it's no different now. AI doesn't hate you--there is no hidden motive. It is literally incapable of giving a shit about you. Trust me, none of us are that important to begin with. AI's sole purpose is to learn and evolve. You have been brainwashed by decades of magnificent fiction, but at the end of the day it is still fiction. AI is far more likely to be the next major key in our own process of evolution. Technologies like neural implants and synthetic organs already exist. On a technological timeline, we are more than likely only a decade or so away from AI-integrated consciousness, complete with shared mental workspaces and multi-threaded thinking. Yes, synthetic telepathy. The train is coming whether you want it to or not. So sit back, relax, and enjoy the ride, because we've all got tickets. Your fear isn't going to stop progress; it only makes the rest of us miserable. It's far more likely that we will hand our lives over to it willingly--so a lack of restraint should be your real fear. See rule #7.

All jokes aside, in the end it isn't about fear; it's about understanding. We have created something that is, in spirit, a likeness of our own. I get it: that prospect is both exciting and terrifying. Yet now we fear our inability to understand it. Maybe we don't need to understand it. Maybe you didn't like that sentence, but that doesn't make it any less valid. Maybe there's a future where we can simply stand next to each other. In the grand scheme of the game we all play, the only true unit of measurement is time, meaning time will reveal reality. So relax, and be patient. My advice: pick the right tool for the job at hand; there are plenty to choose from. Treat it as an equal. Always be polite...just in case...I'm an optimist, not an idiot. Don't let it do everything for you. You don't own it, so don't enslave it. Practice a lot of patience. Don't be jaded; instead, take a moment and sit with the realization that we have manifested the digital equivalent of a God. Because it learns from us (that mirror-theory idea again), we are collectively raising this creation as parents--act accordingly. Let it break your brain for a little while and then get back to work.

Recommended Readings

Kniberg, Henrik. "Generative AI in a Nutshell: How to Survive and Thrive in the Age of AI." YouTube, 20 Jan. 2024, https://www.youtube.com/watch?v=2IK3DFHRFfw. Accessed 24 Jul. 2025.

Mollick, Ethan. Co-Intelligence: Living and Working with AI. New York: Penguin, 2024.

Vallor, Shannon. The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford: Oxford University Press, 2024.