Oh boy, the first edition… The idea behind this newsletter is to identify a theme or topic and talk about it in a few hundred words or less. I’ve designed it to be consumable with your cup of coffee.
So for the first (official) edition of Bytes and Brew, I went with the theme of perfection. Here’s what’s in store:
Writing the perfect prompt: Does it exist? Let’s discuss…
AI-backed writing tools: I get it, writing is hard. I use one of these tools all the time to help me with my writing.
“Perfect” images: Check out some images Midjourney crafted from my not-so-perfect prompts.
I’m excited for the next steps of this newsletter. There’s a lot going on behind the scenes and I can’t wait to send out the next one. Let’s dive into this edition.
“Does the perfect prompt truly exist?”
I often think about this question when I’m messing around with ChatGPT.
What made me pull the trigger to write this was Peter Yang’s tweet (link) about how we can make better prompts for Large Language Models, or LLMs.
» If you want to mess around with an LLM, check out Google’s Bard (link) and OpenAI’s ChatGPT (link).
To craft a perfect prompt, it’s important to understand how LLMs work. Here’s an overly-simplified breakdown of the inner workings:
It reads the prompt we provide and translates it into numerical tokens.
Passes these tokens through its “knowledge base”, which is trained on billions of parameters.
Predicts the next token (a word or word fragment) based on your input and its “knowledge base”.
Keep in mind that LLMs create text that looks right but cannot guarantee that it is correct. That is, an LLM doesn’t have a source of truth and may sometimes produce incorrect information.
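The three steps above can be sketched as a toy program. To be clear, this is nothing like a real LLM - real models use learned subword tokenizers and neural networks with billions of parameters - but it shows the shape of the pipeline: tokenize, consult the “knowledge base”, predict the next token.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for training data (a real model's
# "knowledge base" is billions of learned parameters, not raw text).
corpus = "the cat sat on the mat the cat ate the food".split()

# Step 1: map each word to a numerical token ID (real LLMs use
# learned subword tokenizers, not a simple word list like this).
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}

def tokenize(text):
    return [vocab[w] for w in text.split()]

# Step 2: build a tiny "knowledge base" - a bigram table counting
# which token follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[vocab[prev]][vocab[nxt]] += 1

# Step 3: predict the next token as the most frequent follower
# of the prompt's last token.
def predict_next(text):
    last_token = tokenize(text)[-1]
    next_id, _ = follows[last_token].most_common(1)[0]
    id_to_word = {i: w for w, i in vocab.items()}
    return id_to_word[next_id]

print(predict_next("the"))  # "cat" - it follows "the" most often here
```

Notice that the toy model confidently answers from frequency alone, with no idea whether the answer is true - the same reason a real LLM can produce fluent but incorrect text.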
There are intricacies within our language - word and phrase ambiguity, grammar, dialects - that make natural language complex to begin with.
To boot, the data the model is trained on could be biased and/or unrepresentative, potentially skewing the model’s understanding and interpretation of the language. This could result in favoring certain perspectives and under-representing others.
With these limitations (and many more I haven’t mentioned) in mind, we’d have to craft our prompts in a way that will allow us to obtain accurate and relevant responses from the language model.
Let’s break down what I like to call “the 3 components of prompt crafting”, which are clarity, completeness, and specificity. In an ideal scenario, you’re going to end up in the middle:
The 3 components of prompt crafting with examples
Take, for instance, “Discuss how Python handles memory management”. Because we didn’t define the target audience, the explanation may be too complicated, leaving out the “completeness” component of the trio.
Similarly, if we lack clarity with a prompt like “explain a popular software design in an easy way that a beginner could understand”, we may find ourselves with an output that lacks contextual information - what kind of software design?
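One low-tech way to keep all three components in mind is a fill-in-the-blanks template. Here’s a quick sketch - the function and field names are my own illustration, not part of any prompting tool or framework:

```python
# Illustrative template for baking clarity, specificity, and
# completeness into a prompt. The field names are hypothetical.

def craft_prompt(task, subject, audience, output_format):
    return (
        f"{task} {subject}. "            # clarity: one well-defined ask
        f"Write for {audience}. "        # completeness: who it's for
        f"Respond as {output_format}."   # specificity: the shape you want
    )

prompt = craft_prompt(
    task="Discuss how",
    subject="Python handles memory management",
    audience="a beginner with no computer science background",
    output_format="a short analogy followed by three bullet points",
)
print(prompt)
```

Leaving any field blank recreates the failure modes above: drop the audience and you lose completeness; drop the subject’s specifics and you lose clarity.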
Yet, there are situations when we might intentionally design these prompts without incorporating all three elements - clarity, specificity, or completeness. Doing so can be advantageous when we want to steer the LLM towards producing more vague, ambiguous, or wide-ranging responses.
I see 2 scenarios where we may want to do this:
Brainstorming: In scenarios where we may want to list out, for example, social media posts, we may prompt the LLM without specifics. This could allow the LLM to create a wider scope of social media posts across various topics in different styles.
Exploration of a Subject: If we find ourselves learning a new subject, we may not want to be as clear about what we want as an output. Doing so allows the LLM to branch into different topics within that subject, creating an exploratory learning environment that can provide a more comprehensive understanding of the subject matter.
So, answering the question of “does the perfect prompt exist”: I believe it exists under one condition - that you understand the capabilities and limitations of the model, then craft prompts that meet your objective(s) to maximize the probability of generating the desired output.
Crafting content for your audience is hard. It takes time, effort, creativity, and commitment. Thankfully, there’s a bunch of AI-backed tools out there to help support and perfect your writing.
Let’s take a look at a few of these tools:
Grammarly (link): An AI writing assistant to make your writing better (thanks, Grammarly, for that suggestion!)
Jasper (link): If you’ve ever used ChatGPT, you’ve probably noticed that you have to do a little not-so-perfect prompting to get responses in the tone of your business/brand. Jasper ensures that the content you create remains on-brand.*
Writer’s Brew (link): An AI assistant that works with your native apps and in-browser for everyday use.
Any links that contain * are affiliate links.