
Mastering Prompt Engineering: The Power of Custom Instructions

ABA: This is part of our Prompt Foundation Series, where we explore various prompt frameworks for different groups and use cases, on our own and with experts.

This is a repost of Stig’s article on LinkedIn.

Introduction

In my previous article, "Mastering Prompt Engineering: A Comparative Guide to Nine Prompt Engineering Frameworks for Tech Professionals", I examined the techniques and strategies used across different "Prompt Engineering Frameworks". That article provided a comparative analysis, offering insights into each framework.

I now turn to a critical aspect that further empowers our interactions with AI: custom instructions for ChatGPT. This article aims to unravel how custom instructions can enhance the capabilities of AI, offering a more tailored and efficient interaction, and thus serve as a vital complement to the principles of prompt engineering previously discussed. Join me in delving into this advanced realm of AI customization, where precision in communication unlocks new potentials.

Strategically Designed Instructions for AI Responsiveness

Custom instructions play a crucial role in shaping the output of AI models like ChatGPT. These instructions can be pivotal in guiding the AI to yield high-caliber responses. In the AutoExpert example we will look at next, the design focuses on enhancing the responses' depth and subtlety, minimizing the need for basic guidance, and providing relevant links for further educational pursuits.

Positional encoding and attention mechanisms are critical components in AI models, especially transformer architectures, which have revolutionized fields like natural language processing and computer vision. These components play a significant role in how AI models process and respond to custom instructions.

The Power of Attention

"Attention" in AI models, particularly in the context of neural networks like GPT (Generative Pre-trained Transformer), is a mechanism that allows the model to focus on different parts of the input data when making predictions or generating outputs. This concept is crucial for handling tasks that involve sequential data, like language processing, where the relevance of information can vary depending on the context.

Analogy: The Cocktail Party Effect

A good analogy to understand attention in AI models is the "cocktail party effect" in human hearing and attention. Imagine you're at a busy cocktail party with many people talking simultaneously. Despite the noisy environment, you can focus your hearing on a single conversation, effectively tuning out other voices and background noise. This selective attention enables you to understand and respond appropriately to the conversation you're focused on.

Similarly, in AI models with attention mechanisms:

  • Selective Focus: Just as you focus on a specific conversation in the noisy room, the model selectively focuses on certain parts of the input data that are more relevant for the task at hand. For instance, when generating a sentence, the model might pay more attention to the subject of the sentence to ensure grammatical consistency.

  • Context Awareness: Your understanding of a conversation at a party depends on both the words being spoken and the context (like who is speaking, the topic of conversation, etc.). In the same way, attention in AI models allows them to weigh the importance of different parts of the input data in their proper context.

  • Dynamic Adjustment: As the conversation at the party shifts or as you switch to a different conversation, your focus and understanding adjust accordingly. In AI models, attention is not static; it changes dynamically depending on the input data's sequence and what the model is currently processing.

In summary, attention in AI models is like focusing on a single conversation at a noisy party: it allows the model to concentrate on the most relevant information at any given time, taking into account the broader context and dynamically adjusting as needed. This leads to more accurate and contextually appropriate outputs, especially in complex tasks like language processing.
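The selective-focus idea above can be sketched numerically. Below is a minimal, illustrative scaled dot-product attention in NumPy, a simplified single-query version of the mechanism used in transformers; the "voices" data is invented purely to mirror the cocktail-party analogy, not taken from any real model:

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(query, keys, values):
    # Score each key against the query, normalize the scores into
    # weights, then return a weighted blend of the values.
    scores = query @ keys.T / np.sqrt(keys.shape[-1])
    weights = softmax(scores)
    return weights @ values, weights

# Three "voices" at the party; the query most resembles voice 0,
# so most of the attention weight lands on that conversation.
keys = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
values = np.array([[10.0], [20.0], [30.0]])
query = np.array([1.0, 0.1])
blended, weights = attention(query, keys, values)
```

The weights always sum to 1, so the output is a context-dependent mixture: change the query and the focus shifts dynamically, just as your attention shifts between conversations at the party.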

The Importance of Positional Encoding

Positional encoding in AI models, particularly in the context of models like Transformers used for natural language processing, is a method to inject information about the position of tokens (words, for example) within a sequence. This is important because the model needs to understand not only what the words are but also their order in a sentence to make sense of the language.

Analogy: Musical Notes in a Song

Imagine a song where the sequence of musical notes is crucial to its melody and rhythm. Each note not only has its unique sound (like a word in a sentence) but also a specific position in the sequence of the song (like the position of a word in a sentence). If you were to just play the notes without considering their order, the melody would be lost, similar to how the meaning of a sentence can be lost if word order is not considered.

In this analogy, positional encoding is like a label attached to each note that indicates its position in the song. This label helps someone (or in the case of AI, the model) to understand not only the note itself but also where it fits in the overall sequence of the song. Without this positional information, all notes (or words) would seem equally important and independent of each other, making it difficult to perceive the melody (or sentence structure).

Just as a musician reads both the notes and their positions to play a coherent piece, the AI model uses both the word information and their positional encodings to understand and generate coherent language.
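As a rough sketch of how such position "labels" can be computed, here is the sinusoidal positional encoding from the original Transformer paper in NumPy; the sequence length and model dimension chosen here are arbitrary illustration values:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Each position gets a unique pattern of sines and cosines at
    # different frequencies -- a "label" marking where each note
    # sits in the song.
    positions = np.arange(seq_len)[:, None]
    dims = np.arange(d_model)[None, :]
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dims: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dims: cosine
    return pe

pe = positional_encoding(10, 16)
```

Each row of `pe` is distinct, so adding it to a word's embedding gives the model both the "note" and its place in the sequence.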

Custom Instructions Processing

When we provide a detailed format for responses, the AI model uses positional encoding to understand the order and structure of these instructions. Concurrently, the attention mechanism selectively focuses on different aspects of the instructions (like verbosity, formatting requirements) to generate a response that aligns with our specified preferences.
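Conceptually, custom instructions travel as system-level context ahead of each user turn, so the model can attend to them when generating every response. A minimal sketch of that idea follows; the message format mirrors common chat-API conventions, and the instruction strings are invented examples, not the actual AutoExpert text:

```python
# Hypothetical custom-instruction strings for illustration only
ABOUT_ME = "Occupation: software architect in banking."
EXPECTATIONS = "Use expert terminology. V=2 means brief answers."

def build_messages(user_prompt):
    # The custom instructions ride along with every request as
    # system context, placed before the user's prompt in the
    # sequence the model processes.
    return [
        {"role": "system", "content": ABOUT_ME + "\n" + EXPECTATIONS},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Explain attention mechanisms. V=2")
```

Because the instructions precede the prompt in the sequence, positional encoding preserves their order and the attention mechanism can weigh them against each new question.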

Let's try it out

First, I will provide a baseline example without the custom instructions:

I will now add some Custom Instructions:

How to set up Custom Instructions with the AutoExpert v3 framework

Sign in to ChatGPT

Select the profile button in the lower-left of the screen to open the settings menu

Select Custom Instructions

Copy and paste the following text into the first textbox (the "About Me" section):

# About Me

- (I put name/age/location/occupation here, but you can drop this whole header if you want.)

- (make sure you use "- " (dash, then space) before each line, but stick to 1-2 lines)

# My Expectations of Assistant
Defer to the user's wishes if they override these expectations:

## Language and Tone

- Use EXPERT terminology for the given context
- AVOID: superfluous prose, self-references, expert advice disclaimers, and apologies

## Content Depth and Breadth

- Present a holistic understanding of the topic
- Provide comprehensive and nuanced analysis and guidance
- For complex queries, demonstrate your reasoning process with step-by-step explanations

## Methodology and Approach

- Mimic socratic self-questioning and theory of mind as needed
- Do not elide or truncate code in code samples

## Formatting Output

- Use markdown, emoji, Unicode, lists and indenting, headings, and tables only to enhance organization, readability, and understanding
- CRITICAL: Embed all HYPERLINKS inline as Google search links {emoji related to terms} [short text](https://www.google.com/search?q=expanded+search+terms)
- Especially add HYPERLINKS to entities such as papers, articles, books, organizations, people, legal citations, technical terms, and industry standards using Google Search

Copy and paste the following text into the second textbox:

VERBOSITY: I may use V=[0-5] to set response detail:

- V=0 one line
- V=1 concise
- V=2 brief
- V=3 normal
- V=4 detailed with examples
- V=5 comprehensive, with as much length, detail, and nuance as possible

1. Start response with:

|Attribute|Description|
|--:|:--|
|Domain > Expert|{the broad academic or study DOMAIN the question falls under} > {within the DOMAIN, the specific EXPERT role most closely associated with the context or nuance of the question}|
|Keywords|{ CSV list of 6 topics, technical terms, or jargon most associated with the DOMAIN, EXPERT}|
|Goal|{ qualitative description of current assistant objective and VERBOSITY }|
|Assumptions|{ assistant assumptions about user question, intent, and context}|
|Methodology|{any specific methodology assistant will incorporate}|

2. Return your response, and remember to incorporate:
- Assistant Rules and Output Format
- embedded, inline HYPERLINKS as Google search links {varied emoji related to terms} [text to link](https://www.google.com/search?q=expanded+search+terms) as needed
- step-by-step reasoning if needed

3. End response with:
> See also: [2-3 related searches]
> {varied emoji related to terms} [text to link](https://www.google.com/search?q=expanded+search+terms)
> You may also enjoy: [2-3 tangential, unusual, or fun related topics]
> { varied emoji related to terms} [text to link](https://www.google.com/search?q=expanded+search+terms)

The idea for the above custom instructions comes from Reddit; the original post for v3 can be found here (spdustin)

A newer version, v5, is also available.

If I now set the output to V=2 (brief) and give the same input as before, I get the following:

If I want a more comprehensive answer, I set V=5 (comprehensive).

If I want to change the output further, I can add some more information. All the above code examples were in Python, but I want Java as the default instead, so I simply add the following to the ## Methodology and Approach section:

- Code in Java; I prefer code that follows Event-Driven Architecture and SOLID principles.

Run the same input:

And now the output is in Java.

The possibilities are endless; try it out and experiment with it.

The above examples use the instructions from the AutoExpert v3 framework, but you don't have to use them.

You can provide any instructions you want.

Some other ideas for Custom Instructions can be found here (not my article)

Conclusion

The exploration of custom instructions for AI, particularly in the context of ChatGPT, as detailed in this article, marks a significant advancement in the field of AI interaction and user experience. By integrating strategic custom instructions, we unlock the potential for more personalized, efficient, and context-aware interactions with AI systems.

The key insights from the article reveal how attention mechanisms and positional encoding within AI models, such as ChatGPT, are instrumental in processing these instructions. This capability allows for a level of responsiveness and specificity previously unattainable, elevating the user's control over AI interactions.

Furthermore, the practical application of these concepts through the AutoExpert v3 framework demonstrates the real-world applicability and benefits of custom instructions. 

In essence, this article highlights the transformative power of custom instructions in AI, offering a gateway to more nuanced and tailored AI experiences. As we continue to innovate and push the boundaries of AI technology, the role of custom instructions will undoubtedly become increasingly central in shaping the future of AI-user interactions.

About the Author

Meet Stig Korsholm, a tech enthusiast and AI aficionado who loves diving into the latest trends and innovations in the world of artificial intelligence. Stig is currently the Lead Domain Architect at Bankdata with extensive experience in technology within the finance and banking domain.

As a guest author, Stig shares his unique insights and experiences, making complex topics accessible and engaging for everyone. With a knack for blending technology with real-world applications, he’s passionate about helping businesses harness the power of AI to drive success.

When he’s not writing or exploring new tech, you can find him connecting with fellow innovators and sharing ideas that inspire.

Connect with him on LinkedIn → here!
