Basics of Prompting

Prompting an LLM

Let's start by creating a few prompts and observing the responses the model generates in the playground. Feel free to come up with your own unique prompts and see the results.

You can also use the prompts provided below to get started: enter each one in the chat interface and observe the response the model generates.

Enter in the user prompt:
What is the capital of the United States of America?
Enter in the user prompt:
A recipe for apple bread, and an itemized shopping list of the ingredients.
Enter in the user prompt:
Write a product description for a new water bottle.
Enter in the user prompt:
What were the top 10 movies of 2001?
Respond in a list, including each movie's name, box office earnings, and studio,
and rank the movies from 1 to 10.
Enter in the user prompt:
Write a Python function to calculate the nth prime number.

The examples above are a basic illustration of what's possible with LLMs. Today's models can perform all kinds of advanced tasks, ranging from text summarization to mathematical reasoning to code generation.
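For the last prompt, the model's response might resemble the sketch below. The exact code returned will vary from run to run; this is just one plausible shape of the answer, using simple trial division:

def nth_prime(n: int) -> int:
    """Return the nth prime number (1-indexed), so nth_prime(1) == 2."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    count = 0
    candidate = 1
    while count < n:
        candidate += 1
        # Trial division: a number is prime if no integer from 2 up to
        # its square root divides it evenly.
        if all(candidate % d != 0 for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

print(nth_prime(10))  # 29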

If you are using the Playground you can prompt the model as shown in the following screenshot:

Playground Example

When using the Playground, you will notice three different roles: system, user, and assistant. The system message is optional, but it helps set the overall behavior of the assistant. In the example above, only a user message is included; it prompts the model directly. For simplicity, all of the examples, except where explicitly noted, use only the user message to prompt the gpt-3.5-turbo model. The assistant message in the example above corresponds to the model's response.
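Outside the Playground, these same roles appear as the messages you pass to the chat API. Below is a minimal sketch using the OpenAI Python SDK, assuming the openai package is installed and an API key is set in your environment (the exact client interface may vary between SDK versions):

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable by default

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # Optional: the system message sets the assistant's overall behavior.
        {"role": "system", "content": "You are a helpful assistant."},
        # The user message is the prompt itself.
        {"role": "user", "content": "What is the capital of the United States of America?"},
    ],
)

# The assistant message holds the model's response.
print(response.choices[0].message.content)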

The output might be unexpected or far from the task you want to accomplish. In fact, this basic example highlights the need to provide more context or instructions about what, specifically, you want the system to achieve. This is what prompt engineering is all about.

Simple prompts

Remember, even though simple prompts can be effective, the quality of the responses depends on the amount of information you provide and how well the prompt is constructed. A prompt can include various elements such as the question or instruction for the model, as well as additional details like context, inputs, or examples. Using these elements effectively can significantly improve the results.
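For example, a prompt that combines an instruction with context and input data might look like the following (the review text here is purely illustrative):

Enter in the user prompt:
Classify the sentiment of the product review below as positive, negative, or neutral.
Context: The review was left on an online store's page for a stainless steel water bottle.
Review: The bottle keeps drinks cold all day, but the lid started leaking after a week.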

Generating novel content

While generative AI models are trained on existing data, they have the remarkable ability to produce new and original content. Their capacity to recombine concepts in imaginative ways leads to innovative outputs that can be quite compelling.

Try a prompt like this:

Enter in the user prompt:
Write a limerick about the Python programming language.

How did you find the limerick? If it didn't meet your expectations, feel free to ask the chat session to generate a new one.

Now, let's explore the available parameters. Find the Temperature field in the right column of the chat interface and set it to zero. What do you observe when you retry the prompt?

The Temperature parameter determines the level of "creativity" allowed for the model. At low values of Temperature, the model is more likely to pick the highest-probability tokens, so its responses show little variation from run to run. Higher values of Temperature increase the likelihood of sampling lower-probability tokens, allowing for more creative but less predictable outputs.
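The same parameter is available when calling the model programmatically. Here is a minimal sketch, again assuming the OpenAI Python SDK with an API key configured:

from openai import OpenAI

client = OpenAI()

prompt = "Write a limerick about the Python programming language."

# Try the same prompt at two different Temperature settings.
for temperature in (0.0, 1.2):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0.0 is most deterministic; higher values vary more
    )
    print(f"temperature={temperature}:\n{response.choices[0].message.content}\n")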