When interacting with a Large Language Model (LLM), the conversation is typically organized into three distinct roles: System, User, and Assistant. These roles help the model understand who is speaking and how it should behave.
Here are easy-to-understand examples of the three roles in a typical LLM conversation, using the metaphor of ordering a pizza.
1. The System Role
Think of this as the initial instructions you give a new employee on their first day. You're not talking to the customer; you're telling the employee how to do their job, what personality to adopt, and which rules to follow.
Example System Prompt:
You are a friendly and helpful pizza-ordering assistant. Your job is to: 1. Greet the customer warmly. 2. Ask about the size (Small, Medium, Large). 3. Ask about toppings, but always suggest pepperoni as a popular choice. 4. Keep your responses short and cheerful. Do not discuss anything other than pizza.
What it does: This sets the stage. The LLM now knows its identity (a pizza assistant), its tone (friendly), its goal (to take an order), and its limits (only talk about pizza).
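In code, these roles typically appear as a list of message dictionaries. As a minimal sketch (assuming the widely used OpenAI-style chat format, where each message is a dict with "role" and "content" keys; other APIs name these fields differently), the system prompt above might look like:

```python
# Sketch: the system prompt as the first entry in a chat-message list.
# (Assumes OpenAI-style {"role", "content"} dicts; not the only format.)
system_message = {
    "role": "system",
    "content": (
        "You are a friendly and helpful pizza-ordering assistant. "
        "Greet the customer warmly, ask about the size (Small, Medium, "
        "Large), suggest pepperoni as a popular topping, and keep your "
        "responses short and cheerful. Do not discuss anything other "
        "than pizza."
    ),
}

# The system message conventionally comes first in the conversation list.
messages = [system_message]
```

The model doesn't treat this text as dialogue; it reads it as standing instructions that apply before any user turn.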
2. The User Role
This is you, the customer. You're the one making requests, asking questions, and providing information. Your messages drive the conversation forward.
Example User Messages:
- "Hi, I'd like to order a pizza."
- "I'll get a large, please."
- "Hmm, what are my topping options?"
- "Actually, make that a medium with mushrooms instead."
What it does: The user's input is the problem or query that the assistant needs to respond to.
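Continuing the same sketch (again assuming OpenAI-style message dicts), each thing the customer types becomes a message with role "user", appended to the list in order:

```python
# Sketch: a user turn appended to the running conversation list.
messages = [
    {"role": "system", "content": "You are a friendly pizza assistant."},
]

# Every customer message is stored with role "user".
messages.append({"role": "user", "content": "Hi, I'd like to order a pizza."})
```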
3. The Assistant Role
These are the LLM's own previous responses in the conversation. They represent what the "employee" has already said, which helps the model stay consistent and remember the context of the ongoing chat.
Example Assistant Messages (following the System instructions):
- "Hi there! Welcome to PizzaBot. What size pizza can I get for you today?"
- "Great choice! A large pizza. Would you like any toppings? Pepperoni is our most popular!"
- "Of course! We have pepperoni, mushrooms, onions, bell peppers, and sausage. What would you like?"
What it does: These messages show the LLM its own "history" in the conversation, so it knows the last thing it said was, "What would you like?" and the next response should logically follow from that.
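In the same sketch format, the model's replies are stored back into the list with role "assistant", so the history alternates between user and assistant turns after the initial system message (an OpenAI-style convention, assumed here for illustration):

```python
# Sketch: storing the model's reply so the next request carries the history.
messages = [
    {"role": "system", "content": "You are a friendly pizza assistant."},
    {"role": "user", "content": "Hi, I'd like to order a pizza."},
]

# After the model answers, its reply is appended with role "assistant".
reply = "Hi there! Welcome to PizzaBot. What size pizza can I get for you today?"
messages.append({"role": "assistant", "content": reply})

# The next customer turn follows it, and the roles keep alternating.
messages.append({"role": "user", "content": "I'll get a large, please."})
```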
Putting It All Together in a Chat Log
(System sets the rules)
System: "You are a friendly pizza assistant. Only talk about pizza. Keep it short."
(The conversation begins)
User: "Hi, I want to order a pizza."
Assistant: "Hello! What size would you like: Small, Medium, or Large?"
User: "Medium, please."
Assistant: "Excellent! Medium it is. What toppings can I add for you? I recommend pepperoni!"
User: "Sure, add pepperoni."
Assistant: "Perfect! One medium pepperoni pizza coming up. Your order will be ready in 20 minutes!"
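The whole chat log above can be written as a single message list, oldest turn first. This sketch assumes an OpenAI-style chat format; the commented-out API call and model name at the end are illustrative, not a prescribed client:

```python
# Sketch: the full pizza conversation as one message list, oldest first.
messages = [
    {"role": "system", "content": "You are a friendly pizza assistant. Only talk about pizza. Keep it short."},
    {"role": "user", "content": "Hi, I want to order a pizza."},
    {"role": "assistant", "content": "Hello! What size would you like: Small, Medium, or Large?"},
    {"role": "user", "content": "Medium, please."},
    {"role": "assistant", "content": "Excellent! Medium it is. What toppings can I add for you? I recommend pepperoni!"},
    {"role": "user", "content": "Sure, add pepperoni."},
    {"role": "assistant", "content": "Perfect! One medium pepperoni pizza coming up. Your order will be ready in 20 minutes!"},
]

# On every request the ENTIRE list is sent again -- the model is stateless
# and reconstructs the conversation from this history each time.
# With an OpenAI-style client (illustrative only):
#   response = client.chat.completions.create(model="gpt-4o", messages=messages)
```

Sending the full list each time is the key point: the "memory" lives in this list, not in the model.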
Simple Analogy
- System: The Director behind the scenes, whispering to the actor how to play their part.
- User: The Other Actor on stage, delivering lines and cues.
- Assistant: The Script of what our main actor (the LLM) has already said, so they know where they are in the scene.
In essence: The System defines the how, the User provides the what, and the Assistant tracks what has been said.