Prompting tips by LLM family
Role Structure
- GPT (OpenAI): uses system, user, and assistant roles explicitly.
- LLaMA (Meta): no native role handling, but roles can be simulated with text cues.
- Mistral: no role handling; follows text cues and prompt templates.
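The role handling above can be sketched in a few lines: explicit role fields for chat-style APIs, and a flattened transcript with text cues for models without native roles. This is a minimal sketch; the `### Role:` tag format is illustrative, not a required convention.

```python
# Explicit roles, as used by chat-style APIs (e.g., system/user/assistant messages).
chat_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this article in two sentences."},
]

def flatten_roles(messages):
    """Simulate roles with text cues for models without native role handling."""
    lines = [f"### {m['role'].capitalize()}:\n{m['content']}" for m in messages]
    lines.append("### Assistant:")  # trailing cue tells the model to begin its reply
    return "\n\n".join(lines)

prompt = flatten_roles(chat_messages)
```

The same message list then works for both kinds of model: pass it as-is to a chat API, or flatten it into a single string for a text-completion interface.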
System Prompt Support
- GPT (OpenAI): ✅ full support; the system message defines model behavior (e.g., "You are a helpful assistant.").
- LLaMA (Meta): 🚫 no native support; embed the instruction in the user prompt manually.
- Mistral: 🚫 no native support; simulate via a prompt prefix.
Formatting Style
- GPT (OpenAI): natural conversation; JSON-compatible and markdown-friendly.
- LLaMA (Meta): structured; requires consistent formatting for few-shot and instruct modes.
- Mistral: concise and direct; works well with bullet lists, steps, or templates.
Few-shot Learning
- GPT (OpenAI): highly effective with few-shot examples.
- LLaMA (Meta): effective, especially with CodeLLaMA and LLaMA-Instruct variants.
- Mistral: can benefit from few-shot, though it prefers minimal examples.
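The consistency requirement above matters more than the number of examples: every example should use the identical label layout. A minimal sketch of a few-shot prompt builder (the `Input`/`Output` labels are illustrative):

```python
def few_shot_prompt(examples, query, input_label="Input", output_label="Output"):
    """Build a few-shot prompt from (input, output) pairs using one
    consistent layout, then append the unanswered query in the same shape."""
    parts = [f"{input_label}: {x}\n{output_label}: {y}" for x, y in examples]
    parts.append(f"{input_label}: {query}\n{output_label}:")
    return "\n\n".join(parts)

# Two examples are often enough for smaller models that prefer minimal context.
examples = [("cheerful", "positive"), ("dreadful", "negative")]
prompt = few_shot_prompt(examples, "delightful")
```

Because the builder takes the label names as parameters, the same function covers bullet-style or step-style templates by swapping labels.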
Chain-of-Thought Reasoning
- GPT (OpenAI): strong performance with "Let's think step by step" style prompts.
- LLaMA (Meta): explicit CoT instructions improve performance significantly.
- Mistral: supports CoT well, especially in instruct-tuned variants.
Prompt Length Handling
- GPT (OpenAI): handles long prompts well, especially GPT-4.1 with its large context window.
- LLaMA (Meta): medium capacity; recent models like LLaMA 3 support longer prompts.
- Mistral: smaller context (e.g., 32K tokens); favors concise prompts.
Fine-tuning Response Format
- GPT (OpenAI): easily aligns to JSON, tables, and multi-part instructions.
- LLaMA (Meta): needs more specificity to produce consistent formatting.
- Mistral: consistent if given strict format constraints.
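A "strict format constraint" usually means spelling out the exact output shape in the prompt and validating the completion before using it, retrying on failure. A minimal sketch, assuming a hypothetical two-field schema:

```python
import json

# The schema in the instruction is an illustrative assumption, not a standard.
FORMAT_INSTRUCTION = (
    "Respond with JSON only, matching this shape exactly: "
    '{"summary": "<string>", "tags": ["<string>", ...]}. '
    "Do not add any prose outside the JSON."
)

def validate_response(text):
    """Check a completion against the requested shape; models that need
    strict constraints also benefit from a retry loop around this check."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(data.get("summary"), str) and isinstance(data.get("tags"), list)
```

Rejecting and re-prompting on a failed check is usually cheaper than trying to repair a malformed completion after the fact.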
Use of Delimiters
- GPT (OpenAI): often uses """ or ### to separate instructions from input.
- LLaMA (Meta): works best when examples and instructions are clearly separated.
- Mistral: benefits from template-like structures, including consistent line breaks.
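Wrapping the input in a delimiter keeps the instruction visually and token-wise separate from the text the model should operate on, which also reduces the chance of instructions inside the input being followed. A minimal sketch; either delimiter style from the list above works:

```python
def delimited_prompt(instruction, text, delimiter='"""'):
    # Instruction first, then the input fenced between delimiter lines
    # (triple quotes here; ### works equally well for most models).
    return f"{instruction}\n\n{delimiter}\n{text}\n{delimiter}"

prompt = delimited_prompt(
    "Summarize the text below in one sentence.",
    "Large language models respond more reliably to clearly structured prompts.",
    delimiter="###",
)
```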
Multimodal Input Handling
- GPT (OpenAI): GPT-4o supports image and audio input.
- LLaMA (Meta): current LLaMA models are text-only; future LLaMA 3 releases may add modalities.
- Mistral: text-only for now.