What a prompt is
A prompt is not just a question. It is the complete set of instructions you give the model: what task to perform, what constraints to follow, what format to produce, what role to adopt, and what context to use. The model has no prior knowledge of what you want. Every expectation you leave unstated will be filled by the model's default, which is shaped by its training data. If you ask for a summary without specifying length, tone, or audience, the model will produce whatever kind of summary is most common in its training data, usually a formal, medium-length, somewhat generic paragraph. Prompting is the practice of making your expectations explicit.
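To make the contrast concrete, here is a minimal sketch of the same request, underspecified versus explicit. Both are plain strings; the wording of the explicit version is invented for illustration.

```python
# An underspecified prompt: length, tone, and audience are all left
# to the model's defaults.
vague = "Summarise this article."

# An explicit prompt: the same task with the expectations stated.
explicit = (
    "Summarise this article in three bullet points, "
    "in plain language for a non-specialist reader, "
    "focusing on the practical findings."
)

print(vague)
print(explicit)
```

The explicit version costs a few extra words and removes three decisions the model would otherwise make for you.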
The model's defaults are not your defaults
When you first interact with a model, observe what it does that you did not ask for. It gives long answers to short questions. It adds explanations you did not request. It uses a particular register, a particular structure, a particular set of assumptions about what "helpful" means. These are defaults, derived from training. Not wrong; just not yours. Effective prompting begins with recognising these defaults and steering the model away from them, towards what you actually need. It is a process of negotiation: you push, the model responds, you adjust.
Structure your prompts
A well-structured prompt has distinct sections: the role ("you are a quality assessor for academic writing"), the task ("evaluate the following text against these five criteria"), the constraints ("respond in JSON format, with a score from 1 to 5 for each criterion and a one-sentence justification"), and the input (the actual text to process). Separating these sections makes the prompt easier to write, easier to debug, and easier to reuse. When a prompt does not produce the expected output, you can isolate which section needs adjustment rather than rewriting the whole thing.
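The four sections above can be kept as separate pieces and joined only at the end, so each can be edited or swapped without touching the others. A minimal sketch, using the quality-assessor example from this section; the function name `build_prompt` and the section labels are assumptions for illustration.

```python
def build_prompt(role: str, task: str, constraints: str, input_text: str) -> str:
    """Join the four sections with blank lines so each stays independently editable."""
    return "\n\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Input:\n{input_text}",
    ])

prompt = build_prompt(
    role="You are a quality assessor for academic writing.",
    task="Evaluate the following text against these five criteria.",
    constraints=("Respond in JSON format, with a score from 1 to 5 "
                 "for each criterion and a one-sentence justification."),
    input_text="The study demonstrates a novel approach to error analysis.",
)
print(prompt)
```

When the output is wrong, you change one argument and rerun, rather than untangling a single block of prose.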
Few-shot examples
One of the most effective prompting techniques is to include examples of the desired input-output pairs. If you want the model to extract action items from meeting notes, show it one or two examples of meeting notes and the corresponding action items you expect. The model learns the pattern from the examples and applies it to new inputs. This is called few-shot prompting. It is particularly useful when the task involves a format or style that is hard to describe in words but easy to demonstrate.
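A few-shot prompt is just the demonstrations laid out before the new input, in the same shape you want back. A minimal sketch for the meeting-notes task above; the example notes and action items are invented for illustration.

```python
# Worked (input, output) pairs that demonstrate the extraction pattern.
EXAMPLES = [
    ("Notes: Dana will draft the Q3 report. Sam asked for a venue by Friday.",
     "- Dana: draft the Q3 report\n- Sam: book a venue by Friday"),
    ("Notes: No decisions made; next sync moved to Tuesday.",
     "- (no action items)"),
]

def few_shot_prompt(new_notes: str) -> str:
    """Prepend the demonstrations, then leave the new input's output open."""
    parts = ["Extract action items from the meeting notes."]
    for notes, actions in EXAMPLES:
        parts.append(f"{notes}\nAction items:\n{actions}")
    parts.append(f"Notes: {new_notes}\nAction items:")
    return "\n\n".join(parts)

print(few_shot_prompt("Priya to send the contract; review due next week."))
```

Ending the prompt at "Action items:" invites the model to continue the established pattern for the new input.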
Iterative refinement
No prompt works perfectly on the first try. Prompting is iterative: you write a prompt, test it, examine the output, identify where it diverges from what you want, and adjust. Sometimes the adjustment is small (adding "do not include a greeting" because the model keeps adding one). Sometimes it requires rethinking the approach (switching from a free-form instruction to a structured template with examples). The skill is diagnosis: identifying what went wrong and knowing which part of the prompt to change.
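One refinement step can be as small as appending a single constraint once you have diagnosed the gap. A sketch of the greeting example above; the version names and wording are assumptions for illustration.

```python
# Version 1: the task with no constraints on framing.
prompt_v1 = "Write a status update for the weekly engineering email."

# Diagnosis: the model keeps opening with "Hi team," -- a default,
# not an error. Adjustment: state the missing constraint explicitly.
prompt_v2 = prompt_v1 + " Do not include a greeting or sign-off."

print(prompt_v2)
```

Keeping versions side by side like this makes the diagnosis auditable: you can see exactly which added constraint changed the behaviour.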
Examples
Steering against verbose defaults
You ask a model to answer a factual question. It responds with three paragraphs of context, a detailed explanation, and a caveat. You wanted one sentence. You add "respond in a single sentence with no additional context" to your prompt. The next response is exactly one sentence. The model was not being unhelpful; its default is to be thorough. Your job is to override that default when it does not serve you.