
Priming: How to Steer a Language Model Like a Professional

  • Tom Hansen
  • Jul 15, 2025
  • 3 min read

Imagine meeting a new colleague and saying, “Just write me a report on the future of leadership.” No context. No purpose. No direction. What you get back might be well phrased, but it’s unlikely to be useful.


That’s exactly what happens in most conversations with a large language model. You get something that sounds like a response, but doesn’t actually help you in practice. Not because the model is weak, but because you asked too little.


Most people use AI without priming. They ask questions without considering what the model should focus on. As a result, they trigger answers that only skim the surface. It’s like using a brilliant analyst as a reference tool. You get polished language, but no real edge.


The Model Follows Your Thinking, Not Your Intention

Language models don’t operate on logic. They operate on statistical patterns. That means they predict what you probably expect, based on the words you have written so far. If you don’t manage the context, the model will manage it for you. And then you’ll get the most likely answer, not the most valuable one.


The upside is that this mechanism also gives you real control. The model weighs what you introduce first. You can shape its focus and reasoning if you understand how it builds on context.


It’s about sequence and depth. That’s where priming comes in. Not as a technique, but as a method. It is based on the same principles as human cognition. You start broad, narrow the focus gradually, and guide the model through a sequence of reflection, not just information.


The Conversation Strengthens When You Build It Right

Here’s a concrete example. Imagine you want to understand how AI strategies are evolving in business right now. You ask the model:


“What are the key AI strategies in companies in 2025?”


It sounds reasonable, but the response will most likely combine insights from 2023, some outdated white papers, and a few plausible guesses. It might sound smooth, but it’s probably not accurate. And rarely relevant.


Try this method instead.


Start by broadly activating the topic:

“What can you tell me about the AI strategy considerations companies have been working with in recent years?”

Once the model responds, narrow the scope:


“What can we know with confidence that has been published between July 2024 and July 2025? Please use your web tools to find the most reliable and up-to-date sources.”

Why does this work? Because you first activate a thematic vector space, meaning the model’s understanding of the topic’s breadth and key concepts. Then you narrow it to verifiable, time-specific evidence. That prevents the model from responding on autopilot. It forces it to retrieve real knowledge instead of guessing.
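The broad-then-narrow sequence above can be sketched as a two-turn chat history. This is a minimal illustration, not part of the article: the message format assumes the common chat-API shape (role/content dictionaries), and the placeholder assistant reply stands in for the model’s actual broad overview.

```python
# Sketch of the broad-then-narrow priming sequence as a chat history.
# The two user prompts are taken verbatim from the article; the
# assistant message is a placeholder for the model's first reply.

def primed_conversation(broad_prompt: str, narrowing_prompt: str) -> list[dict]:
    """Build a two-step message history: broad activation first,
    then a narrowing follow-up appended after the model's reply."""
    history = [{"role": "user", "content": broad_prompt}]
    # In a real session, the model's actual answer would be appended here.
    history.append({"role": "assistant", "content": "<model's broad overview>"})
    history.append({"role": "user", "content": narrowing_prompt})
    return history

messages = primed_conversation(
    "What can you tell me about the AI strategy considerations "
    "companies have been working with in recent years?",
    "What can we know with confidence that has been published between "
    "July 2024 and July 2025? Please use your web tools to find the "
    "most reliable and up-to-date sources.",
)
```

The point of the structure is simply that the narrowing question arrives with the broad framing already in context, so the model answers inside the space you opened rather than from a cold start.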


This kind of methodical adjustment is something very few people know how to use. But it’s precisely what makes the difference between an “okay” answer and one that is usable, trustworthy, and strategically relevant.


Priming Means Structuring the Model’s Mental Space Before You Ask It to Work

It’s not decoration. And it’s not technique. It’s cognitive steering. Instead of jumping straight to what you want, you do three things:

  • You introduce the topic so the model doesn’t misread the context.
  • You focus the model on the concepts you actually want to explore.
  • You show what the answer will be used for, so the output is shaped by purpose, not just format.

It takes one extra minute. And it changes the answer entirely.


I use a three-phase structure for complex tasks with AI. It was created out of necessity. Shallow answers simply don’t work when dealing with strategy, analysis, or communication.


Phase 1: You Shape the Conversation Long Before the Question

“I’m working on [topic]. My purpose is to [what you want to achieve]. I’d like to start with a broad understanding of [topic], and then go deeper into [key concept 1 and 2].”


Phase 2: Make the Answer Current and Verifiable

“Let’s now look at what is actually known from [specific period]. Please use current sources and indicate what can be verified.”


Phase 3: Now You Can Ask for Something Concrete

“My final task is to [solve this task] with the goal of [achieving something specific]. Please use the understanding we’ve built so far.”

Or: “Run this prompt. Please use the understanding we’ve built so far. [Prompt]”
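The three phase templates can be filled in mechanically, which is a simple way to reuse the structure across tasks. This is an illustrative sketch only: the parameter names (topic, purpose, concepts, period, task, goal) are my own labels for the article’s bracketed slots.

```python
# Sketch: fill the article's three phase templates from named slots.
# All parameter names are illustrative labels for the bracketed
# placeholders in the templates above.

def phase_prompts(topic, purpose, concepts, period, task, goal):
    """Return the three phase prompts with the slots filled in."""
    phase1 = (
        f"I'm working on {topic}. My purpose is to {purpose}. "
        f"I'd like to start with a broad understanding of {topic}, "
        f"and then go deeper into {' and '.join(concepts)}."
    )
    phase2 = (
        f"Let's now look at what is actually known from {period}. "
        "Please use current sources and indicate what can be verified."
    )
    phase3 = (
        f"My final task is to {task} with the goal of {goal}. "
        "Please use the understanding we've built so far."
    )
    return [phase1, phase2, phase3]
```

Each prompt is sent as its own turn, in order, so that every phase builds on the context the previous one established.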


This structure doesn’t require technical prompt skills. It simply requires you to think with the model, not just ask it to think for you. And if you do, you’ll get responses that are not only more precise, but also more surprising, more reflective, and much closer to what you actually need.


It’s not about finding a cleverer prompt. It’s about using the model as it was designed to be used. It is a linguistic network that follows your focus. You direct the attention. The model follows.
