outpl.ai · AI Adoption

AI IS NOT HUMAN-LEVEL
INTELLIGENCE.

SHAPE YOUR PROMPTS.

Six concepts that separate AI from human thinking — and explain exactly how outpl.ai turns your market context into always-on campaigns. Understanding these six ideas is the difference between using AI and owning it.

The Core Model

IN —
FUNCTION — OUT.

01
IN · Context · You
IN
How you provide information matters to AI. Models call this context. Agents ask a series of predetermined questions, and based on your context and answers, new questions surface to build more context.

Each word is a mathematical expression. Tokens. Your answers are tokens. Your prompts are groups of tokens — dense mathematical signals the model processes as direction.

You are the IN. The quality of your context determines everything downstream.
02
FUNCTION · Model · AI
FUNCTION
Models pay attention to context. "Attention Is All You Need" (2017) is the paper that introduced the transformer architecture behind modern AI. Models use mathematics, but also probability: they are not deterministic like traditional software.

Each word in your prompt is a mathematical expression the model weighs. It is the algorithm, the pre-training, the inference approach, and the computing power — all taking your IN and generating text, video, or audio.

The Model is the FUNCTION. It transforms your context into output.
03
OUT · Response · Result
OUT
The OUT is the result of the FUNCTION: the highest-probability output given your IN. Not the "correct" answer. The most probable one.

It is determined by two things: your proficiency at IN, and the development of the model itself. The better your context, the sharper your responses.

Responses are the OUT. Shape your prompts. Own your output.
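The core model can be sketched in a few lines of Python. This is an illustration only: `build_context` and `model` are hypothetical stand-ins, not outpl.ai internals.

```python
def build_context(answers: dict) -> str:
    """IN: turn your answers into one context string (illustrative)."""
    return " | ".join(f"{k}: {v}" for k, v in answers.items())

def model(context: str) -> str:
    """FUNCTION: a stand-in for the model. It only echoes the context here."""
    return f"Campaign draft based on -> {context}"

# OUT: the result depends entirely on the quality of the IN.
out = model(build_context({"audience": "freelance designers", "offer": "brand kits"}))
```

Change the answers dict and the output changes with it: the FUNCTION is fixed, the IN is yours.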
The AI model.

GPT
This is what it is.

G · GENERATIVE
Creates, never retrieves.

Every campaign is built fresh from your context — no template is ever pulled. The model generates, it doesn't search.

P · PRE-TRAINED
Knows markets. Needs yours.

All marketing knowledge is already inside it. You add your specific context — audience, offer, voice — on top of that foundation.

T · TRANSFORMER
Reads your whole brief at once.

Every word shifts the output simultaneously. Attention weighs everything in your prompt at the same time — not word by word.

TOK · TOKENS
AI reads numbers, not words.

Every word you write becomes a number. Precise language = richer mathematical signal = sharper campaign outputs. Always.

PT · PROMPT TUNING
6 types. Each shapes output differently.

How you frame your prompt determines what you get. Each of the 6 prompt types activates different model behaviours — master all six.

FN · YOUR FUNCTION
IN → FUNCTION → OUT

You are the input. The LLM is the function. The campaign is the output. Your proficiency at IN determines the quality of OUT.
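The TRANSFORMER idea, every word weighed at the same time, can be sketched with a softmax over signal scores. The scores below are invented for illustration; real models learn them during training.

```python
import math

def softmax(scores: list[float]) -> list[float]:
    """Turn raw scores into attention weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy attention: every token is scored against the whole prompt at once.
tokens = ["reach", "freelancers", "in", "Barcelona"]
scores = [0.5, 2.0, 0.1, 1.5]  # invented: higher = more signal weight
weights = softmax(scores)

# "freelancers" and "Barcelona" dominate the weighted mix:
# the whole prompt is weighed simultaneously, not word by word.
```

Nothing is processed in sequence here; all four weights come out of one calculation over the whole list.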

Understanding Tokens

EVERY WORD IS
MATHEMATICS.

An example prompt, split into the tokens the model reads:

I · need · to · reach · freelancers · who · design · brands · for · startups · in · Barcelona · with · no · budget · for · ads

Tokens like "freelancers", "Barcelona", and "no budget for ads" carry high signal weight in the model's attention layer.

AI doesn't read your words the way you do. It converts every word — and sometimes parts of words — into numbers called tokens. Then it calculates mathematical relationships between all those numbers simultaneously.

This is why precise language matters. Vague prompts create vague numerical signals. Specific, contextual prompts create rich signals the model can work with.
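A toy sketch of the word-to-number step. Real tokenizers (such as the BPE used by GPT models) split sub-words and use a learned vocabulary; this version just assigns each new word the next free ID:

```python
def tokenize(text: str, vocab: dict) -> list[int]:
    """Map each word to a numeric ID, growing the vocabulary as we go (toy version)."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)  # new words get the next free ID
        ids.append(vocab[word])
    return ids

vocab: dict = {}
ids = tokenize("reach freelancers who design brands in Barcelona", vocab)
# The model never sees the words. Only this list of numbers.
```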

At outpl.ai, your agents are trained to extract high-signal tokens from your answers — audience, problem, offer, context — and build them into the most productive context window possible for your campaign outputs.

The better your IN, the sharper the OUT.

My AI Adoption · Instituto i3
"THE MOST IMPORTANT
SKILL FOR THE
NEXT GENERATION
WILL BE LEARNING
HOW TO LEARN."
Learning · My AI Adoption

HOW
AI LEARNS.

The same principles that make a person a good learner make a model more precise. Recognise these five nodes and you understand how AI learns.

01
FAIL TO LEARN
Models improve through error. Backpropagation is exactly that: measuring failure and correcting weights. Without error, there is no learning.
02
CONTEXT
A model without context produces generic results. The more context you provide — audience, offer, tone, constraints — the more relevant the output.
03
SIMPLIFY
The best prompts are simple and precise. Unnecessary complexity introduces noise. The model responds better to clear, direct instructions.
04
REINFORCEMENT LEARNING
RLHF: Reinforcement Learning from Human Feedback. Current models were trained with millions of human corrections, and feedback from conversations can reinforce future versions of the system.
05
PREDICTION
Every language model is, fundamentally, a prediction machine. It predicts the most likely token given the context. It does not "think" — it predicts, token by token.
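That prediction step can be reduced to a single line of code. The probabilities below are invented for illustration; a real model computes a distribution over its whole vocabulary at every step.

```python
# A language model, reduced to its core: pick the most probable next token.
next_token_probs = {
    "campaign": 0.41,
    "email": 0.32,
    "invoice": 0.22,
    "poem": 0.05,
}

def predict(probs: dict) -> str:
    """Greedy decoding: return the highest-probability next token."""
    return max(probs, key=probs.get)

predicted = predict(next_token_probs)  # "campaign"
```

Repeat this step, feeding each chosen token back in, and you have text generation: prediction, token by token, until complete.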
The inference system

DESIGNED TO
INFER.

This is how an AI agent works internally. It does not execute instructions step by step like traditional software — it receives, learns, corrects, selects, and produces.

Input
YOUR
PROMPT
1
Receive guidance
DESIGNED
TO
RECEIVE
CONTEXT
The system prompt defines the role, tone, and constraints of the agent before processing your input.
2
Experiment
DESIGNED
TO
EXPERIMENT
The model generates multiple internal response paths before deciding which one to produce. There is no single answer.
3
Correct
DESIGNED
TO
CORRECT
The alignment layer, shaped by RLHF during training, filters and corrects candidate outputs before anything is produced.
4
Learn + Select
DESIGNED
TO
LEARN
AND SELECT
From all generated paths, the model selects the one that maximises probability given your specific context.
5
Infer
DESIGNED
TO
INFER
Inference is the final result: the most likely response given the entire process. Token by token, until complete.
YOUR
OUTPUT
Output

The agent does not "think" in the human sense — it receives guidance, experiments, corrects, learns, selects, and infers. All in milliseconds.
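A minimal sketch of those five steps in Python. The scoring, filtering, and `run_agent` helper are stand-ins invented for illustration; nothing here reflects a real model's internals.

```python
import random

def run_agent(system_prompt: str, user_prompt: str, n_paths: int = 3) -> str:
    """Sketch of the five inference steps, with stand-in scoring."""
    random.seed(0)  # deterministic for the example
    # 1. Receive guidance: the system prompt frames the task before your input.
    framed = f"{system_prompt}\n{user_prompt}"
    # 2. Experiment: generate several candidate response paths.
    candidates = [(f"path-{i}: {framed[:20]}...", random.random())
                  for i in range(n_paths)]
    # 3. Correct: an alignment-style filter drops low-scoring candidates.
    kept = [c for c in candidates if c[1] > 0.1] or candidates
    # 4. Learn + select: keep the path that maximises the score.
    best = max(kept, key=lambda c: c[1])
    # 5. Infer: emit the selected response.
    return best[0]
```

A real model scores candidates with learned probabilities rather than random numbers, but the shape of the pipeline is the same: frame, branch, filter, select, emit.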

Prompt Tuning

6 PROMPT TYPES.
MASTER ALL SIX.

Each prompt type activates a different mode in the model. Most people use only one or two. The agents at outpl.ai use all six — automatically — based on the context you've built.

01 ──
Zero-shot
No examples given. Pure instruction. The model draws from pre-training alone.
"Write a follow-up email for a demo that went well."
02 ──
Few-shot
Give 2–3 examples before the request. The model pattern-matches your style.
"Here are 3 emails I liked. Write one like these."
03 ──
Chain of thought
Ask the model to reason step by step before answering. Dramatically improves complex outputs.
"Think through who my ideal buyer is before writing."
04 ──
Role / Persona
Assign a character or expertise before the task. The model adopts that voice and knowledge base.
"Act as a GTM strategist for solo operators."
05 ──
Contextual
Front-load all relevant background before the request. The richer the context, the sharper the output.
"My audience is X, my offer is Y, my channel is Z. Now write…"
06 ──
Iterative refinement
Use the model's own output as the next prompt. Build through conversation.
"Good. Now make it shorter and remove the CTA."
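The six types can be captured as templates. The wording is illustrative, not outpl.ai's actual prompts, and `build_prompt` is a hypothetical helper:

```python
# One template per prompt type. Placeholders in braces are filled per campaign.
PROMPT_TYPES = {
    "zero_shot": "Write a follow-up email for a demo that went well.",
    "few_shot": "Here are 3 emails I liked: {examples}. Write one like these.",
    "chain_of_thought": "Think step by step about who my ideal buyer is, then write.",
    "role": "Act as a GTM strategist for solo operators. {task}",
    "contextual": "Audience: {audience}. Offer: {offer}. Channel: {channel}. Now write {task}",
    "iterative": "{previous_output}\nGood. Now make it shorter and remove the CTA.",
}

def build_prompt(kind: str, **fields) -> str:
    """Fill one of the six templates; an unknown kind raises KeyError."""
    return PROMPT_TYPES[kind].format(**fields)
```

The same task framed through different templates activates different model behaviours, which is why mastering all six matters.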

SHAPE YOUR
PROMPTS.

Your agents already know all six prompt types. You just need to build the context.

Meet your agents → Back to home