outpl.ai · AI Adoption

AI IS NOT HUMAN-LEVEL
INTELLIGENCE.

SHAPE YOUR PROMPTS.

Six concepts that separate AI from human thinking — and explain exactly how outpl.ai turns your market context into always-on campaigns. Understanding these six ideas is the difference between using AI and owning it.

The Core Model

IN —
FUNCTION — OUT.

01
IN · Context · You
IN
The way you provide information matters. Models call this context. Agents ask a series of predetermined questions, and based on your context and answers, new questions surface to build more context.

Each word is a mathematical expression. Tokens. Your answers are tokens. Your prompts are groups of tokens — dense mathematical signals the model processes as direction.

You are the IN. The quality of your context determines everything downstream.
02
FUNCTION · Model · AI
FUNCTION
Models pay attention to context. "Attention Is All You Need" is the 2017 paper that introduced the Transformer architecture behind modern AI. Models use mathematics, but also probability: they are not deterministic like traditional software.

Each word in your prompt is a mathematical expression the model weighs. The FUNCTION is the algorithm, the pre-training, the inference approach, and the computing power, all taking your IN and generating text, video, or audio.

The Model is the FUNCTION. It transforms your context into output.
03
OUT · Response · Result
OUT
The OUT is the result of the FUNCTION: the highest-probability output given your IN. Not the "correct" answer. The most probable one given your input.

It is determined by two things: your proficiency at IN, and the development of the model itself. The better your context, the sharper your responses.

Responses are the OUT. Shape your prompts. Own your output.
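The three-part model above can be sketched in a few lines of Python. The candidate outputs and their probabilities are invented for illustration; a real model scores thousands of tokens, but the shape is the same: the FUNCTION maps your IN to the most probable OUT, not to a single correct answer.

```python
import random

def function(context: str, candidates: dict) -> str:
    """The FUNCTION: maps your IN (context) to a probable OUT.
    `candidates` is a hypothetical probability distribution the
    model has computed; the values here are made up."""
    outputs = list(candidates.keys())
    weights = list(candidates.values())
    # Weighted sampling: the model does not retrieve, it samples
    # from a distribution shaped by your context.
    return random.choices(outputs, weights=weights, k=1)[0]

# Vague IN spreads probability across generic outputs.
vague = {"generic ad copy": 0.4, "generic email": 0.35, "generic post": 0.25}
# Specific IN concentrates probability on one sharp output.
specific = {"targeted freelancer campaign": 0.9, "generic ad copy": 0.1}

out = function("my audience, offer, and channel", specific)
```

Better context does not change the function; it reshapes the distribution the function samples from.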
The AI model.

GPT
Generative Pre-trained Transformer. This is what the name means.


G · GENERATIVE
Creates, never retrieves.

Every campaign is built fresh from your context — no template is ever pulled. The model generates, it doesn't search.

P · PRE-TRAINED
Knows markets. Needs yours.

All marketing knowledge is already inside it. You add your specific context — audience, offer, voice — on top of that foundation.

T · TRANSFORMER
Reads your entire text at once.

Every word shifts the output simultaneously. Attention weighs everything in your prompt at the same time — not word by word.

TOK · TOKENS
AI reads numbers, not words.

Every word you write becomes a number. Precise language = richer mathematical signal = sharper campaign outputs. Always.

PT · PROMPT TUNING
6 types. Each shapes the result differently.

How you frame your prompt determines what you get. Each of the 6 prompt types activates different model behaviours — master all six.

FN · YOUR FUNCTION
IN → FUNCTION → OUT

You are the input. The LLM is the function. The campaign is the output. Your proficiency at IN determines the quality of OUT.
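The T in GPT, the Transformer, is the piece that weighs every word at once. A minimal sketch of that idea, assuming made-up relevance scores for each word: softmax turns the scores into attention weights, and because they are normalized together, changing any one word shifts every other word's weight.

```python
import math

def attention_weights(scores: list) -> list:
    """Softmax: every word's score is weighed against ALL others
    simultaneously, not word by word. Scores are hypothetical."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented relevance scores of each prompt word toward the task.
words = ["launch", "a", "campaign", "for", "freelancers"]
scores = [2.0, 0.1, 3.0, 0.1, 2.5]
weights = attention_weights(scores)
# The weights sum to 1: attention is a shared budget across the prompt.
```

This is why filler words cost you: every low-signal word takes a slice of the same attention budget.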

Understanding Tokens

EVERY WORD IS
MATHEMATICS.

Hover over each token to see what the model reads:

I need to reach freelancers who design brands for startups in Barcelona with no budget for ads

Each highlighted token carries high signal weight in the model's attention layer.

AI doesn't read your words the way you do. It converts every word — and sometimes parts of words — into numbers called tokens. Then it calculates mathematical relationships between all those numbers simultaneously.

This is why precise language matters. Vague prompts create vague numerical signals. Specific, contextual prompts create rich signals the model can work with.
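A toy illustration of the words-to-numbers step. Real models use learned subword vocabularies (byte-pair encoding), so the IDs below are invented, but the principle holds: the model never sees your words, only the numbers they become.

```python
def toy_tokenize(text: str, vocab: dict) -> list:
    """Map each lowercase word to a token ID. Unknown words fall
    back to per-character IDs, a crude stand-in for real subword
    splitting. The vocabulary here is hypothetical."""
    ids = []
    for word in text.lower().split():
        if word in vocab:
            ids.append(vocab[word])
        else:
            # Unseen word: split into character-level fallback IDs.
            ids.extend(1000 + ord(c) for c in word)
    return ids

vocab = {"reach": 12, "freelancers": 97, "in": 3, "barcelona": 441}
tokens = toy_tokenize("Reach freelancers in Barcelona", vocab)
# tokens -> [12, 97, 3, 441]: four words, four numbers.
```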

At outpl.ai, your agents are trained to extract high-signal tokens from your answers — audience, problem, offer, context — and build them into the most productive context window possible for your campaign outputs.

The better your IN, the sharper the OUT.
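A minimal sketch of how answers could be assembled into a high-signal context window. The field names and format are hypothetical, not outpl.ai's actual pipeline; the point is the shape: front-load the specifics the model's attention can latch onto.

```python
def build_context(answers: dict) -> str:
    """Turn extracted answers into labeled context lines.
    Labels and layout are illustrative, not a real template."""
    lines = ["{}: {}".format(field.upper(), value)
             for field, value in answers.items()]
    return "\n".join(lines)

context = build_context({
    "audience": "freelancers who design brands for startups",
    "problem": "no budget for paid ads",
    "offer": "always-on organic campaigns",
})
# Context first, task last: the request lands on top of the signal.
prompt = context + "\n\nTASK: write the first campaign message."
```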

My AI Adoption · Instituto i3
"THE MOST IMPORTANT SKILL FOR THE NEXT GENERATION WILL BE LEARNING HOW TO LEARN."
Learning · My AI Adoption

HOW
AI LEARNS.

The same principles that make a person a good learner make a model more accurate. Recognizing these five nodes is understanding how AI thinks.

01
LEARN BY FAILING
Models improve through error. Backpropagation is exactly that: measuring the error and correcting the weights. Without error, there is no learning.
02
CONTEXT
A model without context produces generic results. The more context you provide (audience, offer, tone, constraints), the more relevant the result.
03
SIMPLIFY
The best prompts are simple and precise. Unnecessary complexity introduces noise. The model responds best to clear, direct instructions.
04
REINFORCEMENT LEARNING
RLHF: Reinforcement Learning from Human Feedback. Today's models were trained on millions of human corrections, and feedback on responses is folded back into training future versions.
05
PREDICTION
Every language model is, fundamentally, a prediction machine. It predicts the most probable token given the context. It does not "think": it predicts, token by token.
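The prediction node above can be made concrete. The scores (logits) below are invented for illustration; a real model produces one score per token in a vocabulary of tens of thousands. Softmax converts scores into probabilities, and the model's "answer" is simply the most probable continuation.

```python
import math

def next_token_probs(logits: dict) -> dict:
    """Softmax over per-token scores. The scores are hypothetical;
    subtracting the max is a standard numerical-stability trick."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

# Context: "Shape your ..." with made-up scores for a few candidates.
logits = {"prompts": 4.2, "future": 2.1, "campaigns": 3.0, "banana": -1.5}
probs = next_token_probs(logits)
prediction = max(probs, key=probs.get)  # most probable, not "correct"
```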
The inference system

DESIGNED TO
INFER.

This is how an AI agent works internally. It does not execute instructions step by step like traditional software — it receives, learns, corrects, selects, and produces.

Input
YOUR
PROMPT
1
Receives guidance
DESIGNED
TO
RECEIVE
CONTEXT
The system prompt defines the agent's role, tone, and constraints before your input is processed.
2
Experiment
DESIGNED
TO
EXPERIMENT
The model generates multiple internal response paths before deciding which one to produce. There is no single answer.
3
Correct
DESIGNED
TO
CORRECT
The alignment layer (tuned with RLHF) filters and corrects candidate output before anything is produced.
4
Learn + Select
DESIGNED
TO
LEARN
AND SELECT
Of all the paths generated, the model selects the one that maximizes probability given your specific context.
5
Infer
DESIGNED
TO
INFER
Inference is the final result: the most probable response given the entire process. Token by token, until complete.
YOUR
RESULT
Output

The agent does not "think" in the human sense — it receives guidance, experiments, corrects, learns, selects, and infers. All in milliseconds.
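The loop described above can be sketched schematically. The "model" here is a stand-in lookup table, not a real network; it shows the shape of inference, generating one token at a time conditioned on what came before, not the mathematics inside it.

```python
def generate(prompt: list, model: dict, max_tokens: int = 5) -> list:
    """At each step, pick the most probable next token given recent
    context, append it, and feed it back in. Stop at an end marker.
    `model` is a toy table mapping a 2-token context to a next token."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = model.get(tuple(tokens[-2:]), "<end>")
        if nxt == "<end>":
            break
        tokens.append(nxt)  # the output becomes part of the next input
    return tokens

# A toy "model": hypothetical context -> next-token mappings.
model = {
    ("shape", "your"): "prompts",
    ("your", "prompts"): "carefully",
}
result = generate(["shape", "your"], model)
# result -> ["shape", "your", "prompts", "carefully"]
```

Real inference replaces the lookup table with a full probability distribution at every step, but the loop is the same.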

Prompt Tuning

6 PROMPT TYPES.
MASTER ALL SIX.

Each prompt type activates a different mode in the model. Most people use only one or two. The agents at outpl.ai use all six — automatically — based on the context you've built.

01 ──
Zero-shot
No examples given. Pure instruction. The model draws from pre-training alone.
"Write a follow-up email for a demo that went well."
02 ──
Few-shot
Give 2–3 examples before the request. The model pattern-matches your style.
"Here are 3 emails I liked. Write one like these."
03 ──
Chain of thought
Ask the model to reason step by step before answering. Dramatically improves complex outputs.
"Think through who my ideal buyer is before writing."
04 ──
Role / Persona
Assign a character or expertise before the task. The model adopts that voice and knowledge base.
"Act as a GTM strategist for solo operators."
05 ──
Contextual
Front-load all relevant background before the request. The richer the context, the sharper the output.
"My audience is X, my offer is Y, my channel is Z. Now write…"
06 ──
Iterative refinement
Use the model's own output as the next prompt. Build through conversation.
"Good. Now make it shorter and remove the CTA."
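The six types are framing patterns, not API features. Sketching three of them as plain strings makes the difference concrete; the wording and placeholders are illustrative, not templates from outpl.ai.

```python
task = "Write a follow-up email for a demo that went well."

# Zero-shot: pure instruction, no examples.
zero_shot = task

# Few-shot: examples first, so the model pattern-matches their style.
few_shot = (
    "Here are 2 emails I liked:\n"
    "1. <example one>\n"
    "2. <example two>\n"
    + task
)

# Contextual: front-load the background, then make the request.
contextual = (
    "AUDIENCE: solo operators\n"
    "OFFER: always-on campaigns\n"
    "CHANNEL: email\n"
    + task
)
```

Same task, three different distributions of probable outputs: the framing is part of the IN.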

SHAPE YOUR
PROMPTS.

Your agents already know all six prompt types. You just need to build the context.

Meet your agents → Back to home