Refine "Primitives over pipelines" blog post to highlight advancements in AI model capabilities and the shift towards Primitive-Oriented Agent Design, emphasizing the benefits of modular functions and dynamic context retrieval.
`src/lib/posts/primitives-over-pipelines.mdx`

Lines changed: 5 additions & 1 deletion
@@ -13,12 +13,14 @@ export const metadata = {
Many early AI systems at organizations were built in an era when models struggled to follow instructions, drifted off task, and hallucinated.
To ensure these systems delivered value, we created rigid guardrails, prescriptive pipelines, and step-by-step orchestrations. The goal was to force user intent onto a predetermined path the designer had foreseen. In many ways, we were force-fitting our insights from the [classic software era](https://karpathy.medium.com/software-2-0-a64152b37c35) into this new age of AI-native software.
However, as of gpt-5.2 and opus-4.6, models exhibit markedly improved instruction following and significantly reduced hallucination. Native reasoning has improved their ability to admit uncertainty, saying "I don't know" when they lack context, which in turn [decreases hallucination](https://openai.com/index/why-language-models-hallucinate/#:~:text=most%20evaluations%20measure%20model%20performance%20in%20a%20way%20that%20encourages%20guessing%20rather%20than%20honesty%20about%20uncertainty.). The models also plan better, breaking complex tasks into smaller chunks without losing sight of the broader goal.
Paradoxically, the quality-control pipelines that were once protective are becoming obstacles: they impose a rigid trajectory through a solution space that the models can now navigate more effectively on their own.
> "In many ways, we were force-fitting our insights from the classic software era in this new age of AI-native software."
## Primitive-Oriented Agent Design
We propose an alternative: **Primitive-Oriented Agent Design**. Instead of prescribing rigid use cases or workflows, we provide the agent with a small, modular set of pure, composable functions — "primitives". This grants the agent the freedom to assemble its own workflows at runtime, utilizing the full context of its capabilities.
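As a minimal sketch of what such primitives could look like (every type and function name below is invented for illustration, not taken from any real codebase), each primitive is a small, pure function over plain data, and a workflow is just a composition of them:

```typescript
// Illustrative sketch only: these primitives are invented for this post.
type Item = { id: string; name: string; inStock: boolean };

// Each primitive is a small, pure, composable function over plain data.
const filterInStock = (items: Item[]): Item[] => items.filter((i) => i.inStock);
const sortByName = (items: Item[]): Item[] =>
  [...items].sort((a, b) => a.name.localeCompare(b.name));
const count = (items: Item[]): number => items.length;

const inventory: Item[] = [
  { id: "2", name: "Bolt", inStock: true },
  { id: "1", name: "Anvil", inStock: false },
  { id: "3", name: "Clamp", inStock: true },
];

// One workflow the agent might assemble at runtime; a prescriptive
// pipeline would have fixed this order at design time.
const available = sortByName(filterInStock(inventory));
const total = count(available);
```

Because the primitives share no hidden state, the agent can reorder, drop, or recombine them per request without breaking anything.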
@@ -52,6 +54,8 @@ In other words, the system prompt encodes **taste, not state**.
The prompt defines how the agent should think and behave, while primitives supply the information required to act. This keeps the prompt small, keeps the system flexible, and ensures the agent reasons over fresh context rather than static assumptions.
> "A thin system prompt encodes taste, not state."
## A practical example
Consider a feature that displays a skeleton view of some inventory or content. There are two ways to build it.
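One way to picture the contrast in code, before walking through it (the row shape and helpers here are invented for illustration, not the post's actual implementation): a fixed pipeline bakes the steps in, while primitives leave the composition to the agent.

```typescript
// Hypothetical contrast; names are invented for illustration.
type Row = { title: string; ready: boolean };

// Way 1: a rigid pipeline with fixed steps baked in at design time.
function skeletonPipeline(rows: Row[]): string[] {
  return rows.filter((r) => r.ready).map((r) => r.title);
}

// Way 2: primitives, each a pure step the agent composes at runtime.
const onlyReady = (rows: Row[]): Row[] => rows.filter((r) => r.ready);
const titles = (rows: Row[]): string[] => rows.map((r) => r.title);

const rows: Row[] = [
  { title: "Anvil", ready: true },
  { title: "Bolt", ready: false },
];

const a = skeletonPipeline(rows); // returns ["Anvil"]
const b = titles(onlyReady(rows)); // same result, but the composition is the agent's choice
```

Both produce the same view today; only the second lets the agent rearrange the steps when the request changes.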