Context Engineering

“Context Engineering” is about designing and governing the whole context, not just writing a single prompt. English sources highlight a few shared ideas:

  • Context is the full set of tokens used at inference time, not only system prompts.
  • Context engineering is the curation, structuring, updating, and iteration of those tokens under a limited budget.
  • Typical components include system instructions, user input, tool definitions, external data, history/state, and structured inputs/outputs (see the sketch below).
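
To make that list concrete, the sketch below holds the same components in one plain Python dict. This is illustrative only, not a library API; the key names merely mirror the Agently prompt slots used in the examples further down.

python
# Conceptual sketch only (not an API): the components above as one token budget.
context = {
  "system": "You are an enterprise knowledge assistant",              # system instructions
  "input": "Give one deployment recommendation",                       # user input
  "tools": [{"name": "fetch_order", "desc": "Lookup order status"}],   # tool definitions
  "info": {"knowledge": []},                                           # external data
  "chat_history": [],                                                  # history / state
  "output": {"advice": ("str", "One-line advice")},                    # structured output spec
}
# Context engineering decides what goes into this dict, in what form,
# and what gets trimmed when the token budget is exceeded.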

References: Anthropic “Effective context engineering for AI agents”; Prompting Guide “Context Engineering Guide”.

Below is an objective mapping of those ideas to Agently features, with runnable code for each claim.

1) Layered context: Agent vs. Request

Separate stable background context from per‑request input to reduce noise and improve reuse.

python
from agently import Agently

agent = Agently.create_agent()

# Agent-level context
agent.set_agent_prompt("system", "You are an enterprise knowledge assistant")
agent.set_agent_prompt("instruct", ["Keep answers concise and actionable"]) 

# Request-level context
result = (
  agent
  .set_request_prompt("input", "Give one deployment recommendation")
  .output({"advice": ("str", "One-line advice")})
  .start()
)

print(result)
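
A follow-up sketch, reusing the same agent. The assumption here is that agent-level prompts persist across requests while request-level prompts are supplied per call, which is exactly the separation this layering is for.

python
# Continuing the example above. Assumption: "system"/"instruct" set at the
# agent level carry over, so only the per-request input changes.
second = (
  agent
  .set_request_prompt("input", "Now give one rollback recommendation")
  .output({"advice": ("str", "One-line advice")})
  .start()
)

print(second)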

2) Dynamic variables and templated context

Context engineering requires structured injection of dynamic variables.

python
from agently import Agently

agent = Agently.create_agent()

user = {"name": "Moxin", "role": "PM"}
agent.set_request_prompt(
  "input",
  "Summarize the decisions for {name} ({role})",
  mappings=user,
)
agent.set_request_prompt("output", {"summary": ("str", "One-line summary")})

print(agent.start())

3) Config prompts: maintainable context

Move context out of code for versioning and collaboration.

yaml
# prompt.yaml
.agent:
  system: "You are an enterprise knowledge assistant"
  instruct:
    - "Give the conclusion first"
.request:
  input: "{question}"
  output:
    summary:
      $type: str
      $desc: "One-line conclusion"
python
from agently import Agently

agent = Agently.create_agent()
agent.load_yaml_prompt("prompt.yaml", mappings={"question": "What is Agently good at?"})

print(agent.start())

4) Structured inputs/outputs reduce ambiguity

Structured outputs are a practical form of context constraints and make results easier to reuse.

python
from agently import Agently

agent = Agently.create_agent()

response = (
  agent
  .set_request_prompt("input", "Write a release plan with goals and milestones")
  .output({
    "goal": ("str", "Goal"),
    "milestones": [
      {"title": ("str", "Milestone"), "date": ("str", "Date")}
    ]
  })
  .get_response()
)

print(response.get_data(ensure_keys=["goal", "milestones"]))
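
Continuing the example: because the result follows the declared structure, downstream code can consume it directly instead of parsing free text (this assumes get_data returns the parsed dict, as the call above suggests).

python
# Continuing from the response above; assumes get_data() yields a dict
# matching the declared output structure.
plan = response.get_data(ensure_keys=["goal", "milestones"])
print("Goal:", plan["goal"])
for milestone in plan["milestones"]:
  print(f"- {milestone['title']} (due {milestone['date']})")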

5) External knowledge injection (RAG entry point)

A core context‑engineering practice is bringing the right knowledge into the context window.

python
from agently import Agently

agent = Agently.create_agent()

# In-memory knowledge base for a local retrieval demo
kb = [
  {"title": "Agently v4", "content": "Output Format and TriggerFlow support"},
  {"title": "Agently Prompt", "content": "YAML/JSON configurable prompts"},
]

def retrieve(query: str):
  # Naive keyword retrieval: keep entries whose title words appear in the query
  q = query.lower()
  return [item for item in kb if any(word.lower() in q for word in item["title"].split())]

question = "What are Agently's core capabilities?"
knowledge = retrieve(question)

result = (
  agent
  .set_request_prompt("input", question)
  .set_request_prompt("info", {"knowledge": knowledge})
  .output({"answer": ("str", "Answer"), "sources": ["str"]})
  .start()
)

print(result)
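
A variant of the injection step above: because the context window is a limited budget, retrieved knowledge can be trimmed to a rough size cap before it is injected. The 500-character cap below is an arbitrary illustration.

python
# Trim retrieved knowledge to a rough character budget before injection
# (the cap is illustrative; pick one that fits your model's context window).
MAX_CHARS = 500
trimmed, used = [], 0
for item in knowledge:
  size = len(item["title"]) + len(item["content"])
  if used + size > MAX_CHARS:
    break
  trimmed.append(item)
  used += size

# Inject the trimmed knowledge instead, then build the request as above
agent.set_request_prompt("info", {"knowledge": trimmed})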

6) Memory & history control

As multi‑turn conversations grow, Agently’s Session and Memo help compress history so the context stays focused on high‑signal content.

python
from agently import Agently
from agently.core import Session

agent = Agently.create_agent()
session = Session(agent=agent)

session.append_message({"role": "user", "content": "Keep answers short"})
session.append_message({"role": "assistant", "content": "Understood"})

session.set_settings("session.resize.max_messages_text_length", 800)
session.set_settings("session.memo.enabled", True)

session.resize()
agent.set_chat_history(session.current_chat_history)

print(agent.input("Repeat my preference").start())

7) Tools as context components

Tool specs are part of the model context. Agently can inject tool definitions and call them when needed.

python
from agently import Agently

agent = Agently.create_agent()

def fetch_order(order_id: str) -> dict:
  # Stub standing in for a real order-lookup backend
  return {"order_id": order_id, "status": "paid"}

agent.register_tool(
  name="fetch_order",
  desc="Lookup order status",
  kwargs={"order_id": (str, "Order ID")},
  func=fetch_order,
  returns=dict,
)
agent.use_tools("fetch_order")

print(
  agent
  .input("Check order A-100 status")
  .output({"status": ("str", "Status")})
  .start()
)

8) Evaluation & iteration (integrate external evals)

Context engineering often requires iteration. Agently does not ship an eval platform, but response objects are easy to plug into your own evaluators.

python
from agently import Agently

agent = Agently.create_agent()
response = agent.input("Explain observability in one sentence").get_response()

text = response.get_text()
score = 1 if len(text.split()) <= 40 else 0  # crude check: roughly one sentence (≤ 40 words)

print(text)
print("score=", score)

English references

  • Anthropic, “Effective context engineering for AI agents”
  • Prompting Guide, “Context Engineering Guide”