Skills

Skills are procedural knowledge for your agent. Think of the agent as your apprentice—skills are the documentation that teaches it how to accomplish tasks that require chaining multiple steps together.

The Problem Skills Solve

Your agent has access to tools: it can look up orders, send emails, process refunds. But knowing how to use each tool individually isn’t the same as knowing when and how to orchestrate them together.

Consider a return request. The agent needs to: verify the order exists, check eligibility against your policy, generate a return label, process the refund, and send confirmation. Without guidance, it might skip steps, do them out of order, or miss edge cases.

Skills solve this by encoding your workflows. When a customer asks about returns, the agent loads the relevant skill and follows the documented procedure. It knows which tools to call, in what sequence, with what conditions.

Why the SKILL.md Format

Char uses the AgentSkills specification, an open format originally developed by Anthropic and now adopted across the AI ecosystem. This matters for several reasons.

First, portability. A skill you write for Char works in Claude Code, Cursor, and other compatible tools. You’re not locked into a proprietary format.

Second, simplicity. Skills are Markdown files with YAML frontmatter—the same format developers already use for documentation, blog posts, and configuration. There’s no special syntax to learn, no compilation step, no deployment process.

Third, version control. Skills are text files. You can store them in git, review changes in pull requests, and track who modified what and when. This is particularly valuable for regulated industries where audit trails matter.
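
As a concrete illustration, here is a minimal sketch of what a skill for the return workflow from the previous section could look like. The skill name, description, and step wording are invented for this example rather than taken from a real deployment.

```markdown
---
name: process-return
description: Handle a customer return request, from order verification through refund confirmation
---

# Processing a Return

1. Verify that the order exists and belongs to the customer.
2. Check eligibility against the current return policy before promising anything.
3. Generate a return label and send it to the customer.
4. Process the refund.
5. Send a confirmation email summarizing the refund.
```

Because it is plain text, a skill like this can live in the same git repository as the rest of your configuration and go through the same review process.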

Progressive Disclosure

[Diagram: progressive disclosure]

A naive approach would load all skills into every conversation. This wastes tokens and confuses the agent with irrelevant context. Char uses progressive disclosure instead, loading information only when the agent needs it.

At startup, the agent receives a compact index: just the name and description of each skill. This typically runs 50-100 tokens per skill—enough for the agent to know what’s available without consuming significant context. When a user’s question matches a skill’s domain, the agent requests the full content. The detailed instructions—which might run thousands of tokens—load only then. This keeps routine conversations fast and cheap while ensuring complex questions get the depth they need.

This mirrors how a human expert works. You don’t rehearse every procedure you know before each conversation. You have a mental index of your expertise and dive deep when a question requires it.
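
To make the two levels concrete, the sketch below contrasts what the startup index carries with what loads on demand. The skill names and descriptions are hypothetical, and the index is rendered as YAML purely for illustration; the actual representation is internal to Char.

```yaml
# Loaded at startup: name and description only, roughly 50-100 tokens per skill
skills:
  - name: process-return
    description: Handle a customer return request, from order verification through refund confirmation
  - name: update-shipping-address
    description: Change the delivery address on an order that has not yet shipped

# Loaded only when a question matches a skill's domain: the full SKILL.md body,
# which may run to thousands of tokens of detailed instructions
```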

The Agent’s Skill Tools

The agent doesn’t just consume skills passively—it has built-in tools to manage them:
  • read_skill loads full instructions when a task matches a skill’s description
  • create_skill creates new skills from SKILL.md content
  • update_skill_with_patch makes targeted edits without rewriting the entire skill
This means the agent can learn from conversations. When you describe a workflow, the agent can capture it as a skill for future use. When you refine a procedure, the agent can update the skill directly. The knowledge base grows through natural interaction.
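
As an illustration of that loop, the draft below is the kind of SKILL.md content the agent might assemble and pass to create_skill after a user walks it through an escalation procedure in conversation. The procedure, skill name, and wording are hypothetical, and the exact arguments the tool accepts are not shown here.

```markdown
---
name: escalate-damaged-item
description: Escalate reports of items that arrived damaged, with photos attached for the returns team
---

# Escalating a Damaged Item

1. Ask the customer for the order number and photos of the damage.
2. Verify the order and note which items are affected.
3. Open an escalation with the returns team and attach the photos.
4. Tell the customer what happens next and when to expect an update.
```

A later refinement, such as also asking for the delivery date, could then be applied as a small edit through update_skill_with_patch rather than rewriting the whole file.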

Skills vs. System Prompts

You might wonder why skills exist when you could just put everything in the system prompt. The distinction becomes clear at scale.

A system prompt is a single block of instructions that loads for every conversation. It’s appropriate for universal guidance: your brand voice, safety rules, things the agent should always know.

Skills are modular. They load selectively based on context. As your knowledge base grows—ten skills, fifty skills, hundreds of skills—this modularity becomes essential. You couldn’t fit everything in a system prompt even if you wanted to.

There’s also a maintenance story. Updating a single skill doesn’t require touching your core configuration. Teams can own their own skills. Changes are isolated and reviewable.

The Broader Context

Skills reflect a shift in how we think about AI customization. Traditional approaches required training data, compute resources, and machine learning expertise. You’d collect examples, fine-tune a model, deploy it, and hope it generalized well.

Instruction-based customization is different. You tell the agent what to do in natural language. The feedback loop is immediate—change the instructions, see the behavior change. Domain experts who understand the business can contribute directly, without going through a technical translation layer.

This doesn’t replace fine-tuning entirely. If you need the model to recognize patterns it wasn’t trained on or generate outputs in a specific style, training still has a role. But for teaching procedures, policies, and domain knowledge, instructions are often simpler and more transparent.

Authorship and Collaboration

Skills can come from multiple sources: dashboard users writing them manually, end users creating them through conversation, or the agent drafting them based on workflows you describe. This distributed model means your knowledge base grows organically. A support agent notices they keep explaining the same procedure and asks the agent to capture it. A power user documents a workflow for their colleagues. The barrier to contribution is low because you’re just describing what should happen. All skills remain under organizational control. You can review, edit, and archive skills from the dashboard regardless of who created them or how they were created.
