AI Crash Course

Everything you need to understand enterprise AI-assisted development in 15 minutes. Same skills, same plugins — across Claude Code, GitHub Copilot CLI, and VS Code Chat.

Fundamentals

Model, Agent, Context

Three concepts you must understand before everything else.

What is a Model?

A model (Claude, GPT, Gemini) is a pre-trained neural network. You send it text, it returns text. It has no memory. After each request/response, it forgets everything. Every new request starts from zero — the model only knows what you include in that specific request.

Think of it as a brilliant expert with amnesia. Incredibly capable, but needs to be briefed from scratch every single time.
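The amnesia is easy to demonstrate. In the sketch below, `call_model` is a toy stand-in for any chat-completion API (hypothetical; real SDKs differ in shape): the only thing the "model" can answer from is the `messages` list it receives in that one call.

```python
# Toy stand-in for a chat-completion API call. The point: the function
# receives ONLY the `messages` argument -- there is no hidden state
# carried over from earlier calls.
def call_model(messages: list[dict]) -> str:
    known = [m["content"] for m in messages if "my name is" in m["content"].lower()]
    if known:
        return f"You told me: {known[-1]}"
    return "I have no idea -- nothing in this request mentions a name."

# Request 1: the model "learns" a fact -- but only for this call.
print(call_model([{"role": "user", "content": "My name is Ada."}]))

# Request 2: a fresh request with no history. The fact is gone.
print(call_model([{"role": "user", "content": "What is my name?"}]))

# Request 3: the agent's job is to re-send the history explicitly.
print(call_model([
    {"role": "user", "content": "My name is Ada."},
    {"role": "user", "content": "What is my name?"},
]))
```

Only the third call succeeds, and only because the caller re-sent the earlier message. That re-sending is exactly what an agent does for you.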

What is an Agent?

An agent is software that wraps the model. It collects context, manages conversation history, reads files, calls APIs, executes commands — and assembles all of this into the next request to the model.

Claude Code and Copilot CLI are agents. They read your codebase, load rules, invoke MCP tools, and pack all that context into each model request so the model can give useful answers about your project.
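A minimal sketch of what happens on each agent turn (hypothetical structure; real agents like Claude Code are far more elaborate). The agent, not the model, owns the history and the files, and it re-packs them into every request:

```python
# Hypothetical agent turn: gather rules, prior conversation, and file
# contents, then assemble them into one request for the model.
from pathlib import Path

class Agent:
    def __init__(self, rules_file: str = "CLAUDE.md"):
        self.history: list[dict] = []    # conversation so far
        self.rules_file = rules_file     # project rules, reloaded each turn

    def build_request(self, user_message: str, files: list[str]) -> list[dict]:
        context: list[dict] = []
        # 1. Rules / memory files go in first.
        rules = Path(self.rules_file)
        if rules.exists():
            context.append({"role": "system", "content": rules.read_text()})
        # 2. Prior conversation -- the "illusion of memory".
        context.extend(self.history)
        # 3. File contents the model needs for this turn.
        for f in files:
            context.append({"role": "user",
                            "content": f"<file {f}>\n{Path(f).read_text()}"})
        # 4. Finally, the new message.
        context.append({"role": "user", "content": user_message})
        return context

agent = Agent()
request = agent.build_request("Summarize the build setup.", files=[])
print(len(request))  # just the user message when no rules file or history exists
```

Everything the model will "know" about your project is whatever this assembly step put into the list.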

What is Context?

Context is everything the model sees in a single request — your message, conversation history, file contents, tool results, rules, and instructions. The model can only reason about what’s in the context.

Context has a size limit measured in tokens (1 token ≈ 4 characters). More context = better answers, but there’s a ceiling.
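The 4-characters-per-token figure is a rule of thumb, but it is good enough for back-of-the-envelope budgeting (real tokenizers vary by model and by language):

```python
# Rough token estimate using the ~4 characters/token rule of thumb.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

prompt = "Explain the difference between a model and an agent."
print(estimate_tokens(prompt))  # ~13 tokens for this 52-character prompt
```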

Context

The Context Window

What the model sees -- and how the agent fills it.

Context Size

  • GPT-4 — 128k tokens (~300 pages)
  • Claude Sonnet — 200k tokens (~500 pages)
  • Claude Opus — 1M tokens (~2,500 pages)

Bigger context = the agent can show the model more files, more history, more rules before asking it to reason. A 1M context means the model can “see” your entire codebase in one request.
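The page figures above can be sanity-checked with two rough assumptions: ~4 characters per token and ~1,800 characters per printed page (both illustrative, not exact):

```python
# Sanity-check the page estimates, assuming ~4 chars/token and
# ~1,800 chars per printed page (rough, illustrative figures).
CHARS_PER_TOKEN = 4
CHARS_PER_PAGE = 1800

def pages(tokens: int) -> int:
    return tokens * CHARS_PER_TOKEN // CHARS_PER_PAGE

for name, tokens in [("128k window", 128_000),
                     ("200k window", 200_000),
                     ("1M window", 1_000_000)]:
    print(f"{name}: {tokens:,} tokens ≈ {pages(tokens)} pages")
```

The results (about 284, 444, and 2,222 pages) line up with the ballpark figures in the list above.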

What Fills the Context

The agent automatically packs the context with layers of information:

  • System prompt (0.6%) — base behavior rules
  • System tools (1.1%) — tool definitions (Read, Edit, Bash, etc.)
  • MCP tools (0.0%) — external tool schemas (loaded on demand)
  • Custom agents (0.3%) — agent descriptions for dispatch
  • Memory files (0.5%) — CLAUDE.md, rules, project memory
  • Skills (0.7%) — skill descriptions for discovery
  • Messages (39%) — your conversation + file reads + tool results
  • Free space (54%) — room for more work
  • Autocompact buffer (3%) — safety margin before compression kicks in
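To make the percentages concrete, here is what those layers work out to in tokens for a 200k window (the percentages are the ones listed above; the arithmetic is the only addition):

```python
# Translate the layer percentages above into token counts for a 200k window.
WINDOW = 200_000
layers = {
    "System prompt": 0.6, "System tools": 1.1, "MCP tools": 0.0,
    "Custom agents": 0.3, "Memory files": 0.5, "Skills": 0.7,
    "Messages": 39.0, "Free space": 54.0, "Autocompact buffer": 3.0,
}
for name, pct in layers.items():
    print(f"{name:>20}: {int(WINDOW * pct / 100):>7,} tokens")
print(f"{'Total':>20}: {sum(layers.values()):.1f}% of the window accounted for")
```

Note that the fixed overhead (prompt, tools, agents, memory, skills) is only a few percent; the vast majority of the window is conversation and working room.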

Why Context Matters for You

When a skill reads 10 files and calls 5 MCP tools, all those results go into the context. If context fills up, the agent compresses older messages (autocompact) — which means the model may lose early conversation details. This is why KAI uses subagents: each subagent gets a fresh context, does focused work, and returns only a summary. The parent conversation stays lean.
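The subagent pattern can be sketched like this (all names hypothetical): the child does the heavy reading in its own fresh context, and only the short summary flows back into the parent conversation.

```python
# Sketch of the subagent pattern: heavy file reads stay in the child's
# context; the parent receives one summary line, not the raw contents.
def run_subagent(task: str, files: list[str]) -> str:
    child_context: list[str] = [task]
    for f in files:
        child_context.append(f"contents of {f} ...")  # heavy reads stay here
    # ... model calls against child_context would happen here ...
    return f"Summary: finished '{task}' across {len(files)} files."

parent_history: list[str] = ["user: audit the logging setup"]
parent_history.append(run_subagent("audit logging", ["a.py", "b.py", "c.py"]))
print(parent_history[-1])  # one line, not thousands of tokens of file reads
```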

Connection

How They Connect

Model + Agent + Context = AI-assisted development.

The Loop

  1. You type /dx-plan
  2. The agent reads the skill instructions, loads rules, reads your spec files
  3. The agent packs all this into the context and sends it to the model
  4. The model generates a response (an implementation plan)
  5. The agent writes the plan to disk, shows you the result
  6. Next command: the agent packs the updated context (now including the plan) and sends again

The model never remembers previous requests. The agent creates the illusion of memory by re-sending relevant context each time.
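The numbered loop above, compressed into code (all names hypothetical). Note the last step: "memory" is just the agent persisting the plan and re-sending it next turn; the model itself remembers nothing.

```python
# Hypothetical version of the command loop above.
from pathlib import Path

def model(context: str) -> str:
    # Stand-in for the stateless model: its answer depends only on `context`.
    return f"PLAN derived from {len(context)} chars of context"

def run_command(history: list[str], spec: str) -> str:
    context = "\n".join(history) + "\n" + spec  # steps 2-3: pack context
    plan = model(context)                       # step 4: model responds
    Path("plan.md").write_text(plan)            # step 5: agent persists output
    history.append(plan)                        # step 6: carried into next turn
    return plan

history: list[str] = []
run_command(history, spec="spec: add login form")
run_command(history, spec="spec: now add tests")  # sees the earlier plan
print(len(history))  # 2 -- both plans now ride along in the context
```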

Building Blocks

The Plugin System

Skills, agents, hooks, and MCP servers -- the four building blocks that make it all work.

Skills + Agents = What & Who

Skills tell the AI what to do (instructions, steps, expected output). Agents define who does it (which model, which tools, what persona). A skill like /dx-step-all coordinates execution by invoking other skills directly.

Hooks + MCP = Automation & Data

Hooks fire automatically on events (validate before commit, log after subagent). MCP servers connect to external systems (ADO/Jira for tickets, AEM for content, Figma for designs). Together they give the AI real data and safety guardrails.
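The hook idea reduces to callbacks registered on events and fired automatically by the agent. Real hook configs are declarative (e.g. shell commands in settings files), so this is only a sketch of the shape, with hypothetical event names:

```python
# Hypothetical event-hook registry: callbacks fire automatically when
# the agent reaches the matching event.
hooks: dict[str, list] = {}

def on(event: str):
    def register(fn):
        hooks.setdefault(event, []).append(fn)
        return fn
    return register

@on("pre-commit")
def validate(payload: dict) -> None:
    # A guardrail: refuse to commit when the diff still contains TODOs.
    if "TODO" in payload.get("diff", ""):
        raise ValueError("refusing to commit unfinished work")

def fire(event: str, payload: dict) -> None:
    for fn in hooks.get(event, []):
        fn(payload)

fire("pre-commit", {"diff": "clean change"})  # passes silently
```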

Flow

How It Works

From developer command to finished output in four steps.

The Execution Flow

Developer invokes skill (/dx-plan) → AI reads skill instructions → AI uses tools (read files, run commands, call APIs) → AI produces output (code, PRs, comments).

Without AI

  1. Open ADO or Jira, read the story
  2. Manually search codebase for related files
  3. Write PR description by hand
  4. Reviewer reads every line of diff
  5. Author replies to each comment manually

~2-4 hours for a typical PR review cycle

With AI

  1. /dx-req 12345 — AI reads the story and researches codebase
  2. /dx-plan — AI generates implementation plan
  3. /dx-pr-review — AI reviews the diff
  4. /dx-pr-answer — AI drafts replies to comments

~15-30 minutes with AI assistance

Plugins

What's a Plugin?

A package of skills + agents + hooks + MCP configs. Install once, get everything.

dx-core

Core development workflow — requirements, planning, execution, review, bug fix. Works with any tech stack.

49 skills · 7 agents

dx-aem

AEM full-flow — component dialogs, JCR content, editorial QA, browser automation, demo capture. The complete AEM development lifecycle.

12 skills · 6 agents

dx-automation

Autonomous 24/7 agents — PR review, bug fix, estimation, documentation. Triggered by ADO webhooks (Jira planned).

11 skills
Install all plugins:

  /dx-init   # one-time setup per repo
Docs

Official Documentation

Bookmark these -- the authoritative references for each tool.

Claude Code

docs.anthropic.com/en/docs/claude-code

Skills, agents, hooks, MCP, memory, settings.

Copilot CLI

docs.github.com/en/copilot/how-tos/copilot-cli

Skills, agents, hooks, MCP — GitHub’s CLI tool.

Open Plugins Spec

agentskills.io

The open standard for portable AI agent plugins.

KAI by Dragan Filipovic