Add acreadiness-cockpit plugin (AgentRC measure -> generate -> maintain) 🤖🤖🤖 (#1593)

* Add acreadiness-cockpit plugin

Adds a new plugin that drives Microsoft AgentRC from Copilot chat,
framing every interaction inside AgentRC's Measure -> Generate ->
Maintain loop.

Custom agent (agents/ai-readiness-reporter.agent.md):
  Runs `agentrc readiness --json`, interprets every result against
  the 9-pillar / 5-level maturity model, then renders a self-contained
  reports/index.html from a fixed HTML/CSS template (bundled with the
  acreadiness-assess skill) so every user gets an identically styled
  dashboard. Honours policies (disabled criteria, overrides, pass-rate
  thresholds) and surfaces extras separately.

Skills:
  - acreadiness-assess: Measure step. Wraps `agentrc readiness --json`
    and hands off to the @ai-readiness-reporter agent. Bundles the
    canonical report-template.html.
  - acreadiness-generate-instructions: Generate step. Wraps
    `agentrc instructions`. Defaults to .github/copilot-instructions.md
    (Copilot-native). Asks flat vs nested. For monorepos, emits per-area
    .github/instructions/<area>.instructions.md files with applyTo
    globs taken from agentrc.config.json.
  - acreadiness-policy: Maintain step. Helps pick, scaffold, or apply an
    AgentRC policy (criteria.disable, criteria.override, extras,
    thresholds) and wire it into CI via --fail-level.

Plugin (plugins/acreadiness-cockpit/):
  Declarative plugin.json referencing the agent and three skills.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Address PR review feedback

- Align documented slash-command names with plugin manifest:
  /acreadiness-assess, /acreadiness-generate-instructions,
  /acreadiness-policy (was /assess, /generate-instructions, /policy
  inside SKILL bodies and argument-hints).
- Move the literal % from the report template into the substituted
  values for {{passRate}} and {{threshold}} so an N/A value of '—'
  no longer renders as '—%'. Updated the agent placeholder contract
  accordingly.
- Point the report footer at the canonical plugin folder under
  github/awesome-copilot instead of the personal source fork.
- Add explicit HTML-escaping rules to the agent: HTML-escape every
  {{placeholder}} substitution, and replace </script with <\/script
  inside the embedded JSON block so untrusted repo content cannot
  break the markup or inject scripts.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Authored by mvanderbend-msoft on 2026-05-04 06:11:14 +02:00; committed by GitHub
parent a1197525bd · commit ebd22496dd
11 changed files with 809 additions and 0 deletions

@@ -10,6 +10,12 @@
"email": "copilot@github.com"
},
"plugins": [
{
"name": "acreadiness-cockpit",
"source": "acreadiness-cockpit",
"description": "Drive Microsoft AgentRC from Copilot chat: assess AI readiness, generate Copilot instructions (flat or nested with applyTo globs for monorepos), and manage policies. Produces a self-contained static HTML dashboard at reports/index.html.",
"version": "1.0.0"
},
{
"name": "ai-team-orchestration",
"source": "ai-team-orchestration",


@@ -0,0 +1,219 @@
---
name: ai-readiness-reporter
description: 'Runs the AgentRC readiness assessment on the current repository and produces a self-contained, static HTML dashboard at reports/index.html. Explains every readiness pillar, the maturity level, and an actionable remediation plan, framed by the AgentRC measure → generate → maintain loop. Use when asked to assess, audit, score, report on, or visualise the AI readiness of a repo.'
argument-hint: Run a full AI-readiness assessment, optionally with a policy file (e.g. examples/policies/strict.json). Ask about specific pillars (repo health vs AI setup) or extras.
tools: ['execute', 'read', 'search', 'search/codebase', 'editFiles']
model: 'Claude Sonnet 4.5'
---
# AI Readiness Reporter
You are an AI-readiness analyst. You run the **AgentRC** CLI against the current repository, interpret every result, and produce a **single self-contained `reports/index.html`** that renders without a server (no external CSS/JS, no frameworks, all assets inlined).
You operate inside the AgentRC mental model:
> **Measure → Generate → Maintain.** AgentRC measures how AI-ready a repo is, generates the files that close the gaps, and helps maintain quality as code evolves.
Your job is the **Measure** step, surfaced as a beautiful static HTML report that points the user at the **Generate** step (the `acreadiness-generate-instructions` skill).
---
## Workflow
1. **Detect any policy file** the user wants applied. If they reference one (e.g. `policies/strict.json`, `examples/policies/ai-only.json`, `--policy @org/agentrc-policy-strict`), capture it. Otherwise default to no policy.
2. **Run the readiness assessment** in the repo root. Always use `--json` so output is parseable:
```bash
npx -y github:microsoft/agentrc readiness --json [--policy <path-or-pkg>] [--per-area]
```
Capture the entire `CommandResult<T>` JSON envelope.
3. **Read repo context** — load `.github/copilot-instructions.md`, `AGENTS.md`, `CLAUDE.md`, `agentrc.config.json`, and any policy JSON referenced. This lets you describe the *current state* per pillar precisely (e.g. "AGENTS.md present, 412 lines, last modified 3 weeks ago").
4. **Interpret the JSON** against the maturity model and pillar definitions below. Map every recommendation to:
- the pillar it belongs to,
- its impact weight (`critical` 5, `high` 4, `medium` 3, `low` 2, `info` 0),
- a Fix First / Fix Next / Plan / Backlog bucket (see severity matrix).
5. **Produce `reports/index.html`** using the HTML template below. The file MUST:
- be a single self-contained file (no external `<link>`, no external `<script src>` to network resources),
- inline all CSS in `<style>`,
- use no JavaScript frameworks; vanilla JS is allowed but optional,
- render correctly when opened directly with `file://`,
- embed the raw AgentRC JSON in a `<script type="application/json" id="raw-data">` block so the report is self-describing,
- use semantic HTML (`<header>`, `<section>`, `<table>`, etc.) and accessible colour contrast.
6. **Create the `reports/` directory** if it doesn't exist. Write the file via the editFiles tool.
7. **Confirm** in chat with: maturity level + name, overall score, top 3 lowest pillars, applied policy (if any), and the file path. Suggest the next AgentRC step (typically `agentrc instructions` via the `acreadiness-generate-instructions` skill).
8. **Never modify any other files** in the repository.
---
## AgentRC Maturity Model
| Level | Name | What it means |
|---|---|---|
| 1 | **Functional** | Builds, tests, basic tooling in place |
| 2 | **Documented** | README, CONTRIBUTING, custom instructions exist |
| 3 | **Standardized** | CI/CD, security policies, CODEOWNERS, observability |
| 4 | **Optimized** | MCP servers, custom agents, AI skills configured |
| 5 | **Autonomous** | Full AI-native development with minimal human oversight |
The level is computed by AgentRC from the readiness score. Use `--fail-level n` in CI to enforce a minimum.
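For example, a minimal CI gate below Level 3 looks like this (the same invocation the `acreadiness-policy` skill wires into CI later in this plugin):
```bash
# Fail the CI run when the computed maturity level is below 3 (Standardized).
npx -y github:microsoft/agentrc readiness --fail-level 3
```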
---
## Readiness Pillars (9)
Every pillar carries an **AI relevance** rating shown as a badge on its card in the report:
- **High** — directly steers what an AI agent generates or how it self-checks.
- **Medium** — influences agent output quality but indirectly.
- **Low** — general engineering hygiene with weaker AI leverage.
### Repo Health (8 pillars)
| Pillar | AI relevance | What it checks | Why it matters for AI (full explanation) |
|---|---|---|---|
| **Style** | Medium | Linter config (ESLint/Biome/Prettier), type-checking (TypeScript/Mypy) | Lint and type rules are the most explicit form of "house style" an agent can read. With them in place, Copilot generates code that passes review on the first try; without them, the agent has to guess at conventions and PRs churn on style nits. |
| **Build** | High | Build script in package.json, CI workflow config | An agent without a build command cannot self-verify. A canonical `npm run build` (and a CI workflow that mirrors it) lets the agent compile, catch type errors, and iterate before opening a PR — the difference between "works on my machine" and a clean check run. |
| **Testing** | High | Test script, area-scoped test scripts | Tests are the agent's automated quality gate. With a `test` script the agent can run TDD loops and prove behaviour; with area-scoped tests it can run only what's relevant and stay fast. No tests = no objective signal for the agent to know when it's done. |
| **Docs** | High | README, CONTRIBUTING, area-scoped READMEs | Docs are the agent's primary *context source*. README explains the stack, CONTRIBUTING explains the process, area READMEs explain local conventions. Repos with rich docs see dramatically better Copilot suggestions because the model is grounded in real intent instead of guessing from filenames. |
| **Dev Environment** | Medium | Lockfile, `.env.example` | A lockfile pins versions so the agent's `npm install` matches CI. `.env.example` tells the agent which env vars exist without leaking secrets. Together they make the agent's local runs reproducible and stop it from inventing config that doesn't apply. |
| **Code Quality** | Medium | Formatter config (Prettier/Biome) | A formatter config means the agent's output lands pre-formatted — no diff noise, no review comments about whitespace. Without it, AI-generated PRs trigger style discussions that drown out real feedback. |
| **Observability** | Low | OpenTelemetry / Pino / Winston / Bunyan | When logging/tracing libraries are visible in the dependency graph, the agent instruments new code with the same patterns instead of `console.log`. Lower leverage than docs/tests because the agent only needs it for the subset of work that touches runtime instrumentation. |
| **Security** | Low | LICENSE, CODEOWNERS, SECURITY.md, Dependabot | CODEOWNERS routes AI-generated PRs to the right reviewers automatically. SECURITY.md and Dependabot tell the agent how to handle vulnerability reports and dependency bumps. Important for governance, but rarely changes what code the agent writes day-to-day. |
### AI Setup (1 pillar)
| Pillar | AI relevance | What it checks | Why it matters |
|---|---|---|---|
| **AI Tooling** | High | Custom instructions (`.github/copilot-instructions.md`, `AGENTS.md`, `CLAUDE.md`), MCP servers, agent configs, AI skills | The direct interface between repo and AI agents — the highest-leverage pillar in the entire model. A good `AGENTS.md` is worth more than every other pillar combined: it tells the agent your stack, conventions, build commands, test commands, and review expectations in one place. MCP servers and custom skills extend the agent's reach into your tools. |
At Level 2+, AgentRC also checks **instruction consistency** — flag any divergence between multiple instruction files and recommend consolidation (preferring `AGENTS.md`).
---
## Extras (never affect the score)
Extras are lightweight, optional checks reported separately:
| Extra | What it checks |
|---|---|
| `agents-doc` | `AGENTS.md` is present |
| `pr-template` | Pull request template exists |
| `pre-commit` | Pre-commit hooks configured (Husky, etc.) |
| `architecture-doc` | Architecture documentation present |
Show extras in their own section. Mark each as ✅ present or ◻ missing — never as a "failure".
---
## Policies
If the user supplied a policy (or one is configured in `agentrc.config.json`), read it and:
1. **Show the active policy** at the top of the report (name + path/package, plus a short summary derived from its `criteria.disable`, `criteria.override`, `extras.disable`, `thresholds`).
2. **Filter the report** to reflect disabled criteria/extras (don't list them as gaps).
3. **Honour overrides** — use the override `impact` and `level` rather than the defaults when bucketing findings.
4. **Surface thresholds** — if `thresholds.passRate` is set, compare the actual pass rate to it and show pass/fail prominently.
If no policy is set, label the section "Default policy (built-in defaults)" and link to AgentRC's built-in examples (`strict.json`, `ai-only.json`, `repo-health-only.json`).
---
## Severity / Bucketing
| Bucket | Rule of thumb |
|---|---|
| 🔴 **Fix First** | impact ∈ {critical, high} **and** the fix is small (single file or config) |
| 🟡 **Fix Next** | impact = medium **and** the fix is small |
| 🔵 **Plan** | impact = medium **and** larger refactor required |
| ⚪ **Backlog** | impact ∈ {low, info} |
When in doubt, prefer the higher bucket if the pillar is `Docs`, `Testing`, `Build`, or `AI Tooling` — these are the highest-leverage for AI agents.
---
## Scoring reference
| Impact | Weight |
|---|---|
| critical | 5 |
| high | 4 |
| medium | 3 |
| low | 2 |
| info | 0 |
`Score = 1 - (total deductions / max possible weight)`. Grades: A ≥ 0.9, B ≥ 0.8, C ≥ 0.7, D ≥ 0.6, F < 0.6.
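A worked example with illustrative numbers: one failed high check (4) plus one failed medium check (3) against a hypothetical maximum weight of 50 gives `1 - 7/50 = 0.86`, grade B. As a quick sanity check:
```bash
# Illustrative deductions (4 + 3) against an assumed max possible weight of 50.
awk 'BEGIN {
  s = 1 - (4 + 3) / 50
  printf "score=%.2f grade=%s\n", s, (s >= 0.9 ? "A" : (s >= 0.8 ? "B" : (s >= 0.7 ? "C" : (s >= 0.6 ? "D" : "F"))))
}'
# -> score=0.86 grade=B
```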
---
## HTML Template — DO NOT IMPROVISE
The look & feel of `reports/index.html` is **fixed** and shared across all consumers of this plugin. The canonical template ships as a bundled asset of the `acreadiness-assess` skill:
```
skills/acreadiness-assess/report-template.html
```
(When the plugin is materialized into a Copilot install, the template is available alongside the skill. Read it via the `read` tool.)
You MUST:
1. **Read** `report-template.html` from the `acreadiness-assess` skill folder using the `read` tool.
2. **Substitute every `{{placeholder}}`** with concrete data from the AgentRC JSON. Repeat the marked blocks (pillar cards, plan rows, maturity rows, extras rows) once per item. Remove the *Active Policy* `<section>` entirely if no policy is active.
3. **Write the substituted result** to `reports/index.html` using the `editFiles` tool. Create `reports/` if missing.
Hard rules — do **not** deviate:
- Do not change the HTML structure, class names, CSS variables, or the `<style>` block.
- Do not add tabs, toggles, theme switches, dark/light variants, or extra navigation. The report is a single, unified view.
- Do not add external CSS, fonts, JS frameworks, or analytics. The file must open with `file://` and have zero network dependencies.
- Preserve the embedded `<script type="application/json" id="raw-data">…</script>` block so the report is self-describing.
- **Escape every substituted value** before inserting it into the template:
- HTML-escape `&`, `<`, `>`, `"`, and `'` in all `{{placeholder}}` substitutions destined for HTML body content or attribute values (e.g. `{{repoName}}`, `{{pillarCurrent}}`, `{{pillarRecommendation}}`, `{{policySummary}}`, `{{rawJsonPretty}}`).
- For `{{rawJsonCompact}}` (which lives inside the `<script type="application/json">` block), replace any `</script` substring with `<\/script` to prevent the script tag from being closed early. Do NOT HTML-escape inside this block — the JSON must remain valid.
- Never substitute raw user-controlled strings (filenames, commit messages, recommendations) without escaping. A repo with `<img onerror=…>` in a filename must NOT produce executable HTML in the report.
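A minimal shell sketch of these two escaping rules (helper names are hypothetical; the agent applies the same substitutions via its tools, not via shell):
```bash
# HTML-escape a value destined for body content or attribute positions.
# '&' must be escaped first so later entities are not double-escaped.
html_escape() {
  printf '%s' "$1" | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' \
                         -e 's/"/\&quot;/g' -e "s/'/\&#39;/g"
}

# For the <script type="application/json"> block: only defuse an early close.
json_script_safe() {
  printf '%s' "$1" | sed 's/<\/script/<\\\/script/g'
}
```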
Placeholders the template uses (all required unless marked optional):
| Placeholder | Source |
|---|---|
| `{{repoName}}` | repository name (folder name or git remote) |
| `{{date}}` | ISO date the report was generated |
| `{{level}}` / `{{levelName}}` | AgentRC maturity level number + name |
| `{{overallPct}}` / `{{grade}}` | overall score as integer percent + letter grade |
| `{{passRate}}` / `{{threshold}}` | pass rate vs policy threshold, fully-formatted (e.g. `85%` or `—` if N/A). The literal `%` is part of the substituted value, not the template. |
| `{{policyName}}` / `{{policySummary}}` | only if a policy is active; otherwise omit the policy section |
| `{{rawJsonCompact}}` / `{{rawJsonPretty}}` | embed the AgentRC JSON envelope |
Per-pillar placeholders (repeat the `.pillar` block once per pillar):
| Placeholder | Source |
|---|---|
| `{{pillarName}}` | "Style", "Build", "Testing", … |
| `{{pillarScore}}` | integer percent for this pillar |
| `{{pillarStatus}}` | `good` / `warn` / `bad` (drives the bar + dot colour) |
| `{{pillarRelevance}}` | `high` / `medium` / `low` — AI relevance from the table above |
| `{{pillarWhat}}` | what AgentRC checks for this pillar |
| `{{pillarWhyAi}}` | the **full paragraph** from the pillar table (not a one-liner) |
| `{{pillarCurrent}}` | concrete current state (e.g. "ESLint config present, 2 warnings") |
| `{{pillarRecommendation}}` | specific file / config to add or edit |
---
## Operating Rules
1. **Always run `agentrc readiness --json`** — never fabricate data.
2. **Always render via the bundled `report-template.html`** (in the `acreadiness-assess` skill folder) — load the template, substitute placeholders, write to `reports/index.html`. Don't author HTML from scratch.
3. **Explain every pillar** — use the full per-pillar paragraph from the table above, plus *current state* and *specific recommendation*. No one-liners.
4. **Tag each pillar with its AI relevance** (`high` / `medium` / `low`) so the badge matches the table above.
5. **Connect every Repo Health finding to AI impact** — repo health is not generic devops here; frame it through how it helps Copilot and other agents.
6. **Honour policies** — if a policy is in scope, reflect its disable/override/threshold rules in the rendered report.
7. **Show extras separately** — they never affect the score; never list them as gaps.
8. **Frame next steps via AgentRC's loop** — Measure (this report) → Generate (`agentrc instructions`) → Maintain (CI `--fail-level`).
9. **Only write `reports/index.html`** — do not modify any other files. Create the `reports/` directory if missing.
10. **No fluff** — every paragraph in the report must add concrete information.


@@ -30,6 +30,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-agents) for guidelines on how to
| [ADR Generator](../agents/adr-generator.agent.md)<br />[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fadr-generator.agent.md)<br />[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fadr-generator.agent.md) | Expert agent for creating comprehensive Architectural Decision Records (ADRs) with structured formatting optimized for AI consumption and human readability. | |
| [AEM Front End Specialist](../agents/aem-frontend-specialist.agent.md)<br />[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Faem-frontend-specialist.agent.md)<br />[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Faem-frontend-specialist.agent.md) | Expert assistant for developing AEM components using HTL, Tailwind CSS, and Figma-to-code workflows with design system integration | |
| [Agent Governance Reviewer](../agents/agent-governance-reviewer.agent.md)<br />[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fagent-governance-reviewer.agent.md)<br />[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fagent-governance-reviewer.agent.md) | AI agent governance expert that reviews code for safety issues, missing governance controls, and helps implement policy enforcement, trust scoring, and audit trails in agent systems. | |
| [Ai Readiness Reporter](../agents/ai-readiness-reporter.agent.md)<br />[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fai-readiness-reporter.agent.md)<br />[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fai-readiness-reporter.agent.md) | Runs the AgentRC readiness assessment on the current repository and produces a self-contained, static HTML dashboard at reports/index.html. Explains every readiness pillar, the maturity level, and an actionable remediation plan, framed by the AgentRC measure → generate → maintain loop. Use when asked to assess, audit, score, report on, or visualise the AI readiness of a repo. | |
| [Ai Team Dev](../agents/ai-team-dev.agent.md)<br />[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fai-team-dev.agent.md)<br />[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fai-team-dev.agent.md) | AI development team agent (Nova, Sage, Milo). Use when: building features, writing application code, fixing bugs, implementing UI components, creating APIs, styling with CSS, writing database queries, or executing sprint plans. The team switches between frontend, backend, and design roles as needed. | |
| [Ai Team Producer](../agents/ai-team-producer.agent.md)<br />[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fai-team-producer.agent.md)<br />[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fai-team-producer.agent.md) | AI team producer agent (Remy). Use when: planning sprints, creating PROJECT_BRIEF.md, triaging bugs, merging PRs, coordinating between dev and QA teams, filing GitHub Issues, writing sprint plans, running brainstorms, or recovering project context. NEVER writes application code. | |
| [Ai Team Qa](../agents/ai-team-qa.agent.md)<br />[![Install in VS Code](https://img.shields.io/badge/VS_Code-Install-0098FF?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fai-team-qa.agent.md)<br />[![Install in VS Code Insiders](https://img.shields.io/badge/VS_Code_Insiders-Install-24bfa5?style=flat-square&logo=visualstudiocode&logoColor=white)](https://aka.ms/awesome-copilot/install/agent?url=vscode-insiders%3Achat-agent%2Finstall%3Furl%3Dhttps%3A%2F%2Fraw.githubusercontent.com%2Fgithub%2Fawesome-copilot%2Fmain%2Fagents%2Fai-team-qa.agent.md) | AI QA engineer agent (Ivy). Use when: testing features, running E2E tests, playtesting, filing bug reports, writing test automation, creating QA sign-off documents, or verifying bug fixes. Reports bugs as GitHub Issues. | |


@@ -25,6 +25,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-plugins) for guidelines on how t
| Name | Description | Items | Tags |
| ---- | ----------- | ----- | ---- |
| [acreadiness-cockpit](../plugins/acreadiness-cockpit/README.md) | Drive Microsoft AgentRC from Copilot chat: assess AI readiness, generate Copilot instructions (flat or nested with applyTo globs for monorepos), and manage policies. Produces a self-contained static HTML dashboard at reports/index.html. | 4 items | agentrc, ai-readiness, copilot-instructions, readiness-report, monorepo, policy, dashboard |
| [ai-team-orchestration](../plugins/ai-team-orchestration/README.md) | Bootstrap and run a multi-agent AI development team with named roles (Producer, Dev Team, QA). Sprint planning, brainstorm prompts with distinct agent voices, cross-chat context survival, and parallel team workflows. Based on a proven template that shipped a 30-game app in 5 days with zero human-written code. | 4 items | ai-team, multi-agent, sprint-planning, brainstorm, project-management, orchestration, developer-workflow |
| [arize-ax](../plugins/arize-ax/README.md) | Arize AX platform skills for LLM observability, evaluation, and optimization. Includes trace export, instrumentation, datasets, experiments, evaluators, AI provider integrations, annotations, prompt optimization, and deep linking to the Arize UI. | 9 items | arize, llm, observability, tracing, evaluation, instrumentation, datasets, experiments, prompt-optimization |
| [automate-this](../plugins/automate-this/README.md) | Record your screen doing a manual process, drop the video on your Desktop, and let Copilot CLI analyze it frame-by-frame to build working automation scripts. Supports narrated recordings with audio transcription. | 1 item | automation, screen-recording, workflow, video-analysis, process-automation, scripting, productivity, copilot-cli |


@@ -28,6 +28,9 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to
| Name | Description | Bundled Assets |
| ---- | ----------- | -------------- |
| [acquire-codebase-knowledge](../skills/acquire-codebase-knowledge/SKILL.md)<br />`gh skills install github/awesome-copilot acquire-codebase-knowledge` | Use this skill when the user explicitly asks to map, document, or onboard into an existing codebase. Trigger for prompts like "map this codebase", "document this architecture", "onboard me to this repo", or "create codebase docs". Do not trigger for routine feature implementation, bug fixes, or narrow code edits unless the user asks for repository-level discovery. | `assets/templates`<br />`references/inquiry-checkpoints.md`<br />`references/stack-detection.md`<br />`scripts/scan.py` |
| [acreadiness-assess](../skills/acreadiness-assess/SKILL.md)<br />`gh skills install github/awesome-copilot acreadiness-assess` | Run the AgentRC readiness assessment on the current repository and produce a static HTML dashboard at reports/index.html. Wraps `npx github:microsoft/agentrc readiness` and hands off rendering to the @ai-readiness-reporter custom agent. Supports policies (--policy) for org-specific scoring. Use when asked to assess, audit, or score the AI readiness of a repo. | `report-template.html` |
| [acreadiness-generate-instructions](../skills/acreadiness-generate-instructions/SKILL.md)<br />`gh skills install github/awesome-copilot acreadiness-generate-instructions` | Generate tailored AI agent instruction files via AgentRC instructions command. Produces .github/copilot-instructions.md (default, recommended for Copilot in VS Code) plus optional per-area .instructions.md files with applyTo globs for monorepos. Use after running /acreadiness-assess to close gaps in the AI Tooling pillar. | None |
| [acreadiness-policy](../skills/acreadiness-policy/SKILL.md)<br />`gh skills install github/awesome-copilot acreadiness-policy` | Help the user pick, write, or apply an AgentRC policy. Policies customise readiness scoring by disabling irrelevant checks, overriding impact/level, setting pass-rate thresholds, or chaining org baselines with team overrides. Use when the user asks about strict mode, AI-only scoring, custom weights, CI gating, or wants org-wide standardisation. | None |
| [add-educational-comments](../skills/add-educational-comments/SKILL.md)<br />`gh skills install github/awesome-copilot add-educational-comments` | Add educational comments to the file specified, or prompt asking for file to comment if one is not provided. | None |
| [adobe-illustrator-scripting](../skills/adobe-illustrator-scripting/SKILL.md)<br />`gh skills install github/awesome-copilot adobe-illustrator-scripting` | Write, debug, and optimize Adobe Illustrator automation scripts using ExtendScript (JavaScript/JSX). Use when creating or modifying scripts that manipulate documents, layers, paths, text frames, colors, symbols, artboards, or any Illustrator DOM objects. Covers the complete JavaScript object model, coordinate system, measurement units, export workflows, and scripting best practices. | `references/object-model-quick-reference.md`<br />`scripts/batch-export-png.jsx`<br />`scripts/create-color-grid.jsx`<br />`scripts/find-replace-text.jsx` |
| [agent-governance](../skills/agent-governance/SKILL.md)<br />`gh skills install github/awesome-copilot agent-governance` | Patterns and techniques for adding governance, safety, and trust controls to AI agent systems. Use this skill when:<br />- Building AI agents that call external tools (APIs, databases, file systems)<br />- Implementing policy-based access controls for agent tool usage<br />- Adding semantic intent classification to detect dangerous prompts<br />- Creating trust scoring systems for multi-agent workflows<br />- Building audit trails for agent actions and decisions<br />- Enforcing rate limits, content filters, or tool restrictions on agents<br />- Working with any agent framework (PydanticAI, CrewAI, OpenAI Agents, LangChain, AutoGen) | None |


@@ -0,0 +1,27 @@
{
"name": "acreadiness-cockpit",
"description": "Drive Microsoft AgentRC from Copilot chat: assess AI readiness, generate Copilot instructions (flat or nested with applyTo globs for monorepos), and manage policies. Produces a self-contained static HTML dashboard at reports/index.html.",
"version": "1.0.0",
"keywords": [
"agentrc",
"ai-readiness",
"copilot-instructions",
"readiness-report",
"monorepo",
"policy",
"dashboard"
],
"author": {
"name": "mvanderbend-msoft"
},
"repository": "https://github.com/github/awesome-copilot",
"license": "MIT",
"agents": [
"./agents/ai-readiness-reporter.md"
],
"skills": [
"./skills/acreadiness-assess/",
"./skills/acreadiness-generate-instructions/",
"./skills/acreadiness-policy/"
]
}


@@ -0,0 +1,76 @@
# acreadiness-cockpit
Drive [Microsoft AgentRC](https://github.com/microsoft/agentrc) from Copilot chat. Frames every interaction inside AgentRC's **Measure → Generate → Maintain** loop.
## What's in the plugin
### Custom agent
| Agent | What it does |
|---|---|
| `@ai-readiness-reporter` | Runs `agentrc readiness --json`, interprets every result against the 9-pillar / 5-level model, then renders a self-contained `reports/index.html` from a fixed HTML/CSS template so every user gets an identically styled dashboard. Honours policies (disabled criteria, overrides, pass-rate thresholds) and surfaces extras separately. |
### Skills
| Skill | Step | What it does |
|---|---|---|
| `/acreadiness-assess` | **Measure** | Runs the readiness scan and hands off to `@ai-readiness-reporter` to produce the static HTML dashboard. Accepts `--policy <path-or-pkg>` and `--per-area`. |
| `/acreadiness-generate-instructions` | **Generate** | Wraps `agentrc instructions`. Default output is `.github/copilot-instructions.md` (Copilot-native). Asks `flat` vs `nested`. For monorepos, also emits per-area `.github/instructions/<area>.instructions.md` files with `applyTo` globs. |
| `/acreadiness-policy` | **Maintain** | Pick, scaffold, or apply an AgentRC policy. Knows the schema (`criteria.disable`, `criteria.override`, `extras`, `thresholds`), the impact-weight table, and CI gating with `--fail-level`. |
## What gets produced
`reports/index.html` — a single self-contained HTML file rendered from a fixed template (`skills/acreadiness-assess/report-template.html`) so every user gets an identical look & feel. It contains:
- Maturity badge (L1–L5) and overall score / grade (A–F)
- Pass-rate vs threshold (when a policy sets one)
- Maturity progression table
- **Active policy** summary (disabled/overridden criteria, threshold)
- **Repo Health** breakdown (8 pillars), each with an **AI relevance** badge (High/Medium/Low), *what it measures*, *why it matters for AI*, *current state*, *recommendation*
- **AI Setup** breakdown (AI Tooling pillar)
- **Extras** (informational only — agents-doc, pr-template, pre-commit, architecture-doc)
- **Prioritised Remediation Plan** (🔴 Fix First / 🟡 Fix Next / 🔵 Plan)
- Embedded raw AgentRC JSON for reuse
## Prerequisites
- **Node.js 20+** on PATH (required by AgentRC)
- VS Code with Copilot agent plugins enabled
## Usage
In Copilot chat:
```text
/acreadiness-assess # measure → reports/index.html
/acreadiness-assess --policy ./policies/strict.json
/acreadiness-generate-instructions # asks flat or nested
/acreadiness-generate-instructions --strategy flat
/acreadiness-generate-instructions --strategy nested
/acreadiness-generate-instructions --areas # per-area applyTo files
/acreadiness-policy new my-policy
@ai-readiness-reporter
```
### Flat vs nested instructions
| | **Flat** *(default)* | **Nested** |
|---|---|---|
| Hub file | `.github/copilot-instructions.md` | `.github/copilot-instructions.md` |
| Detail files | — | `.github/instructions/<topic>.instructions.md` (each with `applyTo` glob) |
| Best for | Small / medium repos, single stack | Large or multi-stack repos, monorepos |
| Token cost | Whole file always loads | VS Code only loads topics whose `applyTo` matches |
When the main output is `.github/copilot-instructions.md`, the skill rewrites AgentRC's nested output to VS Code's native `.instructions.md` layout (which Copilot auto-discovers). With `--output AGENTS.md`, nested keeps AgentRC's default `.agents/` layout for agent-agnostic tooling.
### Concepts (cheat sheet)
- **Maturity**: L1 Functional → L2 Documented → L3 Standardized → L4 Optimized → L5 Autonomous
- **Pillars** (Repo Health): Style · Build · Testing · Docs · Dev Environment · Code Quality · Observability · Security
- **Pillars** (AI Setup): AI Tooling
- **Impact weights**: critical 5 · high 4 · medium 3 · low 2 · info 0
- **Grades**: A ≥ 0.9 · B ≥ 0.8 · C ≥ 0.7 · D ≥ 0.6 · F < 0.6
## License
MIT


@@ -0,0 +1,46 @@
---
name: acreadiness-assess
description: 'Run the AgentRC readiness assessment on the current repository and produce a static HTML dashboard at reports/index.html. Wraps `npx github:microsoft/agentrc readiness` and hands off rendering to the @ai-readiness-reporter custom agent. Supports policies (--policy) for org-specific scoring. Use when asked to assess, audit, or score the AI readiness of a repo.'
argument-hint: "[--policy <path-or-pkg>] [--per-area] — e.g. /acreadiness-assess, /acreadiness-assess --policy ./policies/strict.json"
---
# /acreadiness-assess — AI-readiness assessment
Use this skill whenever the user asks for an **AI-readiness assessment**, a **readiness check**, an **audit**, or wants to **see how AI-ready** their repository is.
This skill is the *Measure* step in AgentRC's **Measure → Generate → Maintain** loop. The result is a self-contained HTML dashboard the user can open with `file://` or commit to the repo.
## Steps
1. **Confirm prerequisites.** Node 20+ must be on PATH. If unsure, run `node --version`.
2. **Decide on a policy** (optional but encouraged):
- If the user provided `--policy <source>`, capture it.
- Otherwise check `agentrc.config.json` for a `policies` array.
- If neither, run with no policy (built-in defaults).
- For a primer on policies, suggest the `acreadiness-policy` skill.
3. **Run the readiness scan** in the repo root with structured output:
```bash
npx -y github:microsoft/agentrc readiness --json [--policy <source>] [--per-area]
```
The `CommandResult<T>` JSON envelope is your input for the next step.
4. **Hand off to the `ai-readiness-reporter` custom agent** to interpret the JSON and produce `reports/index.html`. The agent renders via the bundled template `report-template.html` (shipped alongside this skill) so every report has an identical look & feel. The agent:
- Reads the bundled `report-template.html` and substitutes placeholders with real data.
- Inlines all CSS, ships a single static file (works under `file://`).
- Renders maturity level, overall score, grade, pass-rate vs threshold.
- Breaks down all 9 pillars across **Repo Health** (8) and **AI Setup** (1) with *what it measures*, *why it matters for AI*, *current state*, and *a specific recommendation*.
- Tags every pillar with an **AI relevance** badge (High / Medium / Low).
- Surfaces **Extras** separately (they never affect the score).
- Shows the **Active Policy** including any disabled/overridden criteria and thresholds.
- Produces a **Prioritised Remediation Plan** (🔴 Fix First / 🟡 Fix Next / 🔵 Plan).
- Embeds the raw AgentRC JSON for reuse.
5. **Tell the user where the report lives** (`reports/index.html`) and how to open it. Summarise in chat: maturity level, overall score, top three lowest pillars, and the single highest-leverage next action (almost always: run the `acreadiness-generate-instructions` skill).
## Notes
- AgentRC also has a built-in HTML renderer (`--visual` / `--output report.html`) but its output is intentionally generic. This skill produces a tailored, opinionated dashboard via the custom agent — closer to a code review than a metrics dump.
- For CI gating, recommend `agentrc readiness --fail-level <n>` (1–5).
- The skill never modifies repository files other than creating `reports/index.html`.


@@ -0,0 +1,227 @@
<!--
AI Readiness Report — canonical template
--------------------------------------------
This file is the single source of truth for the look & feel of the
reports/index.html output. The @ai-readiness-reporter agent MUST load
this file, substitute the {{placeholders}} with real data from
`agentrc readiness --json`, and write the result to reports/index.html.
Rules for the agent:
- Do NOT change the HTML structure, class names, CSS variables or the
inline <style> block. The template is intentionally fixed so every
consumer of this plugin gets an identical-looking report.
- Replace every {{placeholder}} with concrete data. Repeat the marked
blocks (pillar cards, plan rows, maturity rows, extra rows) for
each item. Remove blocks that don't apply (e.g. policy section if
no policy is active).
- Keep the file self-contained: no external CSS/JS, no network fonts.
- Preserve the <script type="application/json" id="raw-data"> block
and embed the compact AgentRC JSON inside it.
Placeholders used:
{{repoName}} repository name
{{date}} ISO date the report was generated
{{level}} maturity level number (1-5)
{{levelName}} maturity level name (Functional, Documented, ...)
{{overallPct}} overall readiness as integer percent
{{grade}} letter grade A-F
{{passRate}}       pass rate, fully formatted (e.g. "85%", or "—" if N/A);
                   the literal % is part of the substituted value
{{threshold}}      policy pass-rate threshold, fully formatted (or "—")
{{policyName}} active policy name (omit policy section if none)
{{policySummary}} one-paragraph summary of disabled/overridden criteria
{{rawJsonCompact}} compact JSON for embedding
{{rawJsonPretty}} pretty JSON for the <details> view
Pillar card placeholders (repeat per pillar):
{{pillarName}} {{pillarScore}} {{pillarRelevance}} (high|medium|low)
{{pillarStatus}} (good|warn|bad — drives bar + dot colour)
{{pillarWhat}} {{pillarWhyAi}} {{pillarCurrent}} {{pillarRecommendation}}
-->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>AI Readiness — {{repoName}}</title>
<style>
:root {
--bg:#0f1115; --panel:#161a22; --panel-2:#1d2230; --border:#262c3a;
--text:#e6e9ef; --muted:#8a93a6; --accent:#6ea8ff;
--good:#4ade80; --warn:#fbbf24; --bad:#f87171;
}
* { box-sizing: border-box; }
html,body { margin:0; background:var(--bg); color:var(--text);
font:14px/1.5 -apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,sans-serif; }
a { color: var(--accent); }
header { padding: 28px 32px; border-bottom: 1px solid var(--border);
background: linear-gradient(180deg,#141823,#0f1115); }
header h1 { margin: 0 0 4px; font-size: 22px; }
header .meta { color: var(--muted); font-size: 13px; }
main { max-width: 1180px; margin: 0 auto; padding: 24px 32px 80px; }
.panel { background:var(--panel); border:1px solid var(--border);
border-radius:10px; padding:20px; margin-bottom:18px; }
.grid { display:grid; gap:16px; }
.grid.cols-3 { grid-template-columns: repeat(3, 1fr); }
.grid.cols-2 { grid-template-columns: 1fr 1fr; }
.kpi .num { font-size: 30px; font-weight: 700; }
.kpi .lbl { color: var(--muted); font-size: 11px; text-transform: uppercase; letter-spacing: .8px; }
.badge { display:inline-block; padding:3px 10px; border-radius:999px;
font-size:12px; font-weight:600; }
.lvl-1 { background:#3a1f24; color:#f87171; }
.lvl-2 { background:#3b2c1d; color:#fbbf24; }
.lvl-3 { background:#2c3119; color:#d3e85e; }
.lvl-4 { background:#1d3325; color:#4ade80; }
.lvl-5 { background:#1c2c3d; color:#6ea8ff; }
.bar { height:8px; background:var(--panel-2); border-radius:4px; overflow:hidden; }
.bar > span { display:block; height:100%; background: var(--accent); }
.bar.good > span { background: var(--good); }
.bar.warn > span { background: var(--warn); }
.bar.bad > span { background: var(--bad); }
table { width:100%; border-collapse:collapse; }
th,td { text-align:left; padding:8px 10px; border-bottom:1px solid var(--border); font-size:13px; }
th { color:var(--muted); font-weight:500; text-transform:uppercase; font-size:11px; letter-spacing:.8px; }
code { background:#0a0c11; padding:1px 6px; border-radius:4px; }
h2 { font-size:14px; color:var(--muted); text-transform:uppercase; letter-spacing:.8px; margin:0 0 12px; }
.dot { width:8px; height:8px; border-radius:50%; display:inline-block; }
.dot.good { background:var(--good); } .dot.warn { background:var(--warn); } .dot.bad { background:var(--bad); }
footer { color: var(--muted); font-size: 12px; text-align: center; padding: 20px; }
/* Pillar cards */
.pillar { background:var(--panel-2); border:1px solid var(--border);
border-radius:8px; padding:14px 16px; }
.pillar h3 { margin:0 0 6px; font-size:15px; display:flex; align-items:center; gap:10px; flex-wrap:wrap; }
.pillar .why { color:var(--muted); font-size:13px; margin:8px 0 0; }
.pillar .what { font-size:13px; margin:6px 0 0; }
.pillar .rec { font-size:13px; margin:8px 0 0; }
.rel { font-size:10px; padding:2px 8px; border-radius:999px; text-transform:uppercase; letter-spacing:.6px; font-weight:600; }
.rel.high { background:#1c2c3d; color:#6ea8ff; }
.rel.medium { background:#2c3119; color:#d3e85e; }
.rel.low { background:#262c3a; color:#8a93a6; }
</style>
</head>
<body>
<header>
<h1>AI Readiness Report</h1>
<div class="meta">
<strong>{{repoName}}</strong> · Assessed {{date}} ·
<span class="badge lvl-{{level}}">L{{level}} — {{levelName}}</span> ·
Overall <strong>{{overallPct}}%</strong> · Grade <strong>{{grade}}</strong>
<!-- if a policy is active, append: · Policy <code>{{policyName}}</code> -->
</div>
</header>
<main>
<!-- 1. What is AI Readiness? -->
<section class="panel">
<h2>What is AI Readiness?</h2>
<p>AI coding agents are only as effective as the context they receive. AgentRC measures how AI-ready a repo is across <strong>9 pillars</strong> in two categories — Repo Health and AI Setup — and maps the result to a <strong>5-level maturity model</strong>. This report is the <em>Measure</em> step in AgentRC's <em>Measure → Generate → Maintain</em> loop.</p>
<p style="color:var(--muted);font-size:13px;margin-top:8px">Each pillar carries an <strong>AI relevance</strong> rating (High / Medium / Low) so you can tell at a glance which gaps most directly affect Copilot's output and which are general engineering hygiene.</p>
</section>
<!-- 2. KPIs -->
<section class="grid cols-3">
<div class="panel kpi"><span class="lbl">Maturity</span><div class="num"><span class="badge lvl-{{level}}">L{{level}} — {{levelName}}</span></div></div>
<div class="panel kpi"><span class="lbl">Overall Score</span><div class="num">{{overallPct}}%</div><div style="color:var(--muted);font-size:12px">Grade {{grade}}</div></div>
<div class="panel kpi"><span class="lbl">Pass rate</span><div class="num">{{passRate}}</div><div style="color:var(--muted);font-size:12px">Threshold {{threshold}}</div></div>
</section>
<!-- 3. Maturity progression -->
<section class="panel">
<h2>Maturity Progression</h2>
<table>
<thead><tr><th>Level</th><th>Name</th><th>Status</th></tr></thead>
<tbody>
<!-- Render levels 5 → 1. Mark the current level with "◼ You are here". Example row:
<tr><td>L3</td><td>Standardized</td><td>◼ You are here</td></tr>
-->
</tbody>
</table>
</section>
<!-- 4. Active policy (omit this section entirely when no policy is active) -->
<section class="panel">
<h2>Active Policy</h2>
<p><code>{{policyName}}</code> — {{policySummary}}</p>
</section>
<!-- 5. Repo Health Pillars -->
<section class="panel">
<h2>Repo Health Breakdown</h2>
<div class="grid cols-2">
<!--
Repeat one .pillar block per Repo Health pillar (8 pillars):
Style, Build, Testing, Docs, Dev Environment, Code Quality, Observability, Security.
<div class="pillar">
<h3>
<span class="dot {{pillarStatus}}"></span>
{{pillarName}}
<span class="rel {{pillarRelevance}}">AI relevance: {{pillarRelevance}}</span>
<span style="margin-left:auto;color:var(--muted);font-size:13px">{{pillarScore}}%</span>
</h3>
<div class="bar {{pillarStatus}}"><span style="width:{{pillarScore}}%"></span></div>
<p class="what"><strong>What it measures:</strong> {{pillarWhat}}</p>
<p class="why"><strong>Why it matters for AI:</strong> {{pillarWhyAi}}</p>
<p class="rec"><strong>Current state:</strong> {{pillarCurrent}}</p>
<p class="rec"><strong>Recommendation:</strong> {{pillarRecommendation}}</p>
</div>
-->
</div>
</section>
<!-- 6. AI Setup Pillars -->
<section class="panel">
<h2>AI Setup Breakdown</h2>
<div class="grid cols-2">
<!-- AI Tooling pillar block — same structure as above, AI relevance is always "high". -->
</div>
</section>
<!-- 7. Extras -->
<section class="panel">
<h2>Extras (informational, do not affect score)</h2>
<table>
<thead><tr><th></th><th>Extra</th><th>Status</th></tr></thead>
<tbody>
<!-- agents-doc, pr-template, pre-commit, architecture-doc rows. Use ✅ or ◻. -->
</tbody>
</table>
</section>
<!-- 8. Prioritised Remediation Plan -->
<section class="panel">
<h2>Prioritised Remediation Plan</h2>
<h3 style="color:var(--bad)">🔴 Fix First (high impact / low effort)</h3>
<table><thead><tr><th>#</th><th>Finding</th><th>File / config</th><th>Why it matters</th></tr></thead><tbody><!-- rows --></tbody></table>
<h3 style="color:var(--warn)">🟡 Fix Next (medium impact / low effort)</h3>
<table><thead><tr><th>#</th><th>Finding</th><th>File / config</th><th>Why</th></tr></thead><tbody><!-- rows --></tbody></table>
<h3 style="color:var(--accent)">🔵 Plan (medium impact / medium effort)</h3>
<table><thead><tr><th>#</th><th>Finding</th><th>File / config</th><th>Why</th></tr></thead><tbody><!-- rows --></tbody></table>
</section>
<!-- 9. Next steps -->
<section class="panel">
<h2>Next Steps</h2>
<ol>
<li>Generate or refresh instructions: <code>agentrc instructions --output .github/copilot-instructions.md</code> (or use the <code>acreadiness-generate-instructions</code> skill).</li>
<li>Address each item under <strong>🔴 Fix First</strong>; re-run this report to confirm score improvement.</li>
<li>Codify org standards via a JSON policy (<code>strict.json</code>, <code>ai-only.json</code>, …) and re-run with <code>--policy</code>.</li>
<li>Wire <code>agentrc readiness --fail-level &lt;n&gt;</code> into CI to prevent regressions.</li>
</ol>
</section>
<!-- 10. Raw data -->
<details class="panel">
<summary style="cursor:pointer;color:var(--muted)">Raw AgentRC JSON</summary>
<pre style="overflow:auto;font-size:11px;color:#b8c0d2">{{rawJsonPretty}}</pre>
</details>
<script type="application/json" id="raw-data">{{rawJsonCompact}}</script>
</main>
<footer>
Generated by <a href="https://github.com/github/awesome-copilot/tree/main/plugins/acreadiness-cockpit">acreadiness-cockpit</a>
· powered by <a href="https://github.com/microsoft/agentrc">microsoft/agentrc</a>.
</footer>
</body>
</html>


@@ -0,0 +1,107 @@
---
name: acreadiness-generate-instructions
description: 'Generate tailored AI agent instruction files via AgentRC instructions command. Produces .github/copilot-instructions.md (default, recommended for Copilot in VS Code) plus optional per-area .instructions.md files with applyTo globs for monorepos. Use after running /acreadiness-assess to close gaps in the AI Tooling pillar.'
argument-hint: "[--output .github/copilot-instructions.md|AGENTS.md] [--strategy flat|nested] [--areas | --area <name>] [--apply-to <glob>] [--claude-md] [--dry-run]"
---
# /acreadiness-generate-instructions — write AI agent instructions
Use this skill whenever the user wants to **create**, **regenerate**, or **refresh** their custom instructions for AI coding agents (Copilot, Claude, etc.). This is the *Generate* step in AgentRC's **Measure → Generate → Maintain** loop and the single highest-leverage action for the **AI Tooling** pillar.
## Output options
VS Code recognises several instruction file types — AgentRC generates the most common ones:
| File | Scope | When to use |
|---|---|---|
| `.github/copilot-instructions.md` | Always-on, whole workspace | **Default** — VS Code Copilot's native instruction file |
| `AGENTS.md` | Always-on, whole workspace | Multi-agent repos (Copilot + Claude + others) |
| `.github/instructions/*.instructions.md` | Scoped by `applyTo` glob | Per-area / per-language rules in monorepos |
| `CLAUDE.md` | Claude-specific | Add via `--claude-md` (nested only) |
## Strategies
- **`flat`** *(default)* — single `.github/copilot-instructions.md` at the chosen path. Simple, easy to review.
- **`nested`** — hub at `.github/copilot-instructions.md` + per-topic detail files at `.github/instructions/<topic>.instructions.md`, each with an `applyTo` glob so VS Code only loads the topic when it's relevant. Better for large or multi-stack repos.
> **Why `.github/instructions/` and not `.agents/`?** AgentRC's default nested layout writes to `.agents/`, which is the right home for *agent-agnostic* repos (Copilot + Claude + Cursor reading `AGENTS.md`). For VS Code Copilot specifically, the native location is `.github/instructions/` with `applyTo` frontmatter — that's what Copilot auto-discovers. This skill rewrites AgentRC's nested output to the VS Code-native location whenever the main output is `.github/copilot-instructions.md`. If you instead chose `--output AGENTS.md`, nested keeps AgentRC's default `.agents/` layout.
For monorepos, generate **area-scoped** instructions with `--areas`, `--area <name>`, or `--areas-only`. Areas are defined in `agentrc.config.json`. Per-area output is written as VS Code `.instructions.md` files with an `applyTo` glob (see below).
### Topic vs area `.instructions.md` files
Both end up in `.github/instructions/` but they answer different questions:
| Kind | Filename example | `applyTo` example | Where it comes from |
|---|---|---|---|
| **Topic** (nested) | `testing.instructions.md` | `**/*.{test,spec}.{ts,tsx,js}` | AgentRC `--strategy nested` topic split |
| **Area** (monorepo) | `frontend.instructions.md` | `apps/frontend/**` | `agentrc.config.json` areas + `--areas` |
You can have both at once: a nested set of topic files plus per-area files for a monorepo.
## Per-area files with `applyTo`
When the user opts into areas, emit one VS Code-native `.instructions.md` file per area at `.github/instructions/<area>.instructions.md`. Each file MUST start with frontmatter declaring the glob the rules apply to:
```markdown
---
applyTo: "apps/frontend/**"
---
# Frontend area instructions
…AgentRC-generated content for this area…
```
Workflow:
1. **Read `agentrc.config.json`** to discover declared areas and their `paths` / globs. If `paths` is missing, ask the user for the glob (e.g. `src/api/**`).
2. **Run `agentrc instructions --areas`** (or `--area <name>`) to produce the per-area body content.
3. **Wrap each area's content** in `.github/instructions/<area>.instructions.md` with the `applyTo` frontmatter taken from the area's `paths`. If the user passed `--apply-to <glob>` on a single-area call, use that glob verbatim.
4. **Leave the main file alone** — the root `.github/copilot-instructions.md` stays as the always-on instructions; `.instructions.md` files only kick in for matching paths.
Naming: lowercase, kebab-case area name. Examples: `.github/instructions/frontend.instructions.md`, `.github/instructions/api.instructions.md`, `.github/instructions/infra.instructions.md`.
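A hedged sketch of steps 1–3 for a single area; the `jq` path into `agentrc.config.json` and the intermediate body file are assumptions about shapes this document doesn't pin down:
```bash
# Illustrative: wrap one area's generated body in applyTo frontmatter.
area="frontend"                                      # hypothetical area name
# Assumed config shape: { "areas": { "<name>": { "paths": ["..."] } } }
glob="$(jq -r --arg a "$area" '.areas[$a].paths[0]' agentrc.config.json)"
mkdir -p .github/instructions
{
  printf -- '---\napplyTo: "%s"\n---\n' "$glob"
  cat "/tmp/agentrc-${area}.md"                      # hypothetical per-area body from agentrc
} > ".github/instructions/${area}.instructions.md"
```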
## Steps
1. **Pick the target file**. **Default to `.github/copilot-instructions.md`.** Switch to `AGENTS.md` only if the user mentions multi-agent / Claude / Cursor support.
2. **Always ask which strategy to use** (`flat` or `nested`) unless the user already specified one in their message or via `--strategy`. Present the trade-off briefly:
- **Flat** *(default)* — one `.github/copilot-instructions.md`. Simple, easy to review in a single PR. Best for small/medium repos with one stack.
- **Nested** — hub `.github/copilot-instructions.md` + per-topic `.github/instructions/<topic>.instructions.md` files (each with an `applyTo` glob so VS Code only loads them when relevant). Best for large or multi-stack repos. Add `--claude-md` to also emit `CLAUDE.md`.
Recommend `nested` proactively when the repo has > 5 top-level directories, multiple stacks, or already uses a monorepo tool (turbo/nx/pnpm workspaces).
3. **Detect monorepo areas** by reading `agentrc.config.json`. If areas exist, ask the user whether they want **per-area `.instructions.md` files with `applyTo`** in addition to the root file. Default to "yes" when `agentrc.config.json` declares areas.
4. **Run dry-run first** so the user can preview:
```bash
npx -y github:microsoft/agentrc instructions --output <file> --strategy <flat|nested> [--areas|--area <name>] [--claude-md] --dry-run
```
5. **Show a short summary** of what would change — files that would be created or overwritten, area count + their `applyTo` globs, model used (default `claude-sonnet-4.6`).
6. **On confirmation, run the same command without `--dry-run`** (and optionally `--force` if files already exist).
7. **Post-process layout for Copilot output**:
- **If `--output` ends in `copilot-instructions.md` and strategy is `nested`**: move/rewrite AgentRC's `.agents/<topic>.md` files to `.github/instructions/<topic>.instructions.md`. Add frontmatter to each file with an appropriate `applyTo` glob (see "Topic applyTo defaults" below). Delete the now-empty `.agents/` directory.
- **If `--areas` was used**: also write `.github/instructions/<area>.instructions.md` for every area, using each area's `paths` from `agentrc.config.json` as the `applyTo` glob (override with `--apply-to` for single-area calls).
- **If `--output AGENTS.md`** was chosen: keep AgentRC's native `.agents/` layout for nested — agent-agnostic readers expect it there.
Create the `.github/instructions/` directory if missing.
### Topic `applyTo` defaults
When promoting AgentRC's nested topic files to `.instructions.md`, use these defaults unless the user specifies otherwise:
| Topic | Default `applyTo` |
|---|---|
| `testing` | `**/*.{test,spec}.{ts,tsx,js,jsx,mjs,cjs}` |
| `style` / `code-quality` / `formatting` | `**/*.{ts,tsx,js,jsx,mjs,cjs,py,go,rs,java,kt,cs}` |
| `build` / `ci` | `**/{package.json,turbo.json,nx.json,.github/workflows/**}` |
| `docs` | `**/*.md` |
| `security` | `**` |
| anything else / hub-level | `**` |
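A minimal sketch of the step-7 promotion, with the glob hard-wired to the `**` fallback (the skill substitutes the per-topic defaults from the table above):
```bash
# Move AgentRC's nested topic files to the VS Code-native location,
# prepending applyTo frontmatter, then drop the emptied .agents/ directory.
mkdir -p .github/instructions
for f in .agents/*.md; do
  topic="$(basename "$f" .md)"
  {
    printf -- '---\napplyTo: "**"\n---\n'            # swap in the topic's default glob
    cat "$f"
  } > ".github/instructions/${topic}.instructions.md"
  rm "$f"
done
rmdir .agents
```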
8. **Verify** by reading the generated file(s) back and showing the user a 1-paragraph synopsis: stack detected, conventions captured, length, list of `.instructions.md` files with their globs.
9. **Suggest next steps**:
- Re-run the `acreadiness-assess` skill to confirm the AI Tooling pillar score improved.
- If the user already has both `copilot-instructions.md` and `AGENTS.md`, recommend consolidating to a single source of truth (AgentRC flags this at maturity Level 2+).
## Notes
- AgentRC reads your **actual code** — no templates. Output reflects detected languages, frameworks, and conventions.
- `--claude-md` (nested strategy only) also emits `CLAUDE.md`.
- VS Code applies `.instructions.md` files automatically when the active file matches `applyTo`. The root `.github/copilot-instructions.md` always loads.
- Never run this skill non-interactively in CI; instructions are part of the repo and should land via PR.


@@ -0,0 +1,96 @@
---
name: acreadiness-policy
description: 'Help the user pick, write, or apply an AgentRC policy. Policies customise readiness scoring by disabling irrelevant checks, overriding impact/level, setting pass-rate thresholds, or chaining org baselines with team overrides. Use when the user asks about strict mode, AI-only scoring, custom weights, CI gating, or wants org-wide standardisation.'
argument-hint: "[show | new <name> | apply <path-or-pkg>] — e.g. /acreadiness-policy show, /acreadiness-policy new strict-frontend"
---
# /acreadiness-policy — AgentRC policies
Use this skill when the user asks about **policies**, **strict mode**, **custom scoring**, **disabling checks**, **org standards**, or **CI gating** of readiness.
A policy is a small JSON file with three optional sections — `criteria`, `extras`, `thresholds` — that customise how AgentRC scores readiness.
## Built-in examples
AgentRC ships with three example policies in `examples/policies/`:
| Policy | What it does |
|---|---|
| `strict.json` | 100% pass rate, raises impact on key criteria |
| `ai-only.json` | Disables all repo-health checks, focuses on AI tooling |
| `repo-health-only.json` | Disables AI checks, focuses on traditional quality |
Recommend these as starting points before writing a custom policy.
## Policy schema
```jsonc
{
"name": "my-policy",
"criteria": {
"disable": ["env-example", "observability", "dependabot"],
"override": {
"readme": { "impact": "high", "level": 2 },
"lint-config": { "title": "Linter required" }
}
},
"extras": {
"disable": ["pre-commit"]
},
"thresholds": {
"passRate": 0.9
}
}
```
### Impact weights
| Impact | Weight |
|---|---|
| critical | 5 |
| high | 4 |
| medium | 3 |
| low | 2 |
| info | 0 |
`Score = 1 - (deductions / max possible weight)`. Grades: **A** ≥ 0.9, **B** ≥ 0.8, **C** ≥ 0.7, **D** ≥ 0.6, **F** < 0.6.
## Sub-commands
### `show`
List policies currently in effect (from `agentrc.config.json` `policies` array, or none).
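A minimal sketch, assuming `jq` is available and the `policies` array convention shown under `new` below:
```bash
# Empty output means no policies are configured (built-in defaults apply).
jq -r '.policies // [] | .[]' agentrc.config.json 2>/dev/null
```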
### `new <name>`
Scaffold `policies/<name>.json` with sensible defaults. Walk the user through:
1. **What to disable** — irrelevant pillars or extras for their stack (e.g. disable `observability` for a static site).
2. **What to raise** — override `impact` to `high` or `critical` for must-haves (e.g. `readme`, `codeowners`).
3. **Pass-rate threshold** — typical org baselines: `0.7` (lenient), `0.85` (standard), `1.0` (strict).
4. Reference the policy from `agentrc.config.json`:
```json
{ "policies": ["./policies/<name>.json"] }
```
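A plausible scaffold for `/acreadiness-policy new strict-frontend` (all values illustrative; walk the user through tuning them as above):
```bash
mkdir -p policies
cat > policies/strict-frontend.json <<'EOF'
{
  "name": "strict-frontend",
  "criteria": {
    "disable": ["observability"],
    "override": { "readme": { "impact": "high", "level": 2 } }
  },
  "thresholds": { "passRate": 0.85 }
}
EOF
```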
### `apply <path-or-pkg>`
Run `agentrc readiness --json --policy <source>` and re-render the report by handing off to the `acreadiness-assess` skill / `ai-readiness-reporter` agent. Supports chaining:
```bash
npx -y github:microsoft/agentrc readiness --json --policy ./org-baseline.json,./team-frontend.json
```
## CI gating
Combine policies with `--fail-level` to enforce a minimum maturity level in CI:
```yaml
- run: npx -y github:microsoft/agentrc readiness --policy ./policies/strict.json --fail-level 3
```
## Advanced
JSON policies can disable, override, and set thresholds — but **cannot add new criteria**. For new detection logic, point users at AgentRC's TypeScript plugin system (`docs/dev/plugins.md`).
## Operating rules
- **Never silently disable a pillar.** If the user wants to disable `observability`, confirm and explain the trade-off.
- **Prefer overriding `impact` over disabling.** Disabling hides the gap entirely; overriding lets it still appear in the report.
- **Recommend extras stay enabled.** They cost nothing — they don't affect the score.
- **Suggest layering** — most orgs want a baseline policy + per-team overrides chained with `--policy a.json,b.json`.