mirror of
https://github.com/github/awesome-copilot.git
synced 2026-05-16 19:51:45 +00:00
Update FlowStudio Power Automate skills (#1664)
* feat(flowstudio): align Power Automate skills with MCP server v1.1.6

  Foundation skill (flowstudio-power-automate-mcp) rewritten to use the server's new `tool_search` and `list_skills` meta-tools (v1.1.5+) for discovery instead of cataloging every tool by hand. Cut from 519 to 295 lines. New "Which Skill to Use When" intent-keyed decision tree points at the four specialized skills. Build/debug/governance/monitoring updated for use-case framing. Tools that genuinely cross tiers (e.g. the debug skill borrowing `get_store_flow_summary`) are correct when the workflow needs them — the split between skills is by use-case intent, not by tool partition.

  Build skill: new Step 3a "Resolving Dynamic Connector Values" covers the `get_live_dynamic_options` outer-parameter auto-bridge (v1.1.6+) and the AadGraph user-picker fallback via `shared_office365users.SearchUserV2` (replaces the broken `builtInOperation:AadGraph.GetUsers`).

  Debug skill: Outlook user-picker failure note pointing at the fallback.

  Monitoring skill: description disambiguates from the server's monitor-flow tool bundle (runtime control of a single flow) — this skill is tenant-wide health analytics over the cached store.

  All 5 skills validate via `npm run skill:validate`; line endings LF only; codespell clean; auto-regenerated docs/README.skills.md included.

* fix(flowstudio): remove deprecated tool references

  The v1.1.5 MCP server release marked 5 tools [DEPRECATED], but the previous alignment commit missed them. Replacements per server source:

  - `get_live_flow_http_schema` → read `trigger.inputs.schema` from `get_live_flow`
  - `get_live_flow_trigger_url` → read `trigger.metadata.callbackUrl` from `get_live_flow`
  - `get_store_flow_trigger_url` → `get_store_flow.triggerUrl` field
  - `get_store_flow_errors` → `get_store_flow_runs(status=["Failed"])`
  - `set_store_flow_state` → `set_live_flow_state`

  Touches the build, debug, governance, and monitoring SKILL.md files and the foundation skill's tool-reference.md. Remaining mentions of the deprecated names are intentional — they live in deprecation notices naming the obsolete wrapper alongside its replacement.

* Update FlowStudio Power Automate skills
* Cover latest FlowStudio MCP actions
* Trim FlowStudio Power Automate skills
* Number FlowStudio build workflow steps
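The replacement table in the fix commit can be captured mechanically. The tool names below come from the commit message itself; the lookup helper is a hypothetical sketch, not part of the MCP server:

```python
# Replacement map taken from the commit message; the helper function itself
# is a hypothetical illustration, not an MCP server API.
DEPRECATED_REPLACEMENTS = {
    "get_live_flow_http_schema": "get_live_flow",      # read trigger.inputs.schema
    "get_live_flow_trigger_url": "get_live_flow",      # read trigger.metadata.callbackUrl
    "get_store_flow_trigger_url": "get_store_flow",    # read the triggerUrl field
    "get_store_flow_errors": "get_store_flow_runs",    # call with status=["Failed"]
    "set_store_flow_state": "set_live_flow_state",
}

def replacement_for(tool_name: str) -> str:
    """Map a deprecated tool name to its replacement; pass others through."""
    return DEPRECATED_REPLACEMENTS.get(tool_name, tool_name)

print(replacement_for("get_store_flow_errors"))  # -> get_store_flow_runs
```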
@@ -162,8 +162,8 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to
| [flowstudio-power-automate-build](../skills/flowstudio-power-automate-build/SKILL.md)<br />`gh skills install github/awesome-copilot flowstudio-power-automate-build` | Build, scaffold, and deploy Power Automate cloud flows using the FlowStudio MCP server. Your agent constructs flow definitions, wires connections, deploys, and tests — all via MCP without opening the portal. Load this skill when asked to: create a flow, build a new flow, deploy a flow definition, scaffold a Power Automate workflow, construct a flow JSON, update an existing flow's actions, patch a flow definition, add actions to a flow, wire up connections, or generate a workflow definition from scratch. Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app | `references/action-patterns-connectors.md`<br />`references/action-patterns-core.md`<br />`references/action-patterns-data.md`<br />`references/build-patterns.md`<br />`references/flow-schema.md`<br />`references/trigger-types.md` |
| [flowstudio-power-automate-debug](../skills/flowstudio-power-automate-debug/SKILL.md)<br />`gh skills install github/awesome-copilot flowstudio-power-automate-debug` | Debug failing Power Automate cloud flows using the FlowStudio MCP server. The Graph API only shows top-level status codes. This skill gives your agent action-level inputs and outputs to find the actual root cause. Load this skill when asked to: debug a flow, investigate a failed run, why is this flow failing, inspect action outputs, find the root cause of a flow error, fix a broken Power Automate flow, diagnose a timeout, trace a DynamicOperationRequestFailure, check connector auth errors, read error details from a run, or troubleshoot expression failures. Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app | `references/common-errors.md`<br />`references/debug-workflow.md` |
| [flowstudio-power-automate-governance](../skills/flowstudio-power-automate-governance/SKILL.md)<br />`gh skills install github/awesome-copilot flowstudio-power-automate-governance` | Govern Power Automate flows and Power Apps at scale using the FlowStudio MCP cached store. Classify flows by business impact, detect orphaned resources, audit connector usage, enforce compliance standards, manage notification rules, and compute governance scores — all without Dataverse or the CoE Starter Kit. Load this skill when asked to: tag or classify flows, set business impact, assign ownership, detect orphans, audit connectors, check compliance, compute archive scores, manage notification rules, run a governance review, generate a compliance report, offboard a maker, or any task that involves writing governance metadata to flows. Requires a FlowStudio for Teams or MCP Pro+ subscription — see https://mcp.flowstudio.app | None |
-| [flowstudio-power-automate-mcp](../skills/flowstudio-power-automate-mcp/SKILL.md)<br />`gh skills install github/awesome-copilot flowstudio-power-automate-mcp` | Foundation skill for Power Automate via FlowStudio MCP — auth setup, the reusable MCP helper (Python + Node.js), tool discovery via `list_skills` / `tool_search`, and oversized-response handling. Load this skill first when connecting an agent to Power Automate. For specialized workflows, load `power-automate-build`, `power-automate-debug`, `power-automate-monitoring` (Pro+), or `power-automate-governance` (Pro+) — each contains the workflow narrative, this skill provides the plumbing they all rely on. Requires a FlowStudio MCP subscription or compatible server — see https://mcp.flowstudio.app | `references/MCP-BOOTSTRAP.md`<br />`references/action-types.md`<br />`references/connection-references.md`<br />`references/tool-reference.md` |
+| [flowstudio-power-automate-mcp](../skills/flowstudio-power-automate-mcp/SKILL.md)<br />`gh skills install github/awesome-copilot flowstudio-power-automate-mcp` | Foundation skill for Power Automate via FlowStudio MCP — auth setup, the reusable MCP helper (Python + Node.js), tool discovery via `list_skills` / `tool_search`, and oversized-response handling. Load this skill first when connecting an agent to Power Automate. For specialized workflows, load `flowstudio-power-automate-build`, `flowstudio-power-automate-debug`, `flowstudio-power-automate-monitoring` (Pro+), or `flowstudio-power-automate-governance` (Pro+) — each contains the workflow narrative, this skill provides the plumbing they all rely on. Requires a FlowStudio MCP subscription or compatible server — see https://mcp.flowstudio.app | `references/MCP-BOOTSTRAP.md`<br />`references/action-types.md`<br />`references/connection-references.md`<br />`references/tool-reference.md` |
-| [flowstudio-power-automate-monitoring](../skills/flowstudio-power-automate-monitoring/SKILL.md)<br />`gh skills install github/awesome-copilot flowstudio-power-automate-monitoring` | **Pro+ subscription required.** Tenant-wide Power Automate flow health monitoring, failure rate analytics, and asset inventory using the FlowStudio MCP cached store. Load this skill ONLY for tenant-wide aggregated views — not for listing flows in a single environment or debugging a specific run (use power-automate-mcp or power-automate-debug for those). Not the same as the server's `monitor-flow` tool bundle (`tool_search query: "skill:monitor-flow"`) — that bundle is for runtime control of a single flow (start/stop/trigger/cancel/resubmit); this skill is for tenant-wide health analytics over the cached store. Load when asked to: monitor tenant health, get aggregated failure rates over a time window, review tenant-wide error trends, find inactive makers across the tenant, inventory all Power Apps in the tenant, compute governance scores, generate a compliance report, or run a tenant-wide health overview. Requires a FlowStudio for Teams or MCP Pro+ subscription — see https://mcp.flowstudio.app | None |
+| [flowstudio-power-automate-monitoring](../skills/flowstudio-power-automate-monitoring/SKILL.md)<br />`gh skills install github/awesome-copilot flowstudio-power-automate-monitoring` | Pro+ subscription required. Tenant-wide Power Automate monitoring using the FlowStudio MCP cached store: failure rates, run-health trends, maker/app inventory, inactive owners, and compliance/health reports. Use only for aggregated tenant views. For one environment, one flow, run control, or root-cause debugging, use flowstudio-power-automate-mcp, flowstudio-power-automate-debug, or the server monitor-flow bundle. Requires FlowStudio for Teams or MCP Pro+. | None |
| [fluentui-blazor](../skills/fluentui-blazor/SKILL.md)<br />`gh skills install github/awesome-copilot fluentui-blazor` | Guide for using the Microsoft Fluent UI Blazor component library (Microsoft.FluentUI.AspNetCore.Components NuGet package) in Blazor applications. Use this when the user is building a Blazor app with Fluent UI components, setting up the library, using FluentUI components like FluentButton, FluentDataGrid, FluentDialog, FluentToast, FluentNavMenu, FluentTextField, FluentSelect, FluentAutocomplete, FluentDesignTheme, or any component prefixed with "Fluent". Also use when troubleshooting missing providers, JS interop issues, or theming. | `references/DATAGRID.md`<br />`references/LAYOUT-AND-NAVIGATION.md`<br />`references/SETUP.md`<br />`references/THEMING.md` |
| [folder-structure-blueprint-generator](../skills/folder-structure-blueprint-generator/SKILL.md)<br />`gh skills install github/awesome-copilot folder-structure-blueprint-generator` | Comprehensive technology-agnostic prompt for analyzing and documenting project folder structures. Auto-detects project types (.NET, Java, React, Angular, Python, Node.js, Flutter), generates detailed blueprints with visualization options, naming conventions, file placement patterns, and extension templates for maintaining consistent code organization across diverse technology stacks. | None |
| [foundry-agent-sync](../skills/foundry-agent-sync/SKILL.md)<br />`gh skills install github/awesome-copilot foundry-agent-sync` | Create and synchronize prompt-based AI agents directly within Azure AI Foundry via REST API, from a local JSON manifest. Unlike scaffolding skills that only generate local code, this skill registers agents in the Foundry service itself — making them immediately available for invocation. Use when the user asks to create agents in Foundry, sync, deploy, register, or push agents to Foundry, update agent instructions, or scaffold the manifest and sync script for a new repository. Triggers: 'create agent in foundry', 'sync foundry agents', 'deploy agents to foundry', 'register agents in foundry', 'push agents', 'create foundry agent manifest', 'scaffold agent sync'. | None |
@@ -32,7 +32,7 @@ copilot plugin install flowstudio-power-automate@awesome-copilot
| -------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `flowstudio-power-automate-mcp` | Foundation skill — auth setup, the reusable MCP helper (Python + Node.js), tool discovery via `list_skills`/`tool_search`, oversized-response handling. Load first. |
| `flowstudio-power-automate-debug` | Step-by-step diagnostic workflow — action-level inputs and outputs, not just error codes. Identifies root cause across nested child flows and loop iterations. |
-| `flowstudio-power-automate-build` | Build and deploy flow definitions from scratch — scaffold triggers, wire connections, deploy, and test via resubmit. |
+| `flowstudio-power-automate-build` | Build and deploy flow definitions from scratch — load `create-flow`, discover connector operations, resolve dynamic options/properties, wire connection templates, deploy, and test via resubmit. |
| `flowstudio-power-automate-monitoring` | Flow health from the cached store — failure rates, run history with remediation hints, maker inventory, Power Apps, environment and connection counts. |
| `flowstudio-power-automate-governance` | Governance workflows — classify flows by business impact, detect orphaned resources, audit connectors, manage notification rules, compute archive scores. |
@@ -49,7 +49,7 @@ The first three skills call the live Power Automate API. The monitoring and gove
1. Install the plugin
2. Get your API key at [mcp.flowstudio.app](https://mcp.flowstudio.app)
3. Configure the MCP connection in VS Code (`.vscode/mcp.json`):

```json
{
  "servers": {
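The `.vscode/mcp.json` snippet above is cut off by the diff context. For orientation only, a minimal complete file might look like the sketch below; the server name, `type`, `url` path, and header wiring are assumptions, not values taken from this diff:

```json
{
  "servers": {
    "flowstudio": {
      "type": "http",
      "url": "https://mcp.flowstudio.app/mcp",
      "headers": {
        "Authorization": "Bearer ${input:flowstudio-token}"
      }
    }
  }
}
```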
@@ -9,13 +9,6 @@ description: >-
JSON, update an existing flow's actions, patch a flow definition, add actions
to a flow, wire up connections, or generate a workflow definition from scratch.
Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
-metadata:
-  openclaw:
-    requires:
-      env:
-        - FLOWSTUDIO_MCP_TOKEN
-      primaryEnv: FLOWSTUDIO_MCP_TOKEN
-  homepage: https://mcp.flowstudio.app
---

# Build & Deploy Power Automate Flows with FlowStudio MCP
@@ -24,18 +17,28 @@ Step-by-step guide for constructing and deploying Power Automate cloud flows
programmatically through the FlowStudio MCP server.

**Prerequisite**: A FlowStudio MCP server must be reachable with a valid JWT.
-See the `power-automate-mcp` skill for connection setup.
+See the `flowstudio-power-automate-mcp` skill for connection setup.
Subscribe at https://mcp.flowstudio.app

+Workflow:
+1. Load current build tools.
+2. Check for an existing flow.
+3. Resolve connection references.
+4. Build the definition.
+5. Deploy.
+6. Verify.
+7. Test.

---

## Source of Truth

-> **Always call `tools/list` first** to confirm available tool names and their
-> parameter schemas. Tool names and parameters may change between server versions.
+> **Always call `list_skills` / `tool_search` first** to confirm available tool
+> names and parameter schemas. Tool names and parameters may change between
+> server versions.
 > This skill covers response shapes, behavioral notes, and build patterns —
-> things `tools/list` cannot tell you. If this document disagrees with `tools/list`
-> or a real API response, the API wins.
+> things tool schemas cannot tell you. If this document disagrees with
+> `tool_search` or a real API response, the API wins.

---

@@ -68,14 +71,38 @@ ENV = "<environment-id>" # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

---

-## Step 1 — Safety Check: Does the Flow Already Exist?
+## 0. Load the Current Build Tools
+
+For a brand-new flow, load the server's `create-flow` bundle. For editing an
+existing flow, load `build-flow`. This keeps the agent aligned with the MCP
+server's current schema before constructing JSON.
+
+```python
+schemas = mcp("tool_search", query="skill:create-flow")
+# Includes list_live_environments, list_live_connections,
+# describe_live_connector, get_live_dynamic_options, update_live_flow.
+```
+
+If you need a tool outside the bundle, load it explicitly:
+
+```python
+mcp("tool_search", query="select:get_live_dynamic_properties")
+```
+
+---
+
+## 1. Safety Check: Does the Flow Already Exist?

Always look before you build to avoid duplicates:

```python
-results = mcp("list_live_flows", environmentName=ENV)
+results = mcp("list_live_flows",
+              environmentName=ENV,
+              mode="owner",
+              search="My New Flow",
+              top=20)

-# list_live_flows returns { "flows": [...] }
+# list_live_flows returns { "flows": [...], "mode": "...", ... }
matches = [f for f in results["flows"]
           if "My New Flow".lower() in f["displayName"].lower()]
@@ -89,9 +116,14 @@ else:
FLOW_ID = None
```
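The duplicate check in this step can be exercised offline. A self-contained sketch against a stubbed `list_live_flows` payload; the stub flows and their names are invented for illustration:

```python
# Stand-alone sketch of the Step 1 duplicate check, run against a stubbed
# list_live_flows response; flow names here are fabricated.
def find_matches(response: dict, name: str) -> list:
    """Case-insensitive substring match on displayName, as in the step above."""
    return [f for f in response["flows"]
            if name.lower() in f["displayName"].lower()]

stub = {"flows": [
    {"displayName": "My New Flow (prod)", "name": "flow-guid-1"},
    {"displayName": "Invoice sync", "name": "flow-guid-2"},
]}

print([f["name"] for f in find_matches(stub, "my new flow")])  # -> ['flow-guid-1']
```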

+For very large environments, `list_live_flows` may return a continuation URL.
+Pass it back as `continuationUrl` with the same `mode` to retrieve the next
+batch. Use `mode="admin"` only when the user needs all environment flows and
+the MCP identity has admin rights.

---

-## Step 2 — Obtain Connection References
+## 2. Obtain Connection References

Every connector action needs a `connectionName` that points to a key in the
flow's `connectionReferences` map. That key links to an authenticated connection
@@ -101,103 +133,72 @@ in the environment.
> user for connection names or GUIDs. The API returns the exact values you need.
> Only prompt the user if the API confirms that required connections are missing.

-### 2a — Always call `list_live_connections` first
+### 2a — Find active connections

```python
conns = mcp("list_live_connections", environmentName=ENV)

-# Filter to connected (authenticated) connections only
active = [c for c in conns["connections"]
          if c["statuses"][0]["status"] == "Connected"]
+conn_map = {c["connectorName"]: c["id"] for c in active}
+```

-# Build a lookup: connectorName → connectionName (id)
-conn_map = {}
-for c in active:
-    conn_map[c["connectorName"]] = c["id"]
-
-print(f"Found {len(active)} active connections")
-print("Available connectors:", list(conn_map.keys()))
+For a known connector, pass `search` to reduce output and get paste-ready
+`connectionReferenceTemplate` and `hostTemplate` values:
+
+```python
+sp_conns = mcp("list_live_connections",
+               environmentName=ENV,
+               search="shared_sharepointonline")
```

### 2b — Determine which connectors the flow needs

-Based on the flow you are building, identify which connectors are required.
-Common connector API names:
-
-| Connector | API name |
-|---|---|
-| SharePoint | `shared_sharepointonline` |
-| Outlook / Office 365 | `shared_office365` |
-| Teams | `shared_teams` |
-| Approvals | `shared_approvals` |
-| OneDrive for Business | `shared_onedriveforbusiness` |
-| Excel Online (Business) | `shared_excelonlinebusiness` |
-| Dataverse | `shared_commondataserviceforapps` |
-| Microsoft Forms | `shared_microsoftforms` |
-
-> **Flows that need NO connections** (e.g. Recurrence + Compose + HTTP only)
-> can skip the rest of Step 2 — omit `connectionReferences` from the deploy call.
+Common connector API names: SharePoint `shared_sharepointonline`, Outlook
+`shared_office365`, Teams `shared_teams`, Approvals `shared_approvals`,
+OneDrive `shared_onedriveforbusiness`, Excel `shared_excelonlinebusiness`,
+Dataverse `shared_commondataserviceforapps`, Forms `shared_microsoftforms`.
+
+Flows that need no connectors, such as Recurrence + Compose + HTTP only, can
+omit `connectionReferences`.

### 2c — If connections are missing, guide the user

```python
connectors_needed = ["shared_sharepointonline", "shared_office365"] # adjust per flow

missing = [c for c in connectors_needed if c not in conn_map]
-if not missing:
-    print("✅ All required connections are available — proceeding to build")
-else:
-    # ── STOP: connections must be created interactively ──
-    # Connections require OAuth consent in a browser — no API can create them.
-    print("⚠️ The following connectors have no active connection in this environment:")
-    for c in missing:
-        friendly = c.replace("shared_", "").replace("onlinebusiness", " Online (Business)")
-        print(f"  • {friendly} (API name: {c})")
-    print()
-    print("Please create the missing connections:")
-    print("  1. Open https://make.powerautomate.com/connections")
-    print("  2. Select the correct environment from the top-right picker")
-    print("  3. Click '+ New connection' for each missing connector listed above")
-    print("  4. Sign in and authorize when prompted")
-    print("  5. Tell me when done — I will re-check and continue building")
-    # DO NOT proceed to Step 3 until the user confirms.
-    # After user confirms, re-run Step 2a to refresh conn_map.
+if missing:
+    # STOP: connections require browser OAuth consent.
+    # Ask the user to create the missing connector connections in the
+    # selected environment, then re-run list_live_connections.
+    raise Exception(f"Missing active connections: {missing}")
```
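The gate in 2c reduces to a set difference. A runnable sketch with a stubbed `conn_map`; the connector GUID below is fabricated for illustration:

```python
# Stubbed conn_map as Step 2a would produce it; the GUID is fabricated.
conn_map = {"shared_sharepointonline": "11111111-aaaa-bbbb-cccc-222222222222"}
connectors_needed = ["shared_sharepointonline", "shared_office365"]

# Same check as in the step: anything needed but not connected blocks the build.
missing = [c for c in connectors_needed if c not in conn_map]
print(missing)  # -> ['shared_office365']
```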

### 2d — Build the connectionReferences block

-Only execute this after 2c confirms no missing connectors:

```python
connection_references = {}
+host_templates = {}
for connector in connectors_needed:
-    connection_references[connector] = {
-        "connectionName": conn_map[connector], # the GUID from list_live_connections
+    c = next(c for c in active if c["connectorName"] == connector)
+    connection_references[connector] = c.get("connectionReferenceTemplate") or {
+        "connectionName": c["id"], # the connection id from list_live_connections
         "source": "Invoker",
         "id": f"/providers/Microsoft.PowerApps/apis/{connector}"
     }
+    host_templates[connector] = c.get("hostTemplate") or {
+        "connectionName": connector
+    }
```

-> **IMPORTANT — `host.connectionName` in actions**: When building actions in
-> Step 3, set `host.connectionName` to the **key** from this map (e.g.
-> `shared_teams`), NOT the connection GUID. The GUID only goes inside the
-> `connectionReferences` entry. The engine matches the action's
-> `host.connectionName` to the key to find the right connection.
+In Step 3 action JSON, `inputs.host.connectionName` must be the map key such as
+`shared_teams`, not the GUID. The GUID belongs only inside the
+`connectionReferences[connector].connectionName` value. If an existing flow uses
+the same connectors, you may also copy its `properties.connectionReferences`
+from `get_live_flow`.
|
|
||||||
> **Alternative** — if you already have a flow using the same connectors,
|
|
||||||
> you can extract `connectionReferences` from its definition:
|
|
||||||
> ```python
|
|
||||||
> ref_flow = mcp("get_live_flow", environmentName=ENV, flowName="<existing-flow-id>")
|
|
||||||
> connection_references = ref_flow["properties"]["connectionReferences"]
|
|
||||||
> ```
|
|
||||||
|
|
||||||
See the `power-automate-mcp` skill's **connection-references.md** reference
|
|
||||||
for the full connection reference structure.
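The key-vs-GUID rule can be sanity-checked in plain Python. This is a minimal sketch with hypothetical values (the GUID and the connector map are made up for illustration):

```python
# Hypothetical connector → connection map from Step 2a
conn_map = {"shared_teams": "c4a1b2d3-0000-0000-0000-000000000000"}

connection_references = {
    "shared_teams": {
        "connectionName": conn_map["shared_teams"],  # GUID lives ONLY here
        "source": "Invoker",
        "id": "/providers/Microsoft.PowerApps/apis/shared_teams",
    }
}

# In the action JSON, host.connectionName is the map KEY, never the GUID
action_host = {"connectionName": "shared_teams"}

# The engine resolves the action by looking the key up in connection_references
assert action_host["connectionName"] in connection_references
assert action_host["connectionName"] != connection_references["shared_teams"]["connectionName"]
```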

---

## Step 3 — Build the Flow Definition

Construct the definition object. See [flow-schema.md](references/flow-schema.md)
for the full schema and these action pattern references for copy-paste templates:

> See [build-patterns.md](references/build-patterns.md) for complete, ready-to-use
> flow definitions covering Recurrence+SharePoint+Teams, HTTP triggers, and more.

### Discover connector operations before guessing JSON

For connector-backed triggers/actions, prefer the live connector describer over
hand-written shapes. It can return authored hints, canonical examples, variant
keys, inputs/outputs, and dynamic metadata pointers.

```python
# Search across connectors when you know the user's intent but not the API.
matches = mcp("describe_live_connector",
              environmentName=ENV,
              search="send email",
              top=5)

# Describe a specific operation before copying an exampleDefinition.
op = mcp("describe_live_connector",
         environmentName=ENV,
         connectorName="shared_office365",
         operationId="SendEmailV2")
print(op.get("hint"))
```

When an operation has multiple authored variants, request the variant the flow
needs:

```python
teams_chat = mcp("describe_live_connector",
                 environmentName=ENV,
                 connectorName="shared_teams",
                 operationId="PostMessageToConversation",
                 variant="flowbot_chat")
```

When the operation description says a parameter has dynamic options or dynamic
properties, call the indicated next tool:

```python
sp_op = mcp("describe_live_connector",
            environmentName=ENV,
            connectorName="shared_sharepointonline",
            operationId="GetItems")

sites = mcp("get_live_dynamic_options",
            environmentName=ENV,
            connectorName="shared_sharepointonline",
            connectionName=conn_map["shared_sharepointonline"],
            operationId="GetItems",
            parameterName="dataset",
            dynamicMetadata=sp_op["dynamicParameters"]["dataset"])

fields = mcp("get_live_dynamic_properties",
             environmentName=ENV,
             connectorName="shared_sharepointonline",
             connectionName=conn_map["shared_sharepointonline"],
             operationId="GetItems",
             parameterName="item",
             parameters={"dataset": "<site-url>", "table": "<list-id>"},
             dynamicMetadata=sp_op["dynamicProperties"]["item"])
```

Use dynamic options for dropdown IDs such as SharePoint sites/lists and Teams
teams/channels. Use dynamic properties for schema/field shapes such as
SharePoint list item columns.

---

## Step 3a — Resolving Dynamic Connector Values

When an action input needs a value picked from a connector dropdown (e.g. a
SharePoint list ID, a Dataverse table name, a user's Azure AD UPN), use
`get_live_dynamic_options` to resolve it via MCP rather than hardcoding GUIDs.

```python
# Resolve a SharePoint list by site
opts = mcp("get_live_dynamic_options",
           environmentName=ENV,
           connectorName="shared_sharepointonline",
           operationId="GetTables",
           parameters={"dataset": "https://contoso.sharepoint.com/sites/HR"})
# opts["value"] → [{"Name": "<list-guid>", "DisplayName": "Employees"}, ...]
```

> **Outer-parameter auto-bridge** (server v1.1.6+): you can pass arbitrary outer
> parameters directly in `parameters` — the server now synthesizes the
> `parameterReference` mapping that PA's listEnum requires. Before 1.1.6 you had
> to declare `dynamicMetadata.parameters: {paramName: {parameterReference: "name"}}`
> manually or get `IncorrectDynamicInvokeParameter`. This makes it practical to
> invoke arbitrary connector operations through the dynamic-options pipeline
> (e.g. `shared_office365users.SearchUserV2` for AAD user lookup).

### AadGraph user-picker fallback

For Outlook actions like `GetEmailsV3` (parameters `mailboxAddress`, `to`, `cc`,
`from`), PA's listEnum uses `builtInOperation:AadGraph.GetUsers` — which is
broken and returns `DynamicListValuesUndefinedOrInvalid` for every call.

`describe_live_connector` (v1.1.6+) detects these parameters and returns a
structured `fallback` field on each affected parameter pointing at a working
alternative. **Use `shared_office365users.SearchUserV2`** to resolve the same
AAD user shape `{value: [{id, displayName, mail, userPrincipalName, ...}]}`:

```python
# Borrow a shared_office365users connection (any active one will do)
conn = next(c for c in conn_map if "office365users" in c)

users = mcp("get_live_dynamic_options",
            environmentName=ENV,
            connectorName="shared_office365users",
            connectionName=conn_map[conn],  # see Step 2a
            operationId="SearchUserV2",
            parameters={"searchTerm": "john", "top": 10})
# users["value"] → [{"Id": "...", "DisplayName": "John Smith", "Mail": "..."}, ...]
```

Then plug the resolved `Mail` value into the Outlook action's parameter — no
need to call `AadGraph.GetUsers` directly.
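To make that last step concrete, here is a plain-Python sketch that picks the resolved address out of a hypothetical `SearchUserV2`-shaped payload. The users and the flattened Outlook parameter name are assumptions for illustration only:

```python
# Hypothetical payload shaped like a SearchUserV2 result
users = {"value": [
    {"Id": "u1", "DisplayName": "John Smith", "Mail": "john.smith@contoso.com"},
    {"Id": "u2", "DisplayName": "Jane Doe",   "Mail": "jane.doe@contoso.com"},
]}

# Pick the resolved Mail for the user the flow needs
to_address = next(u["Mail"] for u in users["value"]
                  if u["DisplayName"] == "John Smith")

# Plug it into the Outlook action input (parameter name is illustrative only)
action_inputs = {"parameters/emailMessage/To": to_address}
print(action_inputs)
```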

---

## Step 4 — Deploy (Create or Update)

`update_live_flow` handles both creation and updates in a single tool.

Omit `flowName` — the server generates a new GUID and creates via PUT:

```python
definition["description"] = "Weekly SharePoint → Teams notification flow, built by agent"

result = mcp("update_live_flow",
    environmentName=ENV,
    # flowName omitted → creates a new flow
    definition=definition,
    connectionReferences=connection_references,
    displayName="Overdue Invoice Notifications"
)

if result.get("error") is not None:
    print("Deploy failed:", result["error"])
else:
    FLOW_ID = result["created"]
    print("✅ Created:", FLOW_ID)
```

Provide `flowName` to PATCH:

```python
definition["description"] = (
    "Updated by agent on " + __import__('datetime').datetime.utcnow().isoformat()
)

result = mcp("update_live_flow",
    environmentName=ENV,
    flowName=FLOW_ID,
    definition=definition,
    connectionReferences=connection_references,
    displayName="My Updated Flow"
)

if result.get("error") is not None:
    print("Update failed:", result["error"])
else:
    print("✅ Updated:", FLOW_ID)
```

> ⚠️ `update_live_flow` always returns an `error` key.
> `null` (Python `None`) means success — do not treat the presence of the key as failure.
>
> ⚠️ Flow description lives at `definition["description"]`. The current server
> appends `#flowstudio-mcp` for usage tracking. Do not pass a top-level
> `description` argument unless `tool_search` shows one in the active schema.

### Common deployment errors

---

## Step 5 — Verify the Deployment

```python
check = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
acts = check["properties"]["definition"]["actions"]  # actions of the deployed definition
print("Actions:", list(acts.keys()))
```

---

## Step 6 — Test the Flow

> **MANDATORY**: Before triggering any test run, **ask the user for confirmation**.
> Running a flow has real side effects — it may send emails, post Teams messages,
> or modify data.

For verifying a fix, `resubmit_live_flow_run` is better because it uses the
exact data that caused the failure.

```python
defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
triggers = defn["properties"]["definition"]["triggers"]
manual = next(iter(triggers.values()))
print("Expected body:", manual.get("inputs", {}).get("schema"))

result = mcp("trigger_live_flow",
    environmentName=ENV, flowName=FLOW_ID,
    body={"sample": "payload"})
```

This is the ONLY scenario where you need the temporary HTTP trigger approach
below. **Deploy with a temporary HTTP trigger first, test the actions, then
swap to the production trigger.**

Compact recipe:

```python
# Save the production trigger you built in Step 3
production_trigger = definition["triggers"]

# Replace it with a temporary HTTP trigger
definition["triggers"] = {
    "manual": {"type": "Request", "kind": "Http", "inputs": {"schema": {}}}
}

result = mcp("update_live_flow",
    environmentName=ENV,
    flowName=FLOW_ID,  # omit if creating new
    definition=definition,
    connectionReferences=connection_references,
    displayName="Overdue Invoice Notifications")
FLOW_ID = FLOW_ID or result["created"]

test = mcp("trigger_live_flow", environmentName=ENV, flowName=FLOW_ID,
           body={"sample": "payload"})
import time; time.sleep(15)  # wait for the run to complete
runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=1)

if runs[0]["status"] == "Failed":
    err = mcp("get_live_flow_run_error",
              environmentName=ENV, flowName=FLOW_ID, runName=runs[0]["name"])
    raise Exception(err["failedActions"][-1])

# Swap back to the production trigger once the test run succeeds
definition["triggers"] = production_trigger
mcp("update_live_flow",
    environmentName=ENV,
    flowName=FLOW_ID,
    definition=definition,
    connectionReferences=connection_references)
```

The trigger is only the entry point; testing through HTTP still exercises the
same actions. If actions use `triggerBody()` or `triggerOutputs()`, pass a
representative `trigger_live_flow.body` shaped like the production trigger
payload.

---

| Checking `result["error"]` exists | Always present; true error is `!= null` | Use `result.get("error") is not None` |
| Flow deployed but state is "Stopped" | Flow won't run on schedule | Call `set_live_flow_state` with `state: "Started"` — do **not** use `update_live_flow` for state changes |
| Teams "Chat with Flow bot" recipient as object | 400 `GraphUserDetailNotFound` | Use plain string with trailing semicolon (see below) |
| Copilot/Skills flow not in a solution | Copilot Studio may not discover it as an agent tool | After deploy, call `add_live_flow_to_solution` with the target `solutionId` |
| Button/Skills trigger used for MCP testing | MCP cannot directly fire the production trigger | Test the same actions through a temporary HTTP twin, then swap the trigger back |
| Connector action missing `metadata.operationMetadataId` | Designer/run-only UI can behave inconsistently | Preserve existing IDs; add stable GUIDs for new connector actions |
| Placeholder Excel `scriptId` | Dynamic validation fails at save time | Resolve the real Office Script ID before deploying |
| SharePoint `PatchItem` omits required fields | Save can fail even if the field is not changing | Echo unchanged required fields such as `item/Title` |
| Copilot Studio connector calls a draft agent | Connector invocation can fail or hit stale behavior | Publish the agent before testing/resubmitting the flow |

### Teams `PostMessageToConversation` — Recipient Formats

## Related Skills

- `flowstudio-power-automate-mcp` — Core connection setup and tool reference
- `flowstudio-power-automate-debug` — Debug failing flows after deployment
}
```

> `PatchItem` can validate required SharePoint columns even when you are not
> changing those fields. Echo unchanged required fields from the trigger or a
> prior Get Item action, for example `item/Title`, and use internal field names.

---

### SharePoint — File Upsert (Create or Overwrite in Document Library)

> The `HttpRequest` operation reuses the existing SharePoint connection — no extra
> authentication needed. Use this when the standard Update Item connector can't
> reach the target list (different site collection, or you need raw REST control).
> Keep the connector-specific parameter names exactly as shown:
> `parameters/method`, `parameters/uri`, `parameters/headers`, and
> `parameters/body`. The body is a JSON string, and `parameters/uri` is relative
> to the SharePoint `dataset`.
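A hedged sketch of that parameter shape, written in Python for clarity. Only the four `parameters/…` names come from the pattern above; the list path, headers, and body values are made-up placeholders:

```python
import json

http_request_inputs = {
    "parameters/method": "POST",
    # relative to the SharePoint dataset (the site URL)
    "parameters/uri": "_api/web/lists/getbytitle('Employees')/items(1)",
    "parameters/headers": {"Accept": "application/json;odata=nometadata"},
    # the body must be a JSON *string*, not an object
    "parameters/body": json.dumps({"Title": "Updated title"}),
}

assert isinstance(http_request_inputs["parameters/body"], str)
assert json.loads(http_request_inputs["parameters/body"])["Title"] == "Updated title"
```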

---

## Excel Online

### Excel — Run Office Script

Office Script actions require real workbook and script identifiers at save time.
Do not deploy placeholder `scriptId` values; `update_live_flow` can fail during
dynamic operation validation even before a test run exists.

Use `describe_live_connector` or `get_live_dynamic_options` when available, or
ask the user for the workbook and script if they are not discoverable. If a real
`scriptId` still cannot be resolved, ask the user to add the Run script action
once in the designer, then read the flow definition and preserve the resolved
parameters.

---

## Outlook

### Outlook — Send Email

---

## Copilot Studio

### Copilot Studio — Invoke Agent

When using the Copilot Studio connector, publish the agent before running the
flow. Draft/test agents can exist in the studio canvas but still be unavailable
or stale through the flow connector endpoint.

If a connector action fails with an unavailable-agent or endpoint-style error,
publish the agent, wait briefly for propagation, then resubmit the same flow run
before changing the flow definition.

---

## Approvals

### Split Approval (Create → Wait)

---

### Agent Retry Loop

When a flow calls an AI or Copilot-style agent until it reaches a terminal
outcome, keep the loop state explicit:

- Initialize variables such as `agentStatus`, `attempt`, and `finalPayload`
  before the `Until`.
- Inside the loop, call the agent, validate the response, update the status, and
  delay/retry only when the status is non-terminal.
- Put final dispatch actions such as email, SharePoint update, or Teams post
  after the loop so retries do not duplicate side effects.
- If the platform rejects a complex `Switch` nested inside `Until`, keep the
  loop body to simple validation and state updates, then route with `Switch`
  after the loop.
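The steps above can be sketched as a small state machine in Python. The statuses and the toy agent are assumptions; in a real flow the loop is an `Until` action and the agent call is a connector action:

```python
responses = iter(["working", "working", "done"])  # toy agent: terminal on the 3rd call

# State initialized BEFORE the loop
agent_status, attempt, final_payload = "unknown", 0, None
MAX_ATTEMPTS = 10

# "Until" body: call, validate, update state; nothing with side effects
while agent_status not in ("done", "failed") and attempt < MAX_ATTEMPTS:
    attempt += 1
    agent_status = next(responses)
    # a real flow would add a Delay action here while non-terminal

# Dispatch AFTER the loop so retries cannot duplicate side effects
if agent_status == "done":
    final_payload = {"status": agent_status, "attempts": attempt}

print(final_payload)  # → {'status': 'done', 'attempts': 3}
```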

---

### Async Polling with RequestId Correlation

When an API starts a long-running job asynchronously (e.g. Power BI dataset refresh,
Normalize before compare: @replace(coalesce(outputs('Value'),''),'_',' ')
Robust non-empty check: @greater(length(trim(coalesce(string(outputs('Val')), ''))), 0)
```

### Unsupported / Risky Expression Assumptions

Power Automate expressions are Workflow Definition Language, not JavaScript.
These patterns often look plausible but do not deploy or do not behave as agents
expect:

| Goal | Avoid | Use instead |
|---|---|---|
| Build an object inline | `createObject(...)` | A Compose action with a JSON object literal |
| Transform an array inline | `select(...)` inside an expression | Data Operations `Select` action |
| Filter an array inline | `filter(...)` inside an expression | Data Operations `Filter array` action |
| Find an array item index | `indexOf(array, item)` | Foreach with a counter variable, or build a keyed object map |

### Newlines in Expressions

> **`\n` does NOT produce a newline inside Power Automate expressions.** It is
Result: `@body('Generate_Date_Series')` → `["2025-01-06", "2025-01-07", …, "2025-01-19"]`

For Cartesian products, iterate `range(0, mul(rowCount, colCount))` and derive
indexes with `div(item(), colCount)` and `mod(item(), colCount)`. `range()` is
zero-based; this flattens a nested for-loop into a single pass, useful for
time-slot × date grids or shift × location assignments.
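A quick Python check of the index arithmetic, with plain `//` and `%` standing in for the `div()` and `mod()` expressions (the sample rows and columns are made up):

```python
rows = ["Mon", "Tue"]                # e.g. dates
cols = ["09:00", "10:00", "11:00"]   # e.g. time slots

# range(0, rows*cols) → (i // cols, i % cols) covers every combination once
pairs = [(rows[i // len(cols)], cols[i % len(cols)])
         for i in range(len(rows) * len(cols))]

assert len(pairs) == 6 and len(set(pairs)) == 6
print(pairs[:3])  # → [('Mon', '09:00'), ('Mon', '10:00'), ('Mon', '11:00')]
```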

---

Lookup: `@outputs('Assemble_Dictionary')?['myKey']`

> The `json(concat('{', join(...), '}'))` pattern works for string values. For numeric
> or boolean values, omit the inner escaped quotes around the value portion.
> Keys must be unique — duplicate keys silently overwrite earlier ones.
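The same Select → join → json assembly, modeled in Python (the holiday rows are made-up sample data; Python string formatting stands in for `concat()` and a dict stands in for the assembled object):

```python
import json

holidays = [
    {"Date": "2025-12-25", "RateCode": "HOL"},
    {"Date": "2026-01-01", "RateCode": "HOL"},
]

# Select: one '"key":"value"' fragment per row
pairs = ['"{}":"{}"'.format(h["Date"], h["RateCode"]) for h in holidays]

# concat('{', join(..., ','), '}') then json()
lookup = json.loads("{" + ",".join(pairs) + "}")

# coalesce(lookup?[key], 'Standard')
print(lookup.get("2025-12-25", "Standard"))  # → HOL
print(lookup.get("2025-03-01", "Standard"))  # → Standard
```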
@@ -280,111 +247,20 @@ CSV → database), avoid nested `Apply to each` loops to find changed records.
|
|||||||
Instead, **project flat key arrays** and use `contains()` to perform set operations —
|
Instead, **project flat key arrays** and use `contains()` to perform set operations —
|
||||||
zero nested loops, and the final loop only touches changed items.
|
zero nested loops, and the final loop only touches changed items.
|
||||||
|
|
||||||
**Full insert/update/delete sync pattern:**
|
**Insert/update/delete sync recipe:**
|
||||||
|
|
||||||
```json
|
1. `Select_Dest_Keys` from destination rows.
|
||||||
// Step 1 — Project a flat key array from the DESTINATION (e.g. SharePoint)
|
2. `Filter_To_Insert`: source rows whose key is not in destination keys.
|
||||||
"Select_Dest_Keys": {
|
3. `Filter_Already_Exists`: source rows whose key is in destination keys.
|
||||||
"type": "Select",
|
4. For each compared field, run `Filter_<Field>_Changed`; combine them with
|
||||||
"inputs": {
|
`union()` into `Union_Changed`.
|
||||||
"from": "@outputs('Get_Dest_Items')?['body/value']",
|
5. `Select_Changed_Keys` from `Union_Changed`, then filter destination rows to
|
||||||
"select": "@item()?['Title']"
|
only those keys before updating.
|
||||||
}
|
6. `Select_Source_Keys`, then `Filter_To_Delete` destination rows whose key is
|
||||||
}
|
not in source keys.
|
||||||
// → ["KEY1", "KEY2", "KEY3", ...]
|
|
||||||
|
|
||||||
// Step 2 — INSERT: source rows whose key is NOT in destination
|
This changes O(n x m) nested loops to O(n + m) set operations and helps avoid
|
||||||
"Filter_To_Insert": {
|
Power Automate's 100k-action run limit.
|
||||||
"type": "Query",
|
|
||||||
"inputs": {
|
|
||||||
"from": "@body('Source_Array')",
|
|
||||||
"where": "@not(contains(body('Select_Dest_Keys'), item()?['key']))"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// → Apply to each Filter_To_Insert → CreateItem
|
|
||||||
|
|
||||||
// Step 3 — INNER JOIN: source rows that exist in destination
|
|
||||||
"Filter_Already_Exists": {
|
|
||||||
"type": "Query",
|
|
||||||
"inputs": {
|
|
||||||
"from": "@body('Source_Array')",
|
|
||||||
"where": "@contains(body('Select_Dest_Keys'), item()?['key'])"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Step 4 — UPDATE: one Filter per tracked field, then union them
|
|
||||||
"Filter_Field1_Changed": {
|
|
||||||
"type": "Query",
|
|
||||||
"inputs": {
|
|
||||||
"from": "@body('Filter_Already_Exists')",
|
|
||||||
"where": "@not(equals(item()?['field1'], item()?['dest_field1']))"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
"Filter_Field2_Changed": {
|
|
||||||
"type": "Query",
|
|
||||||
"inputs": {
|
|
||||||
"from": "@body('Filter_Already_Exists')",
|
|
||||||
"where": "@not(equals(item()?['field2'], item()?['dest_field2']))"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
"Union_Changed": {
|
|
||||||
"type": "Compose",
|
|
||||||
"inputs": "@union(body('Filter_Field1_Changed'), body('Filter_Field2_Changed'))"
|
|
||||||
}
|
|
||||||
// → rows where ANY tracked field differs
|
|
||||||
|
|
||||||
// Step 5 — Resolve destination IDs for changed rows (no nested loop)
|
|
||||||
"Select_Changed_Keys": {
|
|
||||||
"type": "Select",
|
|
||||||
"inputs": { "from": "@outputs('Union_Changed')", "select": "@item()?['key']" }
|
|
||||||
}
|
|
||||||
"Filter_Dest_Items_To_Update": {
|
|
||||||
"type": "Query",
|
|
||||||
"inputs": {
|
|
||||||
"from": "@outputs('Get_Dest_Items')?['body/value']",
|
|
||||||
"where": "@contains(body('Select_Changed_Keys'), item()?['Title'])"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// Step 6 — Single loop over changed items only
|
|
||||||
"Apply_to_each_Update": {
|
|
||||||
"type": "Foreach",
|
|
||||||
"foreach": "@body('Filter_Dest_Items_To_Update')",
|
|
||||||
"actions": {
|
|
||||||
"Get_Source_Row": {
|
|
||||||
"type": "Query",
|
|
||||||
"inputs": {
|
|
||||||
"from": "@outputs('Union_Changed')",
|
|
||||||
"where": "@equals(item()?['key'], items('Apply_to_each_Update')?['Title'])"
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"Update_Item": {
|
|
||||||
"...": "...",
|
|
||||||
"id": "@items('Apply_to_each_Update')?['ID']",
|
|
||||||
"item/field1": "@first(body('Get_Source_Row'))?['field1']"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Step 7 — DELETE: destination keys NOT in source
|
|
||||||
"Select_Source_Keys": {
|
|
||||||
"type": "Select",
|
|
||||||
"inputs": { "from": "@body('Source_Array')", "select": "@item()?['key']" }
|
|
||||||
}
|
|
||||||
"Filter_To_Delete": {
|
|
||||||
"type": "Query",
|
|
||||||
"inputs": {
|
|
||||||
"from": "@outputs('Get_Dest_Items')?['body/value']",
|
|
||||||
"where": "@not(contains(body('Select_Source_Keys'), item()?['Title']))"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// → Apply to each Filter_To_Delete → DeleteItem
|
|
||||||
```
|
|
||||||
|
|
||||||
> **Why this beats nested loops**: the naive approach (for each dest item, scan source)
|
|
||||||
> is O(n × m) and hits Power Automate's 100k-action run limit fast on large lists.
|
|
||||||
> This pattern is O(n + m): one pass to build key arrays, one pass per filter.
|
|
||||||
> The update loop in Step 6 only iterates *changed* records — often a tiny fraction
|
|
||||||
> of the full collection. Run Steps 2/4/7 in **parallel Scopes** for further speed.
|
|
||||||
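The key-based delta-sync pattern above is easy to sanity-check in plain Python. This is a sketch, not the flow runtime: `key`/`Title`/`ID`/`field1` follow the sample schema in the pattern, and the rows are made up.

```python
# Sample rows; `key` (source) and `Title` (destination) are the shared sync key.
source = [{"key": "A", "field1": 1}, {"key": "B", "field1": 2}]
dest = [{"Title": "A", "field1": 1, "ID": 10},
        {"Title": "B", "field1": 99, "ID": 11},
        {"Title": "C", "field1": 3, "ID": 12}]

dest_by_key = {d["Title"]: d for d in dest}       # one pass over dest
to_create = [s for s in source if s["key"] not in dest_by_key]
to_update = [s for s in source
             if s["key"] in dest_by_key
             and dest_by_key[s["key"]]["field1"] != s["field1"]]
source_keys = {s["key"] for s in source}          # one pass over source
to_delete = [d for d in dest if d["Title"] not in source_keys]

print([s["key"] for s in to_update])   # → ['B']  (only changed rows)
print([d["Title"] for d in to_delete]) # → ['C']  (keys absent from source)
```

Each collection is built from a single pass plus constant-time dictionary/set lookups, which is the O(n + m) behavior the note claims.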

---

@@ -649,14 +525,8 @@ Parse a raw CSV string into an array of objects using only built-in expressions.
Avoids the premium "Parse CSV" connector action.

```json
-"Delimiter": {
-"type": "Compose",
-"inputs": ","
-},
-"Strip_Quotes": {
-"type": "Compose",
-"inputs": "@replace(body('Get_File_Content'), '\"', '')"
-},
+"Delimiter": { "type": "Compose", "inputs": "," },
+"Strip_Quotes": { "type": "Compose", "inputs": "@replace(body('Get_File_Content'), '\"', '')" },
"Detect_Line_Ending": {
"type": "Compose",
"inputs": "@if(equals(indexOf(outputs('Strip_Quotes'), decodeUriComponent('%0D%0A')), -1), if(equals(indexOf(outputs('Strip_Quotes'), decodeUriComponent('%0A')), -1), decodeUriComponent('%0D'), decodeUriComponent('%0A')), decodeUriComponent('%0D%0A'))"
@@ -665,10 +535,7 @@ Avoids the premium "Parse CSV" connector action.
"type": "Compose",
"inputs": "@split(first(split(outputs('Strip_Quotes'), outputs('Detect_Line_Ending'))), outputs('Delimiter'))"
},
-"Data_Rows": {
-"type": "Compose",
-"inputs": "@skip(split(outputs('Strip_Quotes'), outputs('Detect_Line_Ending')), 1)"
-},
+"Data_Rows": { "type": "Compose", "inputs": "@skip(split(outputs('Strip_Quotes'), outputs('Detect_Line_Ending')), 1)" },
"Select_CSV_Body": {
"type": "Select",
"inputs": {
@@ -691,16 +558,9 @@ Avoids the premium "Parse CSV" connector action.

Result: `@body('Filter_Empty_Rows')` — array of objects with header names as keys.

-> **`Detect_Line_Ending`** handles CRLF (Windows), LF (Unix), and CR (old Mac) automatically
-> using `indexOf()` with `decodeUriComponent('%0D%0A' / '%0A' / '%0D')`.
->
-> **Dynamic key names in `Select`**: `@{outputs('Headers')[0]}` as a JSON key in a
-> `Select` shape sets the output property name at runtime from the header row —
-> this works as long as the expression is in `@{...}` interpolation syntax.
->
-> **Columns with embedded commas**: if field values can contain the delimiter,
-> use `length(split(row, ','))` in a Switch to detect the column count and manually
-> reassemble the split fragments: `@concat(split(item(),',')[1],',',split(item(),',')[2])`
+Notes: `Detect_Line_Ending` handles CRLF/LF/CR. Dynamic keys in `Select` require
+`@{...}` interpolation. This simple pattern does not safely parse quoted fields
+with embedded delimiters; for those, use a dedicated parser or custom action.
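The CSV expression chain above can be mirrored offline in Python to check the line-ending detection and header mapping. A sketch only — the sample CSV is made up, and it inherits the same limitation around quoted embedded delimiters:

```python
raw = 'Name,Qty\r\n"Widget",3\r\n"Gadget",5\r\n'

stripped = raw.replace('"', '')                    # Strip_Quotes
# Detect_Line_Ending: prefer CRLF, then LF, then CR
ending = '\r\n' if '\r\n' in stripped else ('\n' if '\n' in stripped else '\r')
lines = stripped.split(ending)
headers = lines[0].split(',')                      # Headers
rows = [ln for ln in lines[1:] if ln]              # Data_Rows + Filter_Empty_Rows
records = [dict(zip(headers, r.split(','))) for r in rows]  # Select_CSV_Body
print(records)
# → [{'Name': 'Widget', 'Qty': '3'}, {'Name': 'Gadget', 'Qty': '5'}]
```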

---

@@ -59,6 +59,15 @@ Beyond the required `type`, `runAfter`, and `inputs`, actions can include:
| `runtimeConfiguration` | Pagination, concurrency, secure data, chunked transfer |
| `operationOptions` | `"Sequential"` for Foreach, `"DisableAsyncPattern"` for HTTP |
| `limit` | Timeout override (e.g. `{"timeout": "PT2H"}`) |
+| `metadata` | Designer metadata such as `operationMetadataId` |

+#### Designer Metadata
+
+For existing connector actions, preserve `metadata.operationMetadataId` when you
+edit the definition. For new connector actions or Skills/HTTP response actions,
+add a stable GUID and keep it stable across updates. Do not regenerate these IDs
+on every deploy; the designer and some run-only surfaces use them to keep action
+identity consistent.

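One way to keep `operationMetadataId` stable across deploys is to derive it deterministically from the action name with a name-based UUID. This is an illustrative convention, not something the server mandates; the namespace URL is made up:

```python
import uuid

# Hypothetical per-flow namespace; any fixed UUID works as the seed.
FLOW_NS = uuid.uuid5(uuid.NAMESPACE_URL, "https://contoso.example/flows/invoice-sync")

def operation_metadata_id(action_name: str) -> str:
    # Same action name -> same GUID on every deploy.
    return str(uuid.uuid5(FLOW_NS, action_name))

print(operation_metadata_id("Compose_Result")
      == operation_metadata_id("Compose_Result"))  # → True
```

Storing the generated GUID alongside the flow definition works just as well; the only requirement is that it does not churn between updates.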
#### `runtimeConfiguration` Variants

@@ -93,6 +93,40 @@ Access any field dynamically: `@triggerBody()?['anyField']`

---

+## Manual (Copilot Studio Skills)
+
+Use the Skills trigger when the flow is meant to be called by a Copilot Studio
+agent tool. Keep the trigger schema explicit so the agent receives predictable
+input names and types.
+
+```json
+"manual": {
+"type": "Request",
+"kind": "Skills",
+"inputs": {
+"schema": {
+"type": "object",
+"properties": {
+"itemId": { "type": "string" },
+"notes": { "type": "string" }
+},
+"required": ["itemId"]
+}
+},
+"metadata": {
+"operationMetadataId": "<stable-guid>"
+}
+}
+```
+
+After deploying a production Skills-triggered flow, call
+`add_live_flow_to_solution` with the target `solutionId`; Copilot Studio agent
+tool discovery expects the flow to be solution-aware. For MCP-driven testing,
+use a temporary HTTP twin with the same actions and payload shape, then restore
+the Skills trigger after the actions are verified.
+
+---

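The HTTP-twin test loop described above can be sketched as plain dictionary surgery on the trigger definition. This is an offline sketch under assumptions: the definition shape follows the standard workflow schema, and `"Http"` as the temporary trigger `kind` is illustrative — the restore step is the important part.

```python
# Minimal sketch: swap a Skills trigger to an HTTP twin for testing, then restore.
defn = {"triggers": {"manual": {
    "type": "Request",
    "kind": "Skills",
    "inputs": {"schema": {"type": "object",
                          "properties": {"itemId": {"type": "string"}}}},
}}}

trigger = defn["triggers"]["manual"]
saved_kind = trigger["kind"]   # remember "Skills"
trigger["kind"] = "Http"       # temporary twin — same schema, same actions

# ... update_live_flow + trigger_live_flow with a test payload would go here ...

trigger["kind"] = saved_kind   # restore before the agent uses the flow again
print(trigger["kind"])  # → Skills
```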
## Automated (SharePoint Item Created)

```json
@@ -9,13 +9,6 @@ description: >-
  fix a broken Power Automate flow, diagnose a timeout, trace a DynamicOperationRequestFailure,
  check connector auth errors, read error details from a run, or troubleshoot
  expression failures. Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
-metadata:
-  openclaw:
-    requires:
-      env:
-        - FLOWSTUDIO_MCP_TOKEN
-      primaryEnv: FLOWSTUDIO_MCP_TOKEN
-  homepage: https://mcp.flowstudio.app
---

# Power Automate Debugging with FlowStudio MCP
@@ -28,18 +21,19 @@ cloud flows through the FlowStudio MCP server.
> [Null value crashes child flow](https://github.com/ninihen1/power-automate-mcp-skills/blob/main/examples/null-child-flow.md)

**Prerequisite**: A FlowStudio MCP server must be reachable with a valid JWT.
-See the `power-automate-mcp` skill for connection setup.
+See the `flowstudio-power-automate-mcp` skill for connection setup.
Subscribe at https://mcp.flowstudio.app

---

## Source of Truth

-> **Always call `tools/list` first** to confirm available tool names and their
-> parameter schemas. Tool names and parameters may change between server versions.
+> **Always call `list_skills` / `tool_search` first** to confirm available tool
+> names and parameter schemas. Tool names and parameters may change between
+> server versions.
> This skill covers response shapes, behavioral notes, and diagnostic patterns —
-> things `tools/list` cannot tell you. If this document disagrees with `tools/list`
-> or a real API response, the API wins.
+> things tool schemas cannot tell you. If this document disagrees with
+> `tool_search` or a real API response, the API wins.

---

@@ -161,6 +155,8 @@ detail = mcp("get_live_flow_run_action_outputs",
             runName=RUN_ID,
             actionName=root_action)

+if len(detail) > 1:
+    print(f"{root_action} returned {len(detail)} repetitions; inspect iteration indexes")
out = detail[0] if detail else {}
print(f"Action: {out.get('actionName')}")
print(f"Status: {out.get('status')}")
@@ -198,6 +194,39 @@ if out.get("inputs"):
| `InvalidTemplate` | The exact expression that failed and the null/wrong-type value |
| `BadRequest` | The request body that was sent and why the server rejected it |

+### Foreach iterations
+
+When `actionName` refers to an action inside a foreach, the output tool can
+return every repetition of that action. Each item may include
+`repetitionIndexes` with the loop name and zero-based `itemIndex`. Use
+`iterationIndex` to inspect one iteration after you find the suspicious item:
+
+```python
+all_reps = mcp("get_live_flow_run_action_outputs",
+               environmentName=ENV,
+               flowName=FLOW_ID,
+               runName=RUN_ID,
+               actionName=root_action)
+
+for rep in all_reps[:10]:
+    print(rep.get("repetitionIndexes"), rep.get("status"), rep.get("error"))
+
+one_rep = mcp("get_live_flow_run_action_outputs",
+              environmentName=ENV,
+              flowName=FLOW_ID,
+              runName=RUN_ID,
+              actionName=root_action,
+              iterationIndex=3)
+```
+
+### Evidence Compose Bookends
+
+For uncertain connector work, add a `Compose_*_Request` before the risky action
+and a `Compose_*_Result` after it, with the result action allowed on both
+`Succeeded` and `Failed`. This gives future debugging a clean payload snapshot
+without requiring another deploy. Do not include secrets or long binary payloads
+in these bookends.
+
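As a sketch, a bookend pair in the actions block looks like the dicts below. The action names and the `OpenApiConnection` type are illustrative; the point is the `runAfter` shape on the result bookend, which accepts both terminal states.

```python
import json

# Illustrative bookends around a risky connector call.
actions = {
    "Compose_SendMail_Request": {"type": "Compose",
                                 "inputs": "@variables('mailPayload')"},
    "Send_Mail": {"type": "OpenApiConnection",
                  "runAfter": {"Compose_SendMail_Request": ["Succeeded"]}},
    "Compose_SendMail_Result": {"type": "Compose",
                                "inputs": "@outputs('Send_Mail')",
                                # runs on failure too, so the snapshot survives
                                "runAfter": {"Send_Mail": ["Succeeded", "Failed"]}},
}
print(json.dumps(actions["Compose_SendMail_Result"]["runAfter"]))
# → {"Send_Mail": ["Succeeded", "Failed"]}
```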
### Example: HTTP action returning 500

```
@@ -259,10 +288,9 @@ for action_name in [root_action, "Compose_WeekEnd", "HTTP_Get_Data"]:
> ⚠️ Output payloads from array-processing actions can be very large.
> Always slice (e.g. `[:500]`) before printing.

-> **Tip**: Omit `actionName` to get ALL actions in a single call.
-> This returns every action's inputs/outputs — useful when you're not sure
-> which upstream action produced the bad data. But use 120s+ timeout as
-> the response can be very large.
+> **Tip**: Omit `actionName` to list top-level actions when you're not sure
+> which action produced the bad data. Once you pick an action inside a foreach,
+> pass `iterationIndex` to avoid pulling every repetition into context.

---

@@ -317,9 +345,12 @@ is broken at the PA listEnum layer and always returns
modifies an Outlook action via `update_live_flow` and tries to resolve a user
through dynamic options. **Don't fix it by retrying AadGraph** — switch to
`shared_office365users.SearchUserV2` instead (returns the same AAD user shape).
-See the `power-automate-build` skill, **Step 3a — Resolving Dynamic Connector
-Values**, for the working pattern. `describe_live_connector` (v1.1.6+) returns
-this fallback as a structured `fallback` field on the affected parameter.
+Use `describe_live_connector` to confirm whether the affected parameter exposes
+a structured `fallback`, then call `get_live_dynamic_options` against
+`shared_office365users.SearchUserV2` instead of the broken AadGraph operation.
+For dynamic field schemas rather than dropdown options, use
+`get_live_dynamic_properties` with the metadata returned by
+`describe_live_connector`.

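The fallback flow can be exercised offline with a stubbed `mcp` helper. The response shapes below are assumptions — confirm the real parameter schemas with `tool_search` before relying on them:

```python
def mcp(tool, **params):
    # Offline stub of the MCP helper; real calls go to the FlowStudio server.
    if tool == "describe_live_connector":
        return {"parameters": {"To": {"fallback": {
            "connectorName": "shared_office365users",
            "operationId": "SearchUserV2"}}}}
    if tool == "get_live_dynamic_options":
        return [{"displayName": "Sam Smith", "mail": "sam.smith@contoso.com"}]
    raise ValueError(f"unknown tool {tool}")

desc = mcp("describe_live_connector", connectorName="shared_office365")
fallback = desc["parameters"]["To"].get("fallback")
if fallback:  # never retry the broken AadGraph path
    users = mcp("get_live_dynamic_options", **fallback, searchTerm="smith")
    print(users[0]["mail"])  # → sam.smith@contoso.com
```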

---

@@ -389,11 +420,17 @@ For flows with a `Request` (HTTP) trigger, use `trigger_live_flow` when you
need to send a **different** payload than the original run:

```python
-# First inspect what the trigger expects
-schema = mcp("get_live_flow_http_schema",
-             environmentName=ENV, flowName=FLOW_ID)
-print("Expected body schema:", schema.get("requestSchema"))
-print("Response schemas:", schema.get("responseSchemas"))
+# First inspect what the trigger expects — read directly from the flow definition
+defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
+triggers = defn["properties"]["definition"]["triggers"]
+manual = next(iter(triggers.values()))  # usually the only trigger on HTTP flows
+request_schema = manual.get("inputs", {}).get("schema")
+print("Expected body schema:", request_schema)
+
+# Response schemas live on Response action(s) in the actions block
+for name, act in defn["properties"]["definition"]["actions"].items():
+    if act.get("type") == "Response":
+        print(f"Response {name}:", act.get("inputs", {}).get("schema"))

# Trigger with a test payload
result = mcp("trigger_live_flow",
@@ -433,5 +470,5 @@ print(f"Status: {result['responseStatus']}, Body: {result.get('responseBody')}")

## Related Skills

-- `power-automate-mcp` — Foundation skill: connection setup, MCP helper, tool discovery
-- `power-automate-build` — Build and deploy new flows
+- `flowstudio-power-automate-mcp` — Foundation skill: connection setup, MCP helper, tool discovery
+- `flowstudio-power-automate-build` — Build and deploy new flows
@@ -149,6 +149,24 @@ iterations in parallel, causing write conflicts or undefined ordering.

---

+### Foreach Parent Failed After Handled Inner Failure
+
+**Symptom**: Inner actions have failure handlers, but the parent `Foreach` still
+shows `Failed`, and downstream actions such as `Response` are skipped.
+
+**Root cause**: A handled child failure can still mark the loop container as
+failed. Downstream `runAfter` that only accepts `Succeeded` will not run.
+
+**Diagnosis**: Inspect the parent foreach with `get_live_flow_run_error`, then
+inspect child action outputs for the iteration that failed.
+
+**Fix**: If partial success is acceptable, allow the downstream join/response to
+run after `Succeeded` and `Failed`, and include an explicit error summary in the
+payload. If the loop must be all-or-nothing, wrap risky inner work in a Scope and
+handle success/failure at the Scope boundary.
+
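A toy model of the propagation shows why a downstream `runAfter` that accepts both terminal states still runs. The container rule here is the assumption being modeled, not an authoritative statement of the runtime:

```python
# Iteration results for a two-item foreach; the failure in item 2 was handled.
iterations = [
    {"Get_Item": "Succeeded", "Handle_Error": "Skipped"},
    {"Get_Item": "Failed", "Handle_Error": "Succeeded"},
]

# Assumed rule: any failed child marks the container Failed, handled or not.
parent = ("Failed"
          if any(s == "Failed" for it in iterations for s in it.values())
          else "Succeeded")

response_run_after = ["Succeeded", "Failed"]  # accept both terminal states
response_runs = parent in response_run_after
print(parent, response_runs)  # → Failed True
```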
+---
+
## Update / Deploy Errors

### `update_live_flow` Returns No-Op
@@ -186,3 +204,20 @@ values override new_data for matching records.
Before: @sort(union(outputs('Old_Array'), body('New_Array')), 'Date')
After: @sort(union(body('New_Array'), outputs('Old_Array')), 'Date')
```

+---
+
+### Null Cascade in Filter Array / Query
+
+**Symptom**: A lookup/filter step returns the wrong record or a later expression
+fails on null even though the filter action itself succeeded.
+
+**Root cause**: The lookup key is null or empty. A condition such as
+`equals(item()?['Email'], outputs('Lookup_Email'))` can accidentally match rows
+where both sides are null, or can pass an empty array downstream.
+
+**Diagnosis**: Inspect the action that creates the lookup key and the filter
+output length. Confirm the key is non-empty before trusting the filter result.
+
+**Fix**: Add a non-empty guard before the filter, normalize comparison values
+with `trim()`/`toLower()`, and branch explicitly when no match is found.
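The null-to-null hazard is easy to demonstrate outside the flow runtime. A toy sketch with made-up rows — the guarded version mirrors the recommended fix (non-empty guard plus `trim()`/`toLower()` normalization):

```python
# Toy demonstration of the null-to-null match hazard.
rows = [{"Email": None, "Name": "ghost"},
        {"Email": "ada@contoso.com", "Name": "Ada"}]
lookup_email = None  # an upstream action silently produced null

naive = [r for r in rows if r["Email"] == lookup_email]
print([r["Name"] for r in naive])  # → ['ghost'] — false match on null == null

def norm(v):
    return v.strip().lower() if isinstance(v, str) else None

guarded = ([r for r in rows if norm(r["Email"]) == norm(lookup_email)]
           if norm(lookup_email) else [])
print(guarded)  # → [] — explicit no-match branch instead of a wrong record
```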
|
|||||||
@@ -27,6 +27,9 @@ Flow is failing
|
|||||||
│ ├── error.code = "ActionFailed" + message mentions HTTP
|
│ ├── error.code = "ActionFailed" + message mentions HTTP
|
||||||
│ │ └── ► See: HTTP Action Workflow below
|
│ │ └── ► See: HTTP Action Workflow below
|
||||||
│ │
|
│ │
|
||||||
|
│ ├── parent action is Foreach / Apply to each
|
||||||
|
│ │ └── ► Inspect child actions; handled child failures can still fail the parent
|
||||||
|
│ │
|
||||||
│ └── Unknown / generic error
|
│ └── Unknown / generic error
|
||||||
│ └── ► Walk actions backwards (Step B below)
|
│ └── ► Walk actions backwards (Step B below)
|
||||||
│
|
│
|
||||||
@@ -113,6 +116,9 @@ Flow succeeds but output data is wrong
|
|||||||
│ ├── Check foreach condition — filter may be too strict
|
│ ├── Check foreach condition — filter may be too strict
|
||||||
│ └── Check if parallel foreach caused race condition (add Sequential)
|
│ └── Check if parallel foreach caused race condition (add Sequential)
|
||||||
│
|
│
|
||||||
|
├── Filter/Query result unexpectedly matches nulls or returns empty
|
||||||
|
│ └── Guard lookup keys before the filter; do not compare null-to-null
|
||||||
|
│
|
||||||
└── Date/time values wrong timezone
|
└── Date/time values wrong timezone
|
||||||
└── Use convertTimeZone() — utcNow() is always UTC
|
└── Use convertTimeZone() — utcNow() is always UTC
|
||||||
```
|
```
|
||||||
|
|||||||
@@ -11,13 +11,6 @@ description: >-
  a compliance report, offboard a maker, or any task that involves writing
  governance metadata to flows. Requires a FlowStudio for Teams or MCP Pro+
  subscription — see https://mcp.flowstudio.app
-metadata:
-  openclaw:
-    requires:
-      env:
-        - FLOWSTUDIO_MCP_TOKEN
-      primaryEnv: FLOWSTUDIO_MCP_TOKEN
-  homepage: https://mcp.flowstudio.app
---

# Power Automate Governance with FlowStudio MCP
@@ -26,12 +19,12 @@ Classify, tag, and govern Power Automate flows at scale through the FlowStudio
MCP **cached store** — without Dataverse, without the CoE Starter Kit, and
without the Power Automate portal.

-This skill uses the same `store_*` tool family as `power-automate-monitoring`,
+This skill uses the same `store_*` tool family as `flowstudio-power-automate-monitoring`,
but with a different *intent*: governance writes metadata (`update_store_flow`)
and reads for *audit and classification* outcomes. Monitoring reads the same
tools for *operational health* outcomes. Don't try to memorize which skill
"owns" which tool — pick by what the user is doing. For health checks and
-failure-rate dashboards, load `power-automate-monitoring` instead.
+failure-rate dashboards, load `flowstudio-power-automate-monitoring` instead.

> **⚠️ Pro+ subscription required.** This skill calls `store_*` tools that
> only work for FlowStudio for Teams or MCP Pro+ subscribers.
@@ -122,44 +115,19 @@ Required parameters: `environmentName`, `flowName`. All other fields optional.

### 1. Compliance Detail Review

-Identify flows missing required governance metadata — the equivalent of
-the CoE Starter Kit's Developer Compliance Center.
+Identify flows missing required governance metadata.

```
1. Ask the user which compliance fields they require
-   (or use their organization's existing governance policy)
2. list_store_flows
-3. For each flow (skip entries without displayName or state=Deleted):
-   - Split id → environmentName, flowName
-   - get_store_flow(environmentName, flowName)
-   - Check which required fields are missing or empty
+3. For each active flow: split id, call get_store_flow, check required fields
4. Report non-compliant flows with missing fields listed
-5. For each non-compliant flow:
-   - Ask the user for values
-   - update_store_flow(environmentName, flowName, ...provided fields)
+5. For updates: ask for values, then update_store_flow(...provided fields)
```

-**Fields available for compliance checks:**
-
-| Field | Example policy |
-|---|---|
-| `description` | Every flow should be documented |
-| `businessImpact` | Classify as Low / Medium / High / Critical |
-| `businessJustification` | Required for High/Critical impact flows |
-| `ownerTeam` | Every flow should have an accountable team |
-| `supportEmail` | Required for production flows |
-| `monitor` | Required for critical flows (note: standard plan includes 20 monitored flows) |
-| `rule_notify_onfail` | Recommended for monitored flows |
-| `critical` | Designate business-critical flows |
-
-> Each organization defines their own compliance rules. The fields above are
-> suggestions based on common Power Platform governance patterns (CoE Starter
-> Kit). Ask the user what their requirements are before flagging flows as
-> non-compliant.
->
-> **Tip:** Flows created or updated via MCP already have `description`
-> (auto-appended by `update_live_flow`). Flows created manually in the
-> Power Automate portal are the ones most likely missing governance metadata.
+Common compliance fields: `description`, `businessImpact`,
+`businessJustification`, `ownerTeam`, `supportEmail`, `monitor`,
+`rule_notify_onfail`, `critical`. Ask for the user's policy before flagging.
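The field check in step 3 reduces to a few lines of Python. The record shape mirrors the compliance fields named above; the sample flow is made up:

```python
def missing_fields(flow: dict, required: list[str]) -> list[str]:
    # Absent, None, and empty-string values all count as missing.
    return [f for f in required if not flow.get(f)]

required = ["description", "ownerTeam", "supportEmail"]
flow = {"displayName": "Invoice sync", "state": "Started",
        "description": "Syncs invoices nightly", "ownerTeam": ""}

print(missing_fields(flow, required))  # → ['ownerTeam', 'supportEmail']
```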

### 2. Orphaned Resource Detection

@@ -168,69 +136,32 @@ Find flows owned by deleted or disabled Azure AD accounts.

```
1. list_store_makers
2. Filter where deleted=true AND ownerFlowCount > 0
-   Note: deleted makers have NO displayName/mail — record their id (AAD OID)
3. list_store_flows → collect all flows
-4. For each flow (skip entries without displayName or state=Deleted):
-   - Split id → environmentName, flowName
-   - get_store_flow(environmentName, flowName)
-   - Parse owners: json.loads(record["owners"])
-   - Check if any owner principalId matches an orphaned maker id
-5. Report orphaned flows: maker id, flow name, flow state
-6. For each orphaned flow:
-   - Reassign governance: update_store_flow(environmentName, flowName,
-     ownerTeam="NewTeam", supportEmail="new-owner@contoso.com")
-   - Or decommission: set_store_flow_state(environmentName, flowName,
-     state="Stopped")
+4. For each active flow: split id, get_store_flow, parse owners JSON
+5. Match owner principalId against orphaned maker id
+6. Reassign governance contact or stop/tag for decommission
```

-> `update_store_flow` updates governance metadata in the cache only. To
-> transfer actual PA ownership, an admin must use the Power Platform admin
-> center or PowerShell.
->
-> **Note:** Many orphaned flows are system-generated (created by
-> `DataverseSystemUser` accounts for SLA monitoring, knowledge articles,
-> etc.). These were never built by a person — consider tagging them
-> rather than reassigning.
->
-> **Coverage:** This workflow searches the cached store only, not the
-> live PA API. Flows created after the last scan won't appear.
+`update_store_flow` does not transfer actual PA ownership; use the admin center
+or PowerShell for that. Some orphaned-looking flows are system-generated; tag
+them instead of reassigning when appropriate. Store coverage is only as fresh as
+the latest scan.
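The owner-matching step can be sketched offline; the store serializes `owners` as a JSON string, and the record below is made up:

```python
import json

orphaned_maker_ids = {"11111111-aaaa-4bbb-8ccc-000000000001"}
flow_record = {
    "displayName": "Old report",
    "owners": json.dumps([{"principalId": "11111111-aaaa-4bbb-8ccc-000000000001"}]),
}

owners = json.loads(flow_record["owners"])
is_orphaned = any(o.get("principalId") in orphaned_maker_ids for o in owners)
print(is_orphaned)  # → True
```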

### 3. Archive Score Calculation

-Compute an inactivity score (0-7) per flow to identify safe cleanup
-candidates. Aligns with the CoE Starter Kit's archive scoring.
+Compute an inactivity score (0-7) per flow to identify cleanup candidates.

```
1. list_store_flows
-2. For each flow (skip entries without displayName or state=Deleted):
-   - Split id → environmentName, flowName
-   - get_store_flow(environmentName, flowName)
-3. Compute archive score (0-7), add 1 point for each:
-   +1 lastModifiedTime within 24 hours of createdTime
-   +1 displayName contains "test", "demo", "copy", "temp", or "backup"
-      (case-insensitive)
-   +1 createdTime is more than 12 months ago
-   +1 state is "Stopped" or "Suspended"
-   +1 json.loads(owners) is empty array []
-   +1 runPeriodTotal = 0 (never ran or no recent runs)
-   +1 parse json.loads(complexity) → actions < 5
-4. Classify:
-   Score 5-7: Recommend archive — report to user for confirmation
-   Score 3-4: Flag for review →
-     Read existing tags from get_store_flow response, append #archive-review
-     update_store_flow(environmentName, flowName, tags="<existing> #archive-review")
-   Score 0-2: Active, no action
-5. For user-confirmed archives:
-   set_store_flow_state(environmentName, flowName, state="Stopped")
-   Read existing tags, append #archived
-   update_store_flow(environmentName, flowName, tags="<existing> #archived")
+2. For each active flow: split id, get_store_flow
+3. Add 1 point each: created≈modified, test/demo/temp/copy name, age >12mo,
+   stopped/suspended, no owners, no recent runs, complexity.actions < 5
+4. Score 5-7: recommend archive; 3-4: tag #archive-review; 0-2: active
+5. For confirmed archive: set_live_flow_state(..., "Stopped") and append #archived
```

-> **What "archive" means:** Power Automate has no native archive feature.
-> Archiving via MCP means: (1) stop the flow so it can't run, and
-> (2) tag it `#archived` so it's discoverable for future cleanup.
-> Actual deletion requires the Power Automate portal or admin PowerShell
-> — it cannot be done via MCP tools.
+Archive via MCP means stop the flow and tag it. Deletion requires the portal or
+admin PowerShell.
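The scoring heuristic is mechanical enough to sketch directly. Field names follow the store record fields used in the workflow above; the sample flow and thresholds-as-code are illustrative:

```python
import json
from datetime import datetime, timedelta, timezone

def archive_score(flow: dict, now: datetime) -> int:
    created = datetime.fromisoformat(flow["createdTime"])
    modified = datetime.fromisoformat(flow["lastModifiedTime"])
    score = 0
    score += modified - created <= timedelta(hours=24)      # never really edited
    score += any(w in flow["displayName"].lower()
                 for w in ("test", "demo", "copy", "temp", "backup"))
    score += now - created > timedelta(days=365)            # older than 12 months
    score += flow["state"] in ("Stopped", "Suspended")
    score += json.loads(flow["owners"]) == []               # no owners
    score += flow.get("runPeriodTotal", 0) == 0             # no recent runs
    score += json.loads(flow["complexity"])["actions"] < 5  # trivial flow
    return score

flow = {"createdTime": "2023-01-01T00:00:00+00:00",
        "lastModifiedTime": "2023-01-01T02:00:00+00:00",
        "displayName": "Test copy of sync", "state": "Stopped",
        "owners": "[]", "runPeriodTotal": 0,
        "complexity": json.dumps({"actions": 3})}
now = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(archive_score(flow, now))  # → 7
```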

### 4. Connector Audit

@@ -239,35 +170,14 @@ impact analysis and premium license planning.

```
1. list_store_flows(monitor=true)
-   (scope to monitored flows — auditing all 1000+ flows is expensive)
-2. For each flow (skip entries without displayName or state=Deleted):
-   - Split id → environmentName, flowName
-   - get_store_flow(environmentName, flowName)
-   - Parse connections: json.loads(record["connections"])
-     Returns array of objects with apiName, apiId, connectionName
-   - Note the flow-level tier field ("Standard" or "Premium")
-3. Build connector inventory:
-   - Which apiNames are used and by how many flows
-   - Which flows have tier="Premium" (premium connector detected)
-   - Which flows use HTTP connectors (apiName contains "http")
-   - Which flows use custom connectors (non-shared_ prefix apiNames)
+2. For each active flow: split id, get_store_flow, parse connections JSON
+3. Group by apiName; flag Premium tier, HTTP connectors, custom connectors
4. Report inventory to user
-   - For DLP analysis: user provides their DLP policy connector groups,
-     agent cross-references against the inventory
```

-> **Scope to monitored flows.** Each flow requires a `get_store_flow` call
-> to read the `connections` JSON. Standard plans have ~20 monitored flows —
-> manageable. Auditing all flows in a large tenant (1000+) would be very
-> expensive in API calls.
->
-> **`list_store_connections`** returns connection instances (who created
-> which connection) but NOT connector types per flow. Use it for connection
-> counts per environment, not for the connector audit.
->
-> DLP policy definitions are not available via MCP. The agent builds the
-> connector inventory; the user provides the DLP classification to
-> cross-reference against.
+Scope to monitored flows where possible; each `get_store_flow` call costs time.
+`list_store_connections` lists connection instances, not connector usage per
+flow. DLP policies are not exposed; ask the user for connector classifications.
||||||
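The condensed connector-audit loop described in this hunk can be sketched in Python. This is a minimal sketch, not the skill's actual helper: `mcp()` is stubbed with invented sample records, and the `environmentName/flowName` layout of the store `id` is an assumption for illustration.

```python
import json

# Stubbed store standing in for the MCP server (record shapes are illustrative).
STORE = {
    "env1/flow-a": {"tier": "Premium",
                    "connections": json.dumps([{"apiName": "shared_sharepointonline"}])},
    "env1/flow-b": {"tier": "Standard",
                    "connections": json.dumps([{"apiName": "shared_http_example"},
                                               {"apiName": "custom_invoice_api"}])},
}

def mcp(tool, **args):
    # Stub transport; a real implementation posts JSON-RPC to the MCP server.
    if tool == "list_store_flows":
        return [{"id": k, "displayName": k.split("/")[1], "state": "Started"}
                for k in STORE]
    if tool == "get_store_flow":
        return STORE[f"{args['environmentName']}/{args['flowName']}"]
    raise ValueError(tool)

def connector_audit():
    inventory, premium, http_flows, custom = {}, [], [], []
    for rec in mcp("list_store_flows", monitor=True):
        if not rec.get("displayName") or rec.get("state") == "Deleted":
            continue  # skip sparse or deleted store entries
        env, flow = rec["id"].split("/", 1)  # assumed id layout
        detail = mcp("get_store_flow", environmentName=env, flowName=flow)
        for conn in json.loads(detail["connections"]):
            api = conn["apiName"]
            inventory.setdefault(api, []).append(flow)
            if "http" in api:
                http_flows.append(flow)          # HTTP connector
            if not api.startswith("shared_"):
                custom.append(flow)              # custom connector
        if detail.get("tier") == "Premium":
            premium.append(flow)
    return inventory, premium, http_flows, custom
```

The grouping keys (apiName substring checks, `shared_` prefix) follow the audit heuristics stated in the skill text.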
 ### 5. Notification Rule Management

@@ -276,36 +186,19 @@ Configure monitoring and alerting for flows at scale.
 ```
 Enable failure alerts on all critical flows:
 1. list_store_flows(monitor=true)
-2. For each flow (skip entries without displayName or state=Deleted):
-   - Split id → environmentName, flowName
-   - get_store_flow(environmentName, flowName)
-   - If critical=true AND rule_notify_onfail is not true:
-       update_store_flow(environmentName, flowName,
-                         rule_notify_onfail=true,
-                         rule_notify_email="oncall@contoso.com")
-   - If NO flows have critical=true: this is a governance finding.
-     Recommend the user designate their most important flows as critical
-     using update_store_flow(critical=true) before configuring alerts.
+2. For each active flow: split id, get_store_flow
+3. If critical=true and rule_notify_onfail is false, update_store_flow(...,
+   rule_notify_onfail=true, rule_notify_email="oncall@contoso.com")

 Enable missing-run detection for scheduled flows:
 1. list_store_flows(monitor=true)
-2. For each flow where triggerType="Recurrence" (available on list response):
-   - Skip flows with state="Stopped" or "Suspended" (not expected to run)
-   - Split id → environmentName, flowName
-   - get_store_flow(environmentName, flowName)
-   - If rule_notify_onmissingdays is 0 or not set:
-       update_store_flow(environmentName, flowName,
-                         rule_notify_onmissingdays=2)
+2. For active Recurrence flows: get_store_flow
+3. If rule_notify_onmissingdays is 0/missing, update_store_flow(...,
+   rule_notify_onmissingdays=2)
 ```

-> `critical`, `rule_notify_onfail`, and `rule_notify_onmissingdays` are only
-> available from `get_store_flow`, not from `list_store_flows`. The list call
-> pre-filters to monitored flows; the detail call checks the notification fields.
->
-> **Monitoring limit:** The standard plan (FlowStudio for Teams / MCP Pro+)
-> includes 20 monitored flows. Before bulk-enabling `monitor=true`, check
-> how many flows are already monitored:
-> `len(list_store_flows(monitor=true))`
+Check monitoring limits before bulk-enabling `monitor=true`. If no flows have
+`critical=true`, report that as a governance gap before configuring alerts.

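The failure-alert loop in this hunk can be sketched as follows. The `mcp()` transport is stubbed and the flow records are invented; field names (`critical`, `rule_notify_onfail`, `rule_notify_email`) are the ones the skill text uses.

```python
# Invented sample store records for illustration.
FLOWS = {
    "env1/crit-flow": {"critical": True, "rule_notify_onfail": False},
    "env1/ok-flow":   {"critical": True, "rule_notify_onfail": True},
    "env1/other":     {"critical": False, "rule_notify_onfail": False},
}
updates = []  # records what would be written back

def mcp(tool, **args):
    # Stub transport; the real helper posts JSON-RPC to the MCP server.
    key = f"{args.get('environmentName')}/{args.get('flowName')}"
    if tool == "list_store_flows":
        return [{"id": k, "displayName": k, "state": "Started"} for k in FLOWS]
    if tool == "get_store_flow":
        return FLOWS[key]
    if tool == "update_store_flow":
        updates.append((key, args.get("rule_notify_email")))
        return {"ok": True}

def enable_failure_alerts(oncall="oncall@contoso.com"):
    found_critical = False
    for rec in mcp("list_store_flows", monitor=True):
        env, flow = rec["id"].split("/", 1)
        detail = mcp("get_store_flow", environmentName=env, flowName=flow)
        if not detail.get("critical"):
            continue
        found_critical = True
        if detail.get("rule_notify_onfail") is not True:
            mcp("update_store_flow", environmentName=env, flowName=flow,
                rule_notify_onfail=True, rule_notify_email=oncall)
    # False means no flows are marked critical: a governance finding
    return found_critical
```

Flows that already have `rule_notify_onfail=true` are left untouched, so the loop is safe to re-run.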
 ### 6. Classification and Tagging

@@ -314,35 +207,13 @@ Bulk-classify flows by connector type, business function, or risk level.
 ```
 Auto-tag by connector:
 1. list_store_flows
-2. For each flow (skip entries without displayName or state=Deleted):
-   - Split id → environmentName, flowName
-   - get_store_flow(environmentName, flowName)
-   - Parse connections: json.loads(record["connections"])
-   - Build tags from apiName values:
-       shared_sharepointonline → #sharepoint
-       shared_teams → #teams
-       shared_office365 → #email
-       Custom connectors → #custom-connector
-       HTTP-related connectors → #http-external
-   - Read existing tags from get_store_flow response, append new tags
-   - update_store_flow(environmentName, flowName,
-                       tags="<existing tags> #sharepoint #teams")
+2. For each active flow: split id, get_store_flow, parse connections JSON
+3. Map apiName values to tags (#sharepoint, #teams, #email, #custom-connector)
+4. Read existing store tags, append new tags, update_store_flow(tags=...)
 ```

-> **Two tag systems:** Tags shown in `list_store_flows` are auto-extracted
-> from the flow's `description` field (e.g. a maker writes `#operations` in
-> the PA portal description). Tags set via `update_store_flow(tags=...)`
-> write to a separate field in the Azure Table cache. They are independent —
-> writing store tags does not touch the description, and editing the
-> description in the portal does not affect store tags.
->
-> **Tag merge:** `update_store_flow(tags=...)` overwrites the store tags
-> field. To avoid losing tags from other workflows, read the current store
-> tags from `get_store_flow` first, append new ones, then write back.
->
-> `get_store_flow` already has a `tier` field (Standard/Premium) computed
-> by the scanning pipeline. Only use `update_store_flow(tier=...)` if you
-> need to override it.
+Store tags and description hashtags are separate systems. `tags=` overwrites
+store tags, so read/append/write. Avoid overriding computed `tier` unless asked.

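The connector-to-tag mapping and the read/append/write tag merge from this hunk can be sketched in Python. A minimal sketch: the mapping table follows the skill text, and store tags are assumed to be a space-separated string.

```python
TAG_MAP = {  # apiName → tag, per the mapping in the skill text
    "shared_sharepointonline": "#sharepoint",
    "shared_teams": "#teams",
    "shared_office365": "#email",
}

def tags_for(api_names):
    """Derive tags from a flow's connector apiName values."""
    tags = []
    for api in api_names:
        if api in TAG_MAP:
            tags.append(TAG_MAP[api])
        elif not api.startswith("shared_"):
            tags.append("#custom-connector")  # non-shared_ prefix = custom
    return tags

def merge_tags(existing, new_tags):
    """update_store_flow(tags=...) overwrites the store tags field,
    so read the current tags, append only new ones, then write back."""
    merged = existing.split()
    for tag in new_tags:
        if tag not in merged:
            merged.append(tag)
    return " ".join(merged)
```

The merged string is what would be passed to `update_store_flow(tags=...)`.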
 ### 7. Maker Offboarding

@@ -353,33 +224,18 @@ Flow Studio governance contacts and notification recipients.
 1. get_store_maker(makerKey="<departing-user-aad-oid>")
    → check ownerFlowCount, ownerAppCount, deleted status
 2. list_store_flows → collect all flows
-3. For each flow (skip entries without displayName or state=Deleted):
-   - Split id → environmentName, flowName
-   - get_store_flow(environmentName, flowName)
-   - Parse owners: json.loads(record["owners"])
-   - If any principalId matches the departing user's OID → flag
-4. list_store_power_apps → filter where ownerId matches the OID
-5. For each flagged flow:
-   - Check runPeriodTotal and runLast — is it still active?
-   - If keeping:
-       update_store_flow(environmentName, flowName,
-                         ownerTeam="NewTeam", supportEmail="new-owner@contoso.com")
-   - If decommissioning:
-       set_store_flow_state(environmentName, flowName, state="Stopped")
-       Read existing tags, append #decommissioned
-       update_store_flow(environmentName, flowName, tags="<existing> #decommissioned")
-6. Report: flows reassigned, flows stopped, apps needing manual reassignment
+3. For each active flow: split id, get_store_flow, parse owners JSON
+4. Flag flows whose owner principalId matches the departing user's OID
+5. list_store_power_apps → filter ownerId
+6. For kept flows: update ownerTeam/supportEmail/rule_notify_email; consider
+   add_live_flow_to_solution before account deletion
+7. For retired flows: set_live_flow_state(..., "Stopped") and tag #decommissioned
+8. Report: flows reassigned, flows migrated to solutions, flows stopped,
+   apps needing manual reassignment
 ```

-> **What "reassign" means here:** `update_store_flow` changes who Flow
-> Studio considers the governance contact and who receives Flow Studio
-> notifications. It does NOT transfer the actual Power Automate flow
-> ownership — that requires the Power Platform admin center or PowerShell.
-> Also update `rule_notify_email` so failure notifications go to the new
-> team instead of the departing employee's email.
->
-> Power Apps ownership cannot be changed via MCP tools. Report them for
-> manual reassignment in the Power Apps admin center.
+This changes Flow Studio governance contacts, not actual PA ownership. Power
+Apps ownership changes are manual/admin-center work.

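The owner-matching step of the offboarding workflow can be sketched as a pure function over store records. A minimal sketch; the record fields (`owners` as a JSON string of objects with `principalId`) follow the skill text, and the sample records in the usage are invented.

```python
import json

def flag_owned_flows(records, departing_oid):
    """Return store ids of flows whose owners JSON lists the departing user's OID."""
    flagged = []
    for rec in records:
        if not rec.get("displayName") or rec.get("state") == "Deleted":
            continue  # skip sparse or deleted store entries
        owners = json.loads(rec.get("owners") or "[]")
        if any(o.get("principalId") == departing_oid for o in owners):
            flagged.append(rec["id"])
    return flagged
```

Flagged flows would then be routed to the keep/retire branches (steps 6 and 7 above).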
 ### 8. Security Review

@@ -387,33 +243,14 @@ Review flows for potential security concerns using cached store data.

 ```
 1. list_store_flows(monitor=true)
-2. For each flow (skip entries without displayName or state=Deleted):
-   - Split id → environmentName, flowName
-   - get_store_flow(environmentName, flowName)
-   - Parse security: json.loads(record["security"])
-   - Parse connections: json.loads(record["connections"])
-   - Read sharingType directly (top-level field, NOT inside security JSON)
-3. Report findings to user for review
-4. For reviewed flows:
-   Read existing tags, append #security-reviewed
-   update_store_flow(environmentName, flowName, tags="<existing> #security-reviewed")
-   Do NOT overwrite the security field — it contains structured auth data
+2. For each active flow: split id, get_store_flow
+3. Parse security/connections/referencedResources JSON; read sharingType top-level
+4. Report findings; for reviewed flows append #security-reviewed tag
 ```

-**Fields available for security review:**
-
-| Field | Where | What it tells you |
-|---|---|---|
-| `security.triggerRequestAuthenticationType` | security JSON | `"All"` = HTTP trigger accepts unauthenticated requests |
-| `sharingType` | top-level | `"Coauthor"` = shared with co-authors for editing |
-| `connections` | connections JSON | Which connectors the flow uses (check for HTTP, custom) |
-| `referencedResources` | JSON string | SharePoint sites, Teams channels, external URLs the flow accesses |
-| `tier` | top-level | `"Premium"` = uses premium connectors |
-
-> Each organization decides what constitutes a security concern. For example,
-> an unauthenticated HTTP trigger is expected for webhook receivers (Stripe,
-> GitHub) but may be a risk for internal flows. Review findings in context
-> before flagging.
+Security signals: `security.triggerRequestAuthenticationType`, `sharingType`,
+`connections`, `referencedResources`, `tier`. Never overwrite the structured
+`security` field; tag reviewed flows instead.

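The signal checks listed in this hunk can be sketched as a single record inspector. A minimal sketch: field names and the "`All` = unauthenticated HTTP trigger" and "`Coauthor` = shared for editing" interpretations come from the skill's field table; the sample record in the usage is invented.

```python
import json

def security_findings(record):
    """Collect review signals from one get_store_flow record."""
    findings = []
    # security is a JSON string; sharingType and tier are top-level fields
    security = json.loads(record.get("security") or "{}")
    if security.get("triggerRequestAuthenticationType") == "All":
        findings.append("HTTP trigger accepts unauthenticated requests")
    if record.get("sharingType") == "Coauthor":
        findings.append("shared with co-authors")
    if record.get("tier") == "Premium":
        findings.append("uses premium connectors")
    for conn in json.loads(record.get("connections") or "[]"):
        if "http" in conn.get("apiName", ""):
            findings.append(f"HTTP connector: {conn['apiName']}")
    return findings
```

Findings are reported for human review; whether each one is an actual risk depends on the organization's context.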
 ### 9. Environment Governance

@@ -423,16 +260,11 @@ Audit environments for compliance and sprawl.
 1. list_store_environments
    Skip entries without displayName (tenant-level metadata rows)
 2. Flag:
-   - Developer environments (sku="Developer") — should be limited
-   - Non-managed environments (isManagedEnvironment=false) — less governance
-   - Note: isAdmin=false means the current service account lacks admin
-     access to that environment, not that the environment has no admin
+   - Developer environments
+   - Non-managed environments
+   - Environments where service account lacks admin access (isAdmin=false)
 3. list_store_flows → group by environmentName
-   - Flow count per environment
-   - Failure rate analysis: runPeriodFailRate is on the list response —
-     no need for per-flow get_store_flow calls
 4. list_store_connections → group by environmentName
-   - Connection count per environment
 ```

 ### 10. Governance Dashboard

@@ -444,30 +276,13 @@ Efficient metrics (list calls only):
 1. total_flows = len(list_store_flows())
 2. monitored = len(list_store_flows(monitor=true))
 3. with_onfail = len(list_store_flows(rule_notify_onfail=true))
-4. makers = list_store_makers()
-   → active = count where deleted=false
-   → orphan_count = count where deleted=true AND ownerFlowCount > 0
-5. apps = list_store_power_apps()
-   → widely_shared = count where sharedUsersCount > 3
-6. envs = list_store_environments() → count, group by sku
-7. conns = list_store_connections() → count
-
-Compute from list data:
-- Monitoring %: monitored / total_flows
-- Notification %: with_onfail / monitored
-- Orphan count: from step 4
-- High-risk count: flows with runPeriodFailRate > 0.2 (on list response)
+4. makers/apps/envs/conns = list_store_makers/list_store_power_apps/list_store_environments/list_store_connections
+5. Compute monitoring %, notification %, orphan count, high-failure count

 Detailed metrics (require get_store_flow per flow — expensive for large tenants):
 - Compliance %: flows with businessImpact set / total active flows
 - Undocumented count: flows without description
 - Tier breakdown: group by tier field
-
-For detailed metrics, iterate all flows in a single pass:
-For each flow from list_store_flows (skip sparse entries):
-  Split id → environmentName, flowName
-  get_store_flow(environmentName, flowName)
-  → accumulate businessImpact, description, tier
 ```

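The efficient (list-only) metrics from this hunk can be sketched as pure arithmetic over list responses. A minimal sketch; the field names (`deleted`, `ownerFlowCount`, `runPeriodFailRate`) and the 0.2 failure-rate threshold come from the skill text, and the sample data in the usage is invented.

```python
def dashboard(flows, monitored, with_onfail, makers):
    """Compute governance metrics from list-call data only (no per-flow fetches)."""
    total = len(flows)
    return {
        "monitoring_pct": len(monitored) / total if total else 0.0,
        "notification_pct": len(with_onfail) / len(monitored) if monitored else 0.0,
        # orphan = deleted maker who still owns flows
        "orphans": sum(1 for m in makers
                       if m.get("deleted") and m.get("ownerFlowCount", 0) > 0),
        # runPeriodFailRate is on the list response, so this stays cheap
        "high_failure": sum(1 for f in flows
                            if f.get("runPeriodFailRate", 0) > 0.2),
    }
```

The detailed metrics (compliance %, undocumented count, tier breakdown) would need a `get_store_flow` call per flow and are deliberately left out of this cheap pass.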
 ---

@@ -511,7 +326,7 @@ Fields marked with `*` are also available on `list_store_flows` (cheaper).

 ## Related Skills

-- `power-automate-monitoring` — Health checks, failure rates, inventory (read-only)
-- `power-automate-mcp` — Foundation skill: connection setup, MCP helper, tool discovery
-- `power-automate-debug` — Deep diagnosis with action-level inputs/outputs
-- `power-automate-build` — Build and deploy flow definitions
+- `flowstudio-power-automate-monitoring` — Health checks, failure rates, inventory (read-only)
+- `flowstudio-power-automate-mcp` — Foundation skill: connection setup, MCP helper, tool discovery
+- `flowstudio-power-automate-debug` — Deep diagnosis with action-level inputs/outputs
+- `flowstudio-power-automate-build` — Build and deploy flow definitions

@@ -5,17 +5,10 @@ description: >-
   reusable MCP helper (Python + Node.js), tool discovery via `list_skills` /
   `tool_search`, and oversized-response handling. Load this skill first when
   connecting an agent to Power Automate. For specialized workflows, load
-  `power-automate-build`, `power-automate-debug`, `power-automate-monitoring`
-  (Pro+), or `power-automate-governance` (Pro+) — each contains the workflow
+  `flowstudio-power-automate-build`, `flowstudio-power-automate-debug`, `flowstudio-power-automate-monitoring`
+  (Pro+), or `flowstudio-power-automate-governance` (Pro+) — each contains the workflow
   narrative, this skill provides the plumbing they all rely on. Requires a
   FlowStudio MCP subscription or compatible server — see https://mcp.flowstudio.app
-metadata:
-  openclaw:
-    requires:
-      env:
-        - FLOWSTUDIO_MCP_TOKEN
-      primaryEnv: FLOWSTUDIO_MCP_TOKEN
-    homepage: https://mcp.flowstudio.app
 ---

 # Power Automate via FlowStudio MCP — Foundation

@@ -45,16 +38,16 @@ trying to accomplish.

 | The user wants to… | Load this skill |
 |---|---|
-| Make or change a flow (build new, modify existing, fix a bug, deploy) | **`power-automate-build`** |
-| Diagnose why a flow failed (root cause analysis on a failing run) | **`power-automate-debug`** |
-| See tenant-wide flow health, failure rates, asset inventory | **`power-automate-monitoring`** *(Pro+)* |
-| Tag, audit, classify, score, or offboard flows | **`power-automate-governance`** *(Pro+)* |
+| Make or change a flow (build new, modify existing, fix a bug, deploy) | **`flowstudio-power-automate-build`** |
+| Diagnose why a flow failed (root cause analysis on a failing run) | **`flowstudio-power-automate-debug`** |
+| See tenant-wide flow health, failure rates, asset inventory | **`flowstudio-power-automate-monitoring`** *(Pro+)* |
+| Tag, audit, classify, score, or offboard flows | **`flowstudio-power-automate-governance`** *(Pro+)* |
 | Just connect, set up auth, write the helper, parse responses | this skill (foundation) |

-**Same tools, different lenses.** `power-automate-build` and `power-automate-debug`
+**Same tools, different lenses.** `flowstudio-power-automate-build` and `flowstudio-power-automate-debug`
 both call `update_live_flow`, `get_live_flow`, and the run-error tools — they
 differ in *direction* (forward vs backward) and *intent* (compose vs diagnose).
-`power-automate-monitoring` and `power-automate-governance` both call the Store
+`flowstudio-power-automate-monitoring` and `flowstudio-power-automate-governance` both call the Store
 tools — they differ in *audience* (ops vs compliance) and *outcome* (read
 health vs write metadata). Don't try to memorize "which tools belong to which
 skill"; pick the skill by what the user is doing.
@@ -84,14 +77,14 @@ tool names.

 | Meta-tool | When to call |
 |---|---|
-| `list_skills` | Cold start — see the available bundles (`build-flow`, `debug-flow`, `monitor-flow`, `discover`, `governance`) and pick one |
+| `list_skills` | Cold start — see the available bundles (`build-flow`, `create-flow`, `debug-flow`, `monitor-flow`, `discover`, `governance`) and pick one |
 | `tool_search` with `query: "skill:<name>"` | Load the full schema set for one bundle (e.g. `skill:debug-flow`) |
 | `tool_search` with `query: "select:tool1,tool2"` | Load specific tools by name (e.g. when chaining across bundles) |
 | `tool_search` with `query: "<keywords>"` | Free-text search when the user request is ambiguous (e.g. `"cancel run"`) |

 The server's `tool_search` bundles are intentionally **narrower than this
 skill family** — they're starter packs of the most-likely-needed tools per
-intent. A workflow skill (e.g. `power-automate-debug`) may pull a bundle and
+intent. A workflow skill (e.g. `flowstudio-power-automate-debug`) may pull a bundle and
 then call `tool_search` again for additional tools as the workflow progresses.

 ```python
@@ -104,6 +97,17 @@ skills = mcp("list_skills", {})
 debug_tools = mcp("tool_search", {"query": "skill:debug-flow"})
 ```

+Current common bundles:
+
+| Bundle | Use when |
+|---|---|
+| `create-flow` | Creating a brand-new flow; includes environment/connection discovery, connector description, dynamic options, and `update_live_flow` |
+| `build-flow` | Reading or modifying an existing flow definition |
+| `debug-flow` | Investigating failed runs and action-level inputs/outputs |
+| `monitor-flow` | Starting/stopping, triggering, cancelling, or resubmitting runs |
+| `discover` | Enumerating environments, flows, and connections |
+| `governance` | Pro+ cached-store tagging, maker audit, and metadata updates |
+
 ---

 ## Recommended Language: Python or Node.js

@@ -213,7 +217,7 @@ print(f"Connected — {len(skills)} skill bundles available:",
 Expected output:

 ```text
-Connected — 5 skill bundles available: ['build-flow', 'debug-flow', 'monitor-flow', 'discover', 'governance']
+Connected — 6 skill bundles available: ['build-flow', 'create-flow', 'debug-flow', 'monitor-flow', 'discover', 'governance']
 ```

 If this fails, see the **Common auth errors** note above. If it succeeds, hand
@@ -228,7 +232,8 @@ Some MCP tool responses are large enough to overflow the agent's context window:
 | Tool | Typical size | Cause |
 |---|---|---|
 | `describe_live_connector` | 100-600 KB | Full Swagger spec for a connector |
-| `get_live_flow_run_action_outputs` (no `actionName`) | 50 KB – several MB | All actions × all foreach iterations |
+| `get_live_dynamic_properties` | 50-500 KB | Dynamic connector field schemas such as SharePoint list columns |
+| `get_live_flow_run_action_outputs` (no `actionName`) | 50 KB – several MB | Top-level action outputs; with an action in a foreach, every repetition can be returned |
 | `get_live_flow` (large flows) | 50-500 KB | Deeply nested branches |
 | `list_live_flows` (large tenants) | 50-200 KB | Hundreds of flow records |

@@ -259,7 +264,7 @@ $payload = ((Get-Content $path -Raw | ConvertFrom-Json)[0].text) | ConvertFrom-J
 ### Rules of thumb

 1. **Extract, don't echo.** Pull the specific field(s) you need (one `operationId`, one action's outputs) and discard the rest before reasoning about it.
-2. **Always pass `actionName` to `get_live_flow_run_action_outputs`.** Omitting it fetches every action × every iteration — fine for offline debug scripts, dangerous for an agent that ingests the whole response.
+2. **Always pass `actionName` to `get_live_flow_run_action_outputs`.** Omitting it fetches all top-level actions. For actions inside a foreach, passing `actionName` without `iterationIndex` can return every repetition of that action.
 3. **Reuse the spill file within a session.** Refetching the same connector swagger costs 30+ seconds and produces another spill — cache the path.
 4. **Don't grep the spill file for JSON keys directly.** Strings are JSON-escaped inside the file (`\"OperationId\":`), so a plain grep for `"OperationId":` will not match. Parse first, then filter.
 5. **Summarize tool output to the user.** Echo `name + state + trigger` for flow lists and `actionName + status + code` for run errors — not raw JSON, unless asked.
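The spill-file rules above have a Python equivalent of the PowerShell one-liner in this hunk's context: parse the envelope, then parse the JSON-escaped payload inside `[0].text`, instead of grepping the raw file. A minimal sketch; the envelope shape is taken from that one-liner.

```python
import json

def load_spill(path):
    """Parse a spilled MCP response file (rule 4: parse first, then filter).

    The tool payload is a JSON-escaped string inside content[0].text, so a
    plain grep for keys like "OperationId" will not match the raw file."""
    with open(path, encoding="utf-8") as fh:
        envelope = json.load(fh)          # outer content array
    return json.loads(envelope[0]["text"])  # inner tool payload
```

Within a session, keep the parsed result (or the path) cached rather than refetching the connector swagger.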
@@ -17,14 +17,38 @@ x-api-key: <token>
 User-Agent: FlowStudio-MCP/1.0   ← required, or Cloudflare blocks you
 ```

-## Step 1 — Discover Tools
+## Step 1 — Discover Tool Bundles

+Preferred cold-start call:
+
+```json
+POST {"jsonrpc":"2.0","id":1,"method":"tools/call",
+      "params":{"name":"list_skills","arguments":{}}}
+```
+
+Returns the current bundles (`build-flow`, `create-flow`, `debug-flow`,
+`monitor-flow`, `discover`, `governance`) and their member tool names. Free —
+not counted against plan limits.
+
+Then load the relevant schemas:
+
+```json
+POST {"jsonrpc":"2.0","id":2,"method":"tools/call",
+      "params":{"name":"tool_search","arguments":{"query":"skill:create-flow"}}}
+```
+
+Use `query:"select:tool1,tool2"` to load exact tools and keyword search such as
+`query:"send email"` when the user intent is ambiguous.
+
+Fallback for very low-level MCP clients:
+
 ```json
 POST {"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}
 ```

-Returns all tools with names, descriptions, and input schemas.
-Free — not counted against plan limits.
+`tools/list` returns all tools with names, descriptions, and input schemas, but
+it is heavier and should not be the first choice for agents that know the
+FlowStudio meta-tools.

 ## Step 2 — Call a Tool

@@ -50,4 +74,5 @@ Always parse `result.content[0].text` as JSON to get the actual data.
   `list_live_environments`, `list_live_connections`, `list_store_flows`,
   `list_store_environments`, `list_store_makers`, `get_store_maker`,
   `list_store_power_apps`, `list_store_connections`
-- When in doubt, check the `required` array in each tool's schema from `tools/list`
+- When in doubt, check the `required` array in each tool's schema from
+  `tool_search` (or `tools/list` as a fallback)
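The JSON-RPC envelope and the "parse `result.content[0].text` as JSON" rule from these hunks can be sketched as two small helpers. A minimal sketch; the envelope and result shapes are exactly the ones shown in the diffed text.

```python
import json

def jsonrpc_call(name, arguments, req_id=1):
    """Build the tools/call envelope shown in Step 1/Step 2."""
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/call",
            "params": {"name": name, "arguments": arguments}}

def parse_result(response):
    """MCP tool results wrap their JSON payload in result.content[0].text."""
    return json.loads(response["result"]["content"][0]["text"])
```

An HTTP layer (with the `x-api-key` and `User-Agent` headers from the top of this file) would POST the envelope and hand the response dict to `parse_result`.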
@@ -14,7 +14,7 @@ connections in the Power Platform. They are required whenever you call
   "definition": { ... },
   "connectionReferences": {
     "shared_sharepointonline": {
-      "connectionName": "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be",
+      "connectionName": "shared-sharepointonl-eeeeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee",
       "id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
       "displayName": "SharePoint"
     },
@@ -33,7 +33,35 @@ These match the `connectionName` field inside each action's `host` block.

 ---

-## Finding Connection GUIDs
+## Finding Connection References

+Preferred method: call `list_live_connections` in the target environment. Use
+`search` to narrow results to the connector you need; newer MCP server versions
+return paste-ready templates.
+
+```python
+matches = mcp("list_live_connections",
+              environmentName=ENV,
+              search="shared_sharepointonline")
+
+conn = next(c for c in matches["connections"]
+            if c.get("overallStatus") == "Connected"
+            or c.get("statuses", [{}])[0].get("status") == "Connected")
+
+conn_refs = {
+    "shared_sharepointonline": conn.get("connectionReferenceTemplate") or {
+        "connectionName": conn["id"],
+        "id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline",
+        "source": "Invoker"
+    }
+}
+host = conn.get("hostTemplate") or {"connectionName": "shared_sharepointonline"}
+```
+
+Use `host` as the action-side `inputs.host`. Use `conn_refs` as
+`update_live_flow(connectionReferences=conn_refs)`.
+
+Fallback method: copy from an existing flow.
+
 Call `get_live_flow` on **any existing flow** that uses the same connection
 and copy the `connectionReferences` block. The GUID after the connector prefix is
@@ -43,7 +71,7 @@ the connection instance owned by the authenticating user.
 flow = mcp("get_live_flow", environmentName=ENV, flowName=EXISTING_FLOW_ID)
 conn_refs = flow["properties"]["connectionReferences"]
 # conn_refs["shared_sharepointonline"]["connectionName"]
-# → "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be"
+# → "shared-sharepointonl-eeeeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee"
 ```
 
 > ⚠️ Connection references are **user-scoped**. If a connection is owned
@@ -62,7 +90,7 @@ result = mcp("update_live_flow",
              definition=modified_definition,
              connectionReferences={
                  "shared_sharepointonline": {
-                     "connectionName": "shared-sharepointonl-62599557c-1f33-4aec-b4c0-a6e4afcae3be",
+                     "connectionName": "shared-sharepointonl-eeeeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee",
                      "id": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline"
                  }
              }
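The connection-reference hunks above can be read as one selection routine. A minimal sketch against the documented `list_live_connections` response shape — the sample payload, connector name, and IDs are illustrative, not real values:

```python
# Pick a Connected connection and build paste-ready connectionReferences,
# preferring the server-supplied template (v1.1.5+) with a hand-built fallback.
def build_connection_references(response: dict, connector: str) -> dict:
    conn = next(
        c for c in response["connections"]
        if c.get("overallStatus") == "Connected"
        or c.get("statuses", [{}])[0].get("status") == "Connected"
    )
    template = conn.get("connectionReferenceTemplate") or {
        "connectionName": conn["id"],
        "id": f"/providers/Microsoft.PowerApps/apis/{connector}",
        "source": "Invoker",
    }
    return {connector: template}

# Illustrative response: first connection is broken, second is usable.
sample = {
    "connections": [
        {"id": "shared-office365-0000", "overallStatus": "Failed"},
        {
            "id": "shared-office365-1111",
            "overallStatus": "Connected",
            "connectionReferenceTemplate": {
                "connectionName": "shared-office365-1111",
                "source": "Invoker",
                "id": "/providers/Microsoft.PowerApps/apis/shared_office365",
            },
        },
    ]
}
refs = build_connection_references(sample, "shared_office365")
```

The dict returned is shaped for `update_live_flow(connectionReferences=...)` as shown in the hunk above.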
@@ -2,9 +2,10 @@
 
 Response shapes and behavioral notes for the FlowStudio Power Automate MCP server.
 
-> **For tool names and parameters**: Always call `tools/list` on the server.
-> It returns the authoritative, up-to-date schema for every tool.
-> This document covers what `tools/list` does NOT tell you: **response shapes**
+> **For tool names and parameters**: Prefer `list_skills` and `tool_search`.
+> They return focused, up-to-date schemas without loading every MCP tool at once.
+> Use `tools/list` only as a low-level fallback when the meta-tools are not available.
+> This document covers what tool schemas do NOT tell you: **response shapes**
 > and **non-obvious behaviors** discovered through real usage.
 
 ---
@@ -14,11 +15,11 @@ Response shapes and behavioral notes for the FlowStudio Power Automate MCP serve
 | Priority | Source | Covers |
 |----------|--------|--------|
 | 1 | **Real API response** | Always trust what the server actually returns |
-| 2 | **`tools/list`** | Tool names, parameter names, types, required flags |
+| 2 | **`list_skills` / `tool_search`** | Tool names, parameter names, types, required flags |
 | 3 | **This document** | Response shapes, behavioral notes, gotchas |
 
-> If this document disagrees with `tools/list` or real API behavior,
-> the API wins. Update this document accordingly.
+> If this document disagrees with `tool_search`, `tools/list`, or real API
+> behavior, the API wins. Update this document accordingly.
 
 ---
 
@@ -63,9 +64,20 @@ Response: wrapper object with `connections` array.
       "id": "shared-office365-9f9d2c8e-55f1-49c9-9f9c-1c45d1fbbdce",
       "displayName": "user@contoso.com",
       "connectorName": "shared_office365",
+      "environment": "Default-26e65220-...",
       "createdBy": "User Name",
+      "authenticatedUser": "user@contoso.com",
+      "overallStatus": "Connected",
       "statuses": [{"status": "Connected"}],
-      "createdTime": "2024-03-12T21:23:55.206815Z"
+      "createdTime": "2024-03-12T21:23:55.206815Z",
+      "connectionReferenceTemplate": {
+        "connectionName": "shared-office365-9f9d2c8e-55f1-49c9-9f9c-1c45d1fbbdce",
+        "source": "Invoker",
+        "id": "/providers/Microsoft.PowerApps/apis/shared_office365"
+      },
+      "hostTemplate": {
+        "connectionName": "shared_office365"
+      }
     }
   ],
   "totalCount": 56,
@@ -78,11 +90,16 @@ Response: wrapper object with `connections` array.
 > **Key field**: `connectorName` maps to apiId:
 > `"/providers/Microsoft.PowerApps/apis/" + connectorName`
 >
-> Filter by status: `statuses[0].status == "Connected"`.
+> Filter by status: prefer `overallStatus == "Connected"` when present; otherwise
+> check `statuses[0].status == "Connected"`.
 >
-> **Note**: `tools/list` marks `environmentName` as optional, but the server
-> returns `MissingEnvironmentFilter` (HTTP 400) if you omit it. Always pass
-> `environmentName`.
+> For build workflows, pass `environmentName` to avoid using a connection from
+> the wrong environment. Omit it only when intentionally inventorying connections
+> across all environments.
+>
+> Pass `search=<connector or account>` to narrow output and receive
+> `connectionReferenceTemplate` plus `hostTemplate` values that can be copied
+> directly into `update_live_flow`.
 
 ### `list_store_connections`
 
@@ -112,6 +129,7 @@ Response: wrapper object with `flows` array.
     }
   ],
   "totalCount": 100,
+  "nextLink": null,
   "error": null
 }
 ```
@@ -119,6 +137,14 @@ Response: wrapper object with `flows` array.
 > Access via `result["flows"]`. `id` is a plain UUID --- use directly as `flowName`.
 >
 > `mode` indicates the access scope used (`"owner"` or `"admin"`).
+>
+> Parameters added in newer server versions:
+> - `search`: filter by display name server-side.
+> - `mode`: `owner` for flows owned by the MCP identity; `admin` for all flows
+>   visible to an admin account.
+> - `timeoutSeconds`: return partial results with `nextLink` instead of waiting
+>   on very large environments.
+> - `continuationUrl`: pass the previous `nextLink` to continue the same query.
 
 ### `list_store_flows`
 
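The `nextLink`/`continuationUrl` parameters above combine into a pagination loop. A sketch — `call` stands in for the foundation skill's `mcp()` helper (an assumption); any callable returning the documented `{"flows": [...], "nextLink": ...}` wrapper works:

```python
def iter_all_flows(call, environment: str):
    """Yield every flow, following nextLink until exhausted."""
    page = call("list_live_flows", environmentName=environment,
                timeoutSeconds=60)
    while True:
        yield from page["flows"]
        if not page.get("nextLink"):
            break
        page = call("list_live_flows", environmentName=environment,
                    continuationUrl=page["nextLink"])

# Fake transport for illustration: two pages of results.
pages = [
    {"flows": [{"id": "aaa"}, {"id": "bbb"}], "nextLink": "cursor-1"},
    {"flows": [{"id": "ccc"}], "nextLink": None},
]
def fake_call(tool, **kwargs):
    return pages[1] if kwargs.get("continuationUrl") else pages[0]

flow_ids = [f["id"] for f in iter_all_flows(fake_call, "Default-env")]
```

On a real server, swap `fake_call` for the `mcp()` helper; the loop shape is unchanged.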
@@ -217,12 +243,73 @@ Response:
 >
 > On create: `created` is the new flow GUID (string). On update: `created` is `false`.
 >
-> `description` is **always required** (create and update).
+> Required fields can vary by server version. Use `tool_search` with
+> `select:update_live_flow` before creating or patching a flow; if a description
+> is required, include either the new description or the existing one from
+> `get_live_flow`.
+>
+> The flow description is part of the workflow definition (`definition.description`),
+> not a top-level tool argument in current schemas.
 
 ### `add_live_flow_to_solution`
 
 Migrates a non-solution flow into a solution. Returns error if already in a solution.
 
+Use this after creating a Copilot Studio Skills-triggered flow that must be
+discoverable as an agent tool. Pass `solutionId` for the target solution. If the
+server supports omitting `solutionId`, it uses the environment's default solution;
+prefer an explicit unmanaged solution for production ALM.
+
+This tool changes solution membership only. It does not validate the trigger
+schema, publish a Copilot Studio agent, or prove that the flow is callable by the
+agent.
+
+---
+
+## Connector Operation Discovery
+
+### `describe_live_connector`
+
+Describes a connector/API and its operations. Use it before creating connector
+actions instead of guessing operation JSON.
+
+Common modes:
+
+| Call shape | Use |
+|---|---|
+| `search="send email"` without `connectorName` | Search operations across connectors |
+| `connectorName="shared_sharepointonline"` | Compact operation catalog for one connector |
+| `operationId="GetItems"` | Expanded schema for one operation |
+| `variant="flowbot_chat"` | Authored example for one operation variant |
+
+The operation detail can include:
+- `hint`: authored guidance from the connector hints table.
+- `exampleDefinition`: copy-ready action/trigger shape when available.
+- Dynamic metadata with `nextTool=get_live_dynamic_options` or
+  `nextTool=get_live_dynamic_properties`.
+
+### `get_live_dynamic_options`
+
+Resolves live dropdown/list options for connector parameters. Use this for
+IDs selected from lists, such as SharePoint sites/lists, Teams teams/channels,
+or other `x-ms-dynamic-list` / `x-ms-dynamic-values` parameters.
+
+Pass the `dynamicMetadata` object returned by `describe_live_connector`, the
+connection id from `list_live_connections`, and any already-resolved dependent
+parameters.
+
+### `get_live_dynamic_properties`
+
+Resolves live schema/field properties for connector parameters. Use this for
+dynamic field sets such as SharePoint list item columns after the site and list
+are known.
+
+Useful parameters:
+- `parameters`: dependent values, for example `{ "dataset": "<site-url>",
+  "table": "<list-id>" }`.
+- `propertyName`: request one field after inspecting the compact response.
+- `includeRaw`: include raw connector schema only when needed; it can be large.
+
 ---
 
 ## Run History & Monitoring
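The discovery chain the hunk adds (`describe_live_connector` → `get_live_dynamic_options`) can be sketched as code. This is a sketch under assumptions: `call` stands in for the `mcp()` helper, and the exact field names (`nextTool`, `dynamicMetadata`, `connectionId`) mirror the documented hints but should be confirmed via `tool_search`:

```python
def resolve_dynamic_param(call, connector, operation_id, connection_id, deps):
    # Step 1: describe the operation instead of guessing its JSON.
    detail = call("describe_live_connector",
                  connectorName=connector, operationId=operation_id)
    meta = detail.get("dynamicMetadata")
    # Step 2: if the schema points at a dynamic list, resolve it live.
    if meta and detail.get("nextTool") == "get_live_dynamic_options":
        return call("get_live_dynamic_options",
                    dynamicMetadata=meta,
                    connectionId=connection_id,
                    parameters=deps)
    return None  # nothing dynamic to resolve

# Fake transport for illustration.
def fake_call(tool, **kwargs):
    if tool == "describe_live_connector":
        return {"nextTool": "get_live_dynamic_options",
                "dynamicMetadata": {"operationId": "GetTables"}}
    return {"options": [{"value": "list-123", "displayName": "Tasks"}]}

options = resolve_dynamic_param(fake_call, "shared_sharepointonline",
                                "GetItems", "shared-sharepointonl-0000",
                                {"dataset": "https://contoso.sharepoint.com/sites/x"})
```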
@@ -298,8 +385,10 @@ Response: array of action detail objects.
 ]
 ```
 
-> **`actionName` is optional**: omit it to return ALL actions in the run;
-> provide it to return a single-element array for that action only.
+> **`actionName` is optional**: omit it to return top-level actions in the run.
+> Provide it for a specific action. If that action runs inside a foreach, the
+> tool can return every repetition of that action across iterations; pass
+> `iterationIndex` to pin to one zero-based iteration.
 >
 > Outputs can be very large (50 MB+) for bulk-data actions. Use 120s+ timeout.
 
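A common follow-up to the note above: fetch all top-level actions, then keep only the failures. A minimal sketch over sample records — the `status`/`code` field names are assumptions following the "array of action detail objects" shape:

```python
def failed_actions(actions: list) -> list:
    # Keep only actions whose status indicates a failure.
    return [a for a in actions if a.get("status") == "Failed"]

# Illustrative action detail objects from one run.
run_actions = [
    {"name": "Get_items", "status": "Succeeded"},
    {"name": "Send_an_email", "status": "Failed", "code": "ActionFailed"},
]
bad = failed_actions(run_actions)
```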
@@ -324,6 +413,9 @@ Cancels a `Running` flow run.
 
 ### `get_live_flow_http_schema`
 
+Deprecated. Prefer `get_live_flow` and inspect the `Request` trigger's
+`inputs.schema` plus any `Response` actions directly from the definition.
+
 Response keys:
 ```
 flowKey - Flow GUID
@@ -343,6 +435,9 @@ responseSchemaCount - Number of Response actions that define output schemas
 
 ### `get_live_flow_trigger_url`
 
+Deprecated. Prefer `trigger_live_flow` when you need to invoke an HTTP-triggered
+flow; it fetches the current callback URL internally.
+
 Returns the signed callback URL for HTTP-triggered flows. Response includes
 `flowKey`, `triggerName`, `triggerType`, `triggerKind`, `triggerMethod`, `triggerUrl`.
 
@@ -464,15 +559,18 @@ List all Power Apps canvas apps from the cache.
 ## Behavioral Notes
 
 Non-obvious behaviors discovered through real API usage. These are things
-`tools/list` cannot tell you.
+tool schemas cannot tell you.
 
 ### `get_live_flow_run_action_outputs`
-- **`actionName` is optional**: omit to get all actions, provide to get one.
-  This changes the response from N elements to 1 element (still an array).
+- **`actionName` is optional**: omit to get top-level actions, provide to get one
+  action. For actions inside foreach loops, a named action may return multiple
+  repetitions; use `iterationIndex` to pin to one iteration.
 - Outputs can be 50 MB+ for bulk-data actions --- always use 120s+ timeout.
 
 ### `update_live_flow`
-- `description` is **always required** (create and update modes).
+- Required fields can vary by server version; confirm with `tool_search`
+  (`select:update_live_flow`) before create/update. If `description` is required,
+  preserve the existing description when patching.
 - `error` key is **always present** in response --- `null` means success.
   Do NOT check `if "error" in result`; check `result.get("error") is not None`.
 - On create, `created` = new flow GUID (string). On update, `created` = `false`.
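The `error`-key gotcha above is worth pinning down in code, since membership tests always succeed when the key is always present. A minimal sketch; the sample results are illustrative:

```python
def succeeded(result: dict) -> bool:
    # Wrong: `"error" in result` is True even on success — the key is always there.
    # Right: check whether its value is None.
    return result.get("error") is None

ok = succeeded({"created": "4e2b...", "error": None})
bad = succeeded({"created": False, "error": {"code": "InvalidDefinition"}})
```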
@@ -495,5 +593,9 @@ Non-obvious behaviors discovered through real API usage. These are things
 - `poster`: `"Flow bot"` for Workflows bot identity, `"User"` for user identity.
 
 ### `list_live_connections`
+- For build workflows, pass `environmentName`; omitting it inventories
+  connections across environments.
+- Use `search=<connector/account>` to get smaller output and paste-ready
+  `connectionReferenceTemplate` / `hostTemplate` values.
 - `id` is the value you need for `connectionName` in `connectionReferences`.
 - `connectorName` maps to apiId: `"/providers/Microsoft.PowerApps/apis/" + connectorName`.
@@ -1,27 +1,12 @@
 ---
 name: flowstudio-power-automate-monitoring
 description: >-
-  **Pro+ subscription required.** Tenant-wide Power Automate flow health
-  monitoring, failure rate analytics, and asset inventory using the FlowStudio
-  MCP cached store. Load this skill ONLY for tenant-wide aggregated views — not
-  for listing flows in a single environment or debugging a specific run (use
-  power-automate-mcp or power-automate-debug for those). Not the same as the
-  server's `monitor-flow` tool bundle (`tool_search query: "skill:monitor-flow"`)
-  — that bundle is for runtime control of a single flow (start/stop/trigger/
-  cancel/resubmit); this skill is for tenant-wide health analytics over the
-  cached store.
-  Load when asked to: monitor tenant health, get aggregated failure rates over
-  a time window, review tenant-wide error trends, find inactive makers across
-  the tenant, inventory all Power Apps in the tenant, compute governance scores,
-  generate a compliance report, or run a tenant-wide health overview. Requires
-  a FlowStudio for Teams or MCP Pro+ subscription — see https://mcp.flowstudio.app
-metadata:
-  openclaw:
-    requires:
-      env:
-        - FLOWSTUDIO_MCP_TOKEN
-      primaryEnv: FLOWSTUDIO_MCP_TOKEN
-  homepage: https://mcp.flowstudio.app
+  Pro+ subscription required. Tenant-wide Power Automate monitoring using the
+  FlowStudio MCP cached store: failure rates, run-health trends, maker/app
+  inventory, inactive owners, and compliance/health reports. Use only for
+  aggregated tenant views. For one environment, one flow, run control, or
+  root-cause debugging, use flowstudio-power-automate-mcp, flowstudio-power-automate-debug, or the
+  server monitor-flow bundle. Requires FlowStudio for Teams or MCP Pro+.
 ---
 
 # Power Automate Monitoring with FlowStudio MCP
@@ -39,13 +24,13 @@ enriched with governance metadata and remediation hints.
 > 2. Tell the user this feature requires a Pro+ subscription
 > 3. Link them to https://mcp.flowstudio.app/pricing
 > 4. If their question can be answered with live tools (e.g. "list flows in
->    one environment"), offer to use the `power-automate-mcp` skill instead
+>    one environment"), offer to use the `flowstudio-power-automate-mcp` skill instead
 >
 > **Discovery:** load tool schemas via `tool_search` rather than `tools/list` —
 > call with `query: "select:list_store_flows,get_store_flow_summary"` for the
 > common monitoring tools, or load the full set with `query: "skill:governance"`
 > (the server's governance bundle covers most monitoring reads too — this skill
-> and `power-automate-governance` share the underlying tool family). This skill
+> and `flowstudio-power-automate-governance` share the underlying tool family). This skill
 > covers response shapes, behavioral notes, and workflow patterns — things
 > `tool_search` cannot tell you. If this document disagrees with a real API
 > response, the API wins.
@@ -62,8 +47,8 @@ the results. There are two levels:
   etc.). Environments, apps, connections, and makers are also scanned.
 - **Monitored flows** (`monitor: true`) additionally get per-run detail:
   individual run records with status, duration, failed action names, and
-  remediation hints. This is what populates `get_store_flow_runs`,
-  `get_store_flow_errors`, and `get_store_flow_summary`.
+  remediation hints. This is what populates `get_store_flow_runs` and
+  `get_store_flow_summary`.
 
 **Data freshness:** Check the `scanned` field on `get_store_flow` to see when
 a flow was last scanned. If stale, the scanning pipeline may not be running.
@@ -83,12 +68,9 @@ rule management to auto-configure failure alerts on critical flows.
 | Tool | Purpose |
 |---|---|
 | `list_store_flows` | List flows with failure rates and monitoring filters |
-| `get_store_flow` | Full cached record: run stats, owners, tier, connections, definition |
+| `get_store_flow` | Full cached record: run stats, owners, tier, connections, definition (`triggerUrl` field included) |
 | `get_store_flow_summary` | Aggregated run stats: success/fail rate, avg/max duration |
-| `get_store_flow_runs` | Per-run history with duration, status, failed actions, remediation |
-| `get_store_flow_errors` | Failed-only runs with action names and remediation hints |
-| `get_store_flow_trigger_url` | Trigger URL from cache (instant, no PA API call) |
-| `set_store_flow_state` | Start or stop a flow and sync state back to cache |
+| `get_store_flow_runs` | Per-run history with duration, status, failed actions, remediation (filter `status="Failed"` for errors-only view) |
 | `update_store_flow` | Set monitor flag, notification rules, tags, governance metadata |
 | `list_store_environments` | All Power Platform environments |
 | `list_store_connections` | All connections |
@@ -96,6 +78,11 @@ rule management to auto-configure failure alerts on critical flows.
 | `get_store_maker` | Maker detail: flow/app counts, licenses, account status |
 | `list_store_power_apps` | All Power Apps canvas apps |
 
+> For start/stop, use `set_live_flow_state` from the `monitor-flow` bundle
+> (`tool_search query: "select:set_live_flow_state"`) — the cache resyncs on
+> the next scan. The previous `set_store_flow_state` convenience wrapper is
+> deprecated.
+
 ---
 
 ## Store vs Live
@@ -104,7 +91,7 @@ rule management to auto-configure failure alerts on critical flows.
 |---|---|---|
 | How many flows are failing? | `list_store_flows` | — |
 | What's the fail rate over 30 days? | `get_store_flow_summary` | — |
-| Show error history for a flow | `get_store_flow_errors` | — |
+| Show error history for a flow | `get_store_flow_runs` (filter `status="Failed"`) | — |
 | Who built this flow? | `get_store_flow` → parse `owners` | — |
 | Read the full flow definition | `get_store_flow` has it (JSON string) | `get_live_flow` (structured) |
 | Inspect action inputs/outputs from a run | — | `get_live_flow_run_action_outputs` |
@@ -113,9 +100,9 @@ rule management to auto-configure failure alerts on critical flows.
 > Store tools answer "what happened?" and "how healthy is it?"
 > Live tools answer "what exactly went wrong?" and "fix it now."
 
-> If `get_store_flow_runs`, `get_store_flow_errors`, or `get_store_flow_summary`
-> return empty results, check: (1) is `monitor: true` on the flow? and
-> (2) is the `scanned` field recent? Use `get_store_flow` to verify both.
+> If `get_store_flow_runs` or `get_store_flow_summary` return empty results,
+> check: (1) is `monitor: true` on the flow? and (2) is the `scanned` field
+> recent? Use `get_store_flow` to verify both.
 
 ---
 
@@ -135,7 +122,7 @@ Direct array. Filters: `monitor` (bool), `rule_notify_onfail` (bool),
   "triggerType": "Request",
   "triggerUrl": "https://...",
   "tags": ["#operations", "#sensitive"],
-  "environmentName": "Default-26e65220-...",
+  "environmentName": "Default-aaaaaaaa-...",
   "monitor": true,
   "runPeriodFailRate": 0.012,
   "runPeriodTotal": 82,
@@ -199,52 +186,24 @@ Aggregated stats over a time window (default: last 7 days).
 > Returns all zeros when no run data exists for this flow in the window.
 > Use `startTime` and `endTime` (ISO 8601) parameters to change the window.
 
-### `get_store_flow_runs` / `get_store_flow_errors`
+### `get_store_flow_runs`
 
-Direct array. `get_store_flow_errors` filters to `status=Failed` only.
-Parameters: `startTime`, `endTime`, `status` (array: `["Failed"]`,
-`["Succeeded"]`, etc.).
+Direct array of cached run records. Parameters: `startTime`, `endTime`,
+`status` (array — pass `["Failed"]` for an errors-only view, `["Succeeded"]`,
+or omit for all).
 
-> Both return `[]` when no run data exists.
+> Returns `[]` when no run data exists in the window.
 
-### `get_store_flow_trigger_url`
+### Trigger URL
 
-```json
-{
-  "flowKey": "Default-<envGuid>.<flowGuid>",
-  "displayName": "Stripe subscription updated",
-  "triggerType": "Request",
-  "triggerKind": "Http",
-  "triggerUrl": "https://..."
-}
-```
-
-> `triggerUrl` is null for non-HTTP triggers.
+Read the `triggerUrl` field directly from `get_store_flow` (cached) or
+`get_live_flow` (live). It is `null` for non-HTTP triggers.
 
-### `set_store_flow_state`
+### Starting / stopping a flow
 
-Calls the live PA API then syncs state to the cache and returns the
-full updated record.
-
-```json
-{
-  "flowKey": "Default-<envGuid>.<flowGuid>",
-  "requestedState": "Stopped",
-  "currentState": "Stopped",
-  "flow": { /* full gFlows record, same shape as get_store_flow */ }
-}
-```
-
-> The embedded `flow` object reflects the new state immediately — no
-> follow-up `get_store_flow` call needed. Useful for governance workflows
-> that stop a flow and then read its tags/monitor/owner metadata in the
-> same turn.
->
-> Functionally equivalent to `set_live_flow_state` for changing state,
-> but `set_live_flow_state` only returns `{flowName, environmentName,
-> requestedState, actualState}` and doesn't sync the cache. Prefer
-> `set_live_flow_state` when you only need to toggle state and don't
-> care about cache freshness.
+Use `set_live_flow_state` from the `monitor-flow` server bundle. The cache
+catches up on the next daily scan; if you need cache freshness sooner, call
+`get_live_flow` after the state change to confirm and let the next scan sync.
 
 ### `update_store_flow`
 
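With `get_store_flow_errors` removed, an errors-only view is just a status filter over `get_store_flow_runs`, as the hunk above shows. A sketch over sample cached run records — the field names follow the documented store shape, and the sample data is illustrative:

```python
def failure_summary(runs: list) -> dict:
    # Emulate the old errors-only view: keep Failed runs, compute the rate.
    failed = [r for r in runs if r.get("status") == "Failed"]
    total = len(runs)
    return {
        "total": total,
        "failed": len(failed),
        "failRate": len(failed) / total if total else 0.0,
    }

runs = [
    {"status": "Succeeded"},
    {"status": "Failed", "failedAction": "Send_an_email"},
    {"status": "Succeeded"},
    {"status": "Failed", "failedAction": "Parse_JSON"},
]
stats = failure_summary(runs)
```

On a live server the same filter can run server-side: `get_store_flow_runs(status=["Failed"])`.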
@@ -265,7 +224,7 @@ Direct array.
 ```json
 [
   {
-    "id": "Default-26e65220-...",
+    "id": "Default-aaaaaaaa-...",
     "displayName": "Flow Studio (default)",
     "sku": "Default",
     "type": "NotSpecified",
@@ -306,8 +265,8 @@ Direct array.
 [
   {
     "id": "09dbe02f-...",
-    "displayName": "Catherine Han",
-    "mail": "catherine.han@flowstudio.app",
+    "displayName": "Sample Maker",
+    "mail": "maker@contoso.com",
     "deleted": false,
     "ownerFlowCount": 199,
     "ownerAppCount": 209,
@@ -365,7 +324,7 @@ Direct array.
 ```
 1. get_store_flow → check scanned (freshness), runPeriodFailRate, runPeriodTotal
 2. get_store_flow_summary → aggregated stats with optional time window
-3. get_store_flow_errors → per-run failure detail with remediation hints
+3. get_store_flow_runs(status=["Failed"]) → per-run failure detail with remediation hints
 4. If deeper diagnosis needed → switch to live tools:
    get_live_flow_runs → get_live_flow_run_action_outputs
 ```
@@ -384,7 +343,7 @@ Direct array.
 1. list_store_flows
 2. Flag flows with runPeriodFailRate > 0.2 and runPeriodTotal >= 3
 3. Flag monitored flows with state="Stopped" (may indicate auto-suspension)
-4. For critical failures → get_store_flow_runs(status=["Failed"]) for remediation hints
+4. For critical failures → get_store_flow_runs(status=["Failed"]) for remediation hints
 ```
 
 ### Maker audit
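The failing-flow sweep above translates directly into code. A sketch over sample `list_store_flows` records — thresholds match the workflow text, and the record fields mirror the documented store shape:

```python
def flag_flows(flows: list) -> dict:
    # Step 2: high fail rate with enough runs to be meaningful.
    failing = [f for f in flows
               if f.get("runPeriodFailRate", 0) > 0.2
               and f.get("runPeriodTotal", 0) >= 3]
    # Step 3: monitored flows that are stopped (possible auto-suspension).
    suspended = [f for f in flows
                 if f.get("monitor") and f.get("state") == "Stopped"]
    return {"failing": failing, "suspended": suspended}

flows = [
    {"id": "f1", "runPeriodFailRate": 0.5, "runPeriodTotal": 10, "state": "Started"},
    {"id": "f2", "runPeriodFailRate": 0.9, "runPeriodTotal": 2, "state": "Started"},
    {"id": "f3", "runPeriodFailRate": 0.0, "runPeriodTotal": 40,
     "monitor": True, "state": "Stopped"},
]
flagged = flag_flows(flows)
```

Note `f2` is not flagged: a 90% fail rate over only 2 runs fails the `runPeriodTotal >= 3` guard.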
@@ -408,7 +367,7 @@ Direct array.
 
 ## Related Skills
 
-- `power-automate-mcp` — Foundation skill: connection setup, MCP helper, tool discovery
-- `power-automate-debug` — Deep diagnosis with action-level inputs/outputs (live API)
-- `power-automate-build` — Build and deploy flow definitions
-- `power-automate-governance` — Governance metadata, tagging, notification rules, CoE patterns
+- `flowstudio-power-automate-mcp` — Foundation skill: connection setup, MCP helper, tool discovery
+- `flowstudio-power-automate-debug` — Deep diagnosis with action-level inputs/outputs (live API)
+- `flowstudio-power-automate-build` — Build and deploy flow definitions
+- `flowstudio-power-automate-governance` — Governance metadata, tagging, notification rules, CoE patterns
Block a user