Update FlowStudio Power Automate skills (#1664)

* feat(flowstudio): align Power Automate skills with MCP server v1.1.6

Foundation skill (flowstudio-power-automate-mcp) rewritten to use the
server's new tool_search and list_skills meta-tools (v1.1.5+) for
discovery instead of cataloging every tool by hand. Cut from 519 to
295 lines. New "Which Skill to Use When" intent-keyed decision tree
points at the four specialized skills.

Build/debug/governance/monitoring updated for use-case framing. Tools
that genuinely cross tiers (e.g. debug skill borrowing
get_store_flow_summary) are correct when the workflow needs them — the
split between skills is by use-case intent, not by tool partition.

Build skill: new Step 3a Resolving Dynamic Connector Values covers
get_live_dynamic_options outer-parameter auto-bridge (v1.1.6+) and the
AadGraph user-picker fallback via shared_office365users.SearchUserV2
(replaces broken builtInOperation:AadGraph.GetUsers).

Debug skill: Outlook user-picker failure note pointing at the fallback.

Monitoring skill description disambiguates from the server's monitor-flow
tool bundle (runtime control of a single flow) — this skill is
tenant-wide health analytics over the cached store.

All 5 skills validate via npm run skill:validate; line endings LF only;
codespell clean; auto-regenerated docs/README.skills.md included.

* fix(flowstudio): remove deprecated tool references

The v1.1.5 MCP server release marked 5 tools [DEPRECATED] but the
previous alignment commit missed them. Replacements per server source:

- get_live_flow_http_schema → read trigger.inputs.schema from get_live_flow
- get_live_flow_trigger_url → read trigger.metadata.callbackUrl from get_live_flow
- get_store_flow_trigger_url → get_store_flow.triggerUrl field
- get_store_flow_errors → get_store_flow_runs(status=["Failed"])
- set_store_flow_state → set_live_flow_state

Touches build, debug, governance, monitoring SKILL.md and the
foundation skill's tool-reference.md. Remaining mentions of the
deprecated names are intentional — they live in deprecation notices
naming the obsolete wrapper alongside its replacement.
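
The replacement mapping above can be sketched as a small dispatch shim. This is an illustration only — `mcp()` is stubbed here to echo its arguments, standing in for the skills' real MCP call helper, and the `status=["Failed"]` default mirrors the `get_store_flow_errors` → `get_store_flow_runs` row:

```python
# Deprecated FlowStudio tool names and their v1.1.5+ replacements,
# as listed above. (The two get_live_flow_* wrappers both collapse
# into reading fields from get_live_flow.)
DEPRECATED = {
    "get_live_flow_http_schema": "get_live_flow",
    "get_live_flow_trigger_url": "get_live_flow",
    "get_store_flow_trigger_url": "get_store_flow",
    "get_store_flow_errors": "get_store_flow_runs",
    "set_store_flow_state": "set_live_flow_state",
}

def mcp(tool, **params):               # stub for illustration only
    return {"tool": tool, "params": params}

def call(tool, **params):
    """Dispatch a tool call, upgrading deprecated wrappers first."""
    if tool == "get_store_flow_errors":
        # Failed runs now come from the runs tool with a status filter.
        params.setdefault("status", ["Failed"])
    tool = DEPRECATED.get(tool, tool)
    return mcp(tool, **params)

result = call("get_store_flow_errors", flowName="abc")
```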

* Update FlowStudio Power Automate skills

* Cover latest FlowStudio MCP actions

* Trim FlowStudio Power Automate skills

* Number FlowStudio build workflow steps
Catherine Han, 2026-05-11 11:28:29 +10:00, committed by GitHub
parent cf6bbba6fc, commit e67c66c441
17 changed files with 730 additions and 775 deletions
+176 -204
@@ -9,13 +9,6 @@ description: >-
JSON, update an existing flow's actions, patch a flow definition, add actions
to a flow, wire up connections, or generate a workflow definition from scratch.
Requires a FlowStudio MCP subscription — see https://mcp.flowstudio.app
metadata:
openclaw:
requires:
env:
- FLOWSTUDIO_MCP_TOKEN
primaryEnv: FLOWSTUDIO_MCP_TOKEN
homepage: https://mcp.flowstudio.app
---
# Build & Deploy Power Automate Flows with FlowStudio MCP
@@ -24,18 +17,28 @@ Step-by-step guide for constructing and deploying Power Automate cloud flows
programmatically through the FlowStudio MCP server.
**Prerequisite**: A FlowStudio MCP server must be reachable with a valid JWT.
See the `power-automate-mcp` skill for connection setup.
See the `flowstudio-power-automate-mcp` skill for connection setup.
Subscribe at https://mcp.flowstudio.app
Workflow:
1. Load current build tools.
2. Check for an existing flow.
3. Resolve connection references.
4. Build the definition.
5. Deploy.
6. Verify.
7. Test.
---
## Source of Truth
> **Always call `tools/list` first** to confirm available tool names and their
> parameter schemas. Tool names and parameters may change between server versions.
> **Always call `list_skills` / `tool_search` first** to confirm available tool
> names and parameter schemas. Tool names and parameters may change between
> server versions.
> This skill covers response shapes, behavioral notes, and build patterns —
> things `tools/list` cannot tell you. If this document disagrees with `tools/list`
> or a real API response, the API wins.
> things tool schemas cannot tell you. If this document disagrees with
> `tool_search` or a real API response, the API wins.
---
@@ -68,14 +71,38 @@ ENV = "<environment-id>" # e.g. Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
---
## Step 1 — Safety Check: Does the Flow Already Exist?
## 0. Load the Current Build Tools
For a brand-new flow, load the server's `create-flow` bundle. For editing an
existing flow, load `build-flow`. This keeps the agent aligned with the MCP
server's current schema before constructing JSON.
```python
schemas = mcp("tool_search", query="skill:create-flow")
# Includes list_live_environments, list_live_connections,
# describe_live_connector, get_live_dynamic_options, update_live_flow.
```
If you need a tool outside the bundle, load it explicitly:
```python
mcp("tool_search", query="select:get_live_dynamic_properties")
```
---
## 1. Safety Check: Does the Flow Already Exist?
Always look before you build to avoid duplicates:
```python
results = mcp("list_live_flows", environmentName=ENV)
results = mcp("list_live_flows",
environmentName=ENV,
mode="owner",
search="My New Flow",
top=20)
# list_live_flows returns { "flows": [...] }
# list_live_flows returns { "flows": [...], "mode": "...", ... }
matches = [f for f in results["flows"]
if "My New Flow".lower() in f["displayName"].lower()]
@@ -89,9 +116,14 @@ else:
FLOW_ID = None
```
For very large environments, `list_live_flows` may return a continuation URL.
Pass it back as `continuationUrl` with the same `mode` to retrieve the next
batch. Use `mode="admin"` only when the user needs all environment flows and
the MCP identity has admin rights.
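
The continuation pattern described above can be sketched as a loop. `mcp()` is stubbed here with a canned two-page response; the `continuationUrl` parameter and `flows` field names are taken from the text, and the page shapes are illustrative:

```python
# Canned two-page response standing in for a large environment.
PAGES = [
    {"flows": [{"displayName": "A"}], "continuationUrl": "next-1"},
    {"flows": [{"displayName": "B"}], "continuationUrl": None},
]

def mcp(tool, **params):               # stub for illustration only
    return PAGES[1] if params.get("continuationUrl") else PAGES[0]

flows, cont = [], None
while True:
    page = mcp("list_live_flows", environmentName="ENV",
               mode="owner",           # keep the same mode on every page
               continuationUrl=cont)
    flows.extend(page["flows"])
    cont = page.get("continuationUrl")
    if not cont:
        break
```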
---
## Step 2 — Obtain Connection References
## 2. Obtain Connection References
Every connector action needs a `connectionName` that points to a key in the
flow's `connectionReferences` map. That key links to an authenticated connection
@@ -101,103 +133,72 @@ in the environment.
> user for connection names or GUIDs. The API returns the exact values you need.
> Only prompt the user if the API confirms that required connections are missing.
### 2a — Always call `list_live_connections` first
### 2a — Find active connections
```python
conns = mcp("list_live_connections", environmentName=ENV)
# Filter to connected (authenticated) connections only
active = [c for c in conns["connections"]
if c["statuses"][0]["status"] == "Connected"]
conn_map = {c["connectorName"]: c["id"] for c in active}
```
```python
# Build a lookup: connectorName → connectionName (id)
conn_map = {}
for c in active:
    conn_map[c["connectorName"]] = c["id"]

print(f"Found {len(active)} active connections")
print("Available connectors:", list(conn_map.keys()))
```

For a known connector, pass `search` to reduce output and get paste-ready
`connectionReferenceTemplate` and `hostTemplate` values:
```python
sp_conns = mcp("list_live_connections",
environmentName=ENV,
search="shared_sharepointonline")
```
### 2b — Determine which connectors the flow needs
Based on the flow you are building, identify which connectors are required.
Common connector API names:
Common connector API names: SharePoint `shared_sharepointonline`, Outlook
`shared_office365`, Teams `shared_teams`, Approvals `shared_approvals`,
OneDrive `shared_onedriveforbusiness`, Excel `shared_excelonlinebusiness`,
Dataverse `shared_commondataserviceforapps`, Forms `shared_microsoftforms`.
| Connector | API name |
|---|---|
| SharePoint | `shared_sharepointonline` |
| Outlook / Office 365 | `shared_office365` |
| Teams | `shared_teams` |
| Approvals | `shared_approvals` |
| OneDrive for Business | `shared_onedriveforbusiness` |
| Excel Online (Business) | `shared_excelonlinebusiness` |
| Dataverse | `shared_commondataserviceforapps` |
| Microsoft Forms | `shared_microsoftforms` |
> **Flows that need NO connections** (e.g. Recurrence + Compose + HTTP only)
> can skip the rest of Step 2 — omit `connectionReferences` from the deploy call.
Flows that need no connectors, such as Recurrence + Compose + HTTP only, can
omit `connectionReferences`.
### 2c — If connections are missing, guide the user
```python
connectors_needed = ["shared_sharepointonline", "shared_office365"] # adjust per flow
missing = [c for c in connectors_needed if c not in conn_map]
if not missing:
print("✅ All required connections are available — proceeding to build")
else:
# ── STOP: connections must be created interactively ──
# Connections require OAuth consent in a browser — no API can create them.
print("⚠️ The following connectors have no active connection in this environment:")
for c in missing:
friendly = c.replace("shared_", "").replace("onlinebusiness", " Online (Business)")
print(f"{friendly} (API name: {c})")
print()
print("Please create the missing connections:")
print(" 1. Open https://make.powerautomate.com/connections")
print(" 2. Select the correct environment from the top-right picker")
print(" 3. Click '+ New connection' for each missing connector listed above")
print(" 4. Sign in and authorize when prompted")
print(" 5. Tell me when done — I will re-check and continue building")
# DO NOT proceed to Step 3 until the user confirms.
# After user confirms, re-run Step 2a to refresh conn_map.
if missing:
# STOP: connections require browser OAuth consent.
# Ask the user to create the missing connector connections in the
# selected environment, then re-run list_live_connections.
raise Exception(f"Missing active connections: {missing}")
```
### 2d — Build the connectionReferences block
Only execute this after 2c confirms no missing connectors:
```python
connection_references = {}
host_templates = {}
for connector in connectors_needed:
    c = next(c for c in active if c["connectorName"] == connector)
    connection_references[connector] = c.get("connectionReferenceTemplate") or {
        "connectionName": c["id"],  # the connection id from list_live_connections
"source": "Invoker",
"id": f"/providers/Microsoft.PowerApps/apis/{connector}"
}
host_templates[connector] = c.get("hostTemplate") or {
"connectionName": connector
}
```
> **IMPORTANT — `host.connectionName` in actions**: When building actions in
> Step 3, set `host.connectionName` to the **key** from this map (e.g.
> `shared_teams`), NOT the connection GUID. The GUID only goes inside the
> `connectionReferences` entry. The engine matches the action's
> `host.connectionName` to the key to find the right connection.
> **Alternative** — if you already have a flow using the same connectors,
> you can extract `connectionReferences` from its definition:
> ```python
> ref_flow = mcp("get_live_flow", environmentName=ENV, flowName="<existing-flow-id>")
> connection_references = ref_flow["properties"]["connectionReferences"]
> ```
See the `power-automate-mcp` skill's **connection-references.md** reference
for the full connection reference structure.
In Step 3 action JSON, `inputs.host.connectionName` must be the map key such as
`shared_teams`, not the GUID. The GUID belongs only inside the
`connectionReferences[connector].connectionName` value. If an existing flow uses
the same connectors, you may also copy its `properties.connectionReferences`
from `get_live_flow`.
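
The key-vs-GUID rule can be shown with a minimal pair (the GUID, operation name, and parameter values are illustrative placeholders):

```python
# The connectionReferences entry carries the connection GUID;
# the action's inputs.host.connectionName carries only the map key.
connection_references = {
    "shared_teams": {
        "connectionName": "1a2b3c4d-0000-0000-0000-000000000000",  # GUID
        "source": "Invoker",
        "id": "/providers/Microsoft.PowerApps/apis/shared_teams",
    }
}

post_message = {
    "type": "OpenApiConnection",
    "inputs": {
        "host": {
            "connectionName": "shared_teams",  # the KEY, never the GUID
            "operationId": "PostMessageToConversation",
            "apiId": "/providers/Microsoft.PowerApps/apis/shared_teams",
        },
        "parameters": {},
    },
}
```

The engine resolves the action at run time by matching `host.connectionName` against the `connectionReferences` keys.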
---
## Step 3 — Build the Flow Definition
## 3. Build the Flow Definition
Construct the definition object. See [flow-schema.md](references/flow-schema.md)
for the full schema and these action pattern references for copy-paste templates:
@@ -217,62 +218,72 @@ definition = {
> See [build-patterns.md](references/build-patterns.md) for complete, ready-to-use
> flow definitions covering Recurrence+SharePoint+Teams, HTTP triggers, and more.
---
### Discover connector operations before guessing JSON
## Step 3a — Resolving Dynamic Connector Values
When an action input needs a value picked from a connector dropdown (e.g. a
SharePoint list ID, a Dataverse table name, a user's Azure AD UPN), use
`get_live_dynamic_options` to resolve it via MCP rather than hardcoding GUIDs.
For connector-backed triggers/actions, prefer the live connector describer over
hand-written shapes. It can return authored hints, canonical examples, variant
keys, inputs/outputs, and dynamic metadata pointers.
```python
# Resolve a SharePoint list by site
opts = mcp("get_live_dynamic_options",
# Search across connectors when you know the user's intent but not the API.
matches = mcp("describe_live_connector",
environmentName=ENV,
search="send email",
top=5)
# Describe a specific operation before copying an exampleDefinition.
op = mcp("describe_live_connector",
environmentName=ENV,
connectorName="shared_office365",
operationId="SendEmailV2")
print(op.get("hint"))
```
When an operation has multiple authored variants, request the variant the flow
needs:
```python
teams_chat = mcp("describe_live_connector",
environmentName=ENV,
connectorName="shared_teams",
operationId="PostMessageToConversation",
variant="flowbot_chat")
```
When the operation description says a parameter has dynamic options or dynamic
properties, call the indicated next tool:
```python
sp_op = mcp("describe_live_connector",
environmentName=ENV,
connectorName="shared_sharepointonline",
operationId="GetTables",
parameters={"dataset": "https://contoso.sharepoint.com/sites/HR"})
    operationId="GetItems")
# opts["value"] → [{"Name": "<list-guid>", "DisplayName": "Employees"}, ...]
```
> **Outer-parameter auto-bridge** (server v1.1.6+): you can pass arbitrary outer
> parameters directly in `parameters` — the server now synthesizes the
> `parameterReference` mapping that PA's listEnum requires. Before 1.1.6 you had
> to declare `dynamicMetadata.parameters: {paramName: {parameterReference: "name"}}`
> manually or get `IncorrectDynamicInvokeParameter`. This makes it practical to
> invoke arbitrary connector operations through the dynamic-options pipeline
> (e.g. `shared_office365users.SearchUserV2` for AAD user lookup).
### AadGraph user-picker fallback
For Outlook actions like `GetEmailsV3` (parameters `mailboxAddress`, `to`, `cc`,
`from`), PA's listEnum uses `builtInOperation:AadGraph.GetUsers` — which is
broken and returns `DynamicListValuesUndefinedOrInvalid` for every call.
`describe_live_connector` (v1.1.6+) detects these parameters and returns a
structured `fallback` field on each affected parameter pointing at a working
alternative. **Use `shared_office365users.SearchUserV2`** to resolve the same
AAD user shape `{value: [{id, displayName, mail, userPrincipalName, ...}]}`:
```python
# Borrow a shared_office365users connection (any active one will do)
conn = next(c for c in conn_map if "office365users" in c)
users = mcp("get_live_dynamic_options",
    environmentName=ENV,
    connectorName="shared_office365users",
    connectionName=conn_map[conn],  # see Step 2a
    operationId="SearchUserV2",
    parameters={"searchTerm": "john", "top": 10})
# users["value"] → [{"Id": "...", "DisplayName": "John Smith", "Mail": "..."}, ...]

sites = mcp("get_live_dynamic_options",
    environmentName=ENV,
    connectorName="shared_sharepointonline",
    connectionName=conn_map["shared_sharepointonline"],
    operationId="GetItems",
    parameterName="dataset",
    dynamicMetadata=sp_op["dynamicParameters"]["dataset"])
fields = mcp("get_live_dynamic_properties",
    environmentName=ENV,
    connectorName="shared_sharepointonline",
    connectionName=conn_map["shared_sharepointonline"],
    operationId="GetItems",
    parameterName="item",
    parameters={"dataset": "<site-url>", "table": "<list-id>"},
    dynamicMetadata=sp_op["dynamicProperties"]["item"])
```
Then plug the resolved `Mail` value into the Outlook action's parameter — no
need to call `AadGraph.GetUsers` directly.
Use dynamic options for dropdown IDs such as SharePoint sites/lists and Teams
teams/channels. Use dynamic properties for schema/field shapes such as
SharePoint list item columns.
---
## Step 4 — Deploy (Create or Update)
## 4. Deploy (Create or Update)
`update_live_flow` handles both creation and updates in a single tool.
@@ -281,13 +292,14 @@ need to call `AadGraph.GetUsers` directly.
Omit `flowName` — the server generates a new GUID and creates via PUT:
```python
definition["description"] = "Weekly SharePoint → Teams notification flow, built by agent"
result = mcp("update_live_flow",
environmentName=ENV,
# flowName omitted → creates a new flow
definition=definition,
connectionReferences=connection_references,
displayName="Overdue Invoice Notifications",
description="Weekly SharePoint → Teams notification flow, built by agent"
displayName="Overdue Invoice Notifications"
)
if result.get("error") is not None:
@@ -303,13 +315,16 @@ else:
Provide `flowName` to PATCH:
```python
definition["description"] = (
"Updated by agent on " + __import__('datetime').datetime.utcnow().isoformat()
)
result = mcp("update_live_flow",
environmentName=ENV,
flowName=FLOW_ID,
definition=definition,
connectionReferences=connection_references,
displayName="My Updated Flow",
description="Updated by agent on " + __import__('datetime').datetime.utcnow().isoformat()
displayName="My Updated Flow"
)
if result.get("error") is not None:
@@ -321,7 +336,9 @@ else:
> ⚠️ `update_live_flow` always returns an `error` key.
> `null` (Python `None`) means success — do not treat the presence of the key as failure.
>
> ⚠️ `description` is required for both create and update.
> ⚠️ Flow description lives at `definition["description"]`. The current server
> appends `#flowstudio-mcp` for usage tracking. Do not pass a top-level
> `description` argument unless `tool_search` shows one in the active schema.
### Common deployment errors
@@ -334,7 +351,7 @@ else:
---
## Step 5 — Verify the Deployment
## 5. Verify the Deployment
```python
check = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
@@ -351,7 +368,7 @@ print("Actions:", list(acts.keys()))
---
## Step 6 — Test the Flow
## 6. Test the Flow
> **MANDATORY**: Before triggering any test run, **ask the user for confirmation**.
> Running a flow has real side effects — it may send emails, post Teams messages,
@@ -382,9 +399,10 @@ than the original run. For verifying a fix, `resubmit_live_flow_run` is
better because it uses the exact data that caused the failure.
```python
schema = mcp("get_live_flow_http_schema",
environmentName=ENV, flowName=FLOW_ID)
print("Expected body:", schema.get("requestSchema"))
defn = mcp("get_live_flow", environmentName=ENV, flowName=FLOW_ID)
triggers = defn["properties"]["definition"]["triggers"]
manual = next(iter(triggers.values()))
print("Expected body:", manual.get("inputs", {}).get("schema"))
result = mcp("trigger_live_flow",
environmentName=ENV, flowName=FLOW_ID,
@@ -399,95 +417,43 @@ resubmit and no HTTP endpoint to call. This is the ONLY scenario where you
need the temporary HTTP trigger approach below. **Deploy with a temporary
HTTP trigger first, test the actions, then swap to the production trigger.**
#### 7a — Save the real trigger, deploy with a temporary HTTP trigger
Compact recipe:
```python
# Save the production trigger you built in Step 3
production_trigger = definition["triggers"]
# Replace with a temporary HTTP trigger
definition["triggers"] = {
    "manual": {"type": "Request", "kind": "Http", "inputs": {"schema": {}}}
}
# Deploy (create or update) with the temp trigger
result = mcp("update_live_flow",
environmentName=ENV,
flowName=FLOW_ID, # omit if creating new
definition=definition,
connectionReferences=connection_references,
displayName="Overdue Invoice Notifications",
description="Deployed with temp HTTP trigger for testing")
displayName="Overdue Invoice Notifications")
FLOW_ID = FLOW_ID or result["created"]
if result.get("error") is not None:
print("Deploy failed:", result["error"])
else:
if not FLOW_ID:
FLOW_ID = result["created"]
print(f"✅ Deployed with temp HTTP trigger: {FLOW_ID}")
```
test = mcp("trigger_live_flow", environmentName=ENV, flowName=FLOW_ID,
body={"sample": "payload"})
runs = mcp("get_live_flow_runs", environmentName=ENV, flowName=FLOW_ID, top=1)
#### 7b — Fire the flow and check the result
```python
# Trigger the flow
test = mcp("trigger_live_flow",
environmentName=ENV, flowName=FLOW_ID)
print(f"Trigger response status: {test['status']}")
# Wait for the run to complete
import time; time.sleep(15)
# Check the run result
runs = mcp("get_live_flow_runs",
environmentName=ENV, flowName=FLOW_ID, top=1)
run = runs[0]
print(f"Run {run['name']}: {run['status']}")
if run["status"] == "Failed":
    err = mcp("get_live_flow_run_error",
              environmentName=ENV, flowName=FLOW_ID, runName=run["name"])
    root = err["failedActions"][-1]
    print(f"Root cause: {root['actionName']}: {root.get('code')}")
    # Debug and fix the definition before proceeding
    # See power-automate-debug skill for full diagnosis workflow
    raise Exception(err["failedActions"][-1])
```
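
A fixed `sleep(15)` can miss slow runs. A bounded poll is safer — sketched here with `mcp()` stubbed to return canned statuses (the real helper and run shape are as in the document; the short sleep is only for the sketch):

```python
import time

STATUSES = iter(["Running", "Running", "Succeeded"])

def mcp(tool, **params):               # stub: run finishes on the 3rd poll
    return [{"name": "run-1", "status": next(STATUSES)}]

status = "Running"
deadline = time.time() + 120           # give up after two minutes
while status == "Running" and time.time() < deadline:
    runs = mcp("get_live_flow_runs", environmentName="ENV",
               flowName="FLOW_ID", top=1)
    status = runs[0]["status"]
    if status == "Running":
        time.sleep(0.01)               # ~5s between polls in practice
```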
#### 7c — Swap to the production trigger
Once the test run succeeds, replace the temporary HTTP trigger with the real one:
```python
# Restore the production trigger
definition["triggers"] = production_trigger
result = mcp("update_live_flow",
mcp("update_live_flow",
environmentName=ENV,
flowName=FLOW_ID,
definition=definition,
connectionReferences=connection_references,
description="Swapped to production trigger after successful test")
if result.get("error") is not None:
print("Trigger swap failed:", result["error"])
else:
print("✅ Production trigger deployed — flow is live")
connectionReferences=connection_references)
```
> **Why this works**: The trigger is just the entry point — the actions are
> identical regardless of how the flow starts. Testing via HTTP trigger
> exercises all the same Compose, SharePoint, Teams, etc. actions.
>
> **Connector triggers** (e.g. "When an item is created in SharePoint"):
> If actions reference `triggerBody()` or `triggerOutputs()`, pass a
> representative test payload in `trigger_live_flow`'s `body` parameter
> that matches the shape the connector trigger would produce.
The trigger is only the entry point; testing through HTTP still exercises the
same actions. If actions use `triggerBody()` or `triggerOutputs()`, pass a
representative `trigger_live_flow.body` shaped like the production trigger
payload.
---
@@ -502,6 +468,12 @@ else:
| Checking `result["error"]` exists | Always present; true error is `!= null` | Use `result.get("error") is not None` |
| Flow deployed but state is "Stopped" | Flow won't run on schedule | Call `set_live_flow_state` with `state: "Started"` — do **not** use `update_live_flow` for state changes |
| Teams "Chat with Flow bot" recipient as object | 400 `GraphUserDetailNotFound` | Use plain string with trailing semicolon (see below) |
| Copilot/Skills flow not in a solution | Copilot Studio may not discover it as an agent tool | After deploy, call `add_live_flow_to_solution` with the target `solutionId` |
| Button/Skills trigger used for MCP testing | MCP cannot directly fire the production trigger | Test the same actions through a temporary HTTP twin, then swap the trigger back |
| Connector action missing `metadata.operationMetadataId` | Designer/run-only UI can behave inconsistently | Preserve existing IDs; add stable GUIDs for new connector actions |
| Placeholder Excel `scriptId` | Dynamic validation fails at save time | Resolve the real Office Script ID before deploying |
| SharePoint `PatchItem` omits required fields | Save can fail even if the field is not changing | Echo unchanged required fields such as `item/Title` |
| Copilot Studio connector calls a draft agent | Connector invocation can fail or hit stale behavior | Publish the agent before testing/resubmitting the flow |
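
Per the "Stopped" row above, starting a deployed flow goes through `set_live_flow_state`, not `update_live_flow`. A sketch with a stubbed `mcp()` echoing its arguments (parameter names follow the document's other calls):

```python
def mcp(tool, **params):               # stub returning the success shape
    return {"error": None, "tool": tool, "params": params}

result = mcp("set_live_flow_state",
             environmentName="ENV",
             flowName="FLOW_ID",
             state="Started")

# As with update_live_flow, the error key is always present;
# None means success.
assert result.get("error") is None
```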
### Teams `PostMessageToConversation` — Recipient Formats
@@ -528,5 +500,5 @@ The `body/recipient` parameter format depends on the `location` value:
## Related Skills
- `power-automate-mcp` — Foundation skill: connection setup, MCP helper, tool discovery
- `power-automate-debug` — Debug failing flows after deployment
- `flowstudio-power-automate-mcp` — Core connection setup and tool reference
- `flowstudio-power-automate-debug` — Debug failing flows after deployment