Catherine Han e67c66c441 Update FlowStudio Power Automate skills (#1664)
2026-05-11 11:28:29 +10:00


FlowStudio MCP — Action Patterns: Core

Variables, control flow, and expression patterns for Power Automate flow definitions.

All examples assume "runAfter" is set appropriately. Replace <connectionName> with the key you used in your connectionReferences map (e.g. shared_teams, shared_office365) — NOT the connection GUID.


Data & Variables

Compose (Store a Value)

"Compose_My_Value": {
  "type": "Compose",
  "runAfter": {},
  "inputs": "@variables('myVar')"
}

Reference: @outputs('Compose_My_Value')


Initialize Variable

"Init_Counter": {
  "type": "InitializeVariable",
  "runAfter": {},
  "inputs": {
    "variables": [{
      "name": "counter",
      "type": "Integer",
      "value": 0
    }]
  }
}

Types: "Integer", "Float", "Boolean", "String", "Array", "Object"


Set Variable

"Set_Counter": {
  "type": "SetVariable",
  "runAfter": {},
  "inputs": {
    "name": "counter",
    "value": "@add(variables('counter'), 1)"
  }
}

Append to Array Variable

"Collect_Item": {
  "type": "AppendToArrayVariable",
  "runAfter": {},
  "inputs": {
    "name": "resultArray",
    "value": "@item()"
  }
}

Increment Variable

"Increment_Counter": {
  "type": "IncrementVariable",
  "runAfter": {},
  "inputs": {
    "name": "counter",
    "value": 1
  }
}

Use IncrementVariable (not SetVariable with add()) for counters inside loops — it is atomic and avoids expression errors when the variable is used elsewhere in the same iteration. value can be any integer or expression, e.g. @mul(item()?['Interval'], 60) to advance a Unix timestamp cursor by N minutes.
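As a full action, a time-range walker advancing a Unix-seconds cursor by a configurable number of minutes might look like this (variable names are hypothetical):

```json
"Advance_Cursor_By_Interval": {
  "type": "IncrementVariable",
  "runAfter": {},
  "inputs": {
    "name": "cursorUnix",
    "value": "@mul(variables('intervalMinutes'), 60)"
  }
}
```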


Control Flow

Condition (If/Else)

"Check_Status": {
  "type": "If",
  "runAfter": {},
  "expression": {
    "and": [{ "equals": ["@item()?['Status']", "Active"] }]
  },
  "actions": {
    "Handle_Active": {
      "type": "Compose",
      "runAfter": {},
      "inputs": "Active user: @{item()?['Name']}"
    }
  },
  "else": {
    "actions": {
      "Handle_Inactive": {
        "type": "Compose",
        "runAfter": {},
        "inputs": "Inactive user"
      }
    }
  }
}

Comparison operators: equals, greater, greaterOrEquals, less, lessOrEquals, contains
Logical: and: [...], or: [...], not: { ... } (wraps a single condition)
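Logical groups nest, so a condition such as "status is Active and (score above 80 or priority flagged)" can be expressed directly (field names hypothetical):

```json
"expression": {
  "and": [
    { "equals": ["@item()?['Status']", "Active"] },
    {
      "or": [
        { "greater": ["@item()?['Score']", 80] },
        { "equals": ["@item()?['Priority']", true] }
      ]
    }
  ]
}
```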


Switch

"Route_By_Type": {
  "type": "Switch",
  "runAfter": {},
  "expression": "@triggerBody()?['type']",
  "cases": {
    "Case_Email": {
      "case": "email",
      "actions": { "Process_Email": { "type": "Compose", "runAfter": {}, "inputs": "email" } }
    },
    "Case_Teams": {
      "case": "teams",
      "actions": { "Process_Teams": { "type": "Compose", "runAfter": {}, "inputs": "teams" } }
    }
  },
  "default": {
    "actions": { "Unknown_Type": { "type": "Compose", "runAfter": {}, "inputs": "unknown" } }
  }
}

Scope (Grouping / Try-Catch)

Wrap related actions in a Scope to give them a shared name, collapse them in the designer, and — most importantly — handle their errors as a unit.

"Scope_Get_Customer": {
  "type": "Scope",
  "runAfter": {},
  "actions": {
    "HTTP_Get_Customer": {
      "type": "Http",
      "runAfter": {},
      "inputs": {
        "method": "GET",
        "uri": "https://api.example.com/customers/@{variables('customerId')}"
      }
    },
    "Compose_Email": {
      "type": "Compose",
      "runAfter": { "HTTP_Get_Customer": ["Succeeded"] },
      "inputs": "@outputs('HTTP_Get_Customer')?['body/email']"
    }
  }
},
"Handle_Scope_Error": {
  "type": "Compose",
  "runAfter": { "Scope_Get_Customer": ["Failed", "TimedOut"] },
  "inputs": "Scope failed: @{result('Scope_Get_Customer')?[0]?['error']?['message']}"
}

Reference scope results: @result('Scope_Get_Customer') returns an array of action outcomes. Use runAfter: {"MyScope": ["Failed", "TimedOut"]} on a follow-up action to create try/catch semantics without a Terminate.


Foreach (Sequential)

"Process_Each_Item": {
  "type": "Foreach",
  "runAfter": {},
  "foreach": "@outputs('Get_Items')?['body/value']",
  "operationOptions": "Sequential",
  "actions": {
    "Handle_Item": {
      "type": "Compose",
      "runAfter": {},
      "inputs": "@item()?['Title']"
    }
  }
}

Always include "operationOptions": "Sequential" unless parallel is intentional.


Foreach (Parallel with Concurrency Limit)

"Process_Each_Item_Parallel": {
  "type": "Foreach",
  "runAfter": {},
  "foreach": "@body('Get_SP_Items')?['value']",
  "runtimeConfiguration": {
    "concurrency": {
      "repetitions": 20
    }
  },
  "actions": {
    "HTTP_Upsert": {
      "type": "Http",
      "runAfter": {},
      "inputs": {
        "method": "POST",
        "uri": "https://api.example.com/contacts/@{item()?['Email']}"
      }
    }
  }
}

Set repetitions to control how many items are processed simultaneously. Practical values: 5-10 for external API calls (respect rate limits), 20-50 for internal/fast operations. Omit runtimeConfiguration.concurrency entirely for the platform default (currently 50). Do NOT use "operationOptions": "Sequential" and concurrency together.


Wait (Delay)

"Delay_10_Minutes": {
  "type": "Wait",
  "runAfter": {},
  "inputs": {
    "interval": {
      "count": 10,
      "unit": "Minute"
    }
  }
}

Valid unit values: "Second", "Minute", "Hour", "Day"

Use a Delay + re-fetch as a deduplication guard: wait for any competing process to complete, then re-read the record before acting. This avoids double-processing when multiple triggers or manual edits can race on the same item.
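A minimal sketch of that guard, assuming a generic HTTP re-read and a hypothetical "status" field (swap in your connector's get-item action as needed):

```json
"Delay_Before_Recheck": {
  "type": "Wait",
  "runAfter": {},
  "inputs": { "interval": { "count": 2, "unit": "Minute" } }
},
"Refetch_Record": {
  "type": "Http",
  "runAfter": { "Delay_Before_Recheck": ["Succeeded"] },
  "inputs": {
    "method": "GET",
    "uri": "https://api.example.com/items/@{triggerBody()?['id']}"
  }
},
"Check_Still_Unprocessed": {
  "type": "If",
  "runAfter": { "Refetch_Record": ["Succeeded"] },
  "expression": {
    "and": [{ "equals": ["@outputs('Refetch_Record')?['body']?['status']", "New"] }]
  },
  "actions": {
    "Process_Record": { "type": "Compose", "runAfter": {}, "inputs": "@outputs('Refetch_Record')?['body']" }
  },
  "else": { "actions": {} }
}
```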


Terminate (Success or Failure)

"Terminate_Success": {
  "type": "Terminate",
  "runAfter": {},
  "inputs": {
    "runStatus": "Succeeded"
  }
},
"Terminate_Failure": {
  "type": "Terminate",
  "runAfter": { "Risky_Action": ["Failed"] },
  "inputs": {
    "runStatus": "Failed",
    "runError": {
      "code": "StepFailed",
      "message": "@{outputs('Get_Error_Message')}"
    }
  }
}

Do Until (Loop Until Condition)

Repeats a block of actions until an exit condition becomes true. Use when the number of iterations is not known upfront (e.g. paginating an API, walking a time range, polling until a status changes).

"Do_Until_Done": {
  "type": "Until",
  "runAfter": {},
  "expression": "@greaterOrEquals(variables('cursor'), variables('endValue'))",
  "limit": {
    "count": 5000,
    "timeout": "PT5H"
  },
  "actions": {
    "Do_Work": {
      "type": "Compose",
      "runAfter": {},
      "inputs": "@variables('cursor')"
    },
    "Advance_Cursor": {
      "type": "IncrementVariable",
      "runAfter": { "Do_Work": ["Succeeded"] },
      "inputs": {
        "name": "cursor",
        "value": 1
      }
    }
  }
}

Always set limit.count and limit.timeout explicitly — the platform defaults are low (60 iterations, 1 hour). For time-range walkers use limit.count: 5000 and limit.timeout: "PT5H" (ISO 8601 duration).

The exit condition is evaluated before each iteration. Initialise your cursor variable before the loop so the condition can evaluate correctly on the first pass.


Agent Retry Loop

When a flow calls an AI or Copilot-style agent until it reaches a terminal outcome, keep the loop state explicit:

  • Initialize variables such as agentStatus, attempt, and finalPayload before the Until.
  • Inside the loop, call the agent, validate the response, update the status, and delay/retry only when the status is non-terminal.
  • Put final dispatch actions such as email, SharePoint update, or Teams post after the loop so retries do not duplicate side effects.
  • If the platform rejects a complex Switch nested inside Until, keep the loop body to simple validation and state updates, then route with Switch after the loop.
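The steps above can be sketched as follows; the agent endpoint, response shape, and variable names are assumptions for illustration, and agentStatus/attempt are initialised before the loop:

```json
"Until_Agent_Terminal": {
  "type": "Until",
  "runAfter": {},
  "expression": "@or(equals(variables('agentStatus'), 'Completed'), equals(variables('agentStatus'), 'Failed'))",
  "limit": { "count": 10, "timeout": "PT30M" },
  "actions": {
    "Call_Agent": {
      "type": "Http",
      "runAfter": {},
      "inputs": { "method": "POST", "uri": "https://api.example.com/agent/run" }
    },
    "Set_Agent_Status": {
      "type": "SetVariable",
      "runAfter": { "Call_Agent": ["Succeeded"] },
      "inputs": {
        "name": "agentStatus",
        "value": "@coalesce(outputs('Call_Agent')?['body']?['status'], 'Unknown')"
      }
    },
    "Count_Attempt": {
      "type": "IncrementVariable",
      "runAfter": { "Set_Agent_Status": ["Succeeded"] },
      "inputs": { "name": "attempt", "value": 1 }
    },
    "Delay_Before_Retry": {
      "type": "Wait",
      "runAfter": { "Count_Attempt": ["Succeeded"] },
      "inputs": { "interval": { "count": 30, "unit": "Second" } }
    }
  }
}
```

Dispatch actions (email, Teams post, SharePoint update) follow the Until via runAfter on Until_Agent_Terminal, so they run exactly once even when the agent needed several attempts.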

Async Polling with RequestId Correlation

When an API starts a long-running job asynchronously (e.g. Power BI dataset refresh, report generation, batch export), the trigger call returns a request ID. Capture it from the response header, then poll a status endpoint filtering by that exact ID:

"Start_Job": {
  "type": "Http",
  "inputs": { "method": "POST", "uri": "https://api.example.com/jobs" }
},
"Capture_Request_ID": {
  "type": "Compose",
  "runAfter": { "Start_Job": ["Succeeded"] },
  "inputs": "@outputs('Start_Job')?['headers/X-Request-Id']"
},
"Initialize_Status": {
  "type": "InitializeVariable",
  "inputs": { "variables": [{ "name": "jobStatus", "type": "String", "value": "Running" }] }
},
"Poll_Until_Done": {
  "type": "Until",
  "expression": "@not(equals(variables('jobStatus'), 'Running'))",
  "limit": { "count": 60, "timeout": "PT30M" },
  "actions": {
    "Delay": { "type": "Wait", "inputs": { "interval": { "count": 20, "unit": "Second" } } },
    "Get_History": {
      "type": "Http",
      "runAfter": { "Delay": ["Succeeded"] },
      "inputs": { "method": "GET", "uri": "https://api.example.com/jobs/history" }
    },
    "Filter_This_Job": {
      "type": "Query",
      "runAfter": { "Get_History": ["Succeeded"] },
      "inputs": {
        "from": "@outputs('Get_History')?['body/items']",
        "where": "@equals(item()?['requestId'], outputs('Capture_Request_ID'))"
      }
    },
    "Set_Status": {
      "type": "SetVariable",
      "runAfter": { "Filter_This_Job": ["Succeeded"] },
      "inputs": {
        "name": "jobStatus",
        "value": "@first(body('Filter_This_Job'))?['status']"
      }
    }
  }
},
"Handle_Failure": {
  "type": "If",
  "runAfter": { "Poll_Until_Done": ["Succeeded"] },
  "expression": { "equals": ["@variables('jobStatus')", "Failed"] },
  "actions": { "Terminate_Failed": { "type": "Terminate", "inputs": { "runStatus": "Failed" } } },
  "else": { "actions": {} }
}

Access response headers: @outputs('Start_Job')?['headers/X-Request-Id']

Status variable initialisation: set a sentinel value ("Running", "Unknown") before the loop. The exit condition tests for any value other than the sentinel. This way an empty poll result (job not yet in history) leaves the variable unchanged and the loop continues — it doesn't accidentally exit on null.

Filter before extracting: always Filter Array the history to your specific request ID before calling first(). History endpoints return all jobs; without filtering, status from a different concurrent job can corrupt your poll.


runAfter Fallback (Failed → Alternative Action)

Route to a fallback action when a primary action fails — without a Condition block. Simply set runAfter on the fallback to accept ["Failed"] from the primary:

"HTTP_Get_Hi_Res": {
  "type": "Http",
  "runAfter": {},
  "inputs": { "method": "GET", "uri": "https://api.example.com/data?resolution=hi-res" }
},
"HTTP_Get_Low_Res": {
  "type": "Http",
  "runAfter": { "HTTP_Get_Hi_Res": ["Failed"] },
  "inputs": { "method": "GET", "uri": "https://api.example.com/data?resolution=low-res" }
}

Actions that follow can use runAfter accepting both ["Succeeded", "Skipped"] to handle either path — see Fan-In Join Gate below.


Fan-In Join Gate (Merge Two Mutually Exclusive Branches)

When two branches are mutually exclusive (only one can succeed per run), use a single downstream action that accepts ["Succeeded", "Skipped"] from both branches. The gate fires exactly once regardless of which branch ran:

"Increment_Count": {
  "type": "IncrementVariable",
  "runAfter": {
    "Update_Hi_Res_Metadata":  ["Succeeded", "Skipped"],
    "Update_Low_Res_Metadata": ["Succeeded", "Skipped"]
  },
  "inputs": { "name": "LoopCount", "value": 1 }
}

This avoids duplicating the downstream action in each branch. The key insight: whichever branch was skipped reports Skipped — the gate accepts that state and fires once. Only works cleanly when the two branches are truly mutually exclusive (e.g. one is runAfter: [...Failed] of the other).


Expressions

Common Expression Patterns

Null-safe field access:    @item()?['FieldName']
Null guard:                @coalesce(item()?['Name'], 'Unknown')
String format:             @{variables('firstName')} @{variables('lastName')}
Date today:                @utcNow()
Formatted date:            @formatDateTime(utcNow(), 'dd/MM/yyyy')
Add days:                  @addDays(utcNow(), 7)
Array length:              @length(variables('myArray'))
Filter array:              Use the "Filter array" action (no inline filter expression exists in PA)
Union (new wins):          @union(body('New_Data'), outputs('Old_Data'))
Sort:                      @sort(variables('myArray'), 'Date')
Unix timestamp → date:     @formatDateTime(addSeconds('1970-01-01', triggerBody()?['created']), 'yyyy-MM-dd')
Date → Unix milliseconds:  @div(sub(ticks(startOfDay(item()?['Created'])), ticks(formatDateTime('1970-01-01Z','o'))), 10000)
Date → Unix seconds:       @div(sub(ticks(item()?['Start']), ticks('1970-01-01T00:00:00Z')), 10000000)
Unix seconds → datetime:   @addSeconds('1970-01-01T00:00:00Z', int(variables('Unix')))
Coalesce as no-else:       @coalesce(outputs('Optional_Step'), outputs('Default_Step'))
Flow elapsed minutes:      @div(float(sub(ticks(utcNow()), ticks(outputs('Flow_Start')))), 600000000)
HH:mm time string:         @formatDateTime(outputs('Local_Datetime'), 'HH:mm')
Response header:           @outputs('HTTP_Action')?['headers/X-Request-Id']
Array max (by field):      @reverse(sort(body('Select_Items'), 'Date'))[0]
Integer day span:          @int(split(dateDifference(outputs('Start'), outputs('End')), '.')[0])
ISO week number:           @div(add(dayofyear(addDays(subtractFromTime(date, sub(dayofweek(date),1), 'Day'), 3)), 6), 7)
Join errors to string:     @if(equals(length(variables('Errors')),0), null, concat(join(variables('Errors'),', '),' not found.'))
Normalize before compare:  @replace(coalesce(outputs('Value'),''),'_',' ')
Robust non-empty check:    @greater(length(trim(coalesce(string(outputs('Val')), ''))), 0)

Unsupported / Risky Expression Assumptions

Power Automate expressions are Workflow Definition Language, not JavaScript. These patterns often look plausible but do not deploy or do not behave as agents expect:

  • Build an object inline: avoid createObject(...); use a Compose action with a JSON object literal instead.
  • Transform an array inline: avoid select(...) inside an expression; use the Data Operations Select action.
  • Filter an array inline: avoid filter(...) inside an expression; use the Data Operations Filter array action.
  • Find an array item's index: avoid indexOf(array, item); use a Foreach with a counter variable, or build a keyed object map.
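For example, instead of reaching for a createObject(...) function, build the payload with a Compose action and reference its output downstream (field and variable names hypothetical):

```json
"Compose_Payload": {
  "type": "Compose",
  "runAfter": {},
  "inputs": {
    "name": "@variables('userName')",
    "email": "@variables('userEmail')",
    "createdUtc": "@utcNow()"
  }
}
```

Reference the whole object as @outputs('Compose_Payload') or a single field as @outputs('Compose_Payload')?['email'].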

Newlines in Expressions

\n does NOT produce a newline inside Power Automate expressions. It is treated as a literal backslash + n and will either appear verbatim or cause a validation error.

Use decodeUriComponent('%0a') wherever you need a newline character:

Newline (LF):   decodeUriComponent('%0a')
CRLF:           decodeUriComponent('%0d%0a')

Example — multi-line Teams or email body via concat():

"Compose_Message": {
  "type": "Compose",
  "inputs": "@concat('Hi ', outputs('Get_User')?['body/displayName'], ',', decodeUriComponent('%0a%0a'), 'Your report is ready.', decodeUriComponent('%0a'), '- The Team')"
}

Example — join() with newline separator:

"Compose_List": {
  "type": "Compose",
  "inputs": "@join(body('Select_Names'), decodeUriComponent('%0a'))"
}

This is the only reliable way to embed newlines in dynamically built strings in Power Automate flow definitions (confirmed against Logic Apps runtime).


Sum an array (XPath trick)

Power Automate has no native sum() function. Use XPath on XML instead:

"Prepare_For_Sum": {
  "type": "Compose",
  "runAfter": {},
  "inputs": { "root": { "numbers": "@body('Select_Amounts')" } }
},
"Sum": {
  "type": "Compose",
  "runAfter": { "Prepare_For_Sum": ["Succeeded"] },
  "inputs": "@xpath(xml(outputs('Prepare_For_Sum')), 'sum(/root/numbers)')"
}

Select_Amounts must output a flat array of numbers (use a Select action to extract a single numeric field first). The result is a number you can use directly in conditions or calculations.
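The upstream Select could look like this, assuming a connector-style items array and a numeric Amount column (names hypothetical). A bare expression in "select", rather than an object mapping, yields a flat array of values:

```json
"Select_Amounts": {
  "type": "Select",
  "runAfter": {},
  "inputs": {
    "from": "@outputs('Get_SP_Items')?['body/value']",
    "select": "@item()?['Amount']"
  }
}
```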

This is the only way to aggregate (sum/min/max) an array without a loop in Power Automate.