Files
metabuilder/workflow/plugins/python/test/test_run_suite.py
johndoe6345789 5ac579a2ed feat: Add Python plugins from AutoMetabuilder + restructure workflow folder
Restructure workflow/ for multi-language plugin support:
- Rename src/ to core/ (engine code: DAG executor, registry, types)
- Create executor/{cpp,python,ts}/ for language-specific runtimes
- Consolidate plugins to plugins/{ts,python}/ by language then category

Add 80+ Python plugins from AutoMetabuilder in 14 categories:
- control: bot control, switch logic, state management
- convert: type conversions (json, boolean, dict, list, number, string)
- core: AI requests, context management, tool calls
- dict: dictionary operations (get, set, keys, values, merge)
- list: list operations (concat, find, sort, slice, filter)
- logic: boolean logic (and, or, xor, equals, comparisons)
- math: arithmetic operations (add, subtract, multiply, power, etc.)
- string: string manipulation (concat, split, replace, format)
- notifications: Slack, Discord integrations
- test: assertion helpers and test suite runner
- tools: file operations, git, docker, testing utilities
- utils: filtering, mapping, reducing, condition branching
- var: variable store operations (get, set, delete, exists)
- web: Flask server, environment variables, JSON handling

Add language executor runtimes:
- TypeScript: direct import execution (default, fast startup)
- Python: child process with JSON stdin/stdout communication
- C++: placeholder for native FFI bindings (Phase 3)
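The Python runtime's child-process protocol can be sketched as a minimal round trip: the parent serializes the plugin inputs as one JSON document on stdin and parses one JSON document from stdout. The helper name `execute_plugin`, the `{"inputs": ...}` envelope, and the child logic below are illustrative assumptions, not taken from the repo.

```python
# Hypothetical sketch of the Python executor's JSON stdin/stdout protocol.
# The message envelope and helper name are assumptions for illustration.
import json
import subprocess
import sys

# Child-side logic, inlined as a -c script so the demo is self-contained:
# read one JSON request from stdin, compute, write one JSON reply to stdout.
child_code = (
    "import json, sys\n"
    "req = json.load(sys.stdin)\n"
    "out = {'result': req['inputs']['a'] + req['inputs']['b']}\n"
    "json.dump(out, sys.stdout)\n"
)

def execute_plugin(inputs):
    """Spawn a child interpreter, pass inputs as JSON, parse the JSON reply."""
    proc = subprocess.run(
        [sys.executable, "-c", child_code],
        input=json.dumps({"inputs": inputs}),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)

print(execute_plugin({"a": 2, "b": 3}))  # prints {'result': 5}
```

Spawning a fresh interpreter per call trades startup cost for isolation, which is why the commit notes TypeScript's direct-import path as the fast default.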

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 16:08:08 +00:00


"""Workflow plugin: run a suite of test assertions and report results."""
def run(_runtime, inputs):
"""Run a suite of test assertions and aggregate results.
Inputs:
- results: Array of test result objects (each with 'passed' field)
- suite_name: Optional name for the test suite
Outputs:
- passed: Boolean indicating if all tests passed
- total: Total number of tests
- passed_count: Number of tests that passed
- failed_count: Number of tests that failed
- failures: Array of failed test details
"""
results = inputs.get("results", [])
suite_name = inputs.get("suite_name", "Test Suite")
if not isinstance(results, list):
return {
"passed": False,
"error": "results must be an array",
"total": 0,
"passed_count": 0,
"failed_count": 0,
"failures": []
}
total = len(results)
passed_count = 0
failed_count = 0
failures = []
for i, result in enumerate(results):
if isinstance(result, dict) and result.get("passed") is True:
passed_count += 1
else:
failed_count += 1
failure_info = {
"test_index": i,
"error": result.get("error", "Unknown error") if isinstance(result, dict) else str(result)
}
if isinstance(result, dict):
failure_info.update({
"expected": result.get("expected"),
"actual": result.get("actual")
})
failures.append(failure_info)
all_passed = failed_count == 0 and total > 0
summary = f"{suite_name}: {passed_count}/{total} tests passed"
return {
"passed": all_passed,
"total": total,
"passed_count": passed_count,
"failed_count": failed_count,
"failures": failures,
"summary": summary
}
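Each entry in `results` is expected to carry a `passed` flag plus optional `error`/`expected`/`actual` fields. A hypothetical upstream assertion plugin producing that shape might look like this; the `assert_equals` name and its exact fields are assumptions for illustration, not taken from the repo's assertion helpers.

```python
# Hypothetical companion plugin sketch: an equality assertion whose output
# matches the result shape the suite runner aggregates ('passed', 'error',
# 'expected', 'actual'). The plugin name is an assumption for illustration.
def assert_equals(_runtime, inputs):
    expected = inputs.get("expected")
    actual = inputs.get("actual")
    passed = expected == actual
    result = {"passed": passed, "expected": expected, "actual": actual}
    if not passed:
        result["error"] = f"expected {expected!r}, got {actual!r}"
    return result

# Two sample assertions: one passing, one failing.
results = [
    assert_equals(None, {"expected": 1, "actual": 1}),
    assert_equals(None, {"expected": 2, "actual": 3}),
]
# Feeding these into the run() above would yield passed=False and the
# summary "Test Suite: 1/2 tests passed", with one entry in failures.
```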