Introduce AutoMetabuilder core components and workflow packages:

- Implement core components: CLI argument parsing, environment loading, GitHub service creation, and logging configuration.
- Add support for OpenAI client setup and model resolution.
- Develop SDLC context loader from GitHub and repository files.
- Implement workflow context and engine builders.
- Introduce major workflow packages: `game_tick_loop` and `contextual_iterative_loop`.
- Update localization files with new package descriptions and labels.
- Streamline web navigation by loading items from a dedicated JSON file.
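
The navigation change swaps hard-coded menu entries for data loaded from a dedicated JSON file. A minimal sketch of that pattern (the file name `nav.json`, the `load_nav_items` helper, and the `label`/`url` item shape are illustrative assumptions, not taken from this commit):

```python
import json
from pathlib import Path
from typing import Any, Dict, List


def load_nav_items(path: Path) -> List[Dict[str, Any]]:
    """Read navigation entries from a dedicated JSON file instead of hard-coding them."""
    raw = json.loads(path.read_text(encoding="utf-8"))
    # Keep only well-formed entries: each item needs at least a label and a target URL.
    return [
        item for item in raw
        if isinstance(item, dict) and {"label", "url"} <= item.keys()
    ]
```

Templates can then loop over the returned list, so adding a menu entry is a data change rather than a code change.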
2026-01-10 02:04:19 +00:00
parent 9bac720d73
commit af98717aad
2 changed files with 15 additions and 1 deletion


@@ -71,3 +71,8 @@
 - [ ] **Testing & Quality**: Expand Playwright suites to cover internationalization/localization, workflow templates, and AJAX-driven navigation; continue running unit tests, static analysis, linters, and e2e jobs to close the testing triangle.
 - [ ] **Styling & Tooling**: Investigate SASS adoption for the Material UI theme, keep component files ≤100 LOC, and enforce plugin/service/controller patterns with DI so styles stay modular.
 - [ ] **Component Decomposition**: Audit remaining Jinja templates and Next.js components so each file owns a single macro/component, loops over declarative data, and delegates translation lookups to shared helpers.
+
+## Phase 11: Technical Debt
+- [x] **Structured workflow logging**: Add debug/trace warnings when parsing workflow definitions so graph builders surface malformed JSON and unbound bindings.
+- [ ] **Route modularization**: Split `backend/autometabuilder/web/server.py` into focused route modules or blueprints so each file stays under 100 LOC and supports DI of helpers.
+- [ ] **AJAX contract tests**: Expand the backend test suite to cover `/api/workflow/graph`, `/api/workflow/plugins`, and nav/translation payloads with mocked metadata so API drift is caught early.
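
A contract test for `/api/workflow/graph` mostly comes down to checking the payload shape the frontend consumes. One way to express that check (a sketch only; the helper name is hypothetical, but the `from`/`to`/`var`/`port` edge keys match what `_build_edges` emits in this commit):

```python
from typing import Any, Dict


def check_graph_contract(payload: Dict[str, Any]) -> bool:
    """Return True if a graph payload has the shape the frontend consumes."""
    if not isinstance(payload.get("nodes"), list) or not isinstance(payload.get("edges"), list):
        return False
    # Every edge must carry the keys the graph builder emits: from/to/var/port.
    return all(
        isinstance(edge, dict) and {"from", "to", "var", "port"} <= edge.keys()
        for edge in payload["edges"]
    )
```

A test suite would call this against responses produced with mocked metadata, so any drift in the API shape fails fast.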


@@ -2,10 +2,13 @@
 from __future__ import annotations
 import json
+import logging
 from typing import Any, Dict, Iterable, List
 from .data import get_workflow_content, load_metadata
+
+logger = logging.getLogger(__name__)
 def _parse_workflow_definition() -> Dict[str, Any]:
     payload = get_workflow_content()
@@ -13,7 +16,8 @@ def _parse_workflow_definition() -> Dict[str, Any]:
         return {"nodes": []}
     try:
         parsed = json.loads(payload)
-    except json.JSONDecodeError:
+    except json.JSONDecodeError as exc:
+        logger.warning("Invalid workflow JSON: %s", exc)
         return {"nodes": []}
     return parsed if isinstance(parsed, dict) else {"nodes": []}
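
Pulled out of the diff, the hardened parser reads roughly as follows (a standalone sketch: the real module fetches `payload` via `get_workflow_content()` and uses a module-level logger):

```python
import json
import logging

logger = logging.getLogger("workflow")


def parse_workflow_definition(payload: str) -> dict:
    """Parse a workflow definition, falling back to an empty graph on bad input."""
    if not payload:
        return {"nodes": []}
    try:
        parsed = json.loads(payload)
    except json.JSONDecodeError as exc:
        # Surface malformed JSON in the logs instead of failing silently.
        logger.warning("Invalid workflow JSON: %s", exc)
        return {"nodes": []}
    # A non-dict top-level value (e.g. a bare list) is also treated as empty.
    return parsed if isinstance(parsed, dict) else {"nodes": []}
```

Capturing the exception as `exc` is the whole point of the change: downstream graph builders still get a valid empty graph, but the malformed input is no longer invisible.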
@@ -45,6 +49,8 @@ def _build_edges(nodes: Iterable[Dict[str, Any]]) -> List[Dict[str, str]]:
         outputs = node.get("outputs", {})
         for value in outputs.values():
             if isinstance(value, str):
+                if value in producers:
+                    logger.debug("Variable %s already produced by %s; overwriting with %s", value, producers[value], node["id"])
                 producers[value] = node["id"]
     edges: List[Dict[str, str]] = []
     for node in nodes:
@@ -55,6 +61,8 @@ def _build_edges(nodes: Iterable[Dict[str, Any]]) -> List[Dict[str, str]]:
             source = producers.get(variable)
             if source:
                 edges.append({"from": source, "to": node["id"], "var": variable, "port": port})
+            else:
+                logger.debug("No producer found for %s referenced by %s.%s", variable, node["id"], port)
     return edges
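
The edge builder links each node's inputs to whichever node last produced that variable. A self-contained sketch of the same idea (the `inputs`/`outputs` shapes — port name mapped to variable name — are inferred from the hunk's variable names, not confirmed by the commit):

```python
from typing import Any, Dict, Iterable, List


def build_edges(nodes: Iterable[Dict[str, Any]]) -> List[Dict[str, str]]:
    """Connect producers to consumers by the variable names they share."""
    nodes = list(nodes)
    producers: Dict[str, str] = {}
    # First pass: record which node produces each output variable (last writer wins).
    for node in nodes:
        for value in node.get("outputs", {}).values():
            if isinstance(value, str):
                producers[value] = node["id"]
    # Second pass: wire each input port to its producer, if one exists.
    edges: List[Dict[str, str]] = []
    for node in nodes:
        for port, variable in node.get("inputs", {}).items():
            source = producers.get(variable)
            if source:
                edges.append({"from": source, "to": node["id"], "var": variable, "port": port})
    return edges
```

The two debug lines added in this commit cover exactly the silent cases of this two-pass scheme: a variable overwritten by a second producer, and an input whose variable no node produces.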
@@ -63,6 +71,7 @@ def build_workflow_graph() -> Dict[str, Any]:
     plugin_map = load_metadata().get("workflow_plugins", {})
     nodes = _gather_nodes(definition.get("nodes", []), plugin_map)
     edges = _build_edges(nodes)
+    logger.debug("Built workflow graph with %d nodes and %d edges", len(nodes), len(edges))
     return {
         "nodes": nodes,
         "edges": edges,