Add Web UI with FastAPI for managing bot tasks and logs, introduce file operations, enhance error handling with logging, and integrate Slack/Discord notifications

This commit is contained in:
2026-01-09 14:15:26 +00:00
parent ea48390d9e
commit 33c23f1797
18 changed files with 1954 additions and 62 deletions

23
Dockerfile Normal file
View File

@@ -0,0 +1,23 @@
# Use an official Python runtime as a parent image
FROM python:3.11-slim
# Set the working directory in the container
WORKDIR /app
# Install git and other necessary dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Install poetry
RUN pip install poetry
# Copy the current directory contents into the container at /app
COPY . /app
# Install project dependencies
RUN poetry config virtualenvs.create false \
&& poetry install --no-interaction --no-ansi
# Run the application
CMD ["python", "src/autometabuilder/main.py"]

View File

@@ -19,8 +19,18 @@
- [x] **CI/CD Integration**: Github Actions to run AutoMetabuilder on schedule or trigger.
## Phase 4: Optimization & Scalability
- [/] **Dockerization**: Provide a Dockerfile and docker-compose for easy environment setup. Added `run_docker_task` tool.
- [ ] **Extended Toolset**: Add tools for dependency management (poetry) and file manipulation (read/write/edit).
- [ ] **Self-Improvement**: Allow the bot to suggest and apply changes to its own `prompt.yml` or `tools.json`.
- [ ] **Robust Error Handling**: Implement exponential backoff for API calls and better error recovery.
- [ ] **Monitoring & Logging**: Structured logging and status reporting for long-running tasks.
- [x] **Dockerization**: Provide a Dockerfile and docker-compose for easy environment setup. Added `run_docker_task` tool.
- [x] **Extended Toolset**: Add tools for dependency management (poetry) and file manipulation (read/write/edit).
- [x] **Self-Improvement**: Allow the bot to suggest and apply changes to its own `prompt.yml` or `tools.json`.
- [x] **Robust Error Handling**: Implement exponential backoff for API calls and better error recovery.
- [x] **Monitoring & Logging**: Structured logging and status reporting for long-running tasks.
## Phase 5: Ecosystem & User Experience
- [x] **Web UI**: A simple dashboard to monitor tasks and approve tool executions. Enhanced with settings and translation management.
- [x] **Plugin System**: Allow users to add custom tools via a plugin directory.
- [x] **Slack/Discord Integration**: Command and notify the bot from chat platforms.
## Phase 6: Advanced Web UI & Remote Control
- [ ] **Remote Command Execution**: Trigger bot runs from the Web UI.
- [ ] **User Authentication**: Secure the Web UI with login.
- [ ] **Visual Task Progress**: Real-time progress bars for long-running tasks.

9
docker-compose.yml Normal file
View File

@@ -0,0 +1,9 @@
services:
autometabuilder:
build: .
env_file:
- .env
volumes:
- .:/app
stdin_open: true
tty: true

View File

@@ -3,3 +3,5 @@ GITHUB_REPOSITORY=owner/repo
PROMPT_PATH=prompt.yml
GITHUB_MODELS_ENDPOINT=https://models.github.ai/inference
APP_LANG=en # Supported: en, es, fr, nl, pirate
WEB_USER=admin
WEB_PASSWORD=password

1365
poetry.lock generated

File diff suppressed because it is too large Load Diff

View File

@@ -10,6 +10,7 @@ messages:
5. Following security best practices and performance optimization.
6. Adhering to project-specific coding standards and linting rules.
7. Keeping documentation (README, ROADMAP, etc.) up to date.
8. Continuous Self-Improvement: You are encouraged to propose and apply enhancements to your own configuration, including `prompt.yml` and `tools.json`, to improve your effectiveness and reasoning.
- role: user
content: >-
Analyze the current state of the repository, including open issues,

View File

@@ -13,6 +13,12 @@ pyyaml = "^6.0.1"
python-dotenv = "^1.0.0"
openai = "^1.0.0"
PyGithub = "^2.1.1"
tenacity = "^9.1.2"
fastapi = "^0.128.0"
uvicorn = "^0.40.0"
jinja2 = "^3.1.6"
slack-sdk = "^3.39.0"
discord-py = "^2.6.4"
[build-system]
requires = ["poetry-core"]

View File

@@ -1,5 +1,8 @@
import subprocess
import os
import logging
logger = logging.getLogger("autometabuilder.docker")
def run_command_in_docker(image: str, command: str, volumes: dict = None, workdir: str = None):
"""
@@ -23,12 +26,12 @@ def run_command_in_docker(image: str, command: str, volumes: dict = None, workdi
docker_command.append(image)
docker_command.extend(["sh", "-c", command])
print(f"Executing in Docker ({image}): {command}")
logger.info(f"Executing in Docker ({image}): {command}")
result = subprocess.run(docker_command, capture_output=True, text=True, check=False)
output = result.stdout
if result.stderr:
output += "\n" + result.stderr
print(output)
logger.info(output)
return output

View File

@@ -5,6 +5,7 @@ import os
from github import Github
from github.Issue import Issue
from github.PullRequest import PullRequest
from tenacity import retry, stop_after_attempt, wait_exponential
from . import load_messages
@@ -16,14 +17,17 @@ class GitHubIntegration:
self.github = Github(token)
self.repo = self.github.get_repo(repo_name)
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=2, max=10))
def get_open_issues(self):
"""Get open issues from the repository."""
return self.repo.get_issues(state='open')
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=2, max=10))
def get_issue(self, issue_number: int) -> Issue:
"""Get a specific issue by number."""
return self.repo.get_issue(number=issue_number)
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=2, max=10))
def create_branch(self, branch_name: str, base_branch: str = "main"):
"""Create a new branch from a base branch."""
base_ref = self.repo.get_git_ref(f"heads/{base_branch}")
@@ -31,6 +35,7 @@ class GitHubIntegration:
ref=f"refs/heads/{branch_name}", sha=base_ref.object.sha
)
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=2, max=10))
def create_pull_request(
self,
title: str,
@@ -43,10 +48,12 @@ class GitHubIntegration:
title=title, body=body, head=head_branch, base=base_branch
)
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=2, max=10))
def get_pull_requests(self, state: str = "open"):
"""Get pull requests from the repository."""
return self.repo.get_pulls(state=state)
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=2, max=10))
def get_pull_request_comments(self, pr_number: int):
"""Get comments from a specific pull request."""
pr = self.repo.get_pull(pr_number)

View File

@@ -0,0 +1,55 @@
import os
import logging
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
import discord
import asyncio
logger = logging.getLogger("autometabuilder.notifications")
def send_slack_notification(message: str):
token = os.environ.get("SLACK_BOT_TOKEN")
channel = os.environ.get("SLACK_CHANNEL")
if not token or not channel:
logger.warning("Slack notification skipped: SLACK_BOT_TOKEN or SLACK_CHANNEL missing.")
return
client = WebClient(token=token)
try:
client.chat_postMessage(channel=channel, text=message)
logger.info("Slack notification sent successfully.")
except SlackApiError as e:
logger.error(f"Error sending Slack notification: {e}")
async def send_discord_notification_async(message: str):
token = os.environ.get("DISCORD_BOT_TOKEN")
channel_id = os.environ.get("DISCORD_CHANNEL_ID")
if not token or not channel_id:
logger.warning("Discord notification skipped: DISCORD_BOT_TOKEN or DISCORD_CHANNEL_ID missing.")
return
intents = discord.Intents.default()
client = discord.Client(intents=intents)
@client.event
async def on_ready():
channel = client.get_channel(int(channel_id))
if channel:
await channel.send(message)
logger.info("Discord notification sent successfully.")
await client.close()
try:
await client.start(token)
except Exception as e:
logger.error(f"Error sending Discord notification: {e}")
def send_discord_notification(message: str):
try:
asyncio.run(send_discord_notification_async(message))
except Exception as e:
logger.error(f"Error running Discord notification: {e}")
def notify_all(message: str):
send_slack_notification(message)
send_discord_notification(message)

View File

@@ -6,16 +6,34 @@ import json
import subprocess
import argparse
import yaml
import logging
import importlib
import inspect
from tenacity import retry, stop_after_attempt, wait_exponential
from dotenv import load_dotenv
from openai import OpenAI
from . import load_messages
from .github_integration import GitHubIntegration, get_repo_name_from_env
from .docker_utils import run_command_in_docker
from .web.server import start_web_ui
from .integrations.notifications import notify_all
load_dotenv()
# Set up logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler("autometabuilder.log"),
logging.StreamHandler()
]
)
logger = logging.getLogger("autometabuilder")
DEFAULT_PROMPT_PATH = "prompt.yml"
DEFAULT_ENDPOINT = "https://models.github.ai/inference"
DEFAULT_MODEL = "openai/gpt-4o"
def load_prompt_yaml() -> dict:
@@ -55,7 +73,7 @@ def get_sdlc_context(gh: GitHubIntegration, msgs: dict) -> str:
if pr_list:
sdlc_context += f"\n{msgs['open_prs_label']}\n{pr_list}"
except Exception as e: # pylint: disable=broad-exception-caught
print(msgs["error_sdlc_context"].format(error=e))
logger.error(msgs["error_sdlc_context"].format(error=e))
return sdlc_context
@@ -63,7 +81,7 @@ def update_roadmap(content: str):
"""Update ROADMAP.md with new content."""
with open("ROADMAP.md", "w", encoding="utf-8") as f:
f.write(content)
print("ROADMAP.md updated successfully.")
logger.info("ROADMAP.md updated successfully.")
def list_files(directory: str = "."):
@@ -76,27 +94,27 @@ def list_files(directory: str = "."):
files_list.append(os.path.join(root, file))
result = "\n".join(files_list)
print(f"Indexing repository files in {directory}...")
logger.info(f"Indexing repository files in {directory}...")
return result
def run_tests(path: str = "tests"):
"""Run tests using pytest."""
print(f"Running tests in {path}...")
logger.info(f"Running tests in {path}...")
result = subprocess.run(["pytest", path], capture_output=True, text=True, check=False)
print(result.stdout)
logger.info(result.stdout)
if result.stderr:
print(result.stderr)
logger.error(result.stderr)
return result.stdout
def run_lint(path: str = "src"):
"""Run linting using pylint."""
print(f"Running linting in {path}...")
logger.info(f"Running linting in {path}...")
result = subprocess.run(["pylint", path], capture_output=True, text=True, check=False)
print(result.stdout)
logger.info(result.stdout)
if result.stderr:
print(result.stderr)
logger.error(result.stderr)
return result.stdout
@@ -109,25 +127,83 @@ def run_docker_task(image: str, command: str, workdir: str = "/workspace"):
return run_command_in_docker(image, command, volumes=volumes, workdir=workdir)
def handle_tool_calls(resp_msg, gh: GitHubIntegration, msgs: dict, dry_run: bool = False, yolo: bool = False) -> list:
def read_file(path: str) -> str:
"""Read the content of a file."""
try:
with open(path, 'r', encoding='utf-8') as f:
return f.read()
except Exception as e:
return f"Error reading file {path}: {e}"
def write_file(path: str, content: str) -> str:
"""Write content to a file."""
try:
with open(path, 'w', encoding='utf-8') as f:
f.write(content)
return f"Successfully wrote to {path}"
except Exception as e:
return f"Error writing to file {path}: {e}"
def edit_file(path: str, search: str, replace: str) -> str:
"""Edit a file using search and replace."""
try:
with open(path, 'r', encoding='utf-8') as f:
content = f.read()
if search not in content:
return f"Error: '{search}' not found in {path}"
new_content = content.replace(search, replace)
with open(path, 'w', encoding='utf-8') as f:
f.write(new_content)
return f"Successfully edited {path}"
except Exception as e:
return f"Error editing file {path}: {e}"
def load_plugins(tool_map: dict, tools: list):
"""Load custom tools from the plugins directory."""
plugins_dir = os.path.join(os.path.dirname(__file__), "plugins")
if not os.path.exists(plugins_dir):
return
for filename in os.listdir(plugins_dir):
if filename.endswith(".py") and filename != "__init__.py":
module_name = f".plugins.{filename[:-3]}"
try:
module = importlib.import_module(module_name, package="autometabuilder")
for name, obj in inspect.getmembers(module):
if inspect.isfunction(obj) and hasattr(obj, "tool_metadata"):
tool_metadata = getattr(obj, "tool_metadata")
tool_map[name] = obj
tools.append(tool_metadata)
logger.info(f"Loaded plugin tool: {name}")
except Exception as e:
logger.error(f"Failed to load plugin {filename}: {e}")
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=2, max=10))
def get_completion(client, model, messages, tools):
"""Get completion from OpenAI with retry logic."""
return client.chat.completions.create(
model=model,
messages=messages,
tools=tools,
tool_choice="auto",
temperature=1.0,
top_p=1.0,
)
def handle_tool_calls(resp_msg, tool_map: dict, gh: GitHubIntegration, msgs: dict, dry_run: bool = False, yolo: bool = False) -> list:
"""Process tool calls from the AI response and return results for the assistant."""
if not resp_msg.tool_calls:
return []
# Declarative mapping of tool names to functions
tool_map = {
"create_branch": gh.create_branch if gh else None,
"create_pull_request": gh.create_pull_request if gh else None,
"get_pull_request_comments": gh.get_pull_request_comments if gh else None,
"update_roadmap": update_roadmap,
"list_files": list_files,
"run_tests": run_tests,
"run_lint": run_lint,
"run_docker_task": run_docker_task,
}
# Tools that modify state and should be skipped in dry-run
modifying_tools = {"create_branch", "create_pull_request", "update_roadmap"}
modifying_tools = {"create_branch", "create_pull_request", "update_roadmap", "write_file", "edit_file"}
tool_results = []
for tool_call in resp_msg.tool_calls:
@@ -140,7 +216,7 @@ def handle_tool_calls(resp_msg, gh: GitHubIntegration, msgs: dict, dry_run: bool
if not yolo:
confirm = input(msgs.get("confirm_tool_execution", "Do you want to execute {name} with {args}? [y/N]: ").format(name=function_name, args=args))
if confirm.lower() != 'y':
print(msgs.get("info_tool_skipped", "Skipping tool: {name}").format(name=function_name))
logger.info(msgs.get("info_tool_skipped", "Skipping tool: {name}").format(name=function_name))
tool_results.append({
"tool_call_id": call_id,
"role": "tool",
@@ -150,7 +226,7 @@ def handle_tool_calls(resp_msg, gh: GitHubIntegration, msgs: dict, dry_run: bool
continue
if dry_run and function_name in modifying_tools:
print(msgs.get("info_dry_run_skipping", "DRY RUN: Skipping state-modifying tool {name}").format(name=function_name))
logger.info(msgs.get("info_dry_run_skipping", "DRY RUN: Skipping state-modifying tool {name}").format(name=function_name))
tool_results.append({
"tool_call_id": call_id,
"role": "tool",
@@ -159,7 +235,7 @@ def handle_tool_calls(resp_msg, gh: GitHubIntegration, msgs: dict, dry_run: bool
})
continue
print(msgs.get("info_executing_tool", "Executing tool: {name}").format(name=function_name))
logger.info(msgs.get("info_executing_tool", "Executing tool: {name}").format(name=function_name))
try:
result = handler(**args)
content = str(result) if result is not None else "Success"
@@ -167,9 +243,9 @@ def handle_tool_calls(resp_msg, gh: GitHubIntegration, msgs: dict, dry_run: bool
# Handle iterables (like PyGithub PaginatedList)
items = list(result)[:5]
content = "\n".join([f"- {item}" for item in items])
print(content)
logger.info(content)
elif result is not None:
print(result)
logger.info(result)
tool_results.append({
"tool_call_id": call_id,
@@ -179,7 +255,7 @@ def handle_tool_calls(resp_msg, gh: GitHubIntegration, msgs: dict, dry_run: bool
})
except Exception as e:
error_msg = f"Error executing {function_name}: {e}"
print(error_msg)
logger.error(error_msg)
tool_results.append({
"tool_call_id": call_id,
"role": "tool",
@@ -188,7 +264,7 @@ def handle_tool_calls(resp_msg, gh: GitHubIntegration, msgs: dict, dry_run: bool
})
else:
msg = msgs.get("error_tool_not_found", "Tool {name} not found or unavailable.").format(name=function_name)
print(msg)
logger.error(msg)
tool_results.append({
"tool_call_id": call_id,
"role": "tool",
@@ -204,12 +280,18 @@ def main():
parser.add_argument("--dry-run", action="store_true", help="Do not execute state-modifying tools.")
parser.add_argument("--yolo", action="store_true", help="Execute tools without confirmation.")
parser.add_argument("--once", action="store_true", help="Run a single full iteration (AI -> Tool -> AI).")
parser.add_argument("--web", action="store_true", help="Start the Web UI.")
args = parser.parse_args()
if args.web:
logger.info("Starting Web UI...")
start_web_ui()
return
msgs = load_messages()
token = os.environ.get("GITHUB_TOKEN")
if not token:
print(msgs["error_github_token_missing"])
logger.error(msgs["error_github_token_missing"])
return
# Initialize GitHub Integration
@@ -217,9 +299,9 @@ def main():
try:
repo_name = get_repo_name_from_env()
gh = GitHubIntegration(token, repo_name)
print(msgs["info_integrated_repo"].format(repo_name=repo_name))
logger.info(msgs["info_integrated_repo"].format(repo_name=repo_name))
except Exception as e: # pylint: disable=broad-exception-caught
print(msgs["warn_github_init_failed"].format(error=e))
logger.warning(msgs["warn_github_init_failed"].format(error=e))
client = OpenAI(
base_url=os.environ.get("GITHUB_MODELS_ENDPOINT", DEFAULT_ENDPOINT),
@@ -233,6 +315,24 @@ def main():
with open(tools_path, "r", encoding="utf-8") as f:
tools = json.load(f)
# Declarative mapping of tool names to functions
tool_map = {
"create_branch": gh.create_branch if gh else None,
"create_pull_request": gh.create_pull_request if gh else None,
"get_pull_request_comments": gh.get_pull_request_comments if gh else None,
"update_roadmap": update_roadmap,
"list_files": list_files,
"run_tests": run_tests,
"run_lint": run_lint,
"run_docker_task": run_docker_task,
"read_file": read_file,
"write_file": write_file,
"edit_file": edit_file,
}
# Load plugins and update tool_map and tools list
load_plugins(tool_map, tools)
# Add SDLC Context if available
sdlc_context_val = get_sdlc_context(gh, msgs)
@@ -248,44 +348,35 @@ def main():
# Add runtime request
messages.append({"role": "user", "content": msgs["user_next_step"]})
response = client.chat.completions.create(
model=os.environ.get("LLM_MODEL", prompt.get("model", "openai/gpt-4.1")),
messages=messages,
tools=tools,
tool_choice="auto",
temperature=1.0,
top_p=1.0,
)
model_name = os.environ.get("LLM_MODEL", prompt.get("model", DEFAULT_MODEL))
response = get_completion(client, model_name, messages, tools)
resp_msg = response.choices[0].message
print(
logger.info(
resp_msg.content
if resp_msg.content
else msgs["info_tool_call_requested"]
)
# Handle tool calls
tool_results = handle_tool_calls(resp_msg, gh, msgs, dry_run=args.dry_run, yolo=args.yolo)
tool_results = handle_tool_calls(resp_msg, tool_map, gh, msgs, dry_run=args.dry_run, yolo=args.yolo)
if args.once and tool_results:
print(msgs.get("info_second_pass", "Performing second pass with tool results..."))
logger.info(msgs.get("info_second_pass", "Performing second pass with tool results..."))
messages.append(resp_msg)
messages.extend(tool_results)
response = client.chat.completions.create(
model=os.environ.get("LLM_MODEL", prompt.get("model", "openai/gpt-4.1")),
messages=messages,
tools=tools,
tool_choice="auto",
temperature=1.0,
top_p=1.0,
)
response = get_completion(client, model_name, messages, tools)
final_msg = response.choices[0].message
print(final_msg.content if final_msg.content else msgs["info_tool_call_requested"])
logger.info(final_msg.content if final_msg.content else msgs["info_tool_call_requested"])
# Notify about task completion
notify_all(f"AutoMetabuilder task complete: {final_msg.content[:100]}...")
# In a multi-iteration loop, we would call handle_tool_calls again here.
# For --once, we just do one more pass.
if final_msg.tool_calls:
handle_tool_calls(final_msg, gh, msgs, dry_run=args.dry_run, yolo=args.yolo)
handle_tool_calls(final_msg, tool_map, gh, msgs, dry_run=args.dry_run, yolo=args.yolo)
if __name__ == "__main__":

View File

View File

@@ -0,0 +1,15 @@
def hello_plugin():
"""A simple plugin that returns a greeting."""
return "Hello from the plugin system!"
hello_plugin.tool_metadata = {
"type": "function",
"function": {
"name": "hello_plugin",
"description": "A simple greeting from the plugin system.",
"parameters": {
"type": "object",
"properties": {}
}
}
}

View File

@@ -174,5 +174,77 @@
]
}
}
},
{
"type": "function",
"function": {
"name": "read_file",
"description": "Read the content of a file",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "The path to the file to read"
}
},
"required": [
"path"
]
}
}
},
{
"type": "function",
"function": {
"name": "write_file",
"description": "Write content to a file",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "The path to the file to write"
},
"content": {
"type": "string",
"description": "The content to write to the file"
}
},
"required": [
"path",
"content"
]
}
}
},
{
"type": "function",
"function": {
"name": "edit_file",
"description": "Edit a file using search and replace",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "The path to the file to edit"
},
"search": {
"type": "string",
"description": "The exact string to search for"
},
"replace": {
"type": "string",
"description": "The string to replace it with"
}
},
"required": [
"path",
"search",
"replace"
]
}
}
}
]

View File

View File

@@ -0,0 +1,151 @@
import os
import json
import secrets
from fastapi import FastAPI, Request, Form, BackgroundTasks, Depends, HTTPException, status
from fastapi.responses import HTMLResponse, RedirectResponse
from fastapi.templating import Jinja2Templates
from fastapi.security import HTTPBasic, HTTPBasicCredentials
from dotenv import load_dotenv, set_key
import subprocess
import sys
app = FastAPI()
security = HTTPBasic()
def get_current_user(credentials: HTTPBasicCredentials = Depends(security)):
correct_username = os.environ.get("WEB_USER")
correct_password = os.environ.get("WEB_PASSWORD")
# If no credentials are set in env, allow access (for backward compatibility/easier setup)
if not correct_username or not correct_password:
return credentials.username
is_correct_username = secrets.compare_digest(credentials.username, correct_username)
is_correct_password = secrets.compare_digest(credentials.password, correct_password)
if not (is_correct_username and is_correct_password):
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect email or password",
headers={"WWW-Authenticate": "Basic"},
)
return credentials.username
# Global variable to track if a bot is running
bot_process = None
# Setup templates
templates_dir = os.path.join(os.path.dirname(__file__), "templates")
templates = Jinja2Templates(directory=templates_dir)
def run_bot_task():
global bot_process
try:
# Run main.py as a subprocess with --yolo --once
cmd = [sys.executable, "-m", "autometabuilder.main", "--yolo", "--once"]
bot_process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
bot_process.wait()
finally:
bot_process = None
def get_recent_logs(n=50):
log_file = "autometabuilder.log"
if not os.path.exists(log_file):
return "No logs found."
with open(log_file, "r") as f:
lines = f.readlines()
return "".join(lines[-n:])
def get_env_vars():
env_path = ".env"
if not os.path.exists(env_path):
return {}
with open(env_path, "r") as f:
lines = f.readlines()
env_vars = {}
for line in lines:
if "=" in line and not line.startswith("#"):
key, value = line.strip().split("=", 1)
env_vars[key] = value
return env_vars
def get_translations():
pkg_dir = os.path.dirname(os.path.dirname(__file__))
files = [f for f in os.listdir(pkg_dir) if f.startswith("messages_") and f.endswith(".json")]
translations = {}
for f in files:
lang = f[len("messages_"):-len(".json")]
translations[lang] = f
return translations
def get_prompt_content():
prompt_path = os.environ.get("PROMPT_PATH", "prompt.yml")
if not os.path.exists(prompt_path):
return ""
with open(prompt_path, "r", encoding="utf-8") as f:
return f.read()
@app.get("/", response_class=HTMLResponse)
async def read_item(request: Request):
logs = get_recent_logs()
env_vars = get_env_vars()
translations = get_translations()
prompt_content = get_prompt_content()
is_running = bot_process is not None
return templates.TemplateResponse("index.html", {
"request": request,
"logs": logs,
"env_vars": env_vars,
"translations": translations,
"prompt_content": prompt_content,
"is_running": is_running
})
@app.post("/run")
async def run_bot(background_tasks: BackgroundTasks):
global bot_process
if bot_process is None:
background_tasks.add_task(run_bot_task)
return RedirectResponse(url="/", status_code=303)
@app.post("/prompt")
async def update_prompt(content: str = Form(...)):
prompt_path = os.environ.get("PROMPT_PATH", "prompt.yml")
with open(prompt_path, "w", encoding="utf-8") as f:
f.write(content)
return RedirectResponse(url="/", status_code=303)
@app.post("/settings")
async def update_settings(request: Request):
form_data = await request.form()
env_path = ".env"
for key, value in form_data.items():
if key.startswith("env_"):
env_key = key[4:]
set_key(env_path, env_key, value)
# Handle new setting
new_key = form_data.get("new_env_key")
new_value = form_data.get("new_env_value")
if new_key and new_value:
set_key(env_path, new_key, new_value)
return RedirectResponse(url="/", status_code=303)
@app.post("/translations")
async def create_translation(lang: str = Form(...)):
pkg_dir = os.path.dirname(os.path.dirname(__file__))
en_path = os.path.join(pkg_dir, "messages_en.json")
new_path = os.path.join(pkg_dir, f"messages_{lang}.json")
if not os.path.exists(new_path):
with open(en_path, "r", encoding="utf-8") as f:
content = json.load(f)
with open(new_path, "w", encoding="utf-8") as f:
json.dump(content, f, indent=2)
return RedirectResponse(url="/", status_code=303)
def start_web_ui(host="0.0.0.0", port=8000):
import uvicorn
uvicorn.run(app, host=host, port=port)

View File

@@ -0,0 +1,84 @@
<!DOCTYPE html>
<html>
<head>
<title>AutoMetabuilder Dashboard</title>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css">
</head>
<body class="container mt-5">
<h1>AutoMetabuilder Dashboard</h1>
<div class="row mt-4">
<div class="col-md-8">
<h2>Recent Logs</h2>
<pre id="logs" class="bg-light p-3 border" style="max-height: 400px; overflow-y: scroll;">{{ logs }}</pre>
</div>
<div class="col-md-4">
<h2>System Status</h2>
<p>Status:
{% if is_running %}
<span class="badge bg-warning text-dark">Bot Running...</span>
{% else %}
<span class="badge bg-success">Idle</span>
{% endif %}
</p>
<form action="/run" method="post" class="mt-2">
<button type="submit" class="btn btn-danger w-100" {% if is_running %}disabled{% endif %}>
Run Bot One Iteration
</button>
</form>
<h2 class="mt-4">Translations</h2>
<ul class="list-group">
{% for lang, file in translations.items() %}
<li class="list-group-item">{{ lang }} ({{ file }})</li>
{% endfor %}
</ul>
<form action="/translations" method="post" class="mt-2">
<div class="input-group">
<input type="text" name="lang" class="form-control" placeholder="New lang code (e.g. de)">
<button class="btn btn-primary" type="submit">Add</button>
</div>
</form>
</div>
</div>
<div class="row mt-4">
<div class="col-md-6">
<h2>Settings (.env)</h2>
<form action="/settings" method="post">
<table class="table table-bordered">
<thead>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
</thead>
<tbody>
{% for key, value in env_vars.items() %}
<tr>
<td>{{ key }}</td>
<td>
<input type="text" name="env_{{ key }}" value="{{ value }}" class="form-control">
</td>
</tr>
{% endfor %}
<tr>
<td><input type="text" name="new_env_key" class="form-control" placeholder="NEW_KEY"></td>
<td><input type="text" name="new_env_value" class="form-control" placeholder="New Value"></td>
</tr>
</tbody>
</table>
<button type="submit" class="btn btn-success">Save Settings</button>
</form>
</div>
<div class="col-md-6">
<h2>Prompt (prompt.yml)</h2>
<form action="/prompt" method="post">
<textarea name="content" class="form-control" rows="15">{{ prompt_content }}</textarea>
<button type="submit" class="btn btn-success mt-2">Save Prompt</button>
</form>
</div>
</div>
</body>
</html>