Mirror of https://github.com/johndoe6345789/metabuilder.git (synced 2026-04-24 13:54:57 +00:00)
refactor(deployment): remove 10 redundant shell scripts replaced by Python CLI

All deployment commands now go through deployment.py.

Deleted: build-base-images.sh, build-apps.sh, build-testcontainers.sh,
deploy.sh, start-stack.sh, release.sh, nexus-ci-init.sh, push-to-nexus.sh,
populate-nexus.sh, publish-npm-patches.sh.

Kept nexus-init.sh and artifactory-init.sh (Docker container entrypoints).

Updated all references in CLAUDE.md, README.md, AGENTS.md, ROADMAP.md,
deployment docs, Dockerfiles, and docker-compose comments.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
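The script-to-CLI migration described above can be summarized as a lookup table. This is a hypothetical helper of my own (not code from the repo); the mappings themselves are taken from the doc updates in this commit:

```python
# Hypothetical mapping: each removed script and the deployment.py command
# the updated docs in this commit point to instead.
SCRIPT_TO_CLI = {
    "build-base-images.sh":    "python3 deployment.py build base",
    "build-apps.sh":           "python3 deployment.py build apps",
    "build-testcontainers.sh": "python3 deployment.py build testcontainers",
    "deploy.sh":               "python3 deployment.py deploy",
    "start-stack.sh":          "python3 deployment.py stack up",
    "release.sh":              "python3 deployment.py release",
    "nexus-ci-init.sh":        "python3 deployment.py nexus init --ci",
    "push-to-nexus.sh":        "python3 deployment.py nexus push",
    "populate-nexus.sh":       "python3 deployment.py nexus populate",
    "publish-npm-patches.sh":  "python3 deployment.py npm publish-patches",
}

def migrate(old_command: str) -> str:
    """Translate an old script invocation, e.g. './build-apps.sh --force dbal',
    into the equivalent deployment.py invocation. Flags and positional args
    carry over unchanged for the build/deploy commands shown in this diff."""
    script, *args = old_command.split()
    cli = SCRIPT_TO_CLI[script.lstrip("./")]
    return " ".join([cli, *args]) if args else cli
```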
@@ -51,7 +51,7 @@ POST /pastebin/pastebin/User
 | `frontends/pastebin/backend/app.py` | Flask JWT auth + Python runner |
 | `frontends/pastebin/src/` | Next.js React app |
 | `deployment/docker-compose.stack.yml` | Full stack compose |
-| `deployment/build-apps.sh` | Build + deploy helper |
+| `deployment/deployment.py` | Python CLI for all build/deploy/stack commands |
 
 ---
 
@@ -79,7 +79,7 @@ docker logs -f metabuilder-pastebin-backend
 cd deployment
 
 # Full rebuild + restart
-./build-apps.sh --force dbal pastebin
+python3 deployment.py build apps --force dbal pastebin
 docker compose -f docker-compose.stack.yml up -d
 
 # Flask backend (separate from Next.js)
@@ -163,7 +163,7 @@ Context variable resolution: `"${var_name}"`, `"${event.userId}"`, `"prefix-${na
 3. **Seed data in `dbal/shared/seeds/`** — never hardcode in Flask Python or C++.
 4. **No hardcoded entity names** — loaded from schema JSON.
 5. **Call `ensureClient()` before any DB op in `registerRoutes()`** — `dbal_client_` starts null.
-6. **`build-apps.sh pastebin` ≠ Flask** — that only rebuilds Next.js. Flask needs `docker compose build pastebin-backend`.
+6. **`deployment.py build apps pastebin` ≠ Flask** — that only rebuilds Next.js. Flask needs `docker compose build pastebin-backend`.
 
 ---
 
@@ -18,7 +18,7 @@ All documentation is executable code. No separate markdown docs.
 ./gameengine/gameengine.py --help   # Game engine
 ./postgres/postgres.py --help       # PostgreSQL dashboard
 ./mojo/mojo.py --help               # Mojo compiler
-./deployment/build-base-images.sh --list   # Docker base images
+cd deployment && python3 deployment.py build base --list   # Docker base images
 
 # Documentation (SQLite3 + FTS5 full-text search)
 cd txt && python3 reports.py search "query"   # 212 reports
@@ -227,13 +227,13 @@ Frontends (CLI C++ | Qt6 QML | Next.js React)
 ```bash
 npm run dev / build / typecheck / lint / test:e2e
 npm run build --workspaces
-cd deployment && ./build-base-images.sh   # Build Docker base images
+cd deployment && python3 deployment.py build base   # Build Docker base images
 
 # Deploy full stack
 cd deployment && docker compose -f docker-compose.stack.yml up -d
 
 # Build & deploy specific apps
-./build-apps.sh --force dbal pastebin   # Next.js frontend only
+python3 deployment.py build apps --force dbal pastebin   # Next.js frontend only
 docker compose -f docker-compose.stack.yml build pastebin-backend   # Flask backend
 
 # DBAL logs / seed verification
@@ -336,7 +336,7 @@ Multi-version peer deps. React 18/19, TypeScript 5.9.3, Next.js 14-16, @reduxjs/
 | nlohmann/json iterators | Use `it.value()` not `it->second` (std::map syntax fails) |
 | dbal-init volume stale | Rebuild with `docker compose build dbal-init` when schema file extensions change |
 | `.dockerignore` excludes `dbal/` | Whitelist specific subdirs: `!dbal/shared/seeds/database` |
-| `build-apps.sh pastebin` ≠ Flask backend | Use `docker compose build pastebin-backend` for Flask |
+| `deployment.py build apps pastebin` ≠ Flask backend | Use `docker compose build pastebin-backend` for Flask |
 | `ensureClient()` before startup DB ops | `dbal_client_` is null in `registerRoutes()` — must call `ensureClient()` first |
 | Seed data in Flask Python | NEVER — declarative seed data belongs in `dbal/shared/seeds/database/*.json` |
 | Werkzeug scrypt on macOS Python | Generate hashes inside running container: `docker exec metabuilder-pastebin-backend python3 -c "..."` |
@@ -35,10 +35,10 @@ cd deployment
 docker compose -f docker-compose.stack.yml up -d
 
 # Build & deploy a specific app
-./build-apps.sh --force dbal pastebin
+python3 deployment.py build apps --force dbal pastebin
 
 # Rebuild base images (rare)
-./build-base-images.sh
+python3 deployment.py build base
 ```
 
 ---
@@ -2562,7 +2562,7 @@ docker-compose -f deployment/docker-compose.production.yml up -d
 
 ```bash
 # Deploy everything (PostgreSQL, DBAL, Next.js, Media daemon, Redis, Nginx)
-./deployment/deploy.sh all --bootstrap
+cd deployment && python3 deployment.py deploy --all
 ```
 
 ### Cloud Platforms
@@ -4,7 +4,7 @@
 # Usage:
 #   cp .env.example .env
 #   # Edit values as needed
-#   ./start-stack.sh
+#   python3 deployment.py stack up
 #
 # All values below are defaults. Only override what you need to change.
 
@@ -2,10 +2,12 @@
 
 Build and deploy the full MetaBuilder stack locally using Docker.
 
+All commands go through a single Python CLI: `python3 deployment.py --help`
+
 ## Prerequisites
 
 - Docker Desktop with BuildKit enabled
-- Bash 4+ (macOS: `brew install bash`)
+- Python 3.9+
 - Add `localhost:5050` to Docker Desktop insecure registries:
   Settings → Docker Engine → `"insecure-registries": ["localhost:5050"]`
 
@@ -23,8 +25,8 @@ docker compose -f docker-compose.nexus.yml up -d
 Wait ~2 minutes for init containers to finish, then populate:
 
 ```bash
-./push-to-nexus.sh         # Docker images → Nexus
-./publish-npm-patches.sh   # Patched npm packages → Nexus
+python3 deployment.py nexus push            # Docker images → Nexus
+python3 deployment.py npm publish-patches   # Patched npm packages → Nexus
 conan remote add artifactory http://localhost:8092/artifactory/api/conan/conan-local
 ```
 
@@ -39,10 +41,10 @@ conan remote add artifactory http://localhost:8092/artifactory/api/conan/conan-l
 ### Step 2 — Build Base Images
 
 ```bash
-./build-base-images.sh             # Build all (skips existing)
-./build-base-images.sh --force     # Rebuild all
-./build-base-images.sh node-deps   # Build a specific image
-./build-base-images.sh --list      # List available images
+python3 deployment.py build base             # Build all (skips existing)
+python3 deployment.py build base --force     # Rebuild all
+python3 deployment.py build base node-deps   # Build a specific image
+python3 deployment.py build base --list      # List available images
 ```
 
 Build order (dependencies respected automatically):
@@ -57,19 +59,19 @@ Build order (dependencies respected automatically):
 ### Step 3 — Build App Images
 
 ```bash
-./build-apps.sh                # Build all (skips existing)
-./build-apps.sh --force        # Rebuild all
-./build-apps.sh workflowui     # Build specific app
-./build-apps.sh --sequential   # Lower RAM usage
+python3 deployment.py build apps                # Build all (skips existing)
+python3 deployment.py build apps --force        # Rebuild all
+python3 deployment.py build apps workflowui     # Build specific app
+python3 deployment.py build apps --sequential   # Lower RAM usage
 ```
 
 ### Step 4 — Start the Stack
 
 ```bash
-./start-stack.sh                # Core services
-./start-stack.sh --monitoring   # + Prometheus, Grafana, Loki
-./start-stack.sh --media        # + Media daemon, Icecast, HLS
-./start-stack.sh --all          # Everything
+python3 deployment.py stack up                # Core services
+python3 deployment.py stack up --monitoring   # + Prometheus, Grafana, Loki
+python3 deployment.py stack up --media        # + Media daemon, Icecast, HLS
+python3 deployment.py stack up --all          # Everything
 ```
 
 Portal: http://localhost (nginx welcome page with links to all apps)
@@ -77,9 +79,25 @@ Portal: http://localhost (nginx welcome page with links to all apps)
 ### Quick Deploy (rebuild + restart specific apps)
 
 ```bash
-./deploy.sh codegen            # Build and deploy codegen
-./deploy.sh codegen pastebin   # Multiple apps
-./deploy.sh --all              # All apps
+python3 deployment.py deploy codegen            # Build and deploy codegen
+python3 deployment.py deploy codegen pastebin   # Multiple apps
+python3 deployment.py deploy --all              # All apps
+```
+
+## CLI Command Reference
+
+```
+deployment.py build base [--force] [--list] [images...]
+deployment.py build apps [--force] [--sequential] [apps...]
+deployment.py build testcontainers [--skip-native] [--skip-sidecar]
+deployment.py deploy [apps...] [--all] [--no-cache]
+deployment.py stack up|down|build|logs|ps|clean [--monitoring] [--media] [--all]
+deployment.py release <app> [patch|minor|major|x.y.z]
+deployment.py nexus init [--ci]
+deployment.py nexus push [--tag TAG] [--src SRC] [--pull]
+deployment.py nexus populate [--skip-heavy]
+deployment.py npm publish-patches [--nexus] [--verdaccio]
+deployment.py artifactory init
 ```
 
 ## Compose Files
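The `deployment.py` subcommand surface documented in this diff (nested groups like `build base` and `stack up`) maps naturally onto argparse subparsers. A partial sketch under that assumption, covering only build/deploy/stack; the real deployment.py may be organized differently:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of part of the CLI surface from the command reference in this
    # diff; flag names follow the docs, the actual implementation may differ.
    p = argparse.ArgumentParser(prog="deployment.py")
    sub = p.add_subparsers(dest="group", required=True)

    # deployment.py build base|apps ...
    build = sub.add_parser("build").add_subparsers(dest="what", required=True)
    base = build.add_parser("base")
    base.add_argument("--force", action="store_true")
    base.add_argument("--list", action="store_true")
    base.add_argument("images", nargs="*")
    apps = build.add_parser("apps")
    apps.add_argument("--force", action="store_true")
    apps.add_argument("--sequential", action="store_true")
    apps.add_argument("apps", nargs="*")

    # deployment.py deploy [apps...] [--all] [--no-cache]
    deploy = sub.add_parser("deploy")
    deploy.add_argument("apps", nargs="*")
    deploy.add_argument("--all", action="store_true")
    deploy.add_argument("--no-cache", action="store_true")

    # deployment.py stack up|down|... [--monitoring] [--media] [--all]
    stack = sub.add_parser("stack")
    stack.add_argument("action",
                       choices=["up", "down", "build", "logs", "ps", "clean"])
    stack.add_argument("--monitoring", action="store_true")
    stack.add_argument("--media", action="store_true")
    stack.add_argument("--all", action="store_true")
    return p
```

Each parsed namespace can then be dispatched to a handler function, which is the usual pattern for replacing a family of shell scripts with one CLI entry point.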
@@ -91,22 +109,14 @@ Portal: http://localhost (nginx welcome page with links to all apps)
 | `docker-compose.test.yml` | Integration test services |
 | `docker-compose.smoke.yml` | Smoke test environment |
 
-## Scripts Reference
+## Remaining Shell Scripts
 
 | Script | Purpose |
 |--------|---------|
-| `build-base-images.sh` | Build base Docker images |
-| `build-apps.sh` | Build app Docker images |
-| `build-testcontainers.sh` | Build test container images |
-| `start-stack.sh` | Start the full stack |
-| `deploy.sh` | Quick build + deploy for specific apps |
-| `push-to-nexus.sh` | Push Docker images to Nexus |
-| `publish-npm-patches.sh` | Publish patched npm packages to Nexus |
-| `populate-nexus.sh` | Populate Nexus with all artifacts |
-| `nexus-init.sh` | Nexus repository setup (runs automatically) |
-| `nexus-ci-init.sh` | Nexus setup for CI environments |
-| `artifactory-init.sh` | Artifactory repository setup (runs automatically) |
-| `release.sh` | Release workflow |
+| `nexus-init.sh` | Nexus repository setup (mounted into Docker container) |
+| `artifactory-init.sh` | Artifactory repository setup (mounted into Docker container) |
+
+These are container entrypoints used by `docker-compose.nexus.yml`, not user-facing scripts.
 
 ## Troubleshooting
 
@@ -111,11 +111,11 @@ RUN npm config set fetch-retries 5 \
     echo ""; \
     echo " Nexus (recommended for desktops):"; \
     echo "   cd deployment && docker compose -f docker-compose.nexus.yml up -d"; \
-    echo "   ./publish-npm-patches.sh"; \
+    echo "   python3 deployment.py npm publish-patches"; \
     echo ""; \
     echo " Verdaccio (lightweight, for CI runners):"; \
     echo "   npx verdaccio --config deployment/verdaccio.yaml &"; \
-    echo "   ./publish-npm-patches.sh --verdaccio"; \
+    echo "   python3 deployment.py npm publish-patches --verdaccio"; \
     echo ""; \
     echo " Then rebuild this image."; \
     echo "========================================================"; \
@@ -1,202 +0,0 @@
-#!/usr/bin/env bash
-# Build Docker images for all MetaBuilder web applications.
-# Uses multi-stage Dockerfiles — no local pre-building required.
-#
-# Usage:
-#   ./build-apps.sh                Build missing app images (skip existing)
-#   ./build-apps.sh --force        Rebuild all app images
-#   ./build-apps.sh workflowui     Build specific app image
-#   ./build-apps.sh --sequential   Build sequentially (less RAM)
-
-set -e
-
-RED='\033[0;31m'
-GREEN='\033[0;32m'
-YELLOW='\033[1;33m'
-NC='\033[0m'
-
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-COMPOSE_FILE="$SCRIPT_DIR/docker-compose.stack.yml"
-
-# Ensure base-node-deps exists — all frontend Dockerfiles depend on it.
-# Other base images (apt, conan, pip, android-sdk) are only needed for
-# C++ daemons, dev containers, and workflow plugins.
-ensure_node_deps_base() {
-  if docker image inspect "metabuilder/base-node-deps:latest" &>/dev/null; then
-    echo -e "${GREEN}Base image metabuilder/base-node-deps:latest exists${NC}"
-    return 0
-  fi
-
-  echo -e "${YELLOW}Building metabuilder/base-node-deps (required by all Node.js frontends)...${NC}"
-  local REPO_ROOT
-  REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
-  docker build \
-    -f "$SCRIPT_DIR/base-images/Dockerfile.node-deps" \
-    -t metabuilder/base-node-deps:latest \
-    "$REPO_ROOT"
-
-  if [ $? -ne 0 ]; then
-    echo -e "${RED}Failed to build base-node-deps — cannot proceed with app builds${NC}"
-    exit 1
-  fi
-  echo -e "${GREEN}Built metabuilder/base-node-deps:latest${NC}"
-}
-
-# Check optional base images (warn only, don't block)
-check_optional_bases() {
-  local missing=()
-  local bases=(
-    "metabuilder/base-apt:latest"
-    "metabuilder/base-conan-deps:latest"
-    "metabuilder/base-pip-deps:latest"
-    "metabuilder/base-android-sdk:latest"
-  )
-  for img in "${bases[@]}"; do
-    if ! docker image inspect "$img" &>/dev/null; then
-      missing+=("$img")
-    fi
-  done
-  if [ ${#missing[@]} -gt 0 ]; then
-    echo -e "${YELLOW}Optional base images not built (C++ daemons, dev container):${NC}"
-    for img in "${missing[@]}"; do
-      echo "  - $img"
-    done
-    echo -e "${YELLOW}Build with:${NC} ./build-base-images.sh"
-    echo ""
-  fi
-}
-
-ensure_node_deps_base
-check_optional_bases
-
-PARALLEL=true
-FORCE=false
-TARGETS=()
-
-for arg in "$@"; do
-  case "$arg" in
-    --sequential) PARALLEL=false ;;
-    --force) FORCE=true ;;
-    *) TARGETS+=("$arg") ;;
-  esac
-done
-
-# Note: media-daemon excluded — C++ source not yet complete (WIP: tv, radio, retro gaming)
-ALL_APPS=(workflowui codegen pastebin postgres emailclient exploded-diagrams storybook frontend-app dbal)
-
-# Map friendly name to docker compose service name
-resolve_service() {
-  case "$1" in
-    workflowui) echo "workflowui" ;;
-    codegen) echo "codegen" ;;
-    pastebin) echo "pastebin" ;;
-    postgres) echo "postgres-dashboard" ;;
-    emailclient) echo "emailclient-app" ;;
-    exploded-diagrams) echo "exploded-diagrams" ;;
-    storybook) echo "storybook" ;;
-    frontend-app) echo "frontend-app" ;;
-    dbal) echo "dbal" ;;
-    *) echo "" ;;
-  esac
-}
-
-# If no targets specified, build all
-if [ ${#TARGETS[@]} -eq 0 ]; then
-  TARGETS=("${ALL_APPS[@]}")
-fi
-
-# Resolve service names
-SERVICES=()
-for target in "${TARGETS[@]}"; do
-  service="$(resolve_service "$target")"
-  if [ -z "$service" ]; then
-    echo -e "${RED}Unknown target: $target${NC}"
-    echo "Available: ${ALL_APPS[*]}"
-    exit 1
-  fi
-  SERVICES+=("$service")
-done
-
-# Skip services whose images already exist (unless --force)
-if [[ "$FORCE" != "true" ]]; then
-  NEEDS_BUILD=()
-  NEEDS_BUILD_NAMES=()
-  for i in "${!TARGETS[@]}"; do
-    target="${TARGETS[$i]}"
-    service="${SERVICES[$i]}"
-    img="deployment-${service}"
-    if docker image inspect "$img" &>/dev/null; then
-      echo -e "${GREEN}Skipping $target${NC} — image $img already exists (use --force to rebuild)"
-    else
-      NEEDS_BUILD_NAMES+=("$target")
-      NEEDS_BUILD+=("$service")
-    fi
-  done
-  if [ ${#NEEDS_BUILD[@]} -eq 0 ]; then
-    echo ""
-    echo -e "${GREEN}All app images already built! Use --force to rebuild.${NC}"
-    exit 0
-  fi
-  TARGETS=("${NEEDS_BUILD_NAMES[@]}")
-  SERVICES=("${NEEDS_BUILD[@]}")
-fi
-
-echo -e "${YELLOW}Building: ${TARGETS[*]}${NC}"
-echo ""
-
-# Pre-pull base images that app Dockerfiles depend on (with retry for flaky connections)
-echo -e "${YELLOW}Pre-pulling base images for app builds...${NC}"
-for img in "node:20-alpine" "node:22-alpine" "python:3.11-slim" "python:3.12-slim" "alpine:3.19"; do
-  if ! docker image inspect "$img" &>/dev/null; then
-    echo "  Pulling $img..."
-    for i in 1 2 3 4 5; do
-      docker pull "$img" && break \
-        || (echo "  Retry $i/5..." && sleep $((i * 10)))
-    done
-  fi
-done
-echo ""
-
-MAX_BUILD_ATTEMPTS=5
-BUILD_ATTEMPT=0
-BUILD_OK=false
-
-while [ $BUILD_ATTEMPT -lt $MAX_BUILD_ATTEMPTS ]; do
-  BUILD_ATTEMPT=$((BUILD_ATTEMPT + 1))
-  [ $BUILD_ATTEMPT -gt 1 ] && echo -e "${YELLOW}Build attempt $BUILD_ATTEMPT/$MAX_BUILD_ATTEMPTS...${NC}"
-
-  if [ "$PARALLEL" = true ]; then
-    echo -e "${YELLOW}Parallel build (uses more RAM)...${NC}"
-    docker compose -f "$COMPOSE_FILE" build --parallel "${SERVICES[@]}" && BUILD_OK=true && break
-  else
-    # Build each service individually to avoid bandwidth contention
-    ALL_OK=true
-    for svc in "${SERVICES[@]}"; do
-      echo -e "${YELLOW}Building $svc...${NC}"
-      if ! docker compose -f "$COMPOSE_FILE" build "$svc"; then
-        echo -e "${RED}Failed: $svc${NC}"
-        ALL_OK=false
-        break
-      fi
-      echo -e "${GREEN}Done: $svc${NC}"
-    done
-    [ "$ALL_OK" = true ] && BUILD_OK=true && break
-  fi
-
-  if [ $BUILD_ATTEMPT -lt $MAX_BUILD_ATTEMPTS ]; then
-    WAIT=$(( BUILD_ATTEMPT * 10 ))
-    echo -e "${YELLOW}Build failed (attempt $BUILD_ATTEMPT/$MAX_BUILD_ATTEMPTS), retrying in ${WAIT}s...${NC}"
-    sleep $WAIT
-  fi
-done
-
-if [ "$BUILD_OK" != "true" ]; then
-  echo -e "${RED}Build failed after $MAX_BUILD_ATTEMPTS attempts${NC}"
-  exit 1
-fi
-
-echo ""
-echo -e "${GREEN}Build complete!${NC}"
-echo ""
-echo "Start with: ./start-stack.sh"
-echo "Or: docker compose -f $COMPOSE_FILE up -d ${SERVICES[*]}"
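The deleted script's pull and build loops both use linear backoff (`sleep $((i * 10))` on the i-th failure). Since the replacement CLI is Python, a minimal sketch of the same pattern, with the sleep function injectable so it can be exercised without waiting:

```python
import time

def retry(fn, attempts=5, base_delay=10, sleep=time.sleep):
    """Linear-backoff retry mirroring the deleted script's loop:
    wait base_delay * attempt_number between failures, give up after
    `attempts` tries. `fn` returns True on success, False on failure."""
    for i in range(1, attempts + 1):
        if fn():
            return True
        if i < attempts:
            sleep(i * base_delay)  # 10s, 20s, 30s, ... like sleep $((i * 10))
    return False
```

Injecting `sleep` keeps the policy (how long to wait) separate from the mechanism (actually waiting), which is also what makes the retry logic unit-testable in a Python CLI where the old script could only be tested by running it.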
@@ -1,195 +0,0 @@
-#!/usr/bin/env bash
-# Build MetaBuilder base Docker images.
-#
-# These are built ONCE (or when dependency manifests change) and cached locally.
-# App image builds then have zero downloads — they just inherit from these bases.
-#
-# Build order matters:
-#   1. base-apt         (no deps)
-#   2. base-conan-deps  (needs base-apt)
-#   3. base-android-sdk (needs base-apt)
-#   4. base-node-deps   (standalone — node:20-alpine)
-#   5. base-pip-deps    (standalone — python:3.11-slim)
-#
-# Usage:
-#   ./build-base-images.sh            Build missing base images (skip existing)
-#   ./build-base-images.sh --force    Rebuild all base images
-#   ./build-base-images.sh apt node   Build specific images (skip if exist)
-#   ./build-base-images.sh --list     List available images
-
-# Require bash 4+ for associative arrays (macOS ships 3.2)
-if ((BASH_VERSINFO[0] < 4)); then
-  for candidate in /opt/homebrew/bin/bash /usr/local/bin/bash; do
-    if [[ -x "$candidate" ]] && "$candidate" -c '((BASH_VERSINFO[0]>=4))' 2>/dev/null; then
-      exec "$candidate" "$0" "$@"
-    fi
-  done
-  echo "Error: bash 4+ required (found bash $BASH_VERSION)"
-  echo "Install with: brew install bash"
-  exit 1
-fi
-
-set -e
-
-RED='\033[0;31m'
-GREEN='\033[0;32m'
-YELLOW='\033[1;33m'
-BLUE='\033[0;34m'
-NC='\033[0m'
-
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
-BASE_DIR="$SCRIPT_DIR/base-images"
-
-# ── Helpers ───────────────────────────────────────────────────────────────────
-
-log_info() { echo -e "${BLUE}[base]${NC} $*"; }
-log_ok()   { echo -e "${GREEN}[base]${NC} $*"; }
-log_warn() { echo -e "${YELLOW}[base]${NC} $*"; }
-log_err()  { echo -e "${RED}[base]${NC} $*"; }
-
-# Build one image with retry (handles flaky network during FROM pulls).
-build_with_retry() {
-  local tag="$1"
-  local dockerfile="$2"
-  local context="${3:-$PROJECT_ROOT}"
-  local max=5
-
-  log_info "Building $tag ..."
-  echo ""
-
-  for i in $(seq 1 $max); do
-    if docker build \
-      --network=host \
-      --file "$BASE_DIR/$dockerfile" \
-      --tag "$tag" \
-      --tag "${tag%:*}:$(date +%Y%m%d)" \
-      "$context"; then
-      echo ""
-      log_ok "$tag built successfully"
-      return 0
-    fi
-
-    if [ "$i" -lt "$max" ]; then
-      local wait=$(( i * 15 ))
-      log_warn "Build failed (attempt $i/$max), retrying in ${wait}s ..."
-      sleep "$wait"
-    fi
-  done
-
-  log_err "Failed to build $tag after $max attempts"
-  return 1
-}
-
-# ── Image definitions (order = build order) ───────────────────────────────────
-
-declare -A IMAGE_FILE=(
-  [apt]="Dockerfile.apt"
-  [conan-deps]="Dockerfile.conan-deps"
-  [node-deps]="Dockerfile.node-deps"
-  [pip-deps]="Dockerfile.pip-deps"
-  [android-sdk]="Dockerfile.android-sdk"
-  [devcontainer]="Dockerfile.devcontainer"
-)
-
-declare -A IMAGE_TAG=(
-  [apt]="metabuilder/base-apt:latest"
-  [conan-deps]="metabuilder/base-conan-deps:latest"
-  [node-deps]="metabuilder/base-node-deps:latest"
-  [pip-deps]="metabuilder/base-pip-deps:latest"
-  [android-sdk]="metabuilder/base-android-sdk:latest"
-  [devcontainer]="metabuilder/devcontainer:latest"
-)
-
-# Build context overrides (default: $PROJECT_ROOT).
-# Images that don't COPY project files can use a minimal context for speed.
-declare -A IMAGE_CONTEXT=(
-  [android-sdk]="$BASE_DIR"
-)
-
-# Build order respects dependencies:
-#   base-apt → conan-deps, android-sdk
-#   conan-deps + node-deps + pip-deps + android-sdk → devcontainer
-BUILD_ORDER=(apt conan-deps android-sdk node-deps pip-deps devcontainer)
-
-# ── Argument parsing ──────────────────────────────────────────────────────────
-
-if [[ "$1" == "--list" ]]; then
-  echo "Available base images:"
-  for name in "${BUILD_ORDER[@]}"; do
-    echo "  $name → ${IMAGE_TAG[$name]}"
-  done
-  exit 0
-fi
-
-FORCE=false
-TARGETS=()
-for arg in "$@"; do
-  if [[ "$arg" == "--force" ]]; then
-    FORCE=true
-  elif [[ -v IMAGE_FILE[$arg] ]]; then
-    TARGETS+=("$arg")
-  else
-    log_err "Unknown image: $arg"
-    echo "Available: ${BUILD_ORDER[*]}"
-    exit 1
-  fi
-done
-
-# Default: build all (in dependency order)
-if [ ${#TARGETS[@]} -eq 0 ]; then
-  TARGETS=("${BUILD_ORDER[@]}")
-fi
-
-# ── Build ─────────────────────────────────────────────────────────────────────
-
-echo ""
-echo -e "${BLUE}MetaBuilder Base Image Builder${NC}"
-echo -e "Building: ${TARGETS[*]}"
-echo ""
-
-FAILED=()
-SKIPPED=()
-for name in "${BUILD_ORDER[@]}"; do
-  # Skip if not in TARGETS
-  [[ " ${TARGETS[*]} " == *" $name "* ]] || continue
-
-  # Skip if image already exists (unless --force)
-  if [[ "$FORCE" != "true" ]] && docker image inspect "${IMAGE_TAG[$name]}" &>/dev/null; then
-    SKIPPED+=("$name")
-    log_ok "Skipping $name — ${IMAGE_TAG[$name]} already exists (use --force to rebuild)"
-    echo ""
-    continue
-  fi
-
-  local_context="${IMAGE_CONTEXT[$name]:-}"
-  if ! build_with_retry "${IMAGE_TAG[$name]}" "${IMAGE_FILE[$name]}" ${local_context:+"$local_context"}; then
-    FAILED+=("$name")
-    log_warn "Continuing with remaining images..."
-  fi
-
-  echo ""
-done
-
-# ── Summary ───────────────────────────────────────────────────────────────────
-
-echo ""
-if [ ${#FAILED[@]} -eq 0 ]; then
-  echo -e "${GREEN}All base images built successfully!${NC}"
-  echo ""
-  echo "Built images:"
-  for name in "${BUILD_ORDER[@]}"; do
-    [[ " ${TARGETS[*]} " == *" $name "* ]] || continue
-    SIZE=$(docker image inspect "${IMAGE_TAG[$name]}" \
-      --format '{{.Size}}' 2>/dev/null \
-      | awk '{printf "%.1f GB", $1/1073741824}')
-    echo -e "  ${GREEN}✓${NC} ${IMAGE_TAG[$name]} ($SIZE)"
-  done
-  echo ""
-  echo "Now run: cd deployment && ./build-apps.sh"
-else
-  echo -e "${RED}Some images failed to build:${NC} ${FAILED[*]}"
-  echo "Re-run to retry only failed images:"
-  echo "  ./build-base-images.sh ${FAILED[*]}"
-  exit 1
-fi
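The core planning step of the deleted build-base-images.sh combines a fixed dependency order with skip-if-exists logic. A Python sketch of that step (image names taken from the script; how the real deployment.py organizes this is an assumption):

```python
# Dependency-respecting order from the deleted script's BUILD_ORDER array.
BUILD_ORDER = ["apt", "conan-deps", "android-sdk",
               "node-deps", "pip-deps", "devcontainer"]

def plan_builds(targets, existing, force=False, build_order=BUILD_ORDER):
    """Return the images to build, in dependency order.

    targets:  requested image names (any iterable)
    existing: names whose images already exist locally
    force:    rebuild even if the image exists (the script's --force flag)
    """
    wanted = set(targets)
    return [name for name in build_order
            if name in wanted and (force or name not in existing)]
```

Keeping the order as data and the skip rule as a pure function reproduces the script's behavior (requested-but-existing images are silently skipped unless forced) while making the plan inspectable before any `docker build` runs.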
@@ -1,105 +0,0 @@
-#!/usr/bin/env bash
-# build-testcontainers.sh — builds the testcontainers Conan packages and uploads to Nexus.
-#
-# Builds:
-#   - testcontainers-native/0.1.0 (C shared library, wraps testcontainers-go)
-#   - testcontainers-sidecar/0.1.0 (Go binary sidecar for DBAL integration tests)
-#
-# Prerequisites:
-#   - Go 1.21+ (brew install go)
-#   - Conan 2.x (pip install conan)
-#   - Nexus running (docker compose -f deployment/docker-compose.nexus.yml up -d)
-#   - Nexus init (./deployment/nexus-init.sh)
-#
-# Usage:
-#   ./deployment/build-testcontainers.sh [--skip-native] [--skip-sidecar]
-set -euo pipefail
-
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
-RECIPES_DIR="$REPO_ROOT/dbal/production/build-config/conan-recipes"
-
-NEXUS_URL="${NEXUS_URL:-http://localhost:8091/repository/conan-hosted/}"
-NEXUS_USER="${NEXUS_USER:-admin}"
-NEXUS_PASS="${NEXUS_PASS:-nexus}"
-
-SKIP_NATIVE=false
-SKIP_SIDECAR=false
-for arg in "$@"; do
-    case "$arg" in
-        --skip-native) SKIP_NATIVE=true ;;
-        --skip-sidecar) SKIP_SIDECAR=true ;;
-    esac
-done
-
-log() { echo "[build-testcontainers] $*"; }
-
-# ── Preflight checks ──────────────────────────────────────────────────────────
-log "Checking prerequisites..."
-go version || { echo "Go not found. Install: https://go.dev/dl/"; exit 1; }
-conan --version || { echo "Conan not found. Install: pip install conan"; exit 1; }
-
-# ── Configure Nexus as Conan remote ───────────────────────────────────────────
-log "Configuring Nexus Conan remote..."
-conan remote add nexus "$NEXUS_URL" --force 2>/dev/null || true
-conan remote login nexus "$NEXUS_USER" --password "$NEXUS_PASS"
-
-# Ensure Nexus is before conancenter in priority (for future installs)
-conan remote disable conancenter 2>/dev/null || true
-conan remote enable conancenter 2>/dev/null || true
-# Move nexus to index 0
-conan remote update nexus --index 0 2>/dev/null || true
-
-# ── Build + upload testcontainers-native ──────────────────────────────────────
-if [ "$SKIP_NATIVE" = false ]; then
-    log "Building testcontainers-native/0.1.0 (C shared library)..."
-    log " Requires: Go + CMake + Docker"
-    conan create "$RECIPES_DIR/testcontainers-native" \
-        -s build_type=Release \
-        -s compiler.cppstd=20 \
-        --build=missing
-
-    log "Uploading testcontainers-native to Nexus..."
-    conan upload "testcontainers-native/0.1.0" --remote nexus --confirm
-    log "testcontainers-native uploaded ✓"
-else
-    log "Skipping testcontainers-native (--skip-native)"
-fi
-
-# ── Build + upload testcontainers-sidecar ─────────────────────────────────────
-if [ "$SKIP_SIDECAR" = false ]; then
-    SIDECAR_SRC="$REPO_ROOT/dbal/testcontainers-sidecar"
-    log "Building testcontainers-sidecar/0.1.0 (Go binary)..."
-    log " Source: $SIDECAR_SRC"
-
-    # Export TESTCONTAINERS_SIDECAR_SRC so the Conan recipe's build() can find it
-    TESTCONTAINERS_SIDECAR_SRC="$SIDECAR_SRC" \
-    conan create "$RECIPES_DIR/testcontainers-sidecar" \
-        -s build_type=Release \
-        -s compiler.cppstd=20 \
-        --build=missing
-
-    log "Uploading testcontainers-sidecar to Nexus..."
-    conan upload "testcontainers-sidecar/0.1.0" --remote nexus --confirm
-    log "testcontainers-sidecar uploaded ✓"
-else
-    log "Skipping testcontainers-sidecar (--skip-sidecar)"
-fi
-
-log ""
-log "══════════════════════════════════════════"
-log " Conan packages in Nexus:"
-log " http://localhost:8091/#browse/browse:conan-hosted"
-log ""
-log " To use in DBAL tests:"
-log "   conan remote add nexus $NEXUS_URL --force"
-log "   conan remote login nexus $NEXUS_USER --password $NEXUS_PASS"
-log "   cd dbal/production/_build"
-log "   conan install ../build-config/conanfile.tests.py \\"
-log "     --output-folder=. --build=missing --remote nexus \\"
-log "     -s build_type=Debug -s compiler.cppstd=20"
-log "   cmake .. -DBUILD_DAEMON=OFF -DBUILD_INTEGRATION_TESTS=ON \\"
-log "     -DCMAKE_TOOLCHAIN_FILE=./build/Debug/generators/conan_toolchain.cmake -G Ninja"
-log "   cmake --build . --target dbal_integration_tests --parallel"
-log "   ctest -R dbal_integration_tests --output-on-failure -V"
-log "══════════════════════════════════════════"
@@ -1,128 +0,0 @@
-#!/usr/bin/env bash
-# Quick build + deploy for one or more apps.
-#
-# Combines build-apps.sh --force + docker compose up --force-recreate
-# into a single command for the most common workflow.
-#
-# Usage:
-#   ./deploy.sh codegen            Build and deploy codegen
-#   ./deploy.sh codegen pastebin   Build and deploy multiple apps
-#   ./deploy.sh --all              Build and deploy all apps
-#
-# This replaces the manual workflow of:
-#   docker compose build --no-cache codegen
-#   docker compose up -d --force-recreate codegen
-
-set -e
-
-RED='\033[0;31m'
-GREEN='\033[0;32m'
-YELLOW='\033[1;33m'
-BLUE='\033[0;34m'
-NC='\033[0m'
-
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-COMPOSE_FILE="$SCRIPT_DIR/docker-compose.stack.yml"
-
-ALL_APPS=(workflowui codegen pastebin postgres emailclient exploded-diagrams storybook frontend-app dbal)
-
-# Map friendly name → compose service name
-resolve_service() {
-    case "$1" in
-        workflowui) echo "workflowui" ;;
-        codegen) echo "codegen" ;;
-        pastebin) echo "pastebin" ;;
-        postgres) echo "postgres-dashboard" ;;
-        emailclient) echo "emailclient-app" ;;
-        exploded-diagrams) echo "exploded-diagrams" ;;
-        storybook) echo "storybook" ;;
-        frontend-app) echo "frontend-app" ;;
-        dbal) echo "dbal" ;;
-        *) echo "" ;;
-    esac
-}
-
-if [ $# -eq 0 ]; then
-    echo "Usage: ./deploy.sh <app> [app2 ...] | --all"
-    echo ""
-    echo "Available apps: ${ALL_APPS[*]}"
-    exit 1
-fi
-
-TARGETS=()
-NO_CACHE=false
-for arg in "$@"; do
-    case "$arg" in
-        --all) TARGETS=("${ALL_APPS[@]}") ;;
-        --no-cache) NO_CACHE=true ;;
-        *) TARGETS+=("$arg") ;;
-    esac
-done
-
-# Resolve service names
-SERVICES=()
-for target in "${TARGETS[@]}"; do
-    service="$(resolve_service "$target")"
-    if [ -z "$service" ]; then
-        echo -e "${RED}Unknown app: $target${NC}"
-        echo "Available: ${ALL_APPS[*]}"
-        exit 1
-    fi
-    SERVICES+=("$service")
-done
-
-echo -e "${BLUE}═══════════════════════════════════════════${NC}"
-echo -e "${BLUE} Deploy: ${TARGETS[*]}${NC}"
-echo -e "${BLUE}═══════════════════════════════════════════${NC}"
-echo ""
-
-# Step 1: Build
-echo -e "${YELLOW}[1/3] Building...${NC}"
-BUILD_ARGS=()
-if [ "$NO_CACHE" = true ]; then
-    BUILD_ARGS+=("--no-cache")
-fi
-docker compose -f "$COMPOSE_FILE" build "${BUILD_ARGS[@]}" "${SERVICES[@]}"
-echo ""
-
-# Step 2: Recreate containers
-echo -e "${YELLOW}[2/3] Deploying...${NC}"
-docker compose -f "$COMPOSE_FILE" up -d --force-recreate "${SERVICES[@]}"
-echo ""
-
-# Step 3: Wait for health
-echo -e "${YELLOW}[3/3] Waiting for health checks...${NC}"
-HEALTHY=true
-for service in "${SERVICES[@]}"; do
-    container="metabuilder-${service}"
-    # Some services use different container names
-    case "$service" in
-        postgres-dashboard) container="metabuilder-postgres-dashboard" ;;
-        emailclient-app) container="metabuilder-emailclient-app" ;;
-    esac
-
-    echo -n " $service: "
-    for i in $(seq 1 30); do
-        status=$(docker inspect --format='{{.State.Health.Status}}' "$container" 2>/dev/null || echo "missing")
-        if [ "$status" = "healthy" ]; then
-            echo -e "${GREEN}healthy${NC}"
-            break
-        elif [ "$status" = "unhealthy" ]; then
-            echo -e "${RED}unhealthy${NC}"
-            HEALTHY=false
-            break
-        fi
-        sleep 2
-    done
-    if [ "$status" != "healthy" ] && [ "$status" != "unhealthy" ]; then
-        echo -e "${YELLOW}timeout (status: $status)${NC}"
-        HEALTHY=false
-    fi
-done
-echo ""
-
-if [ "$HEALTHY" = true ]; then
-    echo -e "${GREEN}✓ All services deployed and healthy${NC}"
-else
-    echo -e "${YELLOW}⚠ Some services are not healthy — check with: docker compose -f $COMPOSE_FILE ps${NC}"
-fi
@@ -19,8 +19,8 @@
 # Usage:
 #   docker compose -f docker-compose.nexus.yml up -d
 #   # Wait ~2 min for init containers to finish, then:
-#   ./push-to-nexus.sh                         # Docker images → Nexus
-#   ./publish-npm-patches.sh                   # Patched npm packages → Nexus
+#   python3 deployment.py nexus push           # Docker images → Nexus
+#   python3 deployment.py npm publish-patches  # Patched npm packages → Nexus
 #   conan remote add artifactory http://localhost:8092/artifactory/api/conan/conan-local
 #
 # URLs:
@@ -1,88 +0,0 @@
-#!/bin/sh
-# Lightweight Nexus initialisation for CI — npm repos only (no Docker, no Artifactory).
-# Full local dev setup uses nexus-init.sh via docker compose.
-set -e
-
-NEXUS_URL="${NEXUS_URL:-http://localhost:8091}"
-NEW_PASS="${NEXUS_ADMIN_NEW_PASS:-nexus}"
-PASS_FILE="/tmp/nexus-data/admin.password"
-
-log() { echo "[nexus-ci-init] $*"; }
-
-# ── Resolve admin password ──────────────────────────────────────────────────
-HTTP=$(curl -s -o /dev/null -w "%{http_code}" \
-  "$NEXUS_URL/service/rest/v1/status" -u "admin:$NEW_PASS")
-if [ "$HTTP" = "200" ]; then
-    log "Already initialised with password '$NEW_PASS'"
-elif [ -f "$PASS_FILE" ]; then
-    INIT_PASS=$(cat "$PASS_FILE")
-    log "First run: setting admin password..."
-    curl -sf -X PUT \
-      "$NEXUS_URL/service/rest/v1/security/users/admin/change-password" \
-      -u "admin:$INIT_PASS" -H "Content-Type: text/plain" -d "$NEW_PASS"
-    log "Admin password set"
-else
-    log "ERROR: cannot authenticate and no password file found"
-    exit 1
-fi
-
-AUTH="admin:$NEW_PASS"
-
-# ── Enable anonymous access ────────────────────────────────────────────────
-curl -sf -X PUT "$NEXUS_URL/service/rest/v1/security/anonymous" \
-  -u "$AUTH" -H "Content-Type: application/json" \
-  -d '{"enabled":true,"userId":"anonymous","realmName":"NexusAuthorizingRealm"}' || true
-log "Anonymous access enabled"
-
-# Enable npm token realm
-curl -sf -X PUT "$NEXUS_URL/service/rest/v1/security/realms/active" \
-  -u "$AUTH" -H "Content-Type: application/json" \
-  -d '["NexusAuthenticatingRealm","NpmToken"]' || true
-
-# ── npm-hosted (patched packages) ─────────────────────────────────────────
-HTTP=$(curl -s -o /dev/null -w "%{http_code}" -X POST \
-  "$NEXUS_URL/service/rest/v1/repositories/npm/hosted" \
-  -u "$AUTH" -H "Content-Type: application/json" -d '{
-    "name": "npm-hosted",
-    "online": true,
-    "storage": {"blobStoreName": "default", "strictContentTypeValidation": true, "writePolicy": "allow"}
-  }')
-case "$HTTP" in
-    201) log "npm-hosted repo created" ;;
-    400) log "npm-hosted repo already exists" ;;
-    *) log "ERROR creating npm-hosted: HTTP $HTTP"; exit 1 ;;
-esac
-
-# ── npm-proxy (npmjs.org cache) ───────────────────────────────────────────
-HTTP=$(curl -s -o /dev/null -w "%{http_code}" -X POST \
-  "$NEXUS_URL/service/rest/v1/repositories/npm/proxy" \
-  -u "$AUTH" -H "Content-Type: application/json" -d '{
-    "name": "npm-proxy",
-    "online": true,
-    "storage": {"blobStoreName": "default", "strictContentTypeValidation": true},
-    "proxy": {"remoteUrl": "https://registry.npmjs.org", "contentMaxAge": 1440, "metadataMaxAge": 1440},
-    "httpClient": {"blocked": false, "autoBlock": true},
-    "negativeCache": {"enabled": true, "timeToLive": 1440}
-  }')
-case "$HTTP" in
-    201) log "npm-proxy repo created" ;;
-    400) log "npm-proxy repo already exists" ;;
-    *) log "ERROR creating npm-proxy: HTTP $HTTP"; exit 1 ;;
-esac
-
-# ── npm-group (combines hosted + proxy) ──────────────────────────────────
-HTTP=$(curl -s -o /dev/null -w "%{http_code}" -X POST \
-  "$NEXUS_URL/service/rest/v1/repositories/npm/group" \
-  -u "$AUTH" -H "Content-Type: application/json" -d '{
-    "name": "npm-group",
-    "online": true,
-    "storage": {"blobStoreName": "default", "strictContentTypeValidation": true},
-    "group": {"memberNames": ["npm-hosted", "npm-proxy"]}
-  }')
-case "$HTTP" in
-    201) log "npm-group repo created" ;;
-    400) log "npm-group repo already exists" ;;
-    *) log "ERROR creating npm-group: HTTP $HTTP"; exit 1 ;;
-esac
-
-log "Nexus CI init complete"
@@ -1,127 +0,0 @@
-#!/usr/bin/env bash
-# Push all locally-built MetaBuilder images to local Nexus registry.
-# Tags each image as both :main and :latest at localhost:5050/<owner>/<repo>/<name>.
-#
-# Usage:
-#   ./populate-nexus.sh [--skip-heavy]
-#
-#   --skip-heavy   skip base-conan-deps (32 GB), devcontainer (41 GB), media-daemon (3.5 GB)
-
-set -euo pipefail
-
-NEXUS="localhost:5050"
-SLUG="johndoe6345789/metabuilder-small"
-NEXUS_USER="admin"
-NEXUS_PASS="nexus"
-
-RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; BLUE='\033[0;34m'; NC='\033[0m'
-
-SKIP_HEAVY=false
-[[ "${1:-}" == "--skip-heavy" ]] && SKIP_HEAVY=true
-
-log() { echo -e "${BLUE}[nexus]${NC} $*"; }
-ok() { echo -e "${GREEN}[nexus]${NC} $*"; }
-warn() { echo -e "${YELLOW}[nexus]${NC} $*"; }
-err() { echo -e "${RED}[nexus]${NC} $*"; }
-
-# ── Login ────────────────────────────────────────────────────────────────────
-log "Logging in to $NEXUS..."
-echo "$NEXUS_PASS" | docker login "$NEXUS" -u "$NEXUS_USER" --password-stdin
-
-# ── Image map: local_tag → nexus_name ────────────────────────────────────────
-# Format: "local_image|nexus_name|size_hint"
-#
-# Base images (metabuilder/* prefix, built by build-base-images.sh)
-declare -a BASE_IMAGES=(
-    "metabuilder/base-apt:latest|base-apt|2.8GB"
-    "metabuilder/base-node-deps:latest|base-node-deps|5.5GB"
-    "metabuilder/base-pip-deps:latest|base-pip-deps|1.4GB"
-    "metabuilder/base-android-sdk:latest|base-android-sdk|6.1GB"
-)
-
-# Heavy base images — pushed last (or skipped with --skip-heavy)
-declare -a HEAVY_IMAGES=(
-    "metabuilder/base-conan-deps:latest|base-conan-deps|32GB"
-    "metabuilder/devcontainer:latest|devcontainer|41GB"
-)
-
-# App images (deployment-* prefix, built by docker-compose)
-declare -a APP_IMAGES=(
-    "deployment-dbal-init:latest|dbal-init|12MB"
-    "deployment-storybook:latest|storybook|112MB"
-    "deployment-nginx:latest|nginx|92MB"
-    "deployment-nginx-stream:latest|nginx-stream|92MB"
-    "deployment-pastebin-backend:latest|pastebin-backend|236MB"
-    "deployment-emailclient-app:latest|emailclient|350MB"
-    "deployment-email-service:latest|email-service|388MB"
-    "deployment-exploded-diagrams:latest|exploded-diagrams|315MB"
-    "deployment-pastebin:latest|pastebin|382MB"
-    "deployment-frontend-app:latest|frontend-app|361MB"
-    "deployment-workflowui:latest|workflowui|542MB"
-    "deployment-postgres-dashboard:latest|postgres-dashboard|508MB"
-    "deployment-smtp-relay:latest|smtp-relay|302MB"
-    "deployment-dbal:latest|dbal|3.0GB"
-    "deployment-codegen:latest|codegen|5.6GB"
-)
-
-declare -a HEAVY_APP_IMAGES=(
-    "deployment-media-daemon:latest|media-daemon|3.5GB"
-)
-
-# ── Push function ─────────────────────────────────────────────────────────────
-pushed=0; skipped=0; failed=0
-
-push_image() {
-    local src="$1" name="$2" size="$3"
-
-    # Check source exists
-    if ! docker image inspect "$src" &>/dev/null; then
-        warn "SKIP $name — $src not found locally"
-        ((skipped++)) || true
-        return
-    fi
-
-    local dst_main="$NEXUS/$SLUG/$name:main"
-    local dst_latest="$NEXUS/$SLUG/$name:latest"
-
-    log "Pushing $name ($size)..."
-    docker tag "$src" "$dst_main"
-    docker tag "$src" "$dst_latest"
-
-    if docker push "$dst_main" && docker push "$dst_latest"; then
-        ok " ✓ $name → :main + :latest"
-        ((pushed++)) || true
-    else
-        err " ✗ $name FAILED"
-        ((failed++)) || true
-    fi
-}
-
-# ── Execute ──────────────────────────────────────────────────────────────────
-echo ""
-log "Registry : $NEXUS"
-log "Slug : $SLUG"
-log "Skip heavy: $SKIP_HEAVY"
-echo ""
-
-for entry in "${BASE_IMAGES[@]}"; do IFS='|' read -r src name size <<< "$entry"; push_image "$src" "$name" "$size"; done
-for entry in "${APP_IMAGES[@]}"; do IFS='|' read -r src name size <<< "$entry"; push_image "$src" "$name" "$size"; done
-
-if $SKIP_HEAVY; then
-    warn "Skipping heavy images (--skip-heavy set):"
-    for entry in "${HEAVY_IMAGES[@]}" "${HEAVY_APP_IMAGES[@]}"; do
-        IFS='|' read -r src name size <<< "$entry"; warn " $name ($size)"
-    done
-else
-    log "--- Heavy images (this will take a while) ---"
-    for entry in "${HEAVY_APP_IMAGES[@]}"; do IFS='|' read -r src name size <<< "$entry"; push_image "$src" "$name" "$size"; done
-    for entry in "${HEAVY_IMAGES[@]}"; do IFS='|' read -r src name size <<< "$entry"; push_image "$src" "$name" "$size"; done
-fi
-
-echo ""
-echo -e "${GREEN}══════════════════════════════════════════${NC}"
-echo -e "${GREEN} Done. pushed=$pushed skipped=$skipped failed=$failed${NC}"
-echo -e "${GREEN}══════════════════════════════════════════${NC}"
-echo ""
-echo -e "Browse: http://localhost:8091 (admin/nexus → Browse → docker/local)"
-echo -e "Use: act push -j <job> --artifact-server-path /tmp/act-artifacts --env REGISTRY=localhost:5050"
@@ -1,166 +0,0 @@
-#!/usr/bin/env bash
-# Publish patched npm packages to a local registry (Nexus or Verdaccio).
-#
-# These packages fix vulnerabilities in bundled transitive dependencies
-# that npm overrides cannot reach (e.g. minimatch/tar inside the npm package).
-#
-# Prerequisites (choose one):
-#   Nexus:     docker compose -f docker-compose.nexus.yml up -d
-#   Verdaccio: npx verdaccio --config deployment/verdaccio.yaml &
-#
-# Usage:
-#   ./publish-npm-patches.sh              # auto-detect (Nexus first, Verdaccio fallback)
-#   ./publish-npm-patches.sh --verdaccio  # force Verdaccio on :4873
-#   ./publish-npm-patches.sh --nexus      # force Nexus on :8091
-
-set -euo pipefail
-
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
-
-RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; NC='\033[0m'
-
-# Parse flags
-USE_VERDACCIO=false
-USE_NEXUS=false
-for arg in "$@"; do
-    case "$arg" in
-        --verdaccio) USE_VERDACCIO=true ;;
-        --nexus) USE_NEXUS=true ;;
-    esac
-done
-
-# Auto-detect: try Nexus first, fall back to Verdaccio
-if ! $USE_VERDACCIO && ! $USE_NEXUS; then
-    if curl -sf http://localhost:8091/service/rest/v1/status -u admin:nexus >/dev/null 2>&1; then
-        USE_NEXUS=true
-    else
-        USE_VERDACCIO=true
-    fi
-fi
-
-NEXUS_URL="${NEXUS_URL:-http://localhost:8091}"
-NEXUS_NPM_HOSTED="${NEXUS_URL}/repository/npm-hosted/"
-NEXUS_USER="${NEXUS_USER:-admin}"
-NEXUS_PASS="${NEXUS_PASS:-nexus}"
-VERDACCIO_URL="${VERDACCIO_URL:-http://localhost:4873}"
-
-# Packages to patch — version must be the exact fixed version
-PATCHES=(
-    "minimatch@10.2.4"
-    "tar@7.5.11"
-)
-
-# Pre-patched local packages (tarball already in deployment/npm-patches/)
-# Format: "name@version:filename"
-LOCAL_PATCHES=(
-    "@esbuild-kit/core-utils@3.3.3-metabuilder.0:esbuild-kit-core-utils-3.3.3-metabuilder.0.tgz"
-)
-
-WORK_DIR=$(mktemp -d)
-trap 'rm -rf "$WORK_DIR"' EXIT
-
-log() { echo -e "${GREEN}[npm-patch]${NC} $*"; }
-warn() { echo -e "${YELLOW}[npm-patch]${NC} $*"; }
-fail() { echo -e "${RED}[npm-patch]${NC} $*"; exit 1; }
-
-NPM_RC="$WORK_DIR/.npmrc"
-
-if $USE_NEXUS; then
-    log "Using Nexus at $NEXUS_URL..."
-    HTTP=$(curl -s -o /dev/null -w "%{http_code}" "$NEXUS_URL/service/rest/v1/status" -u "$NEXUS_USER:$NEXUS_PASS")
-    if [ "$HTTP" != "200" ]; then
-        fail "Cannot reach Nexus (HTTP $HTTP). Is it running?"
-    fi
-    log "Nexus is up"
-    NEXUS_AUTH=$(echo -n "$NEXUS_USER:$NEXUS_PASS" | base64)
-    cat > "$NPM_RC" <<EOF
-//$(echo "$NEXUS_NPM_HOSTED" | sed 's|https\?://||'):_auth=$NEXUS_AUTH
-EOF
-    PUBLISH_REGISTRY="$NEXUS_NPM_HOSTED"
-    PUBLISH_ARGS="--userconfig $NPM_RC"
-else
-    log "Using Verdaccio at $VERDACCIO_URL..."
-    HTTP=$(curl -s -o /dev/null -w "%{http_code}" "$VERDACCIO_URL/-/ping")
-    if [ "$HTTP" != "200" ]; then
-        fail "Cannot reach Verdaccio (HTTP $HTTP). Start with: npx verdaccio --config deployment/verdaccio.yaml"
-    fi
-    log "Verdaccio is up"
-    cat > "$NPM_RC" <<EOF
-registry=$VERDACCIO_URL/
-//${VERDACCIO_URL#*://}/:_authToken=
-EOF
-    PUBLISH_REGISTRY="$VERDACCIO_URL"
-    PUBLISH_ARGS="--registry $VERDACCIO_URL --userconfig $NPM_RC"
-fi
-
-published=0
-skipped=0
-
-PATCHES_DIR="$SCRIPT_DIR/npm-patches"
-
-# Publish pre-patched local tarballs first
-for entry in "${LOCAL_PATCHES[@]}"; do
-    pkg_spec="${entry%%:*}"
-    tarball_name="${entry##*:}"
-    pkg_name="${pkg_spec%%@*}"
-    # handle scoped packages like @scope/name
-    if [[ "$pkg_spec" == @* ]]; then
-        pkg_name="$(echo "$pkg_spec" | cut -d@ -f1-2 | tr -d '@')"
-        pkg_name="@${pkg_name}"
-        pkg_version="$(echo "$pkg_spec" | cut -d@ -f3)"
-    else
-        pkg_version="${pkg_spec##*@}"
-    fi
-
-    log "Processing local patch $pkg_name@$pkg_version..."
-
-    TARBALL="$PATCHES_DIR/$tarball_name"
-    if [ ! -f "$TARBALL" ]; then
-        fail " Patched tarball not found: $TARBALL"
-    fi
-
-    log " Publishing $tarball_name..."
-    if npm publish "$TARBALL" $PUBLISH_ARGS --tag patched 2>&1 | grep -v "^npm notice"; then
-        log " ${GREEN}Published${NC} $pkg_name@$pkg_version"
-        ((published++)) || true
-    else
-        warn " $pkg_name@$pkg_version already exists or publish failed, skipping"
-        ((skipped++)) || true
-    fi
-done
-
-for pkg_spec in "${PATCHES[@]}"; do
-    pkg_name="${pkg_spec%%@*}"
-    pkg_version="${pkg_spec##*@}"
-
-    log "Processing $pkg_name@$pkg_version..."
-
-    # Check if already published to Nexus
-    # Download from npmjs.org and publish to local registry
-    cd "$WORK_DIR"
-    TARBALL=$(npm pack "$pkg_spec" 2>/dev/null)
-    if [ ! -f "$TARBALL" ]; then
-        fail " Failed to download $pkg_spec"
-    fi
-
-    log " Publishing $TARBALL..."
-    if npm publish "$TARBALL" $PUBLISH_ARGS --tag patched 2>&1 | grep -v "^npm notice"; then
-        log " ${GREEN}Published${NC} $pkg_name@$pkg_version"
-        ((published++)) || true
-    else
-        warn " $pkg_name@$pkg_version already exists or publish failed, skipping"
-        ((skipped++)) || true
-    fi
-
-    rm -f "$TARBALL"
-done
-
-echo ""
-log "Done. published=$published skipped=$skipped"
-echo ""
-if $USE_NEXUS; then
-    log "Nexus npm-group: ${NEXUS_URL}/repository/npm-group/"
-else
-    log "Verdaccio registry: $VERDACCIO_URL"
-fi
@@ -1,127 +0,0 @@
-#!/usr/bin/env bash
-# Push locally-built MetaBuilder images to the local Nexus registry.
-#
-# Re-tags ghcr.io/<owner>/<repo>/<image>:<tag> → localhost:5000/<owner>/<repo>/<image>:<tag>
-# so act can use REGISTRY=localhost:5000 and pull from Nexus instead of GHCR.
-#
-# Usage:
-#   ./push-to-nexus.sh               # push all images at current git ref
-#   ./push-to-nexus.sh --tag main    # push with specific tag
-#   ./push-to-nexus.sh --src ghcr.io/... \  # pull from remote first, then push
-#     --pull
-#
-# Prerequisites:
-#   - Nexus running: docker compose -f docker-compose.nexus.yml up -d
-#   - localhost:5000 in Docker Desktop insecure-registries
-#   - Images already built locally (or use --pull to fetch from GHCR first)

-set -euo pipefail
-
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-
-RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; NC='\033[0m'
-
-LOCAL_REGISTRY="localhost:5050"
-NEXUS_USER="admin"
-NEXUS_PASS="nexus"
-
-# Derive owner/repo from git remote (matches github.repository format)
-REPO_SLUG=$(git -C "$SCRIPT_DIR/.." remote get-url origin 2>/dev/null \
-  | sed -E 's|.*github\.com[:/]([^/]+/[^/]+)(\.git)?$|\1|' \
-  | tr '[:upper:]' '[:lower:]')
-REPO_SLUG="${REPO_SLUG:-johndoe6345789/metabuilder-small}"
-
-SOURCE_REGISTRY="ghcr.io"
-TAG=$(git -C "$SCRIPT_DIR/.." rev-parse --abbrev-ref HEAD 2>/dev/null || echo "main")
-DO_PULL=false
-
-# Parse args
-while [[ $# -gt 0 ]]; do
-    case "$1" in
-        --tag) TAG="$2"; shift 2 ;;
-        --src) SOURCE_REGISTRY="$2"; shift 2 ;;
-        --pull) DO_PULL=true; shift ;;
-        -h|--help)
-            grep '^#' "$0" | sed 's/^# //' | sed 's/^#//'
-            exit 0 ;;
-        *) echo "Unknown arg: $1"; exit 1 ;;
-    esac
-done
-
-# Base images built by container-base-tier1/2/3
-BASE_IMAGES=(
-    base-apt
-    base-node-deps
-    base-pip-deps
-    base-conan-deps
-    base-android-sdk
-    devcontainer
-)
-
-# App images built by container-build-apps
-APP_IMAGES=(
-    pastebin
-    workflowui
-    codegen
-    postgres-dashboard
-    emailclient
-    exploded-diagrams
-    storybook
-)
-
-ALL_IMAGES=("${BASE_IMAGES[@]}" "${APP_IMAGES[@]}")
-
-echo -e "${YELLOW}Registry:${NC} $LOCAL_REGISTRY"
-echo -e "${YELLOW}Slug:${NC} $REPO_SLUG"
-echo -e "${YELLOW}Tag:${NC} $TAG"
-echo ""
-
-# Log in to local Nexus
-echo -e "${YELLOW}Logging in to $LOCAL_REGISTRY...${NC}"
-echo "$NEXUS_PASS" | docker login "$LOCAL_REGISTRY" -u "$NEXUS_USER" --password-stdin
-
-pushed=0
-skipped=0
-failed=0
-
-for image in "${ALL_IMAGES[@]}"; do
-    src="$SOURCE_REGISTRY/$REPO_SLUG/$image:$TAG"
-    dst="$LOCAL_REGISTRY/$REPO_SLUG/$image:$TAG"
-
-    if $DO_PULL; then
-        echo -e " ${YELLOW}pulling${NC} $src..."
-        if ! docker pull "$src" 2>/dev/null; then
-            echo -e " ${YELLOW}skip${NC} $image (not found in $SOURCE_REGISTRY)"
-            ((skipped++)) || true
-            continue
-        fi
-    fi
-
-    # Check image exists locally
-    if ! docker image inspect "$src" >/dev/null 2>&1; then
-        # Also check if it's already tagged for local registry
-        if ! docker image inspect "$dst" >/dev/null 2>&1; then
-            echo -e " ${YELLOW}skip${NC} $image (not found locally — build first or use --pull)"
-            ((skipped++)) || true
-            continue
-        fi
-        # Already has local tag — just push it
-    else
-        docker tag "$src" "$dst"
-    fi
-
-    echo -e " ${GREEN}push${NC} $dst"
-    if docker push "$dst"; then
-        ((pushed++)) || true
-    else
-        echo -e " ${RED}FAILED${NC} $image"
-        ((failed++)) || true
-    fi
-done
-
-echo ""
-echo -e "${GREEN}Done.${NC} pushed=$pushed skipped=$skipped failed=$failed"
-echo ""
-echo -e "Run act with:"
-echo -e "  act push -j <job> --artifact-server-path /tmp/act-artifacts \\"
-echo -e "    --env REGISTRY=localhost:5050"
@@ -1,87 +0,0 @@
|
|||||||
#!/usr/bin/env bash
# Bump patch version, commit, push, and redeploy a MetaBuilder frontend.
#
# Usage:
#   ./release.sh pastebin          Bump patch (0.8.1 → 0.8.2)
#   ./release.sh pastebin minor    Bump minor (0.8.1 → 0.9.0)
#   ./release.sh pastebin major    Bump major (0.8.1 → 1.0.0)
#   ./release.sh pastebin 1.2.3    Set exact version

set -euo pipefail

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m'

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
COMPOSE_FILE="$SCRIPT_DIR/docker-compose.stack.yml"

APP="${1:-}"
BUMP="${2:-patch}"

if [[ -z "$APP" ]]; then
  echo -e "${RED}Usage: $0 <app> [patch|minor|major|x.y.z]${NC}"
  echo "  Apps: pastebin, workflowui, codegen, emailclient, ..."
  exit 1
fi

# Resolve package.json path
PKG_PATHS=(
  "$REPO_ROOT/frontends/$APP/package.json"
  "$REPO_ROOT/$APP/package.json"
)
PKG=""
for p in "${PKG_PATHS[@]}"; do
  [[ -f "$p" ]] && PKG="$p" && break
done

if [[ -z "$PKG" ]]; then
  echo -e "${RED}Cannot find package.json for '$APP'${NC}"
  exit 1
fi

# Read current version
CURRENT=$(node -p "require('$PKG').version")

# Compute next version
if [[ "$BUMP" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
  NEXT="$BUMP"
else
  IFS='.' read -r MAJOR MINOR PATCH <<< "$CURRENT"
  case "$BUMP" in
    major) NEXT="$((MAJOR + 1)).0.0" ;;
    minor) NEXT="${MAJOR}.$((MINOR + 1)).0" ;;
    patch) NEXT="${MAJOR}.${MINOR}.$((PATCH + 1))" ;;
    *)
      echo -e "${RED}Unknown bump type '$BUMP'. Use patch, minor, major, or x.y.z${NC}"
      exit 1
      ;;
  esac
fi

echo -e "${CYAN}Releasing $APP: ${YELLOW}$CURRENT${CYAN} → ${GREEN}$NEXT${NC}"

# Update package.json
node -e "
const fs = require('fs');
const pkg = JSON.parse(fs.readFileSync('$PKG', 'utf8'));
pkg.version = '$NEXT';
fs.writeFileSync('$PKG', JSON.stringify(pkg, null, 2) + '\n');
"

# Commit and push
cd "$REPO_ROOT"
git add "$PKG"
git commit -m "chore: bump $APP to v$NEXT

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>"
git push origin main

echo -e "${CYAN}Building and deploying $APP...${NC}"
cd "$SCRIPT_DIR"
docker compose -f "$COMPOSE_FILE" up -d --build "$APP"

echo -e "${GREEN}✓ $APP v$NEXT deployed${NC}"
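Aside: the version arithmetic in the deleted release.sh can be isolated into a small pure function, which makes it easy to test without touching git or Docker. `bump_version` below is our name for illustration (the original inlined this logic), but the case arms mirror the script exactly:

```shell
#!/usr/bin/env bash
# Sketch of release.sh's version computation as a reusable function.
# An explicit x.y.z argument wins; otherwise bump the requested component.
bump_version() {
  local current="$1" bump="${2:-patch}"
  if [[ "$bump" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "$bump"
    return 0
  fi
  local major minor patch
  IFS='.' read -r major minor patch <<< "$current"
  case "$bump" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
    *) echo "unknown bump type: $bump" >&2; return 1 ;;
  esac
}

bump_version 0.8.1          # → 0.8.2
bump_version 0.8.1 minor    # → 0.9.0
bump_version 0.8.1 major    # → 1.0.0
bump_version 0.8.1 1.2.3    # → 1.2.3
```

Note that a `minor` or `major` bump resets the lower components to zero, matching semantic-versioning convention.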
@@ -1,314 +0,0 @@
#!/bin/bash
# MetaBuilder Full Stack Startup Script
#
# Core: nginx gateway, PostgreSQL, MySQL, MongoDB, Redis, Elasticsearch,
#       DBAL C++, WorkflowUI, CodeForge, Pastebin, Postgres Dashboard,
#       Email Client, Exploded Diagrams, Storybook, Frontend App,
#       Postfix, Dovecot, SMTP Relay, Email Service,
#       phpMyAdmin, Mongo Express, RedisInsight, Kibana
# Monitoring: Prometheus, Grafana, Loki, Promtail, exporters, Alertmanager
# Media: Media daemon (FFmpeg/radio/retro), Icecast, HLS streaming
#
# Portal: http://localhost (nginx welcome page with links to all apps)
#
# Usage:
#   ./start-stack.sh [COMMAND] [--monitoring] [--media] [--all]

set -e

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

# Pull a single image with exponential backoff retries.
# Skips silently if the image is already present and up-to-date.
pull_with_retry() {
  local image="$1"
  local max_attempts=5
  local delay=5

  for attempt in $(seq 1 $max_attempts); do
    if docker pull "$image" 2>&1; then
      return 0
    fi
    if [ "$attempt" -lt "$max_attempts" ]; then
      echo -e "${YELLOW}  Pull failed (attempt $attempt/$max_attempts), retrying in ${delay}s...${NC}"
      sleep "$delay"
      delay=$((delay * 2))
    fi
  done

  echo -e "${RED}  Failed to pull $image after $max_attempts attempts${NC}"
  return 1
}
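Aside: `pull_with_retry` above is a standard exponential-backoff loop specialized to `docker pull`. A generic form, with the command injectable so the backoff can be exercised without Docker, might look like the sketch below (`retry_with_backoff` and `flaky` are our names for illustration):

```shell
#!/usr/bin/env bash
# Generic exponential-backoff retry: run "$@" up to max_attempts times,
# doubling the sleep between failures (1s, 2s, 4s, ...).
retry_with_backoff() {
  local max_attempts="$1"; shift
  local delay=1
  for attempt in $(seq 1 "$max_attempts"); do
    if "$@"; then
      return 0
    fi
    if [ "$attempt" -lt "$max_attempts" ]; then
      sleep "$delay"
      delay=$((delay * 2))
    fi
  done
  return 1
}

# Fake command that fails twice, then succeeds — runs in the current
# shell, so its counter survives across attempts.
tries=0
flaky() { tries=$((tries + 1)); [ "$tries" -ge 3 ]; }

retry_with_backoff 5 flaky && echo "succeeded after $tries tries"
```

Passing the command as arguments (rather than hardcoding it) is what makes the loop reusable for pulls, pushes, or health checks alike.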

# Pull all external (non-built) images for the requested profiles.
# Built images (dbal, workflowui, etc.) are skipped — they're local.
pull_external_images() {
  local profiles=("$@")

  local core_images=(
    "postgres:15-alpine"
    "redis:7-alpine"
    "docker.elastic.co/elasticsearch/elasticsearch:8.11.0"
    "mysql:8.0"
    "mongo:7.0"
    "phpmyadmin:latest"
    "mongo-express:latest"
    "redis/redisinsight:latest"
    "docker.elastic.co/kibana/kibana:8.11.0"
    "boky/postfix:latest"
    "nginx:alpine"
  )

  local monitoring_images=(
    "prom/prometheus:latest"
    "grafana/grafana:latest"
    "grafana/loki:latest"
    "grafana/promtail:latest"
    "prom/node-exporter:latest"
    "prometheuscommunity/postgres-exporter:latest"
    "oliver006/redis_exporter:latest"
    "gcr.io/cadvisor/cadvisor:latest"
    "prom/alertmanager:latest"
  )

  local media_images=(
    "libretime/icecast:2.4.4"
  )

  local images=("${core_images[@]}")

  local want_monitoring=false
  local want_media=false
  for p in "${profiles[@]}"; do
    [[ "$p" == "monitoring" ]] && want_monitoring=true
    [[ "$p" == "media" ]] && want_media=true
  done

  $want_monitoring && images+=("${monitoring_images[@]}")
  $want_media && images+=("${media_images[@]}")

  local total=${#images[@]}
  echo -e "${YELLOW}Pre-pulling $total external images (with retry on flaky connections)...${NC}"

  local failed=0
  for i in "${!images[@]}"; do
    local img="${images[$i]}"
    echo -e "  [$(( i + 1 ))/$total] $img"
    pull_with_retry "$img" || failed=$((failed + 1))
  done

  if [ "$failed" -gt 0 ]; then
    echo -e "${RED}Warning: $failed image(s) failed to pull. Stack may be incomplete.${NC}"
  else
    echo -e "${GREEN}All images ready.${NC}"
  fi
  echo ""
}

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
COMPOSE_FILE="$SCRIPT_DIR/docker-compose.stack.yml"

# Parse arguments
COMMAND=""
PROFILES=()

for arg in "$@"; do
  case "$arg" in
    --monitoring) PROFILES+=("--profile" "monitoring") ;;
    --media) PROFILES+=("--profile" "media") ;;
    --all) PROFILES+=("--profile" "monitoring" "--profile" "media") ;;
    *)
      if [ -z "$COMMAND" ]; then
        COMMAND="$arg"
      fi
      ;;
  esac
done

COMMAND=${COMMAND:-up}

# Check docker compose
if ! docker compose version &> /dev/null; then
  echo -e "${RED}docker compose not found${NC}"
  exit 1
fi

case "$COMMAND" in
  up|start)
    echo -e "${BLUE}Starting MetaBuilder stack...${NC}"

    # Collect profile names for the image pre-pull
    PROFILE_NAMES=()
    for p in "${PROFILES[@]}"; do
      [[ "$p" == "--profile" ]] && continue
      PROFILE_NAMES+=("$p")
    done
    pull_external_images "${PROFILE_NAMES[@]}"
    ;;
  down|stop)
    echo -e "${YELLOW}Stopping MetaBuilder stack...${NC}"
    docker compose -f "$COMPOSE_FILE" "${PROFILES[@]}" down
    echo -e "${GREEN}Stack stopped${NC}"
    exit 0
    ;;
  build)
    echo -e "${YELLOW}Building MetaBuilder stack...${NC}"
    PROFILE_NAMES=()
    for p in "${PROFILES[@]}"; do
      [[ "$p" == "--profile" ]] && continue
      PROFILE_NAMES+=("$p")
    done
    pull_external_images "${PROFILE_NAMES[@]}"
    docker compose -f "$COMPOSE_FILE" "${PROFILES[@]}" up -d --build
    echo -e "${GREEN}Stack built and started${NC}"
    exit 0
    ;;
  logs)
    docker compose -f "$COMPOSE_FILE" "${PROFILES[@]}" logs -f ${2:-}
    exit 0
    ;;
  restart)
    docker compose -f "$COMPOSE_FILE" "${PROFILES[@]}" restart
    echo -e "${GREEN}Stack restarted${NC}"
    exit 0
    ;;
  ps|status)
    docker compose -f "$COMPOSE_FILE" "${PROFILES[@]}" ps
    exit 0
    ;;
  clean)
    echo -e "${RED}This will remove all containers and volumes!${NC}"
    read -p "Are you sure? (yes/no): " -r
    if [[ $REPLY == "yes" ]]; then
      docker compose -f "$COMPOSE_FILE" "${PROFILES[@]}" down -v
      echo -e "${GREEN}Stack cleaned${NC}"
    fi
    exit 0
    ;;
  help|--help|-h)
    echo "Usage: ./start-stack.sh [COMMAND] [--monitoring] [--media] [--all]"
    echo ""
    echo "Commands:"
    echo "  up, start    Start the stack (default)"
    echo "  build        Build and start the stack"
    echo "  down, stop   Stop the stack"
    echo "  restart      Restart all services"
    echo "  logs [svc]   Show logs (optionally for specific service)"
    echo "  ps, status   Show service status"
    echo "  clean        Stop and remove all containers and volumes"
    echo "  help         Show this help message"
    echo ""
    echo "Profiles:"
    echo "  --monitoring   Add Prometheus, Grafana, Loki, exporters, Alertmanager"
    echo "  --media        Add media daemon, Icecast, HLS streaming"
    echo "  --all          Enable all profiles"
    echo ""
    echo "Core services (always started):"
    echo "  nginx               80     Gateway + welcome portal (http://localhost)"
    echo "  postgres            5432   PostgreSQL database"
    echo "  mysql               3306   MySQL database"
    echo "  mongodb             27017  MongoDB database"
    echo "  redis               6379   Cache layer"
    echo "  elasticsearch       9200   Search layer"
    echo "  dbal                8080   DBAL C++ backend"
    echo "  workflowui          3001   Visual workflow editor (/workflowui)"
    echo "  codegen             3002   CodeForge IDE (/codegen)"
    echo "  pastebin            3003   Code snippet sharing (/pastebin)"
    echo "  postgres-dashboard  3004   PostgreSQL admin (/postgres)"
    echo "  emailclient-app     3005   Email client (/emailclient)"
    echo "  exploded-diagrams   3006   3D diagram viewer (/diagrams)"
    echo "  storybook           3007   Component library (/storybook)"
    echo "  frontend-app        3008   Main application (/app)"
    echo "  phpmyadmin          8081   MySQL admin (/phpmyadmin/)"
    echo "  mongo-express       8082   MongoDB admin (/mongo-express/)"
    echo "  redisinsight        8083   Redis admin (/redis-insight/)"
    echo "  kibana              5601   Elasticsearch admin (/kibana/)"
    echo "  postfix             1025   SMTP relay"
    echo "  dovecot             1143   IMAP/POP3"
    echo "  smtp-relay          2525   SMTP relay (dashboard: 8025)"
    echo "  email-service       8500   Flask email API"
    echo ""
    echo "Monitoring services (--monitoring):"
    echo "  prometheus          9090   Metrics"
    echo "  grafana             3009   Dashboards"
    echo "  loki                3100   Log aggregation"
    echo "  promtail            -      Log shipper"
    echo "  node-exporter       9100   Host metrics"
    echo "  postgres-exporter   9187   DB metrics"
    echo "  redis-exporter      9121   Cache metrics"
    echo "  cadvisor            8084   Container metrics"
    echo "  alertmanager        9093   Alert routing"
    echo ""
    echo "Media services (--media):"
    echo "  media-daemon        8090   FFmpeg, radio, retro gaming"
    echo "  icecast             8000   Radio streaming"
    echo "  nginx-stream        8088   HLS/DASH streaming"
    exit 0
    ;;
  *)
    echo -e "${RED}Unknown command: $COMMAND${NC}"
    echo "Run './start-stack.sh help' for usage"
    exit 1
    ;;
esac

# Start
docker compose -f "$COMPOSE_FILE" "${PROFILES[@]}" up -d

echo ""
echo -e "${GREEN}Stack started!${NC}"
echo ""

# Count expected healthy services
# Core: postgres, redis, elasticsearch, mysql, mongodb (5)
# Admin tools: phpmyadmin, mongo-express, redisinsight, kibana (4)
# Backend: dbal, email-service (2)
# Mail: postfix, dovecot, smtp-relay (3)
# Gateway: nginx (1)
# Apps: workflowui, codegen, pastebin, postgres-dashboard, emailclient-app,
#       exploded-diagrams, storybook, frontend-app (8)
# Total: 23
CORE_COUNT=23
PROFILE_INFO="core"

for arg in "$@"; do
  case "$arg" in
    --monitoring) CORE_COUNT=$((CORE_COUNT + 9)); PROFILE_INFO="core + monitoring" ;;
    --media) CORE_COUNT=$((CORE_COUNT + 3)); PROFILE_INFO="core + media" ;;
    --all) CORE_COUNT=$((CORE_COUNT + 12)); PROFILE_INFO="core + monitoring + media" ;;
  esac
done

echo -e "${YELLOW}Waiting for services ($PROFILE_INFO)...${NC}"

MAX_WAIT=120
ELAPSED=0
while [ $ELAPSED -lt $MAX_WAIT ]; do
  HEALTHY=$(docker compose -f "$COMPOSE_FILE" "${PROFILES[@]}" ps --format json 2>/dev/null | grep -c '"healthy"' || true)

  if [ "$HEALTHY" -ge "$CORE_COUNT" ]; then
    echo -e "\n${GREEN}All $CORE_COUNT services healthy!${NC}"
    echo ""
    echo -e "Portal: ${BLUE}http://localhost${NC}"
    echo ""
    echo "Quick commands:"
    echo "  ./start-stack.sh logs        View all logs"
    echo "  ./start-stack.sh logs dbal   View DBAL logs"
    echo "  ./start-stack.sh stop        Stop the stack"
    echo "  ./start-stack.sh restart     Restart services"
    exit 0
  fi

  echo -ne "\r  Services healthy: $HEALTHY/$CORE_COUNT (${ELAPSED}s)"
  sleep 2
  ELAPSED=$((ELAPSED + 2))
done

echo ""
echo -e "${YELLOW}Timeout waiting for all services. Check with:${NC}"
echo "  ./start-stack.sh status"
echo "  ./start-stack.sh logs"
@@ -2,7 +2,7 @@
 # Uses metabuilder/base-conan-deps — all apt packages and Conan C++ packages
 # pre-installed. Build = compile only, zero downloads.
 #
-# Prerequisites: run deployment/build-base-images.sh first.
+# Prerequisites: run python3 deployment/deployment.py build base first.

 # Stage 1: Build environment
 ARG BASE_REGISTRY=metabuilder