GitHub Spark Integration Guide

This document explains how to integrate the DBAL daemon with GitHub Spark deployments to provide secure database access.

Architecture for Spark

GitHub Spark applications run in a sandboxed browser environment. To provide secure database access without exposing credentials to user code, we use the DBAL daemon as a sidecar service:

┌─────────────────────────────────────────────────────┐
│          GitHub Spark Browser Runtime               │
│  ┌──────────────────────────────────────────────┐   │
│  │      Your MetaBuilder Application (TS)       │   │
│  │  - No database credentials                   │   │
│  │  - No direct DB access                       │   │
│  │  - Uses DBAL client library                  │   │
│  └─────────────┬────────────────────────────────┘   │
│                │ WebSocket/gRPC                     │
└────────────────┼────────────────────────────────────┘
                 │
                 ▼
┌─────────────────────────────────────────────────────┐
│         DBAL Daemon (C++ Sidecar Service)           │
│  ┌──────────────────────────────────────────────┐   │
│  │  - ACL enforcement                           │   │
│  │  - Query validation                          │   │
│  │  - Credential management                     │   │
│  │  - Sandboxed execution                       │   │
│  └─────────────┬────────────────────────────────┘   │
└────────────────┼────────────────────────────────────┘
                 │
                 ▼
┌─────────────────────────────────────────────────────┐
│              Database (SQLite/Postgres)             │
└─────────────────────────────────────────────────────┘
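Every operation the app performs is serialized into a request that the daemon validates before anything touches the database. As a rough illustration of that boundary (the field names below are assumptions, not the actual DBAL wire format):

// Hypothetical request envelope crossing the WebSocket/gRPC boundary.
// Field names are illustrative; the real protocol is defined by the DBAL spec.
interface DbalRequest {
  entity: string                                      // e.g. 'User', 'PageConfig'
  operation: 'create' | 'read' | 'update' | 'delete'  // checked against ACL rules
  payload: unknown                                    // operation-specific arguments
  session: { id: string; token: string }              // authenticated by the daemon
}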

Deployment Options

Option 1: Development Mode (Current)

For development, use the TypeScript DBAL client directly with Prisma:

import { DBALClient } from './dbal/development/src'

const client = new DBALClient({
  mode: 'development',
  adapter: 'prisma',
  database: {
    url: process.env.DATABASE_URL
  }
})
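
The client exposes the same entity operations in both modes, so code written against it carries over unchanged once the daemon is introduced. For example, using the `users` entity that appears later in this guide:

const result = await client.users.list({ limit: 10 })
console.log(result.data)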

Pros:

  • Fast iteration
  • No additional services needed
  • Easy debugging

Cons:

  • Database credentials in application code
  • No sandboxing
  • Not suitable for production

Option 2: GitHub Codespaces with Daemon

Run the DBAL daemon as a background service in your Codespace:

1. Build the daemon:

cd dbal/production
mkdir build && cd build
cmake .. && make

2. Create a systemd user service:

mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/dbal.service << 'EOF'
[Unit]
Description=DBAL Daemon for Development

[Service]
Type=simple
ExecStart=/workspaces/spark-template/dbal/production/build/dbal_daemon --config=/workspaces/spark-template/dbal/config/dev.yaml
Restart=on-failure

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user enable dbal
systemctl --user start dbal
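
Verify the service is healthy before pointing the app at it:

systemctl --user status dbal
journalctl --user -u dbal -f    # follow the daemon's logs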

3. Update your application:

const client = new DBALClient({
  mode: 'production',
  adapter: 'prisma',
  endpoint: 'localhost:50051'
})

Pros:

  • Credentials isolated from application
  • ACL enforcement enabled
  • Closer to production setup

Cons:

  • Requires building C++ daemon
  • Additional complexity

Option 3: Docker Compose

Use Docker Compose to run both your app and the daemon:

# docker-compose.yml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "5173:5173"
    environment:
      - DBAL_ENDPOINT=dbal:50051
    depends_on:
      - dbal
      
  dbal:
    build:
      context: ./dbal/production
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=file:/data/app.db
      - DBAL_MODE=development
    volumes:
      - db-data:/data
      
volumes:
  db-data:

Start everything:

docker-compose up
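
If the app cannot reach the daemon, check both services first:

docker-compose ps               # both services should show "Up"
docker-compose logs -f dbal     # follow the daemon's logs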

Pros:

  • Production-like environment
  • Easy to share with team
  • Credentials completely isolated

Cons:

  • Slower rebuild times
  • Docker overhead

Option 4: Cloud Deployment with Sidecar

For GitHub Spark production deployments, use a sidecar container pattern:

# .github/spark-deploy.yml
apiVersion: v1
kind: Pod
metadata:
  name: metabuilder-app
spec:
  containers:
    - name: app
      image: ghcr.io/yourorg/metabuilder:latest
      env:
        - name: DBAL_ENDPOINT
          value: "localhost:50051"
      ports:
        - containerPort: 5173
        
    - name: dbal-daemon
      image: ghcr.io/yourorg/dbal-daemon:latest
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: url
        - name: DBAL_MODE
          value: "production"
      ports:
        - containerPort: 50051

Pros:

  • Maximum security
  • Scales with your app
  • True zero-trust architecture

Cons:

  • Complex deployment
  • Requires container orchestration

Configuration

Development Config

# dbal/config/dev.yaml
server:
  bind: "127.0.0.1:50051"
  tls:
    enabled: false

database:
  adapter: "prisma"
  url: "file:./dev.db"
  
security:
  sandbox: "permissive"
  audit_log: "./logs/audit.log"
  
logging:
  level: "debug"
  format: "pretty"

Production Config

# dbal/config/prod.yaml
server:
  bind: "0.0.0.0:50051"
  tls:
    enabled: true
    cert: "/etc/dbal/certs/server.crt"
    key: "/etc/dbal/certs/server.key"

database:
  adapter: "prisma"
  url: "${DATABASE_URL}"
  pool_size: 20
  
security:
  sandbox: "strict"
  audit_log: "/var/log/dbal/audit.log"
  max_query_time: 30
  
acl:
  rules_file: "/etc/dbal/acl.yaml"
  enforce_row_level: true
  
logging:
  level: "info"
  format: "json"

Client Usage in Spark App

Basic Setup

// src/lib/dbal-client.ts
import { DBALClient } from '@metabuilder/dbal'

const isDevelopment = import.meta.env.DEV

export const dbal = new DBALClient({
  mode: isDevelopment ? 'development' : 'production',
  adapter: 'prisma',
  endpoint: import.meta.env.VITE_DBAL_ENDPOINT || 'localhost:50051',
  auth: {
    user: await spark.user(),
    session: {
      id: 'session-id',
      token: 'session-token',
      expiresAt: new Date(Date.now() + 3600000)
    }
  }
})
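
Calls that cross the process boundary can fail in ways in-process Prisma never does (connection drops, ACL denials), so it helps to funnel them through a single error path. A minimal sketch, assuming only the client shown above (`with-dbal.ts` and the error handling are illustrative, not part of the DBAL API):

// src/lib/with-dbal.ts — a hedged sketch, not part of the DBAL library itself
import { dbal } from '@/lib/dbal-client'

// Funnel daemon errors (connection drops, ACL denials) through one place.
export async function withDbal<T>(op: () => Promise<T>): Promise<T | null> {
  try {
    return await op()
  } catch (err) {
    // the exact error shape thrown by the client is an assumption here
    console.error('DBAL request failed:', err)
    return null
  }
}

// Usage:
const result = await withDbal(() => dbal.users.list({ limit: 10 }))
if (result) console.log(result.data)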

Using in Components

// src/components/UserList.tsx
import { useEffect, useState } from 'react'
import { dbal } from '@/lib/dbal-client'

export function UserList() {
  const [users, setUsers] = useState([])
  
  useEffect(() => {
    dbal.users.list({ limit: 10 }).then(result => {
      setUsers(result.data)
    })
  }, [])
  
  return (
    <ul>
      {users.map(user => (
        <li key={user.id}>{user.username}</li>
      ))}
    </ul>
  )
}

With useKV for Caching

import { useEffect, useState } from 'react'
import { useKV } from '@github/spark/hooks'
import { dbal } from '@/lib/dbal-client'
// the User type comes from your app's own model definitions

export function CachedUserList() {
  const [users, setUsers] = useKV<User[]>('user-list-cache', [])
  const [loading, setLoading] = useState(false)
  
  const refreshUsers = async () => {
    setLoading(true)
    const result = await dbal.users.list()
    setUsers(result.data)
    setLoading(false)
  }
  
  useEffect(() => {
    if (users.length === 0) {
      refreshUsers()
    }
  }, [])
  
  return (
    <>
      <button onClick={refreshUsers} disabled={loading}>
        Refresh
      </button>
      {users.map(user => <div key={user.id}>{user.username}</div>)}
    </>
  )
}

Security Considerations

ACL Configuration

The daemon enforces access control based on user roles:

# acl.yaml
rules:
  - entity: User
    role: [user]
    operations: [read]
    row_level_filter: "id = $user.id"
    
  - entity: User
    role: [admin, god, supergod]
    operations: [create, read, update, delete]
    
  - entity: PageConfig
    role: [god, supergod]
    operations: [create, update, delete]
    
  - entity: PageConfig
    role: [user, admin]
    operations: [read]
    row_level_filter: "level <= $user.level"
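
In practice these rules mean a plain `user` session can read only its own User row and cannot write PageConfig at all. A sketch of the effect from the client side (the `pageConfigs` accessor and error shape mirror the `users` API above and are assumptions):

// Signed in with the 'user' role:
const visible = await dbal.users.list()  // row_level_filter: only the caller's own row

try {
  // 'user' is not among PageConfig's create/update/delete roles
  await dbal.pageConfigs.update('page_1', { level: 2 })  // hypothetical signature
} catch (err) {
  // rejected by the daemon's ACL check; the query never reaches the database
  console.error('ACL denied:', err)
}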

Credential Management

Never put database credentials in your Spark app code. They should only exist in:

  1. Environment variables for the DBAL daemon
  2. Kubernetes secrets
  3. Cloud provider secret managers (AWS Secrets Manager, etc.)
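
For example, when launching the daemon container by hand, the credential goes into the daemon's environment only; the app container never sees it (the connection string is a placeholder):

docker run -d --name dbal \
  -e DATABASE_URL='postgresql://user:pass@db-host:5432/app' \
  -p 50051:50051 \
  ghcr.io/yourorg/dbal-daemon:latest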

Audit Logging

All operations are logged:

{
  "timestamp": "2024-01-15T10:30:00Z",
  "user": "user_123",
  "operation": "create",
  "entity": "User",
  "success": true,
  "duration_ms": 45
}
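
Since entries are line-delimited JSON, failures are easy to pull out with standard tooling (adjust the path to your audit_log setting):

jq 'select(.success == false)' /var/log/dbal/audit.log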

Troubleshooting

Cannot Connect to Daemon

Check if daemon is running:

# Systemd
systemctl --user status dbal

# Docker
docker ps | grep dbal

# Direct
ps aux | grep dbal_daemon

Check connectivity:

nc -zv localhost 50051

Permission Denied Errors

Check ACL rules:

cat /etc/dbal/acl.yaml

Enable debug logging:

logging:
  level: "debug"

Slow Queries

Enable query timing:

logging:
  slow_query_threshold_ms: 1000

Check database indexes:

sqlite3 app.db ".schema"
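
If a commonly filtered column lacks an index, add one; the table and column names here are illustrative:

sqlite3 app.db "CREATE INDEX IF NOT EXISTS idx_user_username ON User(username);"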

Migration Path

Phase 1: Current State (TypeScript Only)

App → Prisma → Database

Phase 2: Add DBAL Client (TypeScript)

App → DBAL Client (TS) → Prisma → Database
  • Refactor to use DBAL client interface
  • No security changes yet
  • Validates API contract

Phase 3: Deploy Daemon (Hybrid)

App → DBAL Client (TS) → DBAL Daemon (C++) → Prisma → Database
  • Deploy daemon as sidecar
  • Enable ACL enforcement
  • Credentials isolated

Phase 4: Production Hardening

  • Enable TLS
  • Strict sandboxing
  • Full audit logging
  • Rate limiting

Performance Benchmarks

Expected latencies with DBAL daemon:

Operation           Direct Prisma   DBAL (TS)   DBAL (C++)
Simple SELECT       2ms             3ms         2.5ms
Complex JOIN        15ms            17ms        16ms
Bulk INSERT (100)   50ms            55ms        52ms

The added latency is small in absolute terms (roughly 0.5-5ms per operation in these figures) and buys significantly improved security.
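
To reproduce these figures in your own environment, a crude micro-benchmark against the client from earlier in this guide is enough (a sketch; numbers will vary with hardware and database):

// Hedged sketch: average round-trip latency for a simple read
import { dbal } from '@/lib/dbal-client'

async function benchmark(iterations = 100): Promise<number> {
  const start = performance.now()
  for (let i = 0; i < iterations; i++) {
    await dbal.users.list({ limit: 1 })
  }
  return (performance.now() - start) / iterations  // mean ms per call
}

benchmark().then(ms => console.log(`avg ${ms.toFixed(2)}ms per SELECT`))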

Next Steps

  1. Complete DBAL specification (entities, operations, errors)
  2. Implement TypeScript adapters (Prisma, SQLite)
  3. Implement C++ daemon with basic adapters
  4. Write conformance tests
  5. Deploy to GitHub Codespaces
  6. Create Docker images
  7. Document GitHub Spark deployment

Resources