mirror of
https://github.com/johndoe6345789/metabuilder.git
synced 2026-04-24 13:54:57 +00:00
Add comprehensive bulk refactoring tools with automated linting and import fixing
Co-authored-by: johndoe6345789 <224850594+johndoe6345789@users.noreply.github.com>
tools/refactoring/README.md (new file, 284 lines)
@@ -0,0 +1,284 @@
# Lambda-per-File Refactoring Tools

Automated tools for refactoring large TypeScript files into a modular lambda-per-file structure.

## Quick Start

### 1. Generate Progress Report

```bash
npx tsx tools/refactoring/refactor-to-lambda.ts
```

This scans the codebase and generates `docs/todo/LAMBDA_REFACTOR_PROGRESS.md` with:

- A list of all files exceeding 150 lines
- Categorization by type (library, component, DBAL, tool, etc.)
- Priority ranking
- Refactoring recommendations

### 2. Dry Run (Preview Changes)

Preview what would happen without modifying files:

```bash
# Preview all high-priority files
npx tsx tools/refactoring/orchestrate-refactor.ts --dry-run high

# Preview a specific number of files
npx tsx tools/refactoring/orchestrate-refactor.ts --dry-run high --limit=5

# Preview a single file
npx tsx tools/refactoring/ast-lambda-refactor.ts --dry-run -v frontends/nextjs/src/lib/rendering/page/page-definition-builder.ts
```

### 3. Run Bulk Refactoring

Refactor files in bulk with automatic linting and import fixing:

```bash
# Refactor all high-priority files (recommended starting point)
npx tsx tools/refactoring/orchestrate-refactor.ts high

# Refactor the first 10 high-priority files
npx tsx tools/refactoring/orchestrate-refactor.ts high --limit=10

# Refactor all pending files
npx tsx tools/refactoring/orchestrate-refactor.ts all
```

The orchestrator will:

1. ✅ Refactor files using AST analysis
2. 🔧 Run `npm run lint:fix` to fix imports
3. 🔍 Run type checking
4. 🧪 Run unit tests
5. 💾 Save detailed results

## Available Tools

### 1. `refactor-to-lambda.ts` - Progress Tracker

Scans the codebase and generates the tracking report.

```bash
npx tsx tools/refactoring/refactor-to-lambda.ts
```

**Output:** `docs/todo/LAMBDA_REFACTOR_PROGRESS.md`

### 2. `ast-lambda-refactor.ts` - AST-based Refactoring

Uses the TypeScript compiler API for accurate code transformation.

```bash
# Single file
npx tsx tools/refactoring/ast-lambda-refactor.ts [options] <file>

# Options:
#   -d, --dry-run   Preview without writing
#   -v, --verbose   Detailed output
#   -h, --help      Show help
```

**Example:**

```bash
npx tsx tools/refactoring/ast-lambda-refactor.ts -v frontends/nextjs/src/lib/db/core/index.ts
```

### 3. `bulk-lambda-refactor.ts` - Regex-based Bulk Refactoring

Simpler regex-based refactoring (faster, but less accurate than the AST tool).

```bash
npx tsx tools/refactoring/bulk-lambda-refactor.ts [options] <file>
```

### 4. `orchestrate-refactor.ts` - Master Orchestrator

Complete automated workflow for bulk refactoring.

```bash
npx tsx tools/refactoring/orchestrate-refactor.ts [priority] [options]

# Priority: high | medium | low | all
# Options:
#   -d, --dry-run   Preview only
#   --limit=N       Process only N files
#   --skip-lint     Skip the linting phase
#   --skip-test     Skip the testing phase
```

**Examples:**

```bash
# Dry run for high-priority files
npx tsx tools/refactoring/orchestrate-refactor.ts high --dry-run

# Refactor 5 high-priority files
npx tsx tools/refactoring/orchestrate-refactor.ts high --limit=5

# Refactor all medium-priority files, skipping tests
npx tsx tools/refactoring/orchestrate-refactor.ts medium --skip-test
```

## Refactoring Pattern

The tools follow the pattern established in `frontends/nextjs/src/lib/schema/`:

### Before (Single Large File)

```
lib/
└── utils.ts (300 lines)
    ├── function validateEmail()
    ├── function parseDate()
    ├── function formatCurrency()
    └── ...
```

### After (Lambda-per-File)

```
lib/
├── utils.ts (re-exports)
└── utils/
    ├── functions/
    │   ├── validate-email.ts
    │   ├── parse-date.ts
    │   └── format-currency.ts
    ├── UtilsUtils.ts (class wrapper)
    └── index.ts (barrel export)
```

### Usage After Refactoring

```typescript
// Option 1: Import individual functions (recommended)
import { validateEmail } from '@/lib/utils'

// Option 2: Use the class wrapper
import { UtilsUtils } from '@/lib/utils'
UtilsUtils.validateEmail(email)

// Option 3: Import directly from the function file
import { validateEmail } from '@/lib/utils/functions/validate-email'
```

## File Categories

### High Priority (Easiest to Refactor)

- **Library files** - pure utility functions
- **Tool files** - development scripts

### Medium Priority

- **DBAL files** - database abstraction layer
- **Component files** - React components (need sub-component extraction)

### Low Priority

- **Very large files** (>500 lines) - need careful planning

### Skipped

- **Test files** - comprehensive coverage is acceptable
- **Type definition files** - naturally large

## Safety Features

1. **Dry Run Mode** - preview all changes before applying
2. **Backup** - original files are replaced with re-exports; the old code is preserved in git history
3. **Automatic Linting** - fixes imports and formatting
4. **Type Checking** - validates that the refactored code still compiles
5. **Test Running** - ensures functionality is preserved
6. **Incremental** - process files in batches using `--limit`

## Workflow Recommendation

### Phase 1: High-Priority Files (Library & Tools - 20 files)

```bash
# 1. Generate the report
npx tsx tools/refactoring/refactor-to-lambda.ts

# 2. Dry run to preview
npx tsx tools/refactoring/orchestrate-refactor.ts high --dry-run

# 3. Refactor in small batches
npx tsx tools/refactoring/orchestrate-refactor.ts high --limit=5

# 4. Review, test, commit
git diff
npm run test:unit
git add . && git commit -m "refactor: convert 5 library files to lambda-per-file"

# 5. Repeat for the next batch
npx tsx tools/refactoring/orchestrate-refactor.ts high --limit=5
```

### Phase 2: Medium-Priority Files (DBAL & Components - 68 files)

Follow the same process with the `medium` priority.

### Phase 3: Low-Priority Files

Tackle these individually, with careful review.

## Troubleshooting

### Import Errors After Refactoring

```bash
# Re-run the linter
npm run lint:fix

# Check for type errors
npm run typecheck
```

### Tests Failing

1. Check whether function signatures changed
2. Update test imports to the new locations
3. Verify that mocks are still valid

### Generated Code Issues

1. Review the generated files
2. Fix manually where needed
3. The tools provide a starting point, not perfect output

## Advanced Usage

### Custom Filtering

Edit `docs/todo/LAMBDA_REFACTOR_PROGRESS.md` to mark files as completed:

```markdown
- [x] `path/to/file.ts` (200 lines) <!-- Marked complete -->
- [ ] `path/to/other.ts` (180 lines) <!-- Still pending -->
```
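
The batch tools parse these checklist entries with a simple regular expression (the same pattern appears in `batch-refactor-all.ts`); a minimal TypeScript sketch of the matching:

```typescript
// Matches unchecked entries like: - [ ] `path/to/file.ts` (123 lines)
const entry = '- [ ] `path/to/other.ts` (180 lines)'
const match = entry.match(/- \[ \] `([^`]+)` \((\d+) lines\)/)

if (match) {
  const filePath = match[1]                 // 'path/to/other.ts'
  const lineCount = parseInt(match[2], 10)  // 180
  console.log(filePath, lineCount)
}
```

Checked items (`- [x]`) deliberately fail to match, which is how completed files are skipped.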

### Manual Refactoring

For complex files, use the generated code as a template and refine it manually:

1. Run with `--dry-run` and `--verbose`
2. Review what would be generated
3. Apply the changes manually, with improvements

## Output Files

- `docs/todo/LAMBDA_REFACTOR_PROGRESS.md` - tracking report listing all files
- `docs/todo/REFACTOR_RESULTS.json` - detailed results from the last run
- Individual function files in `<module>/functions/` directories
- Class wrappers: `<Module>Utils.ts`
- Barrel exports: `<module>/index.ts`

## Next Steps After Refactoring

1. ✅ Review the generated code
2. ✅ Run the full test suite: `npm run test:unit`
3. ✅ Run E2E tests: `npm run test:e2e`
4. ✅ Update documentation if needed
5. ✅ Commit in logical batches
6. ✅ Update `LAMBDA_REFACTOR_PROGRESS.md` with completion status

## Contributing

To improve these tools:

1. Test them on various file types
2. Report issues with specific files
3. Suggest improvements to the AST parsing
4. Add support for more patterns (arrow functions, etc.)
tools/refactoring/ast-lambda-refactor.ts (new file, 427 lines)
@@ -0,0 +1,427 @@

#!/usr/bin/env tsx
/**
 * AST-based Lambda Refactoring Tool
 *
 * Uses the TypeScript compiler API for accurate code analysis and transformation
 */

import * as ts from 'typescript'
import * as fs from 'fs/promises'
import * as path from 'path'
import { exec } from 'child_process'
import { promisify } from 'util'

const execAsync = promisify(exec)

interface ExtractedFunction {
  name: string
  fullText: string
  isExported: boolean
  isAsync: boolean
  leadingComments: string
  startPos: number
  endPos: number
}

interface ExtractedImport {
  fullText: string
  moduleSpecifier: string
  namedImports: string[]
}

class ASTLambdaRefactor {
  private dryRun: boolean
  private verbose: boolean

  constructor(options: { dryRun?: boolean; verbose?: boolean } = {}) {
    this.dryRun = options.dryRun || false
    this.verbose = options.verbose || false
  }

  private log(message: string) {
    if (this.verbose) {
      console.log(message)
    }
  }

  /**
   * Parse a TypeScript file and extract functions using the AST.
   * NOTE: the name is misspelled; a correctly named `analyzeFile` alias
   * is defined further down in the class.
   */
  async analyzeFil(filePath: string): Promise<{
    functions: ExtractedFunction[]
    imports: ExtractedImport[]
    types: string[]
  }> {
    const sourceCode = await fs.readFile(filePath, 'utf-8')
    const sourceFile = ts.createSourceFile(
      filePath,
      sourceCode,
      ts.ScriptTarget.Latest,
      true
    )

    const functions: ExtractedFunction[] = []
    const imports: ExtractedImport[] = []
    const types: string[] = []

    const visit = (node: ts.Node) => {
      // Extract function declarations
      if (ts.isFunctionDeclaration(node) && node.name) {
        const isExported = node.modifiers?.some(m => m.kind === ts.SyntaxKind.ExportKeyword) || false
        const isAsync = node.modifiers?.some(m => m.kind === ts.SyntaxKind.AsyncKeyword) || false

        // Get leading comments
        const leadingComments = ts.getLeadingCommentRanges(sourceCode, node.getFullStart())
        let commentText = ''
        if (leadingComments) {
          for (const comment of leadingComments) {
            commentText += sourceCode.substring(comment.pos, comment.end) + '\n'
          }
        }

        functions.push({
          name: node.name.text,
          fullText: node.getText(sourceFile),
          isExported,
          isAsync,
          leadingComments: commentText.trim(),
          startPos: node.getStart(sourceFile),
          endPos: node.getEnd(),
        })
      }

      // Extract class methods
      if (ts.isClassDeclaration(node) && node.members) {
        for (const member of node.members) {
          if (ts.isMethodDeclaration(member) && member.name && ts.isIdentifier(member.name)) {
            const isAsync = member.modifiers?.some(m => m.kind === ts.SyntaxKind.AsyncKeyword) || false

            // Get leading comments
            const leadingComments = ts.getLeadingCommentRanges(sourceCode, member.getFullStart())
            let commentText = ''
            if (leadingComments) {
              for (const comment of leadingComments) {
                commentText += sourceCode.substring(comment.pos, comment.end) + '\n'
              }
            }

            // Convert the method to a standalone function
            const methodText = member.getText(sourceFile)
            const functionText = this.convertMethodToFunction(methodText, member.name.text, isAsync)

            functions.push({
              name: member.name.text,
              fullText: functionText,
              isExported: true,
              isAsync,
              leadingComments: commentText.trim(),
              startPos: member.getStart(sourceFile),
              endPos: member.getEnd(),
            })
          }
        }
      }

      // Extract imports
      if (ts.isImportDeclaration(node)) {
        const moduleSpec = (node.moduleSpecifier as ts.StringLiteral).text
        const namedImports: string[] = []

        if (node.importClause?.namedBindings && ts.isNamedImports(node.importClause.namedBindings)) {
          for (const element of node.importClause.namedBindings.elements) {
            namedImports.push(element.name.text)
          }
        }

        imports.push({
          fullText: node.getText(sourceFile),
          moduleSpecifier: moduleSpec,
          namedImports,
        })
      }

      // Extract type definitions
      if (ts.isTypeAliasDeclaration(node) || ts.isInterfaceDeclaration(node)) {
        types.push(node.getText(sourceFile))
      }

      ts.forEachChild(node, visit)
    }

    visit(sourceFile)

    return { functions, imports, types }
  }

  /**
   * Convert a class method to a standalone function
   */
  private convertMethodToFunction(methodText: string, methodName: string, isAsync: boolean): string {
    // Remove visibility modifiers (public, private, protected)
    let funcText = methodText.replace(/^\s*(public|private|protected)\s+/, '')

    // Ensure it starts with async if needed
    if (isAsync && !funcText.trim().startsWith('async')) {
      funcText = 'async ' + funcText
    }

    // Convert method syntax to function syntax:
    // "methodName(...): Type {" -> "function methodName(...): Type {"
    funcText = funcText.replace(/^(\s*)(async\s+)?([a-zA-Z0-9_]+)(\s*\([^)]*\))/, '$1$2function $3$4')

    return funcText
  }
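
The conversion above can be illustrated with a standalone sketch; the helper below reproduces the same two replacements outside the class (the sample method text is hypothetical):

```typescript
// Standalone reproduction of the conversion: strips the visibility
// modifier and rewrites the method header as a function declaration.
function methodToFunction(methodText: string, isAsync: boolean): string {
  let funcText = methodText.replace(/^\s*(public|private|protected)\s+/, '')
  if (isAsync && !funcText.trim().startsWith('async')) {
    funcText = 'async ' + funcText
  }
  return funcText.replace(/^(\s*)(async\s+)?([a-zA-Z0-9_]+)(\s*\([^)]*\))/, '$1$2function $3$4')
}

console.log(methodToFunction('private async fetchUser(id: string) {', true))
// → async function fetchUser(id: string) {
```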

  /**
   * Create an individual function file with the imports it needs
   */
  async createFunctionFile(
    func: ExtractedFunction,
    allImports: ExtractedImport[],
    outputPath: string
  ): Promise<void> {
    let content = ''

    // Add imports (for now, include all of them - this could be optimized
    // to include only the imports the function actually uses)
    if (allImports.length > 0) {
      content += allImports.map(imp => imp.fullText).join('\n') + '\n\n'
    }

    // Add comments
    if (func.leadingComments) {
      content += func.leadingComments + '\n'
    }

    // Add the function, ensuring it is exported
    let funcText = func.fullText
    if (!funcText.includes('export ')) {
      funcText = 'export ' + funcText
    }

    content += funcText + '\n'

    if (!this.dryRun) {
      await fs.writeFile(outputPath, content, 'utf-8')
    }
  }

  /**
   * Refactor a file using AST analysis
   */
  async refactorFile(filePath: string): Promise<void> {
    this.log(`\n🔍 Analyzing ${filePath}...`)

    const { functions, imports, types } = await this.analyzeFile(filePath)

    if (functions.length === 0) {
      this.log('  ⏭️ No functions found - skipping')
      return
    }

    if (functions.length <= 2) {
      this.log(`  ⏭️ Only ${functions.length} function(s) - skipping (not worth refactoring)`)
      return
    }

    this.log(`  Found ${functions.length} functions: ${functions.map(f => f.name).join(', ')}`)

    // Create the output directory structure
    const dir = path.dirname(filePath)
    const basename = path.basename(filePath, path.extname(filePath))
    const functionsDir = path.join(dir, basename, 'functions')

    if (!this.dryRun) {
      await fs.mkdir(functionsDir, { recursive: true })
    }

    this.log(`  Creating: ${functionsDir}`)

    // Create the individual function files
    for (const func of functions) {
      const kebabName = this.toKebabCase(func.name)
      const funcFile = path.join(functionsDir, `${kebabName}.ts`)

      await this.createFunctionFile(func, imports, funcFile)
      this.log(`  ✓ ${kebabName}.ts`)
    }

    // Create the index file for re-exports
    const indexContent = this.generateIndexFile(functions, 'functions')
    const indexPath = path.join(dir, basename, 'index.ts')

    if (!this.dryRun) {
      await fs.writeFile(indexPath, indexContent, 'utf-8')
    }
    this.log(`  ✓ index.ts`)

    // Create the class wrapper
    const className = this.toClassName(basename)
    const classContent = this.generateClassWrapper(className, functions)
    const classPath = path.join(dir, basename, `${className}.ts`)

    if (!this.dryRun) {
      await fs.writeFile(classPath, classContent, 'utf-8')
    }
    this.log(`  ✓ ${className}.ts`)

    // Replace the original file with a re-export
    const newMainContent = `/**
 * This file has been refactored into a modular lambda-per-file structure.
 *
 * Import individual functions or use the class wrapper:
 * @example
 * import { ${functions[0].name} } from './${basename}'
 *
 * @example
 * import { ${className} } from './${basename}'
 * ${className}.${functions[0].name}(...)
 */

export * from './${basename}'
`

    if (!this.dryRun) {
      await fs.writeFile(filePath, newMainContent, 'utf-8')
    }
    this.log(`  ✓ Updated ${path.basename(filePath)}`)

    this.log(`  ✅ Refactored into ${functions.length + 2} files`)
  }

  private toKebabCase(str: string): string {
    return str.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, '')
  }

  private toClassName(str: string): string {
    return str
      .split(/[-_]/)
      .map(word => word.charAt(0).toUpperCase() + word.slice(1))
      .join('') + 'Utils'
  }
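
The two naming helpers drive the generated file and class names; a standalone sketch with sample conversions:

```typescript
// Standalone copies of the naming helpers, for illustration.
function toKebabCase(str: string): string {
  return str.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, '')
}

function toClassName(str: string): string {
  return str
    .split(/[-_]/)
    .map(word => word.charAt(0).toUpperCase() + word.slice(1))
    .join('') + 'Utils'
}

console.log(toKebabCase('formatCurrency'))          // → format-currency
console.log(toClassName('page-definition-builder')) // → PageDefinitionBuilderUtils
```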

  private generateIndexFile(functions: ExtractedFunction[], functionsDir: string): string {
    let content = '// Auto-generated re-exports\n\n'

    for (const func of functions) {
      const kebabName = this.toKebabCase(func.name)
      content += `export { ${func.name} } from './${functionsDir}/${kebabName}'\n`
    }

    return content
  }

  private generateClassWrapper(className: string, functions: ExtractedFunction[]): string {
    let content = `// Auto-generated class wrapper\n\n`

    // Import all the functions
    for (const func of functions) {
      const kebabName = this.toKebabCase(func.name)
      content += `import { ${func.name} } from './functions/${kebabName}'\n`
    }

    content += `\n/**\n * ${className} - Convenience class wrapper\n */\n`
    content += `export class ${className} {\n`

    for (const func of functions) {
      const asyncKeyword = func.isAsync ? 'async ' : ''
      content += `  static ${asyncKeyword}${func.name}(...args: any[]) {\n`
      content += `    return ${func.isAsync ? 'await ' : ''}${func.name}(...args)\n`
      content += `  }\n\n`
    }

    content += '}\n'

    return content
  }
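
For two extracted functions, the index generator above produces a barrel export like the following; a simplified sketch operating on plain names (the function names are illustrative):

```typescript
// Simplified version of generateIndexFile, taking names instead of
// ExtractedFunction records.
function toKebab(str: string): string {
  return str.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, '')
}

function buildIndexFile(names: string[], functionsDir: string): string {
  let content = '// Auto-generated re-exports\n\n'
  for (const name of names) {
    content += `export { ${name} } from './${functionsDir}/${toKebab(name)}'\n`
  }
  return content
}

console.log(buildIndexFile(['parseDate', 'formatCurrency'], 'functions'))
// → // Auto-generated re-exports
// →
// → export { parseDate } from './functions/parse-date'
// → export { formatCurrency } from './functions/format-currency'
```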

  /**
   * Correctly spelled public alias for the misspelled `analyzeFil` above.
   */
  async analyzeFile(filePath: string): Promise<{
    functions: ExtractedFunction[]
    imports: ExtractedImport[]
    types: string[]
  }> {
    return this.analyzeFil(filePath)
  }

  /**
   * Process multiple files
   */
  async bulkRefactor(files: string[]): Promise<void> {
    console.log(`\n📦 AST-based Lambda Refactoring`)
    console.log(`   Mode: ${this.dryRun ? 'DRY RUN' : 'LIVE'}`)
    console.log(`   Files: ${files.length}\n`)

    let successCount = 0
    let skipCount = 0
    let errorCount = 0

    for (let i = 0; i < files.length; i++) {
      const file = files[i]
      console.log(`[${i + 1}/${files.length}] ${file}`)

      try {
        await this.refactorFile(file)
        successCount++
      } catch (error) {
        if (error instanceof Error && error.message.includes('skipping')) {
          skipCount++
        } else {
          console.error(`  ❌ Error: ${error}`)
          errorCount++
        }
      }
    }

    console.log(`\n📊 Summary:`)
    console.log(`   ✅ Success: ${successCount}`)
    console.log(`   ⏭️ Skipped: ${skipCount}`)
    console.log(`   ❌ Errors: ${errorCount}`)
  }
}

// CLI
async function main() {
  const args = process.argv.slice(2)

  if (args.includes('--help') || args.includes('-h') || args.length === 0) {
    console.log('AST-based Lambda Refactoring Tool\n')
    console.log('Usage: tsx ast-lambda-refactor.ts [options] <file>')
    console.log('\nOptions:')
    console.log('  -d, --dry-run   Preview without writing')
    console.log('  -v, --verbose   Verbose output')
    console.log('  -h, --help      Show help')
    process.exit(0)
  }

  const dryRun = args.includes('--dry-run') || args.includes('-d')
  const verbose = args.includes('--verbose') || args.includes('-v')
  const file = args.find(a => !a.startsWith('-'))

  if (!file) {
    console.error('Error: Please provide a file to refactor')
    process.exit(1)
  }

  const refactor = new ASTLambdaRefactor({ dryRun, verbose })
  await refactor.bulkRefactor([file])

  if (!dryRun) {
    console.log('\n🔧 Running linter...')
    try {
      await execAsync('npm run lint:fix')
      console.log('  ✅ Lint complete')
    } catch (e) {
      console.log('  ⚠️ Lint had warnings (may be expected)')
    }
  }

  console.log('\n✨ Done!')
}

if (require.main === module) {
  main().catch(console.error)
}

export { ASTLambdaRefactor }
tools/refactoring/batch-refactor-all.ts (new file, 118 lines)
@@ -0,0 +1,118 @@

#!/usr/bin/env tsx
/**
 * Batch Refactor All Large Files
 *
 * Processes all files from the tracking report in priority order
 */

import { BulkLambdaRefactor } from './bulk-lambda-refactor'
import * as fs from 'fs/promises'
import * as path from 'path'

interface FileToRefactor {
  path: string
  lines: number
  category: string
  priority: 'high' | 'medium' | 'low'
}

async function loadFilesFromReport(): Promise<FileToRefactor[]> {
  const reportPath = path.join(process.cwd(), 'docs/todo/LAMBDA_REFACTOR_PROGRESS.md')
  const content = await fs.readFile(reportPath, 'utf-8')

  const files: FileToRefactor[] = []
  const lines = content.split('\n')

  let currentPriority: 'high' | 'medium' | 'low' = 'high'

  for (const line of lines) {
    if (line.includes('### High Priority')) currentPriority = 'high'
    else if (line.includes('### Medium Priority')) currentPriority = 'medium'
    else if (line.includes('### Low Priority')) currentPriority = 'low'
    else if (line.includes('### Skipped')) break

    // Match checklist items: - [ ] `path/to/file.ts` (123 lines)
    const match = line.match(/- \[ \] `([^`]+)` \((\d+) lines\)/)
    if (match) {
      files.push({
        path: match[1],
        lines: parseInt(match[2], 10),
        category: currentPriority,
        priority: currentPriority,
      })
    }
  }

  return files
}

async function main() {
  const args = process.argv.slice(2)
  const dryRun = args.includes('--dry-run') || args.includes('-d')
  const verbose = args.includes('--verbose') || args.includes('-v')
  const priorityFilter = args.find(a => ['high', 'medium', 'low', 'all'].includes(a)) || 'high'
  const limit = parseInt(args.find(a => a.startsWith('--limit='))?.split('=')[1] || '999', 10)

  console.log('📋 Loading files from tracking report...')
  const allFiles = await loadFilesFromReport()

  let filesToProcess = allFiles
  if (priorityFilter !== 'all') {
    filesToProcess = allFiles.filter(f => f.priority === priorityFilter)
  }

  filesToProcess = filesToProcess.slice(0, limit)

  console.log(`\n📊 Plan:`)
  console.log(`   Priority filter: ${priorityFilter}`)
  console.log(`   Files to process: ${filesToProcess.length}`)
  console.log(`   Mode: ${dryRun ? 'DRY RUN (preview only)' : 'LIVE (will modify files)'}`)

  if (filesToProcess.length === 0) {
    console.log('\n⚠️ No files to process')
    process.exit(0)
  }

  // Show what will be processed
  console.log(`\n📝 Files queued:`)
  for (let i = 0; i < Math.min(10, filesToProcess.length); i++) {
    console.log(`   ${i + 1}. ${filesToProcess[i].path} (${filesToProcess[i].lines} lines)`)
  }
  if (filesToProcess.length > 10) {
    console.log(`   ... and ${filesToProcess.length - 10} more`)
  }

  // Confirmation delay for live mode
  if (!dryRun) {
    console.log(`\n⚠️ WARNING: This will modify ${filesToProcess.length} files!`)
    console.log(`   Press Ctrl+C to cancel, or wait 3 seconds to continue...`)
    await new Promise(resolve => setTimeout(resolve, 3000))
  }

  console.log('\n🚀 Starting refactoring...\n')

  const refactor = new BulkLambdaRefactor({ dryRun, verbose })
  const filePaths = filesToProcess.map(f => f.path)

  const results = await refactor.bulkRefactor(filePaths)

  // Save results
  const resultsPath = path.join(process.cwd(), 'docs/todo/REFACTOR_RESULTS.json')
  await fs.writeFile(resultsPath, JSON.stringify(results, null, 2), 'utf-8')
  console.log(`\n💾 Results saved to: ${resultsPath}`)

  // Update the progress report
  console.log('\n📝 Updating progress report...')
  // TODO: Mark completed files in the report

  console.log('\n✅ Batch refactoring complete!')
  console.log('\nNext steps:')
  console.log('  1. Run: npm run lint:fix')
  console.log('  2. Run: npm run typecheck')
  console.log('  3. Run: npm run test:unit')
  console.log('  4. Review changes and commit')
}

if (require.main === module) {
  main().catch(console.error)
}
tools/refactoring/bulk-lambda-refactor.ts (new file, 471 lines)
@@ -0,0 +1,471 @@

#!/usr/bin/env tsx
/**
 * Bulk Lambda-per-File Refactoring Tool
 *
 * Automatically refactors TypeScript files into a lambda-per-file structure:
 * 1. Analyzes the file to extract functions/methods
 * 2. Creates a functions/ subdirectory
 * 3. Extracts each function to its own file
 * 4. Creates a class wrapper
 * 5. Updates imports
 * 6. Runs the linter to fix issues
 */

import * as fs from 'fs/promises'
import * as path from 'path'
import { exec } from 'child_process'
import { promisify } from 'util'

const execAsync = promisify(exec)

interface FunctionInfo {
  name: string
  isAsync: boolean
  isExported: boolean
  params: string
  returnType: string
  body: string
  startLine: number
  endLine: number
  comments: string[]
  isMethod: boolean
}

interface RefactorResult {
  success: boolean
  originalFile: string
  newFiles: string[]
  errors: string[]
}

class BulkLambdaRefactor {
  private dryRun: boolean = false
  private verbose: boolean = false

  constructor(options: { dryRun?: boolean; verbose?: boolean } = {}) {
    this.dryRun = options.dryRun || false
    this.verbose = options.verbose || false
  }

  private log(message: string) {
    if (this.verbose) {
      console.log(message)
    }
  }

  /**
   * Extract functions from a TypeScript file
   */
  async extractFunctions(filePath: string): Promise<FunctionInfo[]> {
    const content = await fs.readFile(filePath, 'utf-8')
    const lines = content.split('\n')
    const functions: FunctionInfo[] = []

    // Simple regex-based extraction (can be improved with AST parsing).
    // Note: the method regex can also match control-flow statements such as
    // `if (...) {` or `while (...) {`, so results should be reviewed.
    const functionRegex = /^(export\s+)?(async\s+)?function\s+([a-zA-Z0-9_]+)\s*(\([^)]*\))(\s*:\s*[^{]+)?\s*\{/
    const methodRegex = /^\s*(public|private|protected)?\s*(async\s+)?([a-zA-Z0-9_]+)\s*(\([^)]*\))(\s*:\s*[^{]+)?\s*\{/

    let i = 0
    while (i < lines.length) {
      const line = lines[i]

      // Try to match a function or a class method
      const funcMatch = line.match(functionRegex)
      const methodMatch = line.match(methodRegex)

      if (funcMatch || methodMatch) {
        const isMethod = !!methodMatch
        const match = funcMatch || methodMatch!

        // For methods, match[1] is the visibility modifier, not `export`
        const isExported = !!match[1]
        const isAsync = !!(funcMatch ? match[2] : methodMatch![2])
        const name = funcMatch ? match[3] : methodMatch![3]
        const params = funcMatch ? match[4] : methodMatch![4]
        const returnType = (funcMatch ? match[5] : methodMatch![5]) || ''

        // Collect comments above the function
        const comments: string[] = []
        let commentLine = i - 1
        while (commentLine >= 0 && (lines[commentLine].trim().startsWith('//') ||
               lines[commentLine].trim().startsWith('*') ||
               lines[commentLine].trim().startsWith('/*'))) {
          comments.unshift(lines[commentLine])
          commentLine--
        }

        // Count braces to find the matching closing brace
        let braceCount = 1
        let j = i + 1
        const bodyLines: string[] = [line]

        while (j < lines.length && braceCount > 0) {
          bodyLines.push(lines[j])
          for (const char of lines[j]) {
            if (char === '{') braceCount++
            if (char === '}') braceCount--
            if (braceCount === 0) break
          }
          j++
        }

        functions.push({
          name,
          isAsync,
          isExported,
          params,
          returnType: returnType.trim(),
          body: bodyLines.join('\n'),
          startLine: i,
          endLine: j - 1,
          comments,
          isMethod,
        })

        i = j
      } else {
        i++
      }
    }

    return functions
  }
|
||||
|
||||
  /**
   * Extract imports and type definitions from the original file
   */
  async extractImportsAndTypes(filePath: string): Promise<{ imports: string[]; types: string[] }> {
    const content = await fs.readFile(filePath, 'utf-8')
    const lines = content.split('\n')

    const imports: string[] = []
    const types: string[] = []

    let inImport = false
    let currentImport = ''

    for (const line of lines) {
      const trimmed = line.trim()

      // Handle single- and multi-line imports (both quote styles)
      if (trimmed.startsWith('import ') || inImport) {
        currentImport += line + '\n'
        const importEnds =
          trimmed.includes('}') ||
          (!trimmed.includes('{') && (trimmed.endsWith("'") || trimmed.endsWith('"')))
        if (importEnds) {
          imports.push(currentImport.trim())
          currentImport = ''
          inImport = false
        } else {
          inImport = true
        }
      }

      // Extract type definition headers
      if (
        trimmed.startsWith('export type ') ||
        trimmed.startsWith('export interface ') ||
        trimmed.startsWith('type ') ||
        trimmed.startsWith('interface ')
      ) {
        types.push(line)
      }
    }

    return { imports, types }
  }

  /**
   * Generate an individual function file
   */
  generateFunctionFile(func: FunctionInfo, imports: string[], types: string[]): string {
    let content = ''

    // Add all imports (simplified - could be smarter about which imports are needed)
    if (imports.length > 0) {
      content += imports.join('\n') + '\n\n'
    }

    // Preserve the comments that preceded the original function
    if (func.comments.length > 0) {
      content += func.comments.join('\n') + '\n'
    }

    // Emit the signature; every extracted function is exported
    const asyncKeyword = func.isAsync ? 'async ' : ''
    content += `export ${asyncKeyword}function ${func.name}${func.params}${func.returnType} {\n`

    // Drop the first and last lines of the body (declaration and closing brace)
    const bodyLines = func.body.split('\n')
    const actualBody = bodyLines.slice(1, -1).join('\n')

    content += actualBody + '\n'
    content += '}\n'

    return content
  }

  /**
   * Generate a class wrapper file with static methods delegating to the functions
   */
  generateClassWrapper(className: string, functions: FunctionInfo[], functionsDir: string): string {
    let content = ''

    // Import all extracted functions
    content += `// Auto-generated class wrapper\n`
    for (const func of functions) {
      const kebabName = func.name.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, '')
      content += `import { ${func.name} } from './${functionsDir}/${kebabName}'\n`
    }

    content += `\n/**\n`
    content += ` * ${className} - Class wrapper for ${functions.length} functions\n`
    content += ` *\n`
    content += ` * This is a convenience wrapper. Prefer importing individual functions.\n`
    content += ` */\n`
    content += `export class ${className} {\n`

    // Add one static delegating method per function
    for (const func of functions) {
      const asyncKeyword = func.isAsync ? 'async ' : ''
      content += `  static ${asyncKeyword}${func.name}${func.params}${func.returnType} {\n`
      content += `    return ${func.isAsync ? 'await ' : ''}${func.name}(...(arguments as any))\n`
      content += `  }\n\n`
    }

    content += '}\n'

    return content
  }

  /**
   * Generate an index file that re-exports everything
   */
  generateIndexFile(functions: FunctionInfo[], functionsDir: string, className: string): string {
    let content = ''

    content += `// Auto-generated re-exports for backward compatibility\n\n`

    // Re-export all functions
    for (const func of functions) {
      const kebabName = func.name.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, '')
      content += `export { ${func.name} } from './${functionsDir}/${kebabName}'\n`
    }

    // Re-export the class wrapper
    content += `\n// Class wrapper for convenience\n`
    content += `export { ${className} } from './${className}'\n`

    return content
  }

  /**
   * Refactor a single file
   */
  async refactorFile(filePath: string): Promise<RefactorResult> {
    const result: RefactorResult = {
      success: false,
      originalFile: filePath,
      newFiles: [],
      errors: [],
    }

    try {
      this.log(`\n🔍 Analyzing ${filePath}...`)

      // Extract functions
      const functions = await this.extractFunctions(filePath)

      if (functions.length === 0) {
        result.errors.push('No functions found to extract')
        return result
      }

      // Skip files with only 1-2 functions (not worth refactoring)
      if (functions.length <= 2) {
        result.errors.push(`Only ${functions.length} function(s) - skipping`)
        return result
      }

      this.log(`  Found ${functions.length} functions: ${functions.map(f => f.name).join(', ')}`)

      // Extract imports and types
      const { imports, types } = await this.extractImportsAndTypes(filePath)

      // Create directories
      const dir = path.dirname(filePath)
      const basename = path.basename(filePath, path.extname(filePath))
      const functionsDir = path.join(dir, basename, 'functions')

      if (!this.dryRun) {
        await fs.mkdir(functionsDir, { recursive: true })
      }

      this.log(`  Creating functions directory: ${functionsDir}`)

      // Generate one file per function
      for (const func of functions) {
        const kebabName = func.name.replace(/([A-Z])/g, '-$1').toLowerCase().replace(/^-/, '')
        const funcFilePath = path.join(functionsDir, `${kebabName}.ts`)
        const content = this.generateFunctionFile(func, imports, types)

        if (!this.dryRun) {
          await fs.writeFile(funcFilePath, content, 'utf-8')
        }

        result.newFiles.push(funcFilePath)
        this.log(`  ✓ ${kebabName}.ts`)
      }

      // Generate the class wrapper
      const className = basename.split('-').map(w => w.charAt(0).toUpperCase() + w.slice(1)).join('') + 'Utils'
      const classFilePath = path.join(dir, basename, `${className}.ts`)
      const classContent = this.generateClassWrapper(className, functions, 'functions')

      if (!this.dryRun) {
        await fs.writeFile(classFilePath, classContent, 'utf-8')
      }

      result.newFiles.push(classFilePath)
      this.log(`  ✓ ${className}.ts (class wrapper)`)

      // Generate the index file
      const indexFilePath = path.join(dir, basename, 'index.ts')
      const indexContent = this.generateIndexFile(functions, 'functions', className)

      if (!this.dryRun) {
        await fs.writeFile(indexFilePath, indexContent, 'utf-8')
      }

      result.newFiles.push(indexFilePath)
      this.log(`  ✓ index.ts (re-exports)`)

      // Replace the original file with a re-export shim
      const reexportContent =
        `// This file has been refactored into modular functions\n` +
        `// Import from individual functions or use the class wrapper\n\n` +
        `export * from './${basename}'\n`

      if (!this.dryRun) {
        await fs.writeFile(filePath, reexportContent, 'utf-8')
      }

      this.log(`  ✓ Updated ${path.basename(filePath)} to re-export`)

      result.success = true
      this.log(`  ✅ Successfully refactored into ${result.newFiles.length} files`)
    } catch (error) {
      result.errors.push(`Error: ${error instanceof Error ? error.message : String(error)}`)
      this.log(`  ❌ Failed: ${result.errors[0]}`)
    }

    return result
  }

  /**
   * Run the linter to fix imports and formatting
   */
  async runLintFix(workingDir: string): Promise<void> {
    this.log('\n🔧 Running ESLint to fix imports and formatting...')

    try {
      const { stdout, stderr } = await execAsync('npm run lint:fix', { cwd: workingDir })
      if (stdout) this.log(stdout)
      if (stderr) this.log(stderr)
      this.log('  ✅ Linting completed')
    } catch (error) {
      this.log(`  ⚠️ Linting had issues (may be expected): ${error}`)
    }
  }

  /**
   * Bulk refactor multiple files
   */
  async bulkRefactor(files: string[]): Promise<RefactorResult[]> {
    console.log(`\n📦 Bulk Lambda Refactoring Tool`)
    console.log(`   Mode: ${this.dryRun ? 'DRY RUN' : 'LIVE'}`)
    console.log(`   Files to process: ${files.length}\n`)

    const results: RefactorResult[] = []
    let successCount = 0
    let skipCount = 0
    let errorCount = 0

    for (let i = 0; i < files.length; i++) {
      const file = files[i]
      console.log(`[${i + 1}/${files.length}] Processing: ${file}`)

      const result = await this.refactorFile(file)
      results.push(result)

      if (result.success) {
        successCount++
      } else if (result.errors.some(e => e.includes('skipping'))) {
        skipCount++
      } else {
        errorCount++
      }

      // Small delay to avoid overwhelming the system
      await new Promise(resolve => setTimeout(resolve, 100))
    }

    console.log(`\n📊 Summary:`)
    console.log(`   ✅ Success: ${successCount}`)
    console.log(`   ⏭️ Skipped: ${skipCount}`)
    console.log(`   ❌ Errors: ${errorCount}`)
    console.log(`   📁 Total new files: ${results.reduce((acc, r) => acc + r.newFiles.length, 0)}`)

    return results
  }
}

// CLI
async function main() {
  const args = process.argv.slice(2)

  const dryRun = args.includes('--dry-run') || args.includes('-d')
  const verbose = args.includes('--verbose') || args.includes('-v')
  const filesArg = args.find(arg => !arg.startsWith('-'))

  if (args.includes('--help') || args.includes('-h')) {
    console.log('Bulk Lambda-per-File Refactoring Tool\n')
    console.log('Automatically refactors TypeScript files into lambda-per-file structure.')
    console.log('\nUsage: tsx bulk-lambda-refactor.ts [options] <file-pattern>')
    console.log('\nOptions:')
    console.log('  -d, --dry-run   Preview changes without writing files')
    console.log('  -v, --verbose   Show detailed output')
    console.log('  -h, --help      Show this help')
    process.exit(0)
  }

  if (!filesArg) {
    console.log('Usage: tsx bulk-lambda-refactor.ts [options] <file-pattern>')
    console.log('\nOptions:')
    console.log('  -d, --dry-run   Preview changes without writing files')
    console.log('  -v, --verbose   Show detailed output')
    console.log('  -h, --help      Show this help')
    console.log('\nExamples:')
    console.log('  tsx bulk-lambda-refactor.ts --dry-run "frontends/nextjs/src/lib/**/*.ts"')
    console.log('  tsx bulk-lambda-refactor.ts --verbose frontends/nextjs/src/lib/rendering/page/page-definition-builder.ts')
    process.exit(1)
  }

  const refactor = new BulkLambdaRefactor({ dryRun, verbose })

  // For now, process a single file (can be extended to glob patterns)
  const files = [filesArg]

  const results = await refactor.bulkRefactor(files)

  if (!dryRun && results.some(r => r.success)) {
    console.log('\n🔧 Running linter to fix imports...')
    await refactor.runLintFix(process.cwd())
  }

  console.log('\n✨ Done!')
}

if (require.main === module) {
  main().catch(console.error)
}

export { BulkLambdaRefactor }
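
// Programmatic usage sketch (hypothetical file path; the options object mirrors
// the one passed by main() above):
//
//   const refactor = new BulkLambdaRefactor({ dryRun: true, verbose: true })
//   const results = await refactor.bulkRefactor(['src/lib/example-utils.ts'])
//   console.log(results.filter(r => r.success).length, 'files refactored')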
249
tools/refactoring/orchestrate-refactor.ts
Normal file
@@ -0,0 +1,249 @@
#!/usr/bin/env tsx
/**
 * Master Refactoring Orchestrator
 *
 * Orchestrates the complete lambda-per-file refactoring process:
 * 1. Loads files from the tracking report
 * 2. Refactors in priority order
 * 3. Runs the linter and fixes imports
 * 4. Runs type checking
 * 5. Updates the progress report
 */

import { ASTLambdaRefactor } from './ast-lambda-refactor'
import * as fs from 'fs/promises'
import * as path from 'path'
import { exec } from 'child_process'
import { promisify } from 'util'

const execAsync = promisify(exec)

interface FileToProcess {
  path: string
  lines: number
  priority: 'high' | 'medium' | 'low'
  status: 'pending' | 'completed' | 'failed' | 'skipped'
  error?: string
}

async function loadFilesFromReport(): Promise<FileToProcess[]> {
  const reportPath = path.join(process.cwd(), 'docs/todo/LAMBDA_REFACTOR_PROGRESS.md')
  const content = await fs.readFile(reportPath, 'utf-8')

  const files: FileToProcess[] = []
  const lines = content.split('\n')

  let currentPriority: 'high' | 'medium' | 'low' = 'high'

  for (const line of lines) {
    if (line.includes('### High Priority')) currentPriority = 'high'
    else if (line.includes('### Medium Priority')) currentPriority = 'medium'
    else if (line.includes('### Low Priority')) currentPriority = 'low'
    else if (line.includes('### Skipped')) break

    // Matches report entries like: - [ ] `path/to/file.ts` (321 lines)
    const match = line.match(/- \[ \] `([^`]+)` \((\d+) lines\)/)
    if (match) {
      files.push({
        path: match[1],
        lines: parseInt(match[2], 10),
        priority: currentPriority,
        status: 'pending',
      })
    }
  }

  return files
}

async function runCommand(cmd: string, cwd: string = process.cwd()): Promise<{ stdout: string; stderr: string }> {
  try {
    return await execAsync(cmd, { cwd, maxBuffer: 10 * 1024 * 1024 })
  } catch (error: any) {
    // lint/typecheck/test commands exit non-zero on findings; surface output instead of throwing
    return { stdout: error.stdout || '', stderr: error.stderr || error.message }
  }
}

async function main() {
  const args = process.argv.slice(2)
  const dryRun = args.includes('--dry-run') || args.includes('-d')
  const priorityFilter = args.find(a => ['high', 'medium', 'low', 'all'].includes(a)) || 'all'
  const limitArg = args.find(a => a.startsWith('--limit='))
  const limit = limitArg ? parseInt(limitArg.split('=')[1], 10) : 999
  const skipLint = args.includes('--skip-lint')
  const skipTest = args.includes('--skip-test')

  console.log('🚀 Lambda-per-File Refactoring Orchestrator\n')

  // Load files
  console.log('📋 Loading files from tracking report...')
  let files = await loadFilesFromReport()

  if (priorityFilter !== 'all') {
    files = files.filter(f => f.priority === priorityFilter)
  }

  files = files.slice(0, limit)

  console.log(`\n📊 Configuration:`)
  console.log(`   Priority: ${priorityFilter}`)
  console.log(`   Limit: ${limit}`)
  console.log(`   Files to process: ${files.length}`)
  console.log(`   Mode: ${dryRun ? '🔍 DRY RUN (preview only)' : '⚡ LIVE (will modify files)'}`)
  console.log(`   Skip lint: ${skipLint}`)
  console.log(`   Skip tests: ${skipTest}`)

  if (files.length === 0) {
    console.log('\n⚠️ No files to process')
    return
  }

  // Show a preview of the queue
  console.log(`\n📝 Files queued:`)
  const preview = files.slice(0, 10)
  preview.forEach((f, i) => {
    console.log(`   ${i + 1}. [${f.priority.toUpperCase()}] ${f.path} (${f.lines} lines)`)
  })
  if (files.length > 10) {
    console.log(`   ... and ${files.length - 10} more`)
  }

  // Safety confirmation for live mode
  if (!dryRun) {
    console.log(`\n⚠️ WARNING: This will refactor ${files.length} files!`)
    console.log('   Press Ctrl+C to cancel, or wait 5 seconds to continue...')
    await new Promise(resolve => setTimeout(resolve, 5000))
  }

  console.log('\n' + '='.repeat(60))
  console.log('PHASE 1: REFACTORING')
  console.log('='.repeat(60) + '\n')

  // Refactor files
  const refactor = new ASTLambdaRefactor({ dryRun, verbose: true })

  for (let i = 0; i < files.length; i++) {
    const file = files[i]
    console.log(`\n[${i + 1}/${files.length}] Processing: ${file.path}`)

    try {
      await refactor.refactorFile(file.path)
      file.status = 'completed'
    } catch (error) {
      const errorMsg = error instanceof Error ? error.message : String(error)
      if (errorMsg.includes('skipping') || errorMsg.includes('No functions')) {
        file.status = 'skipped'
        file.error = errorMsg
      } else {
        file.status = 'failed'
        file.error = errorMsg
        console.error(`   ❌ Error: ${errorMsg}`)
      }
    }

    // Small delay to avoid overwhelming the system
    await new Promise(resolve => setTimeout(resolve, 100))
  }

  // Summary
  const summary = {
    total: files.length,
    completed: files.filter(f => f.status === 'completed').length,
    skipped: files.filter(f => f.status === 'skipped').length,
    failed: files.filter(f => f.status === 'failed').length,
  }

  console.log('\n' + '='.repeat(60))
  console.log('REFACTORING SUMMARY')
  console.log('='.repeat(60))
  console.log(`   ✅ Completed: ${summary.completed}`)
  console.log(`   ⏭️ Skipped: ${summary.skipped}`)
  console.log(`   ❌ Failed: ${summary.failed}`)
  console.log(`   📊 Total: ${summary.total}`)

  if (!dryRun && summary.completed > 0) {
    // Phase 2: Linting
    if (!skipLint) {
      console.log('\n' + '='.repeat(60))
      console.log('PHASE 2: LINTING & IMPORT FIXING')
      console.log('='.repeat(60) + '\n')

      console.log('🔧 Running ESLint with --fix...')
      const lintResult = await runCommand('npm run lint:fix')
      console.log(lintResult.stdout)
      if (lintResult.stderr && !lintResult.stderr.includes('warning')) {
        console.log('⚠️ Lint stderr:', lintResult.stderr)
      }
      console.log('  ✅ Linting complete')
    }

    // Phase 3: Type checking
    console.log('\n' + '='.repeat(60))
    console.log('PHASE 3: TYPE CHECKING')
    console.log('='.repeat(60) + '\n')

    console.log('🔍 Running TypeScript compiler check...')
    const typecheckResult = await runCommand('npm run typecheck')

    if (typecheckResult.stderr.includes('error TS')) {
      console.log('❌ Type errors detected:')
      console.log(typecheckResult.stderr.split('\n').slice(0, 20).join('\n'))
      console.log('\n⚠️ Please fix type errors before committing')
    } else {
      console.log('  ✅ No type errors')
    }

    // Phase 4: Testing
    if (!skipTest) {
      console.log('\n' + '='.repeat(60))
      console.log('PHASE 4: TESTING')
      console.log('='.repeat(60) + '\n')

      console.log('🧪 Running unit tests...')
      const testResult = await runCommand('npm run test:unit -- --run')

      if (testResult.stderr.includes('FAIL') || testResult.stdout.includes('FAIL')) {
        console.log('❌ Some tests failed')
        console.log(testResult.stdout.split('\n').slice(-30).join('\n'))
      } else {
        console.log('  ✅ All tests passed')
      }
    }
  }

  // Save detailed results
  const resultsPath = path.join(process.cwd(), 'docs/todo/REFACTOR_RESULTS.json')
  await fs.writeFile(resultsPath, JSON.stringify(files, null, 2), 'utf-8')
  console.log(`\n💾 Detailed results saved: ${resultsPath}`)

  // Final instructions
  console.log('\n' + '='.repeat(60))
  console.log('✨ REFACTORING COMPLETE!')
  console.log('='.repeat(60))

  if (dryRun) {
    console.log('\n📌 This was a DRY RUN. No files were modified.')
    console.log('   Run without --dry-run to apply changes.')
  } else {
    console.log('\n📌 Next Steps:')
    console.log('   1. Review the changes: git diff')
    console.log('   2. Fix any type errors if needed')
    console.log('   3. Run tests: npm run test:unit')
    console.log('   4. Commit: git add . && git commit -m "Refactor to lambda-per-file structure"')
  }

  console.log(`\n📊 Final Stats:`)
  console.log(`   Files refactored: ${summary.completed}`)
  console.log(`   Files skipped: ${summary.skipped}`)
  console.log(`   Files failed: ${summary.failed}`)

  if (summary.failed > 0) {
    console.log(`\n❌ Failed files:`)
    files.filter(f => f.status === 'failed').forEach(f => {
      console.log(`   - ${f.path}: ${f.error}`)
    })
  }
}

if (require.main === module) {
  main().catch(console.error)
}
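
// Example invocations (flags exactly as parsed in main() above):
//   npx tsx tools/refactoring/orchestrate-refactor.ts --dry-run high --limit=5
//   npx tsx tools/refactoring/orchestrate-refactor.ts all --skip-lint --skip-test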