
# 🚀 CodexConvert

**AI Code Conversion Benchmark Platform**

Convert entire codebases across languages using multiple AI models and benchmark their performance.


## ✨ Features

| Feature | Description |
| --- | --- |
| 🔁 Multi-Model Conversion | Run the same conversion task across multiple AI models simultaneously |
| 📊 Automatic Benchmark Scoring | Evaluate outputs using syntax validation, structural fidelity, and token efficiency |
| 🏆 Leaderboard Rankings | See which AI models perform best across historical conversions |
| 🧠 Language-Pair Benchmarking | Discover the best model for specific migrations such as Python → Rust |
| 🧩 Workspace Dashboard | Modern developer interface with model comparison and benchmark insights |
| 🔒 Privacy-First Architecture | All conversions run directly from your browser to the AI provider; no backend |

## 🌐 Supported Languages

Python JavaScript TypeScript Java Go
Rust C C++ C# Ruby
PHP Swift Kotlin Scala Dart
R Perl Shell Script Julia MATLAB
Fortran COBOL Lisp

## 🔌 Supported AI Providers

CodexConvert works with any OpenAI-compatible API:

| Provider | Default Base URL |
| --- | --- |
| OpenAI | `https://api.openai.com/v1` |
| DeepSeek | `https://api.deepseek.com/v1` |
| Mistral | `https://api.mistral.ai/v1` |
| Groq | `https://api.groq.com/openai/v1` |
| Ollama (local) | `http://localhost:11434/v1` |
| OpenRouter | `https://openrouter.ai/api/v1` |
| Together AI | `https://api.together.xyz/v1` |
| Custom | Any endpoint you trust |
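Because every provider above speaks the same chat-completions wire format, switching providers is just a base-URL swap. Here is a minimal sketch of how such a request could be assembled; the function, model name, and prompt are illustrative, not CodexConvert's actual `llmService` API:

```typescript
// Build a chat-completions request for any OpenAI-compatible endpoint.
// The caller sends it with fetch(), passing the API key as a Bearer token.
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

function buildConversionRequest(
  baseUrl: string,
  model: string,
  sourceCode: string,
  targetLang: string,
) {
  const messages: ChatMessage[] = [
    { role: "system", content: `Convert the user's code to ${targetLang}. Return only code.` },
    { role: "user", content: sourceCode },
  ];
  return {
    url: `${baseUrl.replace(/\/+$/, "")}/chat/completions`,
    body: { model, messages, temperature: 0 },
  };
}

// Example: point the same request builder at DeepSeek instead of OpenAI.
const req = buildConversionRequest(
  "https://api.deepseek.com/v1",
  "deepseek-chat",
  "print('hello')",
  "Rust",
);
// fetch(req.url, { method: "POST", headers: { "Content-Type": "application/json",
//   Authorization: `Bearer ${apiKey}` }, body: JSON.stringify(req.body) })
```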

βš™οΈ How It Works

                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚    πŸ“‚ Upload Files    β”‚
                    β”‚   (browser memory)    β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚  πŸ” Model Execution  β”‚  Parallel dispatch
                    β”‚     Manager          β”‚  (max 3 concurrent)
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                   β–Ό           β–Ό           β–Ό
              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
              β”‚ Model A β”‚ β”‚ Model B β”‚ β”‚ Model C β”‚
              β”‚ (fetch) β”‚ β”‚ (fetch) β”‚ β”‚ (fetch) β”‚
              β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜
                   β”‚           β”‚           β”‚
                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β–Ό
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚  πŸ“Š Benchmark Engine β”‚  Syntax, structure,
                    β”‚     + Scoring        β”‚  token efficiency
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚  πŸ’Ύ Benchmark Datasetβ”‚  localStorage
                    β”‚     (max 200 runs)   β”‚  persistence
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                               β”‚
                               β–Ό
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚  πŸ† Leaderboard      β”‚  Rankings, filtering,
                    β”‚     System           β”‚  top models
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
Layer Responsibility
Model Execution Manager Dispatches conversions in parallel with concurrency control and failure isolation
Benchmark Engine Runs syntax check, structural fidelity, and token efficiency metrics on each output
Scoring Combines metrics into a weighted 0–10 score per model
Benchmark Dataset Persists runs to localStorage for historical analysis
Leaderboard System Aggregates scores into global and language-pair rankings
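The dispatch step above can be sketched as a small concurrency limiter: at most three conversions run at once, and one model's failure never aborts the others. The following is an illustration of the idea, not the actual `modelExecutionManager` code:

```typescript
// Run tasks with bounded concurrency; capture each failure in place so
// sibling model runs keep going (failure isolation).
async function runWithLimit<T>(
  tasks: Array<() => Promise<T>>,
  limit = 3,
): Promise<Array<T | Error>> {
  const results: Array<T | Error> = new Array(tasks.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (single-threaded, so safe)
      try {
        results[i] = await tasks[i]();
      } catch (err) {
        results[i] = err instanceof Error ? err : new Error(String(err));
      }
    }
  }
  // Spawn at most `limit` workers that drain the shared task queue.
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}
```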

For the full architecture breakdown, see 📁 Architecture.
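The 200-run cap shown in the diagram amounts to a simple ring-buffer policy over a JSON array in localStorage. A sketch of that policy follows; the record fields and function names are illustrative, not the actual `benchmarkDataset.ts` interface:

```typescript
// Append a run, dropping the oldest entries once the cap is exceeded, so
// the stored dataset never grows past MAX_RUNS records.
interface BenchmarkRun {
  timestamp: number;
  model: string;
  score: number;
}

const MAX_RUNS = 200;

function appendRun(existing: BenchmarkRun[], run: BenchmarkRun): BenchmarkRun[] {
  const updated = [...existing, run];
  return updated.length > MAX_RUNS
    ? updated.slice(updated.length - MAX_RUNS) // keep only the newest MAX_RUNS
    : updated;
}

// In the browser this would round-trip through localStorage, e.g.:
// localStorage.setItem("benchmarkRuns", JSON.stringify(appendRun(loadRuns(), run)));
```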


## 📊 Benchmark Metrics

Every conversion is automatically evaluated using three metrics:

| Metric | Weight | What It Measures |
| --- | --- | --- |
| ✅ Syntax Validity | 40% | Balanced brackets and delimiters |
| 🏗️ Structural Fidelity | 40% | Preservation of file count, paths, functions, classes, and imports |
| ⚡ Token Efficiency | 20% | Output conciseness relative to other models |
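The syntax-validity metric boils down to a bracket-matching pass. Here is a simplified sketch of that check; the real `syntaxCheck.ts` may also account for strings and comments, so this shows only the core stack-based idea:

```typescript
// Return true when (), [], and {} all nest and close correctly.
function bracketsBalanced(code: string): boolean {
  const open = "([{";
  const close = ")]}";
  const stack: string[] = [];
  for (const ch of code) {
    const idx = open.indexOf(ch);
    if (idx !== -1) {
      stack.push(close[idx]); // remember which closer we expect
      continue;
    }
    if (close.includes(ch) && stack.pop() !== ch) return false; // wrong or extra closer
  }
  return stack.length === 0; // anything left open is a failure
}
```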

Scoring formula:

```
finalScore = syntaxScore × 0.4 + structuralScore × 0.4 + tokenScore × 0.2
```

Scores are normalized to a 0–10 scale. Full methodology: 📊 Benchmarking.
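Expressed as code, the combination step looks like the sketch below. The names are illustrative, not the actual `scoring.ts` exports, and each metric is assumed to be already normalized to the 0–10 range before weighting:

```typescript
// Combine pre-normalized (0–10) metric scores into one weighted final score.
interface MetricScores {
  syntax: number;
  structural: number;
  token: number;
}

const WEIGHTS = { syntax: 0.4, structural: 0.4, token: 0.2 };

function finalScore(m: MetricScores): number {
  const raw =
    m.syntax * WEIGHTS.syntax +
    m.structural * WEIGHTS.structural +
    m.token * WEIGHTS.token;
  return Math.round(raw * 100) / 100; // two decimals for display
}
```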


## 🔒 Security & Privacy

CodexConvert was built with a privacy-first architecture and has undergone a comprehensive security audit:

- 🔐 API keys are stored in sessionStorage only and cleared when the tab closes
- 🚫 No backend server; zero server-side data collection
- 📡 Direct browser → AI provider communication
- 🛡️ Path sanitization prevents ZIP traversal attacks from LLM responses
- 📦 No user code is stored in the benchmark dataset; only scores and metadata
- 🔍 TypeScript strict mode enabled across the entire codebase
- 🧱 Content Security Policy restricts script sources and connections
- 🔗 HTTPS enforced for all remote provider URLs
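The ZIP-traversal protection rejects archive entries that try to escape the extraction root. A sketch of the idea, as a simplified stand-in for `utils/pathSanitizer.ts` rather than its exact behavior:

```typescript
// Normalize a ZIP entry path: reject anything that could climb out of the
// target directory (".." segments) and strip absolute/drive prefixes.
function sanitizeZipPath(entry: string): string | null {
  const parts = entry
    .replace(/\\/g, "/") // treat backslashes as separators
    .split("/")
    .filter((p) => p !== "" && p !== "."); // drop empty and "." segments
  const out: string[] = [];
  for (const part of parts) {
    if (part === "..") return null; // traversal attempt: reject outright
    if (/^[a-zA-Z]:$/.test(part)) continue; // drop Windows drive prefixes
    out.push(part);
  }
  return out.length > 0 ? out.join("/") : null;
}
```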

Full details: 🔒 Security.


πŸ“ Project Structure

.
β”œβ”€β”€ App.tsx                           # Main app β€” state, conversion orchestration
β”œβ”€β”€ index.tsx                         # React entry point
β”œβ”€β”€ constants.ts                      # Languages, provider presets, model lists
β”œβ”€β”€ types.ts                          # Shared TypeScript interfaces
β”‚
β”œβ”€β”€ core/
β”‚   β”œβ”€β”€ modelExecutionManager.ts      # Parallel model execution (max 3)
β”‚   β”œβ”€β”€ benchmark/
β”‚   β”‚   β”œβ”€β”€ benchmarkEngine.ts        # Runs metrics, produces results
β”‚   β”‚   β”œβ”€β”€ scoring.ts                # Weighted scoring formula
β”‚   β”‚   β”œβ”€β”€ benchmarkDataset.ts       # localStorage persistence (max 200 runs)
β”‚   β”‚   β”œβ”€β”€ rankingEngine.ts          # Historical aggregation β†’ rankings
β”‚   β”‚   β”œβ”€β”€ types.ts                  # Benchmark & leaderboard types
β”‚   β”‚   └── metrics/
β”‚   β”‚       β”œβ”€β”€ syntaxCheck.ts        # Balanced bracket validation
β”‚   β”‚       β”œβ”€β”€ structuralFidelity.ts # File/path/element comparison
β”‚   β”‚       └── tokenUsage.ts         # Token count estimation
β”‚   └── leaderboard/
β”‚       └── leaderboardEngine.ts      # Global + language-pair queries
β”‚
β”œβ”€β”€ services/
β”‚   └── llmService.ts                 # OpenAI-compatible API client + path sanitization
β”‚
β”œβ”€β”€ utils/
β”‚   └── pathSanitizer.ts              # ZIP path traversal prevention + display truncation
β”‚
β”œβ”€β”€ context/
β”‚   β”œβ”€β”€ ProviderContext.tsx            # LLM provider config (sessionStorage)
β”‚   └── ToastContext.tsx               # Notification system
β”‚
β”œβ”€β”€ components/
β”‚   β”œβ”€β”€ layout/                       # AppLayout, Sidebar, TopBar
β”‚   β”œβ”€β”€ converter/                    # ConversionWorkspace (3-panel)
β”‚   β”œβ”€β”€ benchmark/                    # ScorePanel (metrics breakdown)
β”‚   β”œβ”€β”€ leaderboard/                  # LeaderboardView, tables, widgets
β”‚   β”œβ”€β”€ ProviderPicker.tsx            # Provider/model configuration UI
β”‚   β”œβ”€β”€ ModelSelector.tsx             # Multi-model checkbox selector
β”‚   β”œβ”€β”€ ComparisonPanel.tsx           # Side-by-side model output cards
β”‚   β”œβ”€β”€ CodeDisplay.tsx               # Original ↔ converted viewer
β”‚   β”œβ”€β”€ FileTree.tsx                  # Collapsible file tree
β”‚   β”œβ”€β”€ PrivacyBadge.tsx              # πŸ”’ Privacy mode indicator
β”‚   └── Loader.tsx                    # Loading overlay
β”‚
└── docs/
    β”œβ”€β”€ ARCHITECTURE.md               # System architecture
    β”œβ”€β”€ BENCHMARKING.md               # Evaluation methodology
    └── SECURITY.md                   # Security model & privacy

πŸ›£οΈ Roadmap

Phase Feature Status
Phase 1 LLM Abstraction Layer βœ… Complete
Phase 2 Multi-Model Conversion βœ… Complete
Phase 3 Benchmark & Scoring Engine βœ… Complete
Phase 4 Open Benchmark Leaderboard βœ… Complete
Phase 5 Workspace UI Redesign βœ… Complete
Phase 6 Security Hardening & Documentation βœ… Complete
Phase 7 Community benchmark submissions & public leaderboard πŸ”œ Planned
Phase 8 Advanced metrics (AST comparison, runtime validation) πŸ”œ Planned

## 🚀 Getting Started

### Prerequisites

- Node.js >= 18
- An API key from any supported LLM provider

### Install and run

```sh
git clone https://github.com/aryanjsx/code-converter.git
cd code-converter
npm install
npm run dev
```

Open the printed local URL (typically http://localhost:3000) in your browser.

### Configure your provider

1. Select a provider from the dropdown (OpenAI, DeepSeek, Groq, etc.)
2. Enter your API key
3. Choose one or more models
4. Upload a project folder or files
5. Click **Convert** (or **Compare N Models** for multi-model runs)

No .env file is needed. All configuration happens in the browser at runtime.


πŸ› οΈ Tech Stack

Layer Technology
Framework React 19
Bundler Vite 6
Language TypeScript 5.8 (strict mode)
AI Integration Any OpenAI-compatible API
ZIP Export JSZip + FileSaver
Styling Tailwind CSS

## 📚 Documentation

| Document | Description |
| --- | --- |
| 📁 Architecture | System design and subsystem breakdown |
| 📊 Benchmarking | Evaluation metrics and scoring methodology |
| 🔒 Security | Privacy model, audit results, and security practices |
| 🤝 Contributing | Development setup and contribution guide |

## 🤝 Contributing

Contributions are welcome! Whether it's a new benchmark metric, a provider preset, a UI improvement, or a bug fix, check out our Contributing Guide for development setup, code structure, and pull request guidelines.


## 📄 License

This project is licensed under the MIT License.
