```bash
codesum --mcp-server --mcp-port 8000
```

Or if you want to specify a different host:

```bash
codesum --mcp-server --mcp-host 0.0.0.0 --mcp-port 8000
```

## API Endpoints

### GET /health

Health check endpoint to verify the server is running:

```bash
curl http://localhost:8000/health
```

Response:

```json
{
  "status": "ok",
  "service": "codesum-mcp"
}
```

### GET /summarize

Generate a summary using query parameters:

```bash
curl "http://localhost:8000/summarize?query=configuration&max_files=5"
```

### POST /summarize

Generate a summary using a JSON body:

```bash
curl -X POST http://localhost:8000/summarize \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Find files related to configuration",
    "max_files": 5
  }'
```

Response (the `summary` string is shown with its embedded newlines expanded and its content truncated for readability):

````json
{
  "summary": "# Code Summary for Query: Find files related to configuration
Project Root: /path/to/project
Project Structure:
.
|-- src/
|   |-- codesum/
|   |   |-- __init__.py
|   |   |-- app.py
|   |   |-- config.py
|   |   |-- file_utils.py
|   |   |-- folder_utils.py
|   |   |-- mcp_http_server.py
|   |   |-- mcp_server.py
|   |   |-- openai_utils.py
|   |   |-- summary_utils.py
|   |   |-- tui.py
|-- pyproject.toml
|-- README.md

---
## File: src/codesum/config.py

```python
# src/codesum/config.py

import os
from pathlib import Path
import platformdirs
from dotenv import load_dotenv, set_key, find_dotenv, unset_key
import sys

APP_NAME = \"codesum\"
CONFIG_DIR = Path(platformdirs.user_config_dir(APP_NAME))
CONFIG_FILE = CONFIG_DIR / \"settings.env\"

DEFAULT_LLM_MODEL = \"gpt-4o\"  # Keep a default

# Simple flag for verbose debugging output
DEBUG_CONFIG = False  # Set to True locally if you need deep tracing

...",
  "selected_files": [
    "/path/to/project/src/codesum/config.py",
    "/path/to/project/pyproject.toml",
    "/path/to/project/src/codesum/__init__.py"
  ]
}
````

## Programmatic Usage

You can also use the MCP server programmatically:

```python
from codesum.mcp_server import CodeSumMCPServer

# Create server instance
server = CodeSumMCPServer()

# Process a request
result = server.process_request({
    'query': 'Find files related to configuration',
    'max_files': 5
})

# Access the results
summary = result['summary']
selected_files = result['selected_files']
```

## Integration with LLM Tools

The MCP server can be integrated with LLM tools that support the Model Context Protocol (MCP). The server automatically selects the most relevant files based on the query and returns a structured summary that can be used as context for further processing.
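
The POST /summarize exchange can also be driven from a script without extra dependencies. Below is a minimal client sketch using only the Python standard library; the helper names `build_summarize_request` and `summarize` are illustrative, not part of codesum, and the base URL assumes the default host and port from the examples above:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed default host/port from the examples above


def build_summarize_request(query: str, max_files: int = 5) -> urllib.request.Request:
    """Build the POST /summarize request shown in the curl example.

    Illustrative helper, not part of codesum itself.
    """
    body = json.dumps({"query": query, "max_files": max_files}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/summarize",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def summarize(query: str, max_files: int = 5) -> dict:
    """Send the request and parse the JSON response.

    Per the documented response shape, the returned dict has
    "summary" and "selected_files" keys.
    """
    with urllib.request.urlopen(build_summarize_request(query, max_files)) as resp:
        return json.load(resp)
```

With the server running, `summarize("Find files related to configuration")["selected_files"]` returns the list of paths the server judged most relevant, ready to feed into a downstream tool.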