A collection of various ML algorithms implemented from scratch as part of my Algorithms, AI and ML Laboratory course.
This serves as my dumping ground for everything I have done throughout this course. In each week's folder you will find:
- Implementation code for the algorithm (`labXX.ipynb`)
- Associated report (`report.pdf`)
- Typst code for the reports
- Matplotlib figures, datasets and other artifacts
Using uv:

- Ensure you have uv installed
- Clone the repo
- Install dependencies: `uv sync`
- Run jupyter: `uv run jupyter-lab`

Using pip:

- Clone the repo
- Optionally create a virtual environment and activate it
- Install dependencies: `pip install .`
- Run jupyter: `jupyter-lab`

| Topic | Folder Link |
|---|---|
| Gradient-based Algorithms for Optimization | Week 1 |
| Regression | Week 2 |
| Support Vector Machines | Week 3 |
| Decision Trees | Week 4 |
| Random Forests | Week 5 |
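To give a flavor of the Week 1 topic, here is a minimal gradient-descent sketch. This is not the course implementation from the notebooks; the objective function, learning rate, and step count are illustrative choices:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)  # fixed-step update: x_{k+1} = x_k - lr * grad(x_k)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=[0.0])
print(x_min)  # converges to approximately [3.]
```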
Jupyter Lab can be integrated with your AI model of choice using the MCP protocol. Simply start Jupyter Lab with `just jupyter` and connect with your MCP client of choice.
Make sure to set the environment variables as mentioned in jupyter-mcp-server docs.
Support for OpenCode is already configured in the repo since that's what I use.
There's an agent skill, `jupyter-typst-report`, which can write a Typst report for a given notebook. It is compatible with Claude Code, Codex, GitHub Copilot, Gemini CLI, OpenCode, or any other AI agent that supports the skills protocol.
I have tried to document the most important learning points in the Jupyter notebooks and the associated reports. Some additional points that didn't fit in the report are in the respective week's README.
I am no expert in ML and have taken extensive help from AI while writing the algorithms. I have made my best effort to get the algorithms correct; however, there may still be issues with correctness and accuracy.