and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
- Append CUDA version (cuXXX) and AVX level (basic/avx2) to the version string.
- New format: `package-ver+cuXXX.avxver-pyver-plat.whl` (e.g., `llama_cpp_python-0.3.23+cu130.basic-cp310-win_amd64.whl`).
- feat: Update llama.cpp to [ggml-org/llama.cpp/commit/b33df266d0a53f800c47513386920cff1019d70e](https://github.com/ggml-org/llama.cpp/commit/b33df266d0a53f800c47513386920cff1019d70e)
- feat: Sync llama.cpp llama/mtmd API Binding 20260129
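The versioning scheme above can be sketched as follows. This is a minimal illustration of how a build script might compose such a wheel filename; the function names here are hypothetical, not the project's actual build code:

```python
def tag_version(base: str, cuda: str, avx: str) -> str:
    """Append CUDA build (e.g. 'cu130') and AVX level ('basic' or 'avx2')
    as a PEP 440 local version label, e.g. '0.3.23+cu130.basic'."""
    return f"{base}+{cuda}.{avx}"

def wheel_name(package: str, version: str, pyver: str, plat: str) -> str:
    """Compose the wheel filename: package-ver+cuXXX.avxver-pyver-plat.whl."""
    return f"{package}-{version}-{pyver}-{plat}.whl"

# Example matching the changelog entry:
# wheel_name("llama_cpp_python", tag_version("0.3.23", "cu130", "basic"),
#            "cp310", "win_amd64")
# -> "llama_cpp_python-0.3.23+cu130.basic-cp310-win_amd64.whl"
```

Using `+` for the CUDA/AVX suffix keeps the string a valid PEP 440 local version, so pip still parses and orders the releases correctly.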
## [0.3.22]
- perf (TFFT): Optimize `longest_token_prefix` with NumPy SIMD and fast-fail probe
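The idea behind this entry can be sketched as below: a cheap scalar probe rejects the common no-overlap case immediately, and otherwise one vectorized comparison replaces a Python-level loop. This is a hedged reconstruction of the technique named in the changelog, not the project's actual implementation:

```python
import numpy as np

def longest_token_prefix(a, b):
    """Length of the common prefix of two token sequences.

    Sketch of the optimization: fast-fail probe on the first token,
    then a single NumPy elementwise compare (SIMD-friendly) + argmax
    to find the first mismatch.
    """
    n = min(len(a), len(b))
    if n == 0 or a[0] != b[0]:   # fast-fail probe: cheap scalar check first
        return 0
    neq = np.asarray(a[:n]) != np.asarray(b[:n])  # vectorized mismatch mask
    return int(np.argmax(neq)) if neq.any() else n  # first True, or full length
```

For prompt-cache reuse (the TFFT path), most comparisons either fail at token 0 or match a long run, so the probe plus one vectorized pass covers both extremes cheaply.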