Conversation
…direct nix-copy-closure: create `LegacySSHStoreConfig` directly
The `storeUri`, `substituters`, and `trustedSubstituters` settings now store typed `StoreReference` values directly instead of raw strings, so callers work with the real types without manual parsing. This is a reworked version of NixOS#10761.
Add a release note asking for help testing https://github.com/NixOS/nix-installer We're hoping to start recommending the Rust-based installer after one release cycle. Co-authored-by: Cole Helbling <cole.e.helbling@outlook.com>
…ypes globals: change store settings to use `StoreReference` types directly
beta nix-installer: add release-note
The underlying mechanism is now in a new `fetchBuildLog` function I put in `libcmd`. I am putting it in here and not in libstore because I have some doubts about `getDefaultSubstituters`, so I would like to keep it in a more "peripheral" part of the codebase for now.
`hashedMirrors` is only relevant to local builds (it is consumed by `builtin:fetchurl` during derivation building) but lived in the global `Settings` class. This moves it to `LocalSettings` where it belongs and threads it through `BuiltinBuilderContext` so `fetchurl.cc` reads it from the context instead of reaching into `settings` directly.
…TransferSettings` The `caFile`, `netrcFile`, and `downloadSpeed` settings are only used by the file transfer subsystem but lived in the global `Settings` class. This moves them to `FileTransferSettings` where they belong. Co-authored-by: Amaan Qureshi <git@amaanq.com>
Deduplicate `nix repl` and `nix log`
On some platforms (macOS), the default soft limit is very low, but the hard limit is high. So let's just raise it to the maximum permitted.
…downloadspeed-filetransfer libstore: migrate `caFile`, `netrcFile`, and `downloadSpeed` to `FileTransferSettings`
DerivationBuilder: Preserve death signal across setuid/setgid
Increase the open file soft limit to the hard limit
nix-nswrapper allows running nix in its own user namespace,
believing it is root and with access to build users for sandboxing
with auto-allocate-uids, while it is actually unprivileged.
It is used to wrap nix, and an example of its use has been
added to the unprivileged daemon functional tests.
Running it does not require any elevated privileges,
only uids and gids allocated in /etc/sub{uid,gid}
Support build users for unprivileged users with subuid/subgid
This commit makes `FileTransfer` self-contained by giving it a reference to `FileTransferSettings` instead of reading from the global. It also adds an optional `FileTransfer` parameter to `HttpBinaryCacheStore` so callers can inject their own instance. The main motivation is test isolation. The HTTPS store tests now create custom `FileTransferSettings` with the test CA certificate and pass it through `makeFileTransfer()`, avoiding global state mutation entirely.
…ansfer libstore: make `FileTransfer` injectable into `HttpBinaryCacheStore`
…ParsedURL` In the https-store tests, a `TestHttpBinaryCacheStoreConfig` is constructed with a call to `fmt` to create the cache URI. This commit adds a constructor to `HttpBinaryCacheStoreConfig` that removes the need for this call, and updates the test type to use it, so we're no longer manually calling `fmt` on a string to format the port.
libstore: add `HttpBinaryCacheStoreConfig` constructor that takes a `ParsedURL`
…tive_interference_size This expands to __GCC_DESTRUCTIVE_SIZE, which is also 64 (at least in the x86_64 stdenv). Let the compiler decide what the appropriate cache line size is. Also, on aarch64-darwin the cache line size is 128 bytes, so the previous fix didn't actually get rid of false sharing reliably. Clang does this [1] [2], so it overestimates the sizes somewhat, but that's still enough to avoid false sharing on darwin. [1]: https://github.com/llvm/llvm-project/blob/a289341dedf4d73955faefbb0b3c13881e13dd06/clang/lib/Frontend/InitPreprocessor.cpp#L1331-L1339 [2]: https://github.com/llvm/llvm-project/blob/6f51f8e0f93e6ff2baf0ed2f60021308920a557f/clang/lib/Basic/Targets/AArch64.h#L262-L264
This commit moves `pollInterval`, `maxSubstitutionJobs`, `postBuildHook`, and `logLines` into a dedicated `WorkerSettings` struct that `Settings` privately inherits from, as they are only used by the build worker subsystem. This follows the same pattern as `LocalSettings` and `LogFileSettings`.
…-line-size Unhardcode alignas cache line size
libstore: extract `WorkerSettings` from `Settings`
Idle connections in libcurl's connection pool can be silently dropped by the OS or intermediate firewalls/NATs before they can be reused, forcing new TCP connections to be created. This is especially problematic for HTTP/1.1 endpoints where multiplexing is unavailable. Enable TCP keep-alive with a 60-second idle/interval on all curl easy handles to prevent idle connection drops and improve connection reuse.
…tyle option S3 binary caches now use virtual-hosted-style URLs by default for standard AWS endpoints. Path-style endpoints (s3.region.amazonaws.com) only serve HTTP/1.1, preventing HTTP/2 multiplexing and causing TCP TIME_WAIT socket exhaustion under high concurrency. Virtual-hosted-style endpoints (bucket.s3.region.amazonaws.com) support HTTP/2, enabling multiplexing with the existing CURLPIPE_MULTIPLEX configuration. Add a new `addressing-style` store option (auto/path/virtual) to control this behavior. `auto` (default) uses virtual-hosted-style for standard AWS endpoints and path-style for custom endpoints. `path` forces path-style for backwards compatibility. `virtual` forces virtual-hosted-style for all endpoints including custom ones. Fixes: NixOS#15208
This commit introduces a `getReadOnly` method on the store config that returns whether the current store is read-only. This is then used in substitution, so we fail gracefully with a nice error message if an individual store is read-only. As a bonus, it gets us one step closer to getting rid of the global, because we can use the per-store method instead. Progress on NixOS#5638
This makes `acquireUserLock` take an explicit stateDir parameter, since it was previously reaching into the global settings object just to read `nixStateDir` for constructing the userpool paths. Progress on NixOS#5638
It was a crude hack that this one low-level function was dependent on the high-level read-only mode setting --- all the more so because rather than making derivation writing fail, that setting made it silently "succeed" while not actually writing the derivation. (Also, for context, we didn't have any such behavior for any other store-mutating operation, just for this one function.) I have gotten rid of the wrapper and updated the call sites accordingly. - For the ones that should remain dependent on this setting, I made this explicit and added a comment. - For others, surrounding operations assumed writability (e.g. we had written something before, or were about to try to read back the written derivation after), and so I just made those do the underlying `Store::writeDerivation` operation.
Because unsigned char * can alias anything, the compiler has no choice
but to actually iterate byte-by-byte in readNum and readLittleEndian:
│ movzbl -0x2f(%rbp),%eax
12.54 │ movzbl -0x2e(%rbp),%edx
│ shl $0x8,%rax
1.88 │ shl $0x10,%rdx
│ or %rdx,%rax
│ movzbl -0x30(%rbp),%edx
5.09 │ or %rdx,%rax
2.37 │ movzbl -0x2d(%rbp),%edx
│ shl $0x18,%rdx
3.95 │ or %rdx,%rax
│ movzbl -0x2c(%rbp),%edx
│ shl $0x20,%rdx
│ or %rax,%rdx
5.59 │ movzbl -0x2b(%rbp),%eax
3.29 │ shl $0x28,%rax
│ or %rdx,%rax
7.83 │ movzbl -0x2a(%rbp),%edx
│ shl $0x30,%rdx
│ or %rax,%rdx
8.22 │ movzbl -0x29(%rbp),%eax
│ shl $0x38,%rax
│ or %rdx,%rax
6.42 │ mov %rax,%rcx
│ mov %rax,-0x60(%rbp)
1.35 │ shr $0x20,%rcx
Which now compiles down to:
2.20 │ mov -0x30(%rbp),%rax
3.12 │ mov %rax,%rcx
│ mov %rax,-0x60(%rbp)
│ shr $0x20,%rcx
Use #include <...> for Windows system headers
libutil: More efficient `readLittleEndian`
Signed-off-by: Lisanna Dettwyler <lisanna.dettwyler@gmail.com>
Resolves NixOS#15317 Signed-off-by: Lisanna Dettwyler <lisanna.dettwyler@gmail.com>
Release notes for 2.34.0
libexpr: Make Boehm allocation cache thread_local
Add missing directories to uninstall.md
(cherry picked from commit 62a78c8)
Due to an erroneous cast, the wrong pointer was passed to these callbacks, leading to a crash. We now create a lightweight temporary EvalState wrapper on the stack in each callback bridge. This also eliminates the need for unsafe_new_with_self for EvalState construction. Co-authored-by: Jörg Thalheim <joerg@thalheim.io> (cherry picked from commit 6633c55)
…tenance [Backport 2.34-maintenance] fix(libexpr-c): pass valid EvalState to primop callback
Removes unnecessary `error: warning:` prefixes by using .message() instead of .msg(). Also includes attempt counts in the file transfer warnings. Removes a redundant trace from the file transfer errors, since FileTransferError is already descriptive enough and contains basically the same information. (cherry picked from commit 6bedefe)
…tenance [Backport 2.34-maintenance] Improve formatting of various errors and warnings
…daemon Client setting overrides from trusted users were not passed to fileTransferSettings, so settings like `netrc-file` had no effect when set by the client. (cherry picked from commit 93246e6)
…tenance [Backport 2.34-maintenance] libstore: forward trusted client settings to fileTransferSettings in daemon
Apparently the release-notes script failed to nuke the file.
📝 Walkthrough

This PR removes a custom installer action, updates CI, adds Windows test config and docs, removes Kaitai checks, and performs large refactors across libcmd, libexpr, libfetchers, libflake, and libstore: new BuildError/ExitStatusFlags, event-driven build goals, FileTransfer/TLS changes, many API/signature/type migrations, and extensive tests/data updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Client as Client
    participant Worker as Worker (process)
    participant Goal as DerivationBuildingGoal
    participant Hook as Remote Builder
    participant Local as Local Builder
    participant Store as Store
    participant Log as BuildLog
    Client->>Worker: request build(paths)
    Worker->>Goal: spawn goal(drv, options)
    Goal->>Worker: acquire resources
    alt Hook allowed
        Goal->>Hook: tryBuildHook(drv, env)
        Hook-->>Goal: (result, logs)
    else Local build
        Goal->>Local: buildLocally(drv, sandbox)
        Local-->>Goal: (result, logs)
    end
    Goal->>Log: stream lines & capture tail
    alt success
        Goal->>Store: register outputs
        Goal-->>Worker: amDone(success)
    else failure/timeout
        Goal-->>Worker: doneFailure(BuildError / TimedOut)
    end
    Worker-->>Client: BuildResult (ExitStatusFlags)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~180 minutes
Actionable comments posted: 10
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
doc/manual/source/package-management/binary-cache-substituter.md (1)
43-43: ⚠️ Potential issue | 🟡 Minor

Fix grammatical error: mixed singular/plural.
The phrase "or another binary caches" mixes singular ("another") with plural ("caches"). Should be either "or another binary cache" or "or other binary caches".
📝 Proposed fix
```diff
-the path is available on the server `avalon` or another binary caches.
+the path is available on the server `avalon` or other binary caches.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/manual/source/package-management/binary-cache-substituter.md` at line 43, The sentence contains a grammatical number mismatch: replace the phrase "the path is available on the server `avalon` or another binary caches." by correcting the noun to match "another" (e.g., change "or another binary caches" to "or another binary cache") so the sentence reads "the path is available on the server `avalon` or another binary cache." Locate the exact text "or another binary caches" in the document and update it accordingly.

src/libstore/builtins/buildenv.cc (1)
51-69: ⚠️ Potential issue | 🟠 Major

This separator-mismatch issue is currently mitigated but will surface when Windows support is added.

At Line 51, `srcFile` is built with `.string()` (which uses native separators), but Lines 67-69 match only `"/"` suffixes. On Windows, this would bypass the exclusion filters. However, the code path is currently protected by `#ifndef _WIN32` (Line 86), so it doesn't execute on Windows today.

When Windows support is completed (note the TODO comment at Line 86), this latent bug must be addressed. The proposed fix using `.generic_string()` is correct.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/builtins/buildenv.cc` around lines 51 - 69, The code constructs srcFile using std::filesystem::path{srcDir} / name and calls .string(), producing platform-native separators, but subsequently uses hasSuffix(srcFile, "/...") checks which only match forward slashes; change the construction to use .generic_string() for srcFile (and dstFile if used with literal "/" matches) so that hasSuffix comparisons like hasSuffix(srcFile, "/propagated-build-inputs") etc. work correctly on Windows when the `#ifndef` _WIN32 guard is removed; update references to srcFile (and dstFile if applicable) where literal "/"-based suffix checks are performed, leaving maybeStat and hasSuffix unchanged.
🟠 Major comments (21)
src/libexpr/eval-profiler.cc-144-146 (1)
144-146: ⚠️ Potential issue | 🟠 Major

Disable symlink following when truncating profiler output.
Line 145 currently follows symlinks while truncating existing files. That can unexpectedly truncate the symlink target and is risky in shared/temp directories.
🔒 Proposed fix
```diff
 {
     .truncateExisting = true,
-    .followSymlinksOnTruncate = true, /* FIXME: Probably shouldn't follow symlinks. */
+    .followSymlinksOnTruncate = false,
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libexpr/eval-profiler.cc` around lines 144 - 146, The profiler truncation call currently sets followSymlinksOnTruncate = true which can accidentally truncate symlink targets; in src/libexpr/eval-profiler.cc change the option for followSymlinksOnTruncate to false (next to .truncateExisting) so truncation does not follow symlinks, and remove or update the /* FIXME */ comment accordingly; the relevant symbols to edit are the options struct where .truncateExisting and .followSymlinksOnTruncate are set.

src/libexpr/include/nix/expr/counter.hh-14-14 (1)
14-14: ⚠️ Potential issue | 🟠 Major

Add `#include <new>` and guard `std::hardware_destructive_interference_size` for portability.

Line 14 uses `std::hardware_destructive_interference_size` without including `<new>`, where this constant is defined. Additionally, this constant is conditionally supported and may not be available on all standard library implementations. Add an explicit include and a feature-test macro guard with a fallback value to ensure portability across toolchains.

🔧 Proposed fix

```diff
 #include <atomic>
 #include <cstdint>
+#include <new>

 namespace nix {

+constexpr std::size_t counter_alignment =
+#if defined(__cpp_lib_hardware_interference_size)
+    std::hardware_destructive_interference_size;
+#else
+    64;
+#endif
+
 /**
  * An atomic counter aligned on a cache line to prevent false sharing.
  * The counter is only enabled when the `NIX_SHOW_STATS` environment
  * variable is set. This is to prevent contention on these counters
  * when multi-threaded evaluation is enabled.
  */
-struct alignas(std::hardware_destructive_interference_size) Counter
+struct alignas(counter_alignment) Counter
 {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libexpr/include/nix/expr/counter.hh` at line 14, Include <new> and guard usage of std::hardware_destructive_interference_size around the Counter declaration: add the header <new>, and wrap use of std::hardware_destructive_interference_size in a preprocessor check (e.g., `#if` defined(__cpp_lib_hardware_interference_size) or check for the symbol) with a fallback constant (e.g., define a fallback value like 64) to ensure portability; update the alignas on struct Counter to use the guarded symbol or the fallback so builds on toolchains that lack std::hardware_destructive_interference_size succeed.

src/libstore/builtins/fetchurl.cc-16-18 (1)
16-18: ⚠️ Potential issue | 🟠 Major

Reset `fileTransferSettings.netrcFile` when `ctx.netrcData` is empty.

When `ctx.netrcData` is empty (due to a failed read in derivation-builder.cc), `fileTransferSettings.netrcFile` is left unchanged, preserving any client-provided path. The `FileTransfer` instance created on line 47 will then attempt to use that inaccessible path in the sandbox context. Either reset the setting to an empty string in the else branch, or verify that an empty `netrcData` from a failed read should instead disable netrc entirely for this invocation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/builtins/fetchurl.cc` around lines 16 - 18, The current branch only sets fileTransferSettings.netrcFile when ctx.netrcData != "" leaving a previous client path in place when ctx.netrcData is empty; update the conditional around ctx.netrcData in fetchurl.cc so that when ctx.netrcData is empty you explicitly reset fileTransferSettings.netrcFile to an empty string (i.e., add an else that sets fileTransferSettings.netrcFile = "") to ensure the subsequent FileTransfer instance does not attempt to use a stale/inaccessible path; keep the writeFile call only in the non-empty branch.

src/libstore/daemon.cc-299-305 (1)
299-305: ⚠️ Potential issue | 🟠 Major

Don't let `settings.set()` block file-transfer overrides.

Line 303 can throw before Line 304 runs because both setters share the same `try`. Any override that only exists on `fileTransferSettings` is still dropped here, which defeats the new forwarding path for file-transfer-specific settings. Apply the two registries independently and only treat the override as unknown if neither accepts it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/daemon.cc` around lines 299 - 305, The current combined try can throw in settings.set(name, value) and prevent fileTransferSettings.set(name, value) from running, dropping overrides that only apply to fileTransferSettings; split the operations so each registry is applied independently: first attempt settings.set(name, value) in its own try/catch, then attempt fileTransferSettings.set(name, value) in a separate try/catch, track whether either succeeded, and only run the "unknown override" handling (the else branch that calls setSubstituters or similar) if both attempts fail; keep the original conditional around trusted/name checks and use the same identifiers (settings.set, fileTransferSettings.set, settings.getWorkerSettings().substituters) to locate and update the logic.

src/libstore/gc.cc-311-323 (1)
311-323: ⚠️ Potential issue | 🟠 Major

Bound the roots-daemon request so GC cannot hang unboundedly.

This path blocks until the daemon sends the terminating NUL record. Because it runs during root discovery after GC has taken the global lock, a wedged daemon can stall GC indefinitely. Please add `poll()`/deadline handling here and fail fast instead of blocking forever.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/gc.cc` around lines 311 - 323, The requestRuntimeRoots function currently blocks forever on socketSource.readLine waiting for a terminating NUL; change it to use a bounded wait by adding poll()/deadline handling around the socket read so the loop times out and returns an error instead of hanging. Specifically, wrap the FdSource/socket read in a timed wait (using poll/select or a std::chrono deadline) and on timeout break and propagate a failure (e.g., return empty Roots or throw) from requestRuntimeRoots so callers (GC/global lock holder) can fail fast; update references to socket, socketSource and the readLine loop to check the deadline and handle ERR/timeout paths cleanly. Ensure any resources (socket) are closed on timeout and document the timeout behavior in the function comment.

src/libstore/gc.cc-41-45 (1)
41-45: ⚠️ Potential issue | 🟠 Major

Normalize indirect-root paths before hashing and linking.

`findRoots()` resolves auto-root targets relative to `gcroots/auto`, so storing a relative `path` here makes the root point at the wrong location and can let a live store path get collected. Please absolutize/canonicalize `path` first, then hash that normalized value.

🛠️ Suggested fix

```diff
 void LocalStore::addIndirectRoot(const std::filesystem::path & path)
 {
-    std::string hash = hashString(HashAlgorithm::SHA1, path.string()).to_string(HashFormat::Nix32, false);
+    auto absRoot = absPath(path);
+    std::string hash = hashString(HashAlgorithm::SHA1, absRoot.string()).to_string(HashFormat::Nix32, false);
     auto realRoot = canonPath(config->stateDir.get() / gcRootsDir / "auto" / hash);
-    makeSymlink(realRoot, path);
+    makeSymlink(realRoot, absRoot);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/gc.cc` around lines 41 - 45, Normalize the incoming path before hashing and creating the symlink: in LocalStore::addIndirectRoot call canonPath (or equivalent) on the provided path to get an absolute/canonicalized path (e.g., auto normalized = canonPath(path)); compute the SHA1 hash from normalized.string() and build realRoot the same way, and pass the normalized path to makeSymlink instead of the original relative path so the stored indirect root is absolute and matches findRoots resolution.

packaging/dev-shell.nix-134-135 (1)
134-135: ⚠️ Potential issue | 🟠 Major

Use the same `stdenv` for platform filtering to avoid cross-shell mismatches.

Line 134 uses `pkgs.stdenv.hostPlatform`, but this expression otherwise keys off `stdenv = pkgs.nixDependencies2.stdenv`. Mixing these can incorrectly include/exclude components in cross setups.

Suggested fix

```diff
-  availableComponents = lib.filterAttrs (
-    k: v: lib.meta.availableOn pkgs.stdenv.hostPlatform v
-  ) allComponents;
+  availableComponents = lib.filterAttrs (
+    k: v: lib.meta.availableOn stdenv.hostPlatform v
+  ) allComponents;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packaging/dev-shell.nix` around lines 134 - 135, The platform filtering uses pkgs.stdenv.hostPlatform while the rest of the expression relies on stdenv = pkgs.nixDependencies2.stdenv, causing cross-shell mismatches; update the platform reference so it uses the same stdenv as the rest of the expression (replace pkgs.stdenv.hostPlatform with stdenv.hostPlatform or otherwise reference nixDependencies2.stdenv consistently) in the expression that computes lib.meta.availableOn to ensure components are included/excluded using the same stdenv instance.

doc/manual/meson.build-118-123 (1)
118-123: ⚠️ Potential issue | 🟠 Major

Exclude the actual build-dir basename here.

This only skips a top-level entry literally named `build`. In in-tree setups like `build-debug` or `out`, the tar copy will recurse the active build tree back into itself and contaminate the manual inputs.

🛠️ Proposed fix

```diff
-# Copy source to build directory, excluding the build directory itself
+# Copy source to build directory, excluding the active build directory itself
 # (which is present when built as an individual component).
 # Use tar with --dereference to copy symlink targets (e.g., JSON examples from tests).
-(cd @CURRENT_SOURCE_DIR@ && find . -mindepth 1 -maxdepth 1 ! -name build | tar -c --dereference -T - -f -) | (cd @2@ && tar -xf -)
+build_dir_name="$(basename "@2@")"
+(cd "@CURRENT_SOURCE_DIR@" && find . -mindepth 1 -maxdepth 1 ! -name "$build_dir_name" | tar -c --dereference -T - -f -) | (cd "@2@" && tar -xf -)
 chmod -R u+w @2@
 find @2@ -name '*.drv' -delete
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/manual/meson.build` around lines 118 - 123, The tar copy currently excludes only a literal top-level entry named "build" which fails for in-tree builds like "build-debug"; update the find invocation that currently uses "! -name build" to exclude the actual build directory basename instead (e.g., replace "! -name build" with "! -name '@CURRENT_BUILD_DIR_BASENAME@'" or otherwise compute the basename of "@CURRENT_BUILD_DIR@" and use that in the ! -name test) so the active build tree is properly skipped when creating the tar from "@CURRENT_SOURCE_DIR@" to "@2@".

src/libstore-c/nix_api_store.cc-304-309 (1)
304-309: ⚠️ Potential issue | 🟠 Major

Don't silently omit outputs with unknown paths.

`nix::DerivationOutput::path()` can legitimately return `std::nullopt` for floating, deferred, and impure outputs. With the new guard, `nix_derivation_make_outputs()` now drops those names entirely, so C callers cannot distinguish "no path yet" from "no such output". Please fail this call before invoking callbacks, or route callers to `nix_derivation_get_outputs_and_optpaths()` instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore-c/nix_api_store.cc` around lines 304 - 309, The current loop in nix_derivation_make_outputs() drops outputs whose nix::DerivationOutput::path() returns std::nullopt, which hides "no path yet" vs "no such output"; change nix_derivation_make_outputs() to detect any optional-less outputs and fail early (return an error) before invoking the callback, or instead call into nix_derivation_get_outputs_and_optpaths() so callers receive names with optional paths; adjust logic around the loop over drv.outputs (where outPath is computed and callback(userdata, ... ) is invoked) to either return an error when any outPath is missing or delegate to the _get_outputs_and_optpaths variant and document the behavior.

src/libflake/flakeref.cc-151-165 (1)
151-165: ⚠️ Potential issue | 🟠 Major

Use a portable root check for the upward path walks.

After switching these variables to `std::filesystem::path`, `while (path != "/")` is no longer cross-platform. Windows roots like `C:\` never equal `"/"`, and `parent_path()` returns the path itself when called on a root, causing these loops to spin forever or terminate incorrectly. Use `parent_path() == current` or the canonical `path == path.root_path()` to detect the filesystem root.

Suggested fix

```diff
-    while (path != "/") {
+    while (true) {
         if (pathExists(path / "flake.nix")) {
             found = true;
             break;
         } else if (pathExists(path / ".git"))
             throw Error(
                 "path %s is not part of a flake (neither it nor its parent directories contain a 'flake.nix' file)",
                 PathFmt(path));
         else {
             if (lstat(path).st_dev != device)
                 throw Error(
                     "unable to find a flake before encountering filesystem boundary at %s", PathFmt(path));
         }
-        path = path.parent_path();
+        auto parent = path.parent_path();
+        if (parent == path)
+            break;
+        path = std::move(parent);
     }
@@
-    while (flakeRoot != "/") {
+    while (true) {
         if (pathExists(flakeRoot / ".git")) {
             auto parsedURL = ParsedURL{
                 .scheme = "git+file",
                 .authority = ParsedURL::Authority{},
                 .path = pathToUrlPath(flakeRoot),
                 .query = query,
                 .fragment = fragment,
             };
@@
             return fromParsedURL(fetchSettings, std::move(parsedURL), isFlake);
         }
+        auto parent = flakeRoot.parent_path();
+        if (parent == flakeRoot)
+            break;
         subdir = flakeRoot.filename().string() + (subdir.empty() ? "" : "/" + subdir);
-        flakeRoot = flakeRoot.parent_path();
+        flakeRoot = std::move(parent);
     }
```

Also applies to: lines 176–200

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libflake/flakeref.cc` around lines 151 - 165, The loop currently uses a hardcoded string check while (path != "/") which is not portable; change the loop termination to detect filesystem roots using std::filesystem::path semantics (e.g., stop when path == path.root_path() or when path.parent_path() == path) so the upward walk won't loop forever on Windows; update both the block containing the lstat() check and the similar block at lines 176–200 to use this root detection, keeping the existing error throws (Error(..., PathFmt(path))) and the parent_path() update logic intact.

src/libstore/build-result.cc-131-143 (1)
131-143: ⚠️ Potential issue | 🟠 Major

Don't drop `provenance` from failure equality and ordering.

`BuildResult::operator==` and `<=>` are still defaulted, so these operators now decide whether two failure results compare equal. `to_json`/`from_json` still preserve `provenance`, but this implementation ignores it, so two failures that serialize differently can collapse as equal in tests, maps, or dedup logic. If pointer identity is too weak, add a value-level comparison for `Provenance` instead of omitting it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/build-result.cc` around lines 131 - 143, The equality and ordering operators for BuildError (BuildError::operator== and BuildError::operator<=>) currently ignore the failure Provenance, causing distinct provenance to be treated as equal; update both operators to include provenance in the comparison (e.g., compare provenance pointer identity or, preferably, a value-level comparison of Provenance) alongside status, isNonDeterministic, and message(); ensure you reference the Provenance accessor/member used by BuildError (or call a value-level compare function on Provenance) so serialized/deserialized results with different provenance no longer collapse as equal.

src/libstore/http-binary-cache-store.cc-28-29 (1)
28-29: ⚠️ Potential issue | 🟠 Major

Check the actual URL scheme here, not the supported-schemes set.

When `_NIX_FORCE_HTTP=1`, `uriSchemes()` contains `"file"`, so this condition stops enforcing an authority for all URLs. That means malformed `http:///...` and `https:///...` cache URLs are now accepted even though only `file:` should be exempt.

🩹 Suggested fix

```diff
-    if (!uriSchemes().contains("file") && (!cacheUri.authority || cacheUri.authority->host.empty()))
+    if (cacheUri.scheme != "file" && (!cacheUri.authority || cacheUri.authority->host.empty()))
         throw UsageError("`%s` Store requires a non-empty authority in Store URL", cacheUri.scheme);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/http-binary-cache-store.cc` around lines 28 - 29, The check uses uriSchemes() (which can be altered by env vars) instead of the actual parsed URL scheme, letting malformed http/https URLs bypass validation; change the condition to inspect cacheUri.scheme directly (e.g., if cacheUri.scheme != "file" && (!cacheUri.authority || cacheUri.authority->host.empty())) and keep the existing UsageError throw (referencing cacheUri.scheme and UsageError) so only non-file schemes require a non-empty authority.

src/libstore-test-support/include/nix/store/tests/https-store.hh-4-5 (1)
4-5: ⚠️ Potential issue | 🟠 Major

Remove unused GoogleTest includes from this public header.

The `<gtest/gtest.h>` and `<gmock/gmock.h>` headers are not used in any declarations here. This public header unnecessarily forces every consumer to have GoogleTest include paths available. The test infrastructure dependencies come from the `LibStoreNetworkTest` base class already included in `libstore-network.hh`.

Suggested fix

```diff
-#include <gtest/gtest.h>
-#include <gmock/gmock.h>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore-test-support/include/nix/store/tests/https-store.hh` around lines 4 - 5, Remove the unnecessary GoogleTest includes by deleting the lines that include <gtest/gtest.h> and <gmock/gmock.h> from this public header; ensure the header does not reference any gtest/gmock symbols and relies on the existing LibStoreNetworkTest base class (from libstore-network.hh) for test dependencies so consumers are not forced to have GoogleTest include paths.

packaging/everything.nix-34-34 (1)
34-34: ⚠️ Potential issue | 🟠 Major

Make `nix-nswrapper` optional at the function boundary to avoid non-Linux eval failures.

`nix-nswrapper` is declared as a mandatory parameter at line 34, but the package is restricted to Linux-only platforms (`meta.platforms = lib.platforms.linux` in `src/nswrapper/package.nix` and `linuxOnly = true` in `flake.nix`). On non-Linux systems, the scope passed to `callPackage` at line 455 in `components.nix` won't include `nix-nswrapper`, causing evaluation failure before the `stdenv.isLinux` guard at line 177 can protect its usage.

Suggested change:

```diff
-  nix-nswrapper,
+  nix-nswrapper ? null,
 ...
-  + lib.optionalString stdenv.isLinux ''
+  + lib.optionalString (stdenv.isLinux && nix-nswrapper != null) ''
       lndir ${nix-nswrapper} $out
     '';
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@packaging/everything.nix` at line 34: the top-level function signature currently requires nix-nswrapper; make it optional to avoid non-Linux eval failures by changing the argument to an optional parameter (e.g., nix-nswrapper ? null) so callers that don't provide it won't break during callPackage; leave the existing runtime guard that checks stdenv.isLinux (and the usage in src/nswrapper/package.nix) intact so the package is only used on Linux.
```

src/libfetchers/path.cc-22-22 (1)
22-22: ⚠️ Potential issue | 🟠 Major

Use `generic_string()` to ensure forward-slash path separators.

On Windows, `std::filesystem::path::string()` returns the native format with backslashes, but line 98's `toURL()` method splits the stored path by `/`, breaking round-trip URL reconstruction. Use `generic_string()` to ensure consistent forward-slash separators across platforms.

Proposed fix:

```diff
-    input.attrs.insert_or_assign("path", urlPathToPath(url.path).string());
+    input.attrs.insert_or_assign("path", urlPathToPath(url.path).generic_string());
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libfetchers/path.cc` at line 22: the stored path is written using std::filesystem::path::string(), which yields platform-native backslashes on Windows and breaks round-trip URL reconstruction in toURL(); change the insertion at input.attrs.insert_or_assign("path", urlPathToPath(url.path).string()) to use urlPathToPath(url.path).generic_string() instead, so the stored path always uses forward slashes compatible with toURL()'s splitting.
```

src/libexpr/eval-cache.cc-73-78 (1)
73-78: ⚠️ Potential issue | 🟠 Major

Preserve backward compatibility for persisted string contexts.

Line 73 still reuses `eval-cache-v6`, but lines 145-150 and line 268 change the on-disk delimiter for string contexts from the previous format to spaces. That makes older cache entries unreadable with the new parser and can surface as `NixStringContextElem::parse(...)` failures until users manually clear their eval cache.

💡 Backward-compatible fix:

```diff
         case AttrType::String: {
             NixStringContext context;
-            if (!queryAttribute.isNull(3))
-                for (auto & s : tokenizeString<std::vector<std::string>>(queryAttribute.getStr(3), " "))
-                    context.insert(NixStringContextElem::parse(s));
+            if (!queryAttribute.isNull(3)) {
+                auto contextStr = queryAttribute.getStr(3);
+                auto delimiter = contextStr.find(';') != std::string::npos ? ";" : " ";
+                for (auto & s : tokenizeString<std::vector<std::string>>(contextStr, delimiter))
+                    context.insert(NixStringContextElem::parse(s));
+            }
             return {{rowId, string_t{queryAttribute.getStr(2), context}}};
         }
```

Also applies to: 145-154, 268-269

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libexpr/eval-cache.cc` around lines 73-78: the cache code changed the on-disk delimiter for persisted string contexts, breaking backward compatibility; fix by preserving the old delimiter semantics or making the parser accept both formats. Update the serialization/deserialization around NixStringContextElem::parse (and any corresponding serialize function) so it either writes the same delimiter used historically or accepts both the old delimiter and the new space-delimited form when reading; ensure the changes are applied where the cache is created/loaded (references: getCacheDir(), the eval-cache-v6 dbPath construction using fingerprint.to_string, and NixStringContextElem::parse) so existing entries remain readable without forcing users to clear the eval cache.
```

src/libcmd/include/nix/cmd/unix-socket-server.hh-45-55 (1)
45-55: ⚠️ Potential issue | 🟠 Major

Default the socket to private permissions.

Line 55 makes newly created sockets world-connectable by default. For a command-server helper, that is too permissive; callers that need broader access can opt in, but the safe default should be private.

🔒 Proposed fix:

```diff
-    mode_t socketMode = 0666;
+    mode_t socketMode = 0600;
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libcmd/include/nix/cmd/unix-socket-server.hh` around lines 45-55: the ServeUnixSocketOptions struct currently sets socketMode = 0666, which makes sockets world-connectable; change the default to a private permission like 0600 (owner read/write) by updating the socketMode initializer in ServeUnixSocketOptions to a more restrictive value, and adjust the comment to reflect the private-by-default behavior so callers who need broader access can opt in by passing a different socketMode.
```

src/libstore/build/goal.cc-24-30 (1)
24-30: ⚠️ Potential issue | 🟠 Major

Queue EOF events instead of storing a single one.

`handleEOF()` carries an fd, so a goal can receive more than one EOF before the coroutine wakes up. With a single `childEOF`, the second close hits the assert in debug builds and overwrites the first EOF in release builds, which can leave one stream looking permanently open.

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/build/goal.cc` around lines 24-30: ChildEvents::pushChildEvent currently stores a single childEOF and asserts if a second arrives; change it to queue EOF events: replace the single childEOF member with a container (e.g., std::deque<ChildEOF> or std::vector<ChildEOF>), remove the assert, and have pushChildEvent push the incoming event into that container unless childTimeout is set. Also update the consumer code (e.g., handleEOF or wherever childEOF is read) to pop/process events from the new queue in FIFO order so multiple EOFs are handled correctly. Ensure any move semantics are preserved when inserting into the container and that childTimeout behavior remains unchanged.
```

src/libstore-test-support/https-store.cc-85-89 (1)
85-89: ⚠️ Potential issue | 🟠 Major

The mTLS client credentials never get configured.

`HttpsBinaryCacheStoreMtlsTest::serverArgs()` requires a client certificate, but the base `SetUp()` only installs the CA file into `testFileTransferSettings`. The generated `clientCert` and `clientKey` are otherwise unused, so the mTLS handshake will fail.

Also applies to: 114-119

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libstore-test-support/https-store.cc` around lines 85-89: the mTLS client cert/key are never installed into the per-test FileTransferSettings, so the server (HttpsBinaryCacheStoreMtlsTest::serverArgs()) that requires client certs will fail the handshake; update the SetUp() code where testFileTransferSettings is created to assign testFileTransferSettings->clientCert = clientCert and testFileTransferSettings->clientKey = clientKey (before calling makeFileTransfer(*testFileTransferSettings)), and apply the same change to the other similar block that creates testFileTransferSettings so testFileTransfer is constructed with the client credentials.
```

src/libstore/build/derivation-building-goal.cc-1032-1037 (1)
1032-1037: ⚠️ Potential issue | 🟠 Major

Overwrite hook environment variables instead of `emplace()`-ing them.

`getEnvOs()` copies the current process environment first, so lines 1034-1037 become no-ops whenever `DRV_PATH`, `OUT_PATHS`, or `NIX_CONFIG` are already set. That can hand the post-build hook stale caller values instead of the current build's values.

Suggested fix:

```diff
-    hookEnvironment.emplace(OS_STR("DRV_PATH"), string_to_os_string(store.printStorePath(drvPath)));
-    hookEnvironment.emplace(
+    hookEnvironment.insert_or_assign(OS_STR("DRV_PATH"), string_to_os_string(store.printStorePath(drvPath)));
+    hookEnvironment.insert_or_assign(
         OS_STR("OUT_PATHS"), string_to_os_string(chomp(concatStringsSep(" ", store.printStorePathSet(outputPaths)))));
-    hookEnvironment.emplace(OS_STR("NIX_CONFIG"), string_to_os_string(globalConfig.toKeyValue()));
+    hookEnvironment.insert_or_assign(OS_STR("NIX_CONFIG"), string_to_os_string(globalConfig.toKeyValue()));
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/build/derivation-building-goal.cc` around lines 1032-1037: the code uses hookEnvironment.emplace(...) after creating hookEnvironment from getEnvOs(), so if DRV_PATH, OUT_PATHS, or NIX_CONFIG are already present those emplace calls are no-ops and won't update the hook values; change the three emplace calls to overwrite existing entries (e.g., use hookEnvironment[OS_STR("DRV_PATH")] = string_to_os_string(...), or hookEnvironment.insert_or_assign(...) if available) for DRV_PATH, OUT_PATHS, and NIX_CONFIG so the hook environment reliably contains the current build's values when getEnvOs() returned preexisting keys.
```

src/libfetchers/mercurial.cc-163-183 (1)
163-183: ⚠️ Potential issue | 🟠 Major

Don't treat pinned `hg+file` inputs as writable working trees.

`getSourcePath()` still treats only `!ref && !rev` inputs as writable working trees, but `putFile()` now keys off `getActualUrl()` instead. A local input pinned to a branch or revision will still hit this branch and write/commit into the repository behind what should be an immutable snapshot.

💡 Suggested fix:

```diff
-    std::visit(
-        overloaded{
-            [&](const std::filesystem::path & repoPath) {
-                auto absPath = repoPath / path.rel();
-
-                writeFile(absPath, contents);
-
-                // FIXME: shut up if file is already tracked.
-                runHg({OS_STR("add"), absPath.native()});
-
-                if (commitMsg)
-                    runHg({OS_STR("commit"), absPath.native(), OS_STR("-m"), string_to_os_string(*commitMsg)});
-            },
-            [&](const std::string &) {
-                throw Error(
-                    "cannot commit '%s' to Mercurial repository '%s' because it's not a working tree",
-                    path,
-                    input.to_string());
-            },
-        },
-        getActualUrl(input));
+    auto repoPath = getSourcePath(input);
+    if (!repoPath)
+        throw Error(
+            "cannot commit '%s' to Mercurial repository '%s' because it's not a working tree",
+            path,
+            input.to_string());
+
+    auto absPath = *repoPath / path.rel();
+
+    writeFile(absPath, contents);
+
+    // FIXME: shut up if file is already tracked.
+    runHg({OS_STR("add"), absPath.native()});
+
+    if (commitMsg)
+        runHg({OS_STR("commit"), absPath.native(), OS_STR("-m"), string_to_os_string(*commitMsg)});
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libfetchers/mercurial.cc` around lines 163-183: the code in putFile visits getActualUrl(input) and treats any hg+file as a writable working tree, which allows writes for inputs that are pinned by ref or rev; instead, mirror getSourcePath() logic and only permit writing when the original input has neither ref nor rev. Modify the visitor in putFile to inspect the original input (or call the same helper used by getSourcePath()) rather than getActualUrl(), and if the input is pinned (has ref or rev) throw the same Error used for non-working-tree cases; keep existing calls to writeFile, runHg and commit only in the branch that is explicitly determined writable by the absence of ref/rev.
```
🟡 Minor comments (18)
src/libstore-tests/derivation-advanced-attrs.cc-8-8 (1)
8-8: ⚠️ Potential issue | 🟡 Minor

Pin `WorkerSettings` in the fixture instead of reading global `settings`.

These expectations now depend on process-global worker settings, so an unrelated test or ambient config change can flip `substitutesAllowed(...)` even when the derivation under test is unchanged. Please pass a fixture-local `WorkerSettings` with explicit test values here so these parser tests stay deterministic.

Also applies to: 196-196, 244-244, 336-336, 401-401

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libstore-tests/derivation-advanced-attrs.cc` at line 8: the tests are reading process-global WorkerSettings, which makes substitutesAllowed(...) non-deterministic; instead instantiate and initialize a local WorkerSettings with explicit values in the test fixture and pass that fixture-local settings object into the parser/derivation code under test (replacing uses of the global settings with the fixture's WorkerSettings instance), ensuring any calls that compute substitutesAllowed(...) use the passed-in settings; update all test cases in this file that construct/parse derivations to use the fixture's WorkerSettings so the expectations are deterministic.
```

src/libstore/globals.cc-6-11 (1)
6-11: ⚠️ Potential issue | 🟡 Minor

Duplicate include.

`nix/util/executable-path.hh` is included twice (lines 6 and 11). Remove one of the duplicate includes.

Proposed fix:

```diff
 #include "nix/util/executable-path.hh"
 #include "nix/util/archive.hh"
 #include "nix/util/args.hh"
 #include "nix/util/abstract-setting-to-json.hh"
 #include "nix/util/compute-levels.hh"
-#include "nix/util/executable-path.hh"
 #include "nix/util/signals.hh"
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/globals.cc` around lines 6-11: the file contains a duplicate preprocessor include for "nix/util/executable-path.hh" (it appears twice in the include list); remove the redundant include so the header is only included once, preserving the other includes such as "nix/util/archive.hh", "nix/util/args.hh", "nix/util/abstract-setting-to-json.hh", and "nix/util/compute-levels.hh".
```

src/libexpr-tests/value/print.cc-191-205 (1)
191-205: ⚠️ Potential issue | 🟡 Minor

This doesn't exercise the regression described in the comment.

Line 200 says the bug was a tried value reverting to the original thunk, but lines 193-197 never create a thunk or go through `tryEval`; they only call `mkFailed()` on a fresh `Value`. If that transition regresses again, this test still passes. Please drive a real failing thunk/app through the try-eval path before asserting the printed `«failed»` output.

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libexpr-tests/value/print.cc` around lines 191-205: the test currently calls v.mkFailed() directly, which doesn't exercise the try-eval thunk regression; replace that with creating a real thunk (e.g., use Value::mkThunk or construct a Value as a thunk that throws Error("nope") when evaluated), then invoke the try-evaluation path (call tryEval(v) or the equivalent tryEval helper used in tests) so the thunk is attempted and transitions to failed via the real code path instead of manually calling Value::mkFailed; keep the same assertions using test(v, "«failed»") and test(v, ANSI_MAGENTA "«failed»" ANSI_NORMAL, PrintOptions{.ansiColors = true}) in the ValuePrintingTests::vFailed case.
```

src/libstore/builtins/buildenv.cc-54-57 (1)
54-57: ⚠️ Potential issue | 🟡 Minor

Dangling-symlink warning logs the wrong path.

Line 54 checks `srcFile`, but line 56 logs `dstFile`. This makes debugging collisions/skips harder.

🛠️ Proposed fix:

```diff
-    warn("skipping dangling symlink '%s'", dstFile);
+    warn("skipping dangling symlink '%s'", srcFile);
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/builtins/buildenv.cc` around lines 54-57: the warning for dangling symlinks logs the wrong path: the code checks maybeStat(srcFile) but calls warn with dstFile; update the warn call in the same block so it logs srcFile (the path checked) instead of dstFile (e.g., change the warn invocation referenced near srcStOpt/maybeStat to use srcFile), keeping the existing message format and variables consistent with warn() usage.
```

src/libstore/include/nix/store/builtins.hh-24-24 (1)
24-24: ⚠️ Potential issue | 🟡 Minor

Add `#include <filesystem>` to make this public header self-contained.

Line 24 exposes `std::filesystem::path tmpDirInSandbox`, but `builtins.hh` does not include `<filesystem>` directly. Any code including this header standalone will fail to compile because the type definition is missing. The upstream headers (`derivations.hh`, `config.hh`) do not provide it either.

Fix:

```diff
 #include <nlohmann/json_fwd.hpp>
+#include <filesystem>
```

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/include/nix/store/builtins.hh` at line 24: the header exposes the type std::filesystem::path via the member tmpDirInSandbox but doesn't include the <filesystem> header; make this public header self-contained by adding an #include <filesystem> at the top so that declarations referencing std::filesystem::path (e.g., tmpDirInSandbox) compile for any translation unit that includes this header.
```

src/libstore/gc.cc-322-346 (1)
322-346: ⚠️ Potential issue | 🟡 Minor

`censor = false` is ineffective on the roots-daemon path.

`requestRuntimeRoots()` hard-codes `{censored}` into `unchecked`, so `findRuntimeRoots(..., false)` still returns censored provenance when `useRootsDaemon` is enabled. That changes the user-visible contract of the `censor` flag. Either extend the protocol to carry labels, or use a stable non-censored marker for this path instead of `{censored}`.

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/gc.cc` around lines 322-346: the roots-daemon path currently embeds the literal {censored} token via requestRuntimeRoots(), which makes LocalStore::findRuntimeRoots(..., false) return censored provenance even when censor==false; update the daemon code/protocol so requestRuntimeRoots returns real link labels (or a stable non-censored marker) instead of inserting the censored token, and change LocalStore::findRuntimeRoots to rely on requestRuntimeRoots' actual labels (not override with censored) — locate requestRuntimeRoots and its producer of the {censored} entry and either extend the protocol to include labels or replace {censored} with a neutral sentinel recognized as "unknown but not censored", keeping findRuntimeRootsUnchecked behavior unchanged for the non-daemon path.
```

.github/workflows/build.yml-56-59 (1)
56-59: ⚠️ Potential issue | 🟡 Minor

Update `actions/upload-artifact` to v7, the latest stable version.

Version 6 is outdated; the current release is `actions/upload-artifact@v7`.

🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/build.yml around lines 56-59: replace the outdated GitHub Action reference "uses: actions/upload-artifact@v6" with the current stable release "uses: actions/upload-artifact@v7" in the workflow step that uploads the artifact (the step using name: ${{ inputs.system }} and path: ./tarball/*.xz) so the workflow uses the latest upload-artifact version while keeping the existing inputs unchanged.
```

doc/manual/source/protocols/nix-cache-info.md-11-13 (1)
11-13:⚠️ Potential issue | 🟡 MinorAdd language tags to fenced code blocks (markdownlint MD040).
The three fences should specify a language (e.g.
text) to keep docs lint-clean.✅ Proposed fix
-``` +```text Key: value@@
-+text
error: binary cache 'https://example.com' is for Nix stores with prefix '/nix/store', not '/home/user/nix/store'@@ -``` +```text StoreDir: /nix/store WantMassQuery: 1 Priority: 30</details> Also applies to: 27-29, 41-45 <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In
@doc/manual/source/protocols/nix-cache-info.mdaround lines 11 - 13, The
fenced code blocks in the markdown examples (e.g. the blocks containing "Key:
value", the error message starting with "error: binary cache
'https://example.com'...", and the block with "StoreDir:
/nix/store\nWantMassQuery: 1\nPriority: 30") are missing language tags; update
each triple-backtick fence to use a language tag such as text (i.e. changetotext) for all similar examples in the document so the code blocks are
lint-compliant (this applies to the three shown blocks and the other similar
fenced blocks referenced).</details> </blockquote></details> <details> <summary>src/libstore/include/nix/store/build/drv-output-substitution-goal.hh-23-24 (1)</summary><blockquote> `23-24`: _⚠️ Potential issue_ | _🟡 Minor_ **Fix the TODO rename typo.** `BuidlTraceEntryGoal` looks like a misspelling of `BuildTraceEntryGoal`, so the note currently points at a nonexistent name. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@src/libstore/include/nix/store/build/drv-output-substitution-goal.hh` around lines 23 - 24, The TODO comment references a misspelled type name "BuidlTraceEntryGoal"; update that comment to use the correct identifier "BuildTraceEntryGoal" (and keep the rest of the note about renaming Realisation to BuildTraceEntry intact) so the TODO points to the real symbol and is not misleading. ``` </details> </blockquote></details> <details> <summary>src/libstore/build/build-log.cc-13-17 (1)</summary><blockquote> `13-17`: _⚠️ Potential issue_ | _🟡 Minor_ **Clear the buffered line on `\r`.** A trailing `\r` leaves `currentLogLine` non-empty while `currentLogLinePos == 0`, so `BuildLog::flush()` emits a blank line and `BuildLog::hasPartialLine()` still reports pending content. Clearing the buffer here, and keying `flush()` off `currentLogLinePos`, avoids the spurious tail entry. <details> <summary>Suggested fix</summary> ```diff for (auto c : data) - if (c == '\r') - currentLogLinePos = 0; + if (c == '\r') { + currentLogLine.clear(); + currentLogLinePos = 0; + } else if (c == '\n') flushLine(); else { if (currentLogLinePos >= currentLogLine.size()) currentLogLine.resize(currentLogLinePos + 1); currentLogLine[currentLogLinePos++] = c; } @@ - if (!currentLogLine.empty()) + if (currentLogLinePos != 0) flushLine(); ``` </details> Also applies to: 25-28 <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In `@src/libstore/build/build-log.cc` around lines 13 - 17, The loop handling '\r' currently only resets currentLogLinePos leaving currentLogLine non-empty which causes BuildLog::flush() to emit a blank line and BuildLog::hasPartialLine() to report pending content; update the '\r' branch in the loop that iterates over data (in build-log.cc) to also clear currentLogLine (e.g., reset the buffer) so that a carriage return truly clears the buffered line, and ensure flushLine()/BuildLog::flush() behavior remains keyed off currentLogLinePos so no spurious tail entries remain. ``` </details> </blockquote></details> <details> <summary>src/libstore/include/nix/store/build/build-log.hh-79-82 (1)</summary><blockquote> `79-82`: _⚠️ Potential issue_ | _🟡 Minor_ **Use `currentLogLinePos` to detect whether a line is buffered.** `BuildLog::operator()` handles `\r` by resetting `currentLogLinePos` without clearing `currentLogLine` (`src/libstore/build/build-log.cc`, Lines 10-22). After a carriage-return rewrite, `!currentLogLine.empty()` still reports a partial line, so `flush()` can emit a stale or blank tail entry when the child exits. <details> <summary>💡 Minimal fix</summary> ```diff bool hasPartialLine() const { - return !currentLogLine.empty(); + return currentLogLinePos != 0; } ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In `@src/libstore/include/nix/store/build/build-log.hh` around lines 79 - 82, hasPartialLine() currently checks !currentLogLine.empty(), which is wrong because BuildLog::operator() can reset currentLogLinePos on '\r' without clearing currentLogLine; change hasPartialLine() to detect buffered content by checking currentLogLinePos (e.g., return currentLogLinePos > 0 or != 0) so flush() only emits when there is actually buffered text; update the function that returns the partial-line state (hasPartialLine) and any callers that rely on it (e.g., flush()) to use currentLogLinePos semantics rather than currentLogLine.empty(). ``` </details> </blockquote></details> <details> <summary>src/libstore/include/nix/store/build/goal.hh-459-478 (1)</summary><blockquote> `459-478`: _⚠️ Potential issue_ | _🟡 Minor_ **Remove unnecessary defensive check in `ChildEventAwaiter::await_ready()`.** The `handle &&` check is redundant because `handle` is initialized via `handle_type::from_promise(*this)` in `await_transform()` before `await_ready()` is ever called. The C++20 standard guarantees that `std::coroutine_handle::from_promise()` always returns a valid handle. Simplify to: ```cpp bool await_ready() { return handle.promise().childEvents.hasChildEvent(); } ``` <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@src/libstore/include/nix/store/build/goal.hh` around lines 459 - 478, Remove the redundant null check in ChildEventAwaiter::await_ready(): replace the current implementation that checks "handle && ..." with a direct call to handle.promise().childEvents.hasChildEvent(); ensure this is done in the method on the ChildEventAwaiter class used with await_transform() and constructed via handle_type::from_promise() so await_ready() assumes a valid handle per the coroutine handle guarantee. 
``` </details> </blockquote></details> <details> <summary>doc/manual/source/protocols/json/build-trace-entry.md-20-20 (1)</summary><blockquote> `20-20`: _⚠️ Potential issue_ | _🟡 Minor_ **Schema label is stale (`v1`) while link points to `v2`.** Even though this section is commented out, keeping labels accurate avoids confusion when it gets re-enabled. <details> <summary>💡 Proposed fix</summary> ```diff -[JSON Schema for Build Trace Entry v1](schema/build-trace-entry-v2.json) +[JSON Schema for Build Trace Entry v2](schema/build-trace-entry-v2.json) ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@doc/manual/source/protocols/json/build-trace-entry.md` at line 20, Update the schema label in the markdown so it matches the linked file version: change the text "[JSON Schema for Build Trace Entry v1]" to "[JSON Schema for Build Trace Entry v2]" in the document section that references schema/build-trace-entry-v2.json to avoid the stale v1 label mismatch. ``` </details> </blockquote></details> <details> <summary>doc/manual/source/installation/building-source.md-15-15 (1)</summary><blockquote> `15-15`: _⚠️ Potential issue_ | _🟡 Minor_ **Use a copy-pastable install command in the shell block.** Line 15 uses `(sudo) ninja install`, which is not valid shell syntax when pasted directly. <details> <summary>💡 Proposed fix</summary> ```diff -(sudo) ninja install +sudo ninja install # optional if your prefix requires elevated permissions ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In `@doc/manual/source/installation/building-source.md` at line 15, Replace the non-copy-pastable shell snippet "(sudo) ninja install" with a valid command that can be pasted directly into a terminal; update the shell block to show "sudo ninja install" (and optionally a separate line showing "ninja install" for non-root installs) so readers can copy-paste a working command without parentheses. ``` </details> </blockquote></details> <details> <summary>src/libcmd/unix/unix-socket-server.cc-67-68 (1)</summary><blockquote> `67-68`: _⚠️ Potential issue_ | _🟡 Minor_ **Consider graceful handling of malformed `LISTEN_FDS`.** Using `assert(count)` after `string2Int` means a malformed `LISTEN_FDS` environment variable (non-numeric) will cause an assertion failure in release builds or undefined behavior if assertions are disabled. Since this is external input from the environment, consider throwing an error instead. <details> <summary>🛡️ Proposed fix for robust error handling</summary> ```diff auto count = string2Int<unsigned int>(*listenFds); - assert(count); + if (!count) + throw Error("LISTEN_FDS environment variable is not a valid unsigned integer: '%s'", *listenFds); for (unsigned int i = 0; i < count; ++i) { ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In `@src/libcmd/unix/unix-socket-server.cc` around lines 67 - 68, The code currently does assert(count) after calling string2Int on LISTEN_FDS which is unsafe for malformed/non-numeric env values; replace the assertion with proper error handling in the function that processes LISTEN_FDS by checking the return/validity of string2Int<unsigned int>(*listenFds) (e.g., test for a parse failure or zero) and throw a descriptive exception or return an error (include the env var value in the message) instead of using assert; locate the use of listenFds, the local variable count, and the call to string2Int to implement this change so callers receive a controlled error rather than an assertion failure. ``` </details> </blockquote></details> <details> <summary>src/libexpr-tests/nix_api_external.cc-88-107 (1)</summary><blockquote> `88-107`: _⚠️ Potential issue_ | _🟡 Minor_ **ExternalValue `val` is not explicitly decreffed.** Per the `nix_create_external_value()` documentation, "Call nix_gc_decref() when you're done with the pointer." The `ExternalValue * val` created at line 88 should be decreffed after use. The existing test `nix_expr_eval_external` (line 46) has the same pattern, suggesting this may be intentional (ownership transfer when `nix_init_external()` stores the pointer in `value`), but the code doesn't document or clarify the ownership semantics. Add `nix_gc_decref(ctx, val)` at the end of the test for consistency with documented API expectations, or clarify if ownership transfers to `value`. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In `@src/libexpr-tests/nix_api_external.cc` around lines 88 - 107, The ExternalValue* returned by nix_create_external_value (named val) is not decreffed, violating the API contract; after calling nix_init_external(ctx, value, val) you must either document an ownership transfer or explicitly call nix_gc_decref(ctx, val) when done—so add a nix_gc_decref(ctx, val) at the end of the test (after nix_gc_decref(ctx, toJsonFn) or before test exit) unless you intentionally transfer ownership to value, in which case update the comment near nix_init_external to state that ownership is transferred. ``` </details> </blockquote></details> <details> <summary>doc/manual/source/protocols/json/schema/build-trace-entry-v2.yaml-15-15 (1)</summary><blockquote> `15-15`: _⚠️ Potential issue_ | _🟡 Minor_ **Fix typo: "Verision" should be "Version".** <details> <summary>📝 Proposed fix</summary> ```diff - Verision history: + Version history: ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@doc/manual/source/protocols/json/schema/build-trace-entry-v2.yaml` at line 15, Fix the typo in the YAML where the heading "Verision history:" appears; change it to "Version history:" so the schema documentation uses the correct word ("Version history:")—update the literal string in the file (the heading line) accordingly. ``` </details> </blockquote></details> <details> <summary>src/libfetchers/mercurial.cc-171-174 (1)</summary><blockquote> `171-174`: _⚠️ Potential issue_ | _🟡 Minor_ **Terminate option parsing before `absPath` in `hg add` and `hg commit` commands.** Repository paths beginning with `-` will be parsed as options by Mercurial's CLI. Insert `--` to force subsequent arguments to be treated as file paths. 
Suggested fix

-    runHg({OS_STR("add"), absPath.native()});
+    runHg({OS_STR("add"), OS_STR("--"), absPath.native()});

-    if (commitMsg)
-        runHg({OS_STR("commit"), absPath.native(), OS_STR("-m"), string_to_os_string(*commitMsg)});
+    if (commitMsg)
+        runHg({OS_STR("commit"), OS_STR("-m"), string_to_os_string(*commitMsg), OS_STR("--"), absPath.native()});

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/mercurial.cc` around lines 171 - 174, the hg CLI calls in runHg are treating repository paths starting with '-' as options; update the runHg invocations (the runHg calls that use absPath and the conditional commit using commitMsg) to insert the terminator OS_STR("--") before absPath so subsequent arguments are parsed as file paths (i.e., pass {"add", OS_STR("--"), absPath.native()} and {"commit", OS_STR("--"), absPath.native(), OS_STR("-m"), string_to_os_string(*commitMsg)} respectively).
additionalProperties:
  dependentRealisations:
    description: deprecated field
    type: object
🧩 Analysis chain
🌐 Web query:
JSON Schema additionalProperties syntax for deprecated fields
💡 Result:
additionalProperties is only about unknown object members (property names not covered by properties / patternProperties). To mark known fields as deprecated, keep them in properties and add "deprecated": true to that property’s subschema (it’s an annotation, not a validation rule). [1][2]
Allow deprecated fields (but flag them as deprecated)
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "newField": { "type": "string" },
    "oldField": { "type": "string", "deprecated": true }
  },
  "additionalProperties": false
}

With additionalProperties: false, the instance can only contain declared fields (including deprecated ones). [3]
Forbid deprecated fields (fail validation if present)
JSON Schema has no “error-on-deprecated” built in (since deprecated is annotation-only), so you must add a validation constraint, e.g. make the deprecated property always invalid:
{
  "type": "object",
  "properties": {
    "newField": { "type": "string" },
    "oldField": false
  },
  "additionalProperties": false
}

Sources: JSON Schema annotations / deprecated meaning [1][2]; additionalProperties: false behavior [3].
References:
[1] https://json-schema.org/understanding-json-schema/reference/annotations
[2] https://json-schema.org/draft/2019-09/draft-handrews-json-schema-validation-02
[3] https://www.mongodb.com/docs/v7.0/core/schema-validation/specify-json-schema/json-schema-tips/
🏁 Script executed:
cat -n doc/manual/source/protocols/json/schema/build-trace-entry-v2.yaml | head -50

Repository: DeterminateSystems/nix-src
Length of output: 1668
This YAML syntax violates the JSON Schema specification.
The additionalProperties keyword must be either a boolean or a schema object, not a container for named properties. The nested dependentRealisations structure is invalid.
To properly handle a deprecated field in JSON Schema, move it to the properties section and annotate it with deprecated: true:
Suggested fix
properties:
  id: {}
  outPath: {}
  signatures: {}
  dependentRealisations:
    description: deprecated field
    type: object
    deprecated: true
additionalProperties: false

(Note: The changelog at line 19 states "Version 2: Remove dependentRealisations", so you may instead want to omit this field entirely rather than marking it deprecated.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@doc/manual/source/protocols/json/schema/build-trace-entry-v2.yaml` around
lines 33 - 36, The YAML uses additionalProperties as a container for
dependentRealisations which violates JSON Schema; move dependentRealisations
into the schema's properties section (or remove it entirely per the changelog)
and make additionalProperties a boolean or schema object. Specifically, add a
properties entry for dependentRealisations with type: object and deprecated:
true (or drop the dependentRealisations property if version 2 removed it), and
set additionalProperties to false (or an appropriate schema) so the file no
longer treats additionalProperties as a named-property map.
%token <StringToken> PATH_END "end of path"
%token <StringToken> URI "URI"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Check typed PATH_END declaration:"
rg -n -C2 '%token\s*<StringToken>\s+PATH_END' src/libexpr/parser.y
echo
echo "Inspect PATH_END lexer return block:"
awk 'NR>=294 && NR<=304 { printf "%d:%s\n", NR, $0 }' src/libexpr/lexer.l

Repository: DeterminateSystems/nix-src
Length of output: 703
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== All PATH_END occurrences in lexer ==="
rg -n "PATH_END" src/libexpr/lexer.l
echo
echo "=== All URI occurrences in lexer ==="
rg -n "URI" src/libexpr/lexer.l | head -20
echo
echo "=== Check if yylval is set for any PATH_END ==="
rg -B5 "PATH_END" src/libexpr/lexer.l | grep -E "(yylval|emplace)" || echo "No yylval assignments found for PATH_END"
echo
echo "=== Check for any URI yylval assignments ==="
rg -B5 "return URI" src/libexpr/lexer.l

Repository: DeterminateSystems/nix-src
Length of output: 783
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== PATH_END usage in parser.y ==="
rg -n "PATH_END" src/libexpr/parser.y
echo
echo "=== Context around PATH_END declaration ==="
sed -n '175,185p' src/libexpr/parser.y
echo
echo "=== Check grammar rules using PATH_END ==="
rg -B2 -A2 "PATH_END" src/libexpr/parser.y | grep -v "^--$"

Repository: DeterminateSystems/nix-src
Length of output: 1110
Remove the StringToken type from PATH_END declaration.
The token is declared as <StringToken> but the lexer returns it without setting yylval (line 302 in src/libexpr/lexer.l). Since the parser never accesses PATH_END's semantic value (lines 360–361), the type should be removed.
Proposed fix
-%token <StringToken> PATH_END "end of path"
+%token PATH_END "end of path"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-%token <StringToken> PATH_END "end of path"
-%token <StringToken> URI "URI"
+%token PATH_END "end of path"
+%token <StringToken> URI "URI"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libexpr/parser.y` around lines 179 - 180, Remove the incorrect semantic
type from the PATH_END token declaration: change the %token <StringToken>
PATH_END "end of path" to a plain %token PATH_END since the lexer emits PATH_END
without setting yylval and the parser never reads its semantic value; update any
related parser token references to use PATH_END without a semantic type and
ensure the lexer continues to return the PATH_END token constant (not a value)
when it encounters the end-of-path condition.
auto [status, output] = runProgram({
    .program = "git",
    .args{
        OS_STR("-c"),
        OS_STR("gpg.ssh.allowedSignersFile=") + allowedSignersFile.native(),
        OS_STR("-C"),
        path.native(),
        OS_STR("verify-commit"),
        string_to_os_string(rev.gitRev()),
    },
    .mergeStderrToStdout = true,
});
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -e
sed -n '712,723p' src/libfetchers/git-utils.cc
rg -n '\.args\s*\{' src/libfetchers/git-utils.cc

Repository: DeterminateSystems/nix-src
Length of output: 522
🏁 Script executed:
# Search for other RunOptions constructions to see the correct pattern
rg -n '\.args\s*=' src/libfetchers/git-utils.cc -A 2 -B 2

Repository: DeterminateSystems/nix-src
Length of output: 294
🏁 Script executed:
# Check for all designated initializer patterns in this file
rg -n 'runProgram\(' src/libfetchers/git-utils.cc -A 8 | head -80

Repository: DeterminateSystems/nix-src
Length of output: 875
Fix the malformed designated initializer here too.
This RunOptions construction has the same syntax error: .args{ ... } is not valid C++20 designated initializer syntax. The correct syntax requires an equals sign: .args = { ... }. This is confirmed by the correct pattern used elsewhere in the same file (line 652: .args = gitArgs).
🐛 Proposed fix
 auto [status, output] = runProgram({
     .program = "git",
-    .args{
+    .args = {
         OS_STR("-c"),
         OS_STR("gpg.ssh.allowedSignersFile=") + allowedSignersFile.native(),
         OS_STR("-C"),
         path.native(),
         OS_STR("verify-commit"),
         string_to_os_string(rev.gitRev()),
     },
     .mergeStderrToStdout = true,
 });

📝 Committable suggestion
 auto [status, output] = runProgram({
     .program = "git",
-    .args{
+    .args = {
         OS_STR("-c"),
         OS_STR("gpg.ssh.allowedSignersFile=") + allowedSignersFile.native(),
         OS_STR("-C"),
         path.native(),
         OS_STR("verify-commit"),
         string_to_os_string(rev.gitRev()),
     },
     .mergeStderrToStdout = true,
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libfetchers/git-utils.cc` around lines 712 - 723, The RunOptions
initializer passed to runProgram is using invalid designated-init syntax `.args{
... }`; change it to use `.args = { ... }` so the args field is correctly
initialized for the call in the verify-commit block where runProgram is invoked
(the initializer that sets .program = "git", the gpg.ssh.allowedSignersFile
using allowedSignersFile.native(), .args, and .mergeStderrToStdout). Ensure the
rest of the RunOptions fields (program, mergeStderrToStdout) remain unchanged.
@@ -0,0 +1,144 @@
#include "nix/store/tests/https-store.hh"
Fix the header include path.
This include does not resolve; clang already reports nix/store/tests/https-store.hh as missing, so the new test file does not compile as-is.
🧰 Tools
🪛 Clang (14.0.6)
[error] 1-1: 'nix/store/tests/https-store.hh' file not found
(clang-diagnostic-error)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore-test-support/https-store.cc` at line 1, The file
src/libstore-test-support/https-store.cc currently includes a non-existent
header "nix/store/tests/https-store.hh"; update that include to the correct
header path so the compiler can find it (for example replace
"nix/store/tests/https-store.hh" with the actual location used in the repo such
as "nix/tests/https-store.hh" or an appropriate relative path like
"../store/tests/https-store.hh"), keeping the include as a quoted header and
ensuring the include string exactly matches the header file location.
} else if (std::get_if<TimedOut>(&event)) {
    unreachable();
Handle TimedOut through normal goal failure.
WaitForChildEvent can legitimately deliver TimedOut via the goal event API, so unreachable() turns a timeout/cancellation into a hard abort instead of a failed goal. Please propagate that case through the usual failure path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/build/drv-output-substitution-goal.cc` around lines 73 - 74, The
TimedOut branch in the WaitForChildEvent handling currently calls unreachable(),
which turns legitimate timeouts into hard aborts; instead remove unreachable()
and handle std::get_if<TimedOut>(&event) by propagating it through the normal
goal failure path used for other child failures (i.e., mirror the same failure
handling code/path invoked for non-success events), using the same
functions/logic around WaitForChildEvent and the surrounding event/failure
handling so the goal is marked failed/cancelled rather than aborting.
 std::optional<CompressionAlgo> HttpBinaryCacheStore::getCompressionMethod(const std::string & path)
 {
-    if (hasSuffix(path, ".narinfo") && !config->narinfoCompression.get().empty())
+    if (hasSuffix(path, ".narinfo") && config->narinfoCompression.get())
         return config->narinfoCompression;
-    else if (hasSuffix(path, ".ls") && !config->lsCompression.get().empty())
+    else if (hasSuffix(path, ".ls") && config->lsCompression.get())
         return config->lsCompression;
-    else if (hasPrefix(path, "log/") && !config->logCompression.get().empty())
+    else if (hasPrefix(path, "log/") && config->logCompression.get())
         return config->logCompression;
     else
         return std::nullopt;
Update the declaration to std::optional<CompressionAlgo> too.
src/libstore/include/nix/store/http-binary-cache-store.hh still declares getCompressionMethod(const std::string & path) as std::optional<std::string>, so this definition no longer matches the class declaration.
🩹 Header fix
--- a/src/libstore/include/nix/store/http-binary-cache-store.hh
+++ b/src/libstore/include/nix/store/http-binary-cache-store.hh
- std::optional<std::string> getCompressionMethod(const std::string & path);
+ std::optional<CompressionAlgo> getCompressionMethod(const std::string & path);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/http-binary-cache-store.cc` around lines 82 - 91, The
implementation of HttpBinaryCacheStore::getCompressionMethod now returns
std::optional<CompressionAlgo>, but the class declaration in
http-binary-cache-store.hh still declares std::optional<std::string>; update the
header so the method signature matches (change declaration of
getCompressionMethod(const std::string & path) to return
std::optional<CompressionAlgo>), and ensure the header includes or
forward-declares CompressionAlgo (or includes its defining header) so the type
is known in the class declaration.
fileTransfer->enqueueFileTransfer(request, {[callbackPtr, this](std::future<FileTransferResult> result) {
    try {
        (*callbackPtr)(std::move(result.get().data));
    } catch (FileTransferError & e) {
        if (e.error == FileTransfer::NotFound
            || e.error == FileTransfer::Forbidden)
            return (*callbackPtr)({});
        maybeDisable();
        callbackPtr->rethrow();
    } catch (...) {
        callbackPtr->rethrow();
    }
}});
Don't capture raw this across the async transfer callback.
enqueueFileTransfer is asynchronous, but this lambda calls maybeDisable() through a raw this. If the store is destroyed before the transfer completes, the error path becomes a use-after-free. Please retain the store for the callback lifetime, or move the disable logic behind state that outlives the request.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/http-binary-cache-store.cc` around lines 242 - 254, The lambda
passed to enqueueFileTransfer captures raw this and calls maybeDisable(),
causing a potential use-after-free if the store is destroyed before the async
callback runs; change the capture to hold the store for the callback lifetime
(e.g., obtain a shared_ptr via shared_from_this() and capture it in the lambda:
use a guard like guard = shared_from_this()), then call guard->maybeDisable()
(or check guard before calling) instead of calling maybeDisable() on raw this;
ensure the store class supports enable_shared_from_this so shared_from_this() is
valid and update the lambda capture list to include the guard alongside
callbackPtr.
Actionable comments posted: 7
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/libexpr/include/nix/expr/counter.hh (1)
1-14: ⚠️ Potential issue | 🟠 Major

Missing `#include <new>` for `std::hardware_destructive_interference_size`.

The file uses `std::hardware_destructive_interference_size` on line 14 but does not include `<new>`, where this constant is declared. Add `#include <new>` after line 4.

Note: This constant was historically unavailable in older Apple libc++, but modern Apple Clang (17.0.0+ / Xcode 16.x) now provides it. If the project supports older toolchains, consider adding a fallback with the `__cpp_lib_hardware_interference_size` feature-test macro.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libexpr/include/nix/expr/counter.hh` around lines 1 - 14, the code uses std::hardware_destructive_interference_size in the alignas on struct Counter but does not include the header that defines it; add `#include <new>` near the other includes so std::hardware_destructive_interference_size is available for Counter, and optionally provide a fallback guarded by the feature-test macro __cpp_lib_hardware_interference_size to define a sensible default if the symbol is unavailable on older toolchains.

src/libstore/gc.cc (1)
864-877: ⚠️ Potential issue | 🟠 Major

Avoid capturing `gcSettings` by reference in detached thread.

Line 864 captures `&gcSettings` into a detached thread. This is risky for lifetime/thread-safety if settings storage is reloaded or the accessor returns non-stable backing storage. Capture scalar values by copy before spawning.

✅ Safer capture-by-value pattern

- const auto & gcSettings = config->getLocalSettings().getGCSettings();
+ const auto & gcSettings = config->getLocalSettings().getGCSettings();
+ const auto minFreeCheckInterval = gcSettings.minFreeCheckInterval;
+ const auto minFree = gcSettings.minFree;
+ const auto maxFree = gcSettings.maxFree;
@@
- if (now < state->lastGCCheck + std::chrono::seconds(gcSettings.minFreeCheckInterval))
+ if (now < state->lastGCCheck + std::chrono::seconds(minFreeCheckInterval))
      return;
@@
- if (avail >= gcSettings.minFree || avail >= gcSettings.maxFree)
+ if (avail >= minFree || avail >= maxFree)
      return;
@@
- std::thread([promise{std::move(promise)}, this, avail, getAvail, &gcSettings]() mutable {
+ std::thread([promise{std::move(promise)}, this, avail, getAvail, maxFree]() mutable {
@@
- options.maxFreed = gcSettings.maxFree - avail;
+ options.maxFreed = maxFree - avail;

🏁 Verification script:

#!/bin/bash
set -euo pipefail
echo "Inspect getLocalSettings/getGCSettings declarations and return types:"
rg -n -C3 '\bgetLocalSettings\s*\(|\bgetGCSettings\s*\(' --type=cpp --type=h
echo
echo "Inspect detached thread capture in autoGC:"
rg -n -C4 'std::thread\(\[.*gcSettings|gcSettings\.' src/libstore/gc.cc

Expected verification: confirm whether accessors return stable references and whether the detached thread may outlive that storage contract.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/gc.cc` around lines 864 - 877, the detached std::thread lambda is capturing gcSettings by reference (see the lambda creation where promise, this, avail, getAvail, &gcSettings are captured); change this to capture either a copy of gcSettings or — better — copy the specific scalar fields you need (e.g., capture gcMaxFree = gcSettings.maxFree or copy an entire GCSettings value) before spawning the thread and use those local copies inside the lambda (so updates or deallocation of the original settings cannot race with the detached thread); ensure the assignment to options.maxFreed uses the copied value (e.g., use gcMaxFree - avail) and verify getLocalSettings/getGCSettings do not need to be referenced inside the thread.

src/libstore/daemon.cc (1)
139-145: ⚠️ Potential issue | 🔴 Critical

Convert full `Version` comparisons to `.number` comparisons.

Lines 139, 344, 421, 923, 935 use `clientVersion >= WorkerProto::Version{...}` and `conn.protoVersion >= WorkerProto::Version{...}` instead of `.number`. The header `worker-protocol.hh` explicitly documents: "The version number has a total ordering, but the full `Version` (number + features) only has a partial ordering, so there is no `operator<=>` on `Version` itself --- callers must compare `.number` explicitly."

These should use `.number` comparisons (e.g., `clientVersion.number >= WorkerProto::Version::Number{1, 26}`) to match the documented contract and the numeric-only comparisons used elsewhere in the file (e.g., lines 155, 168, 177, 821, 969, 986).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/daemon.cc` around lines 139 - 145, several comparisons use the full Version type (e.g., clientVersion >= WorkerProto::Version{...} and conn.protoVersion >= WorkerProto::Version{...}) but must compare only the numeric part; change those to compare the .number field against WorkerProto::Version::Number (for example replace clientVersion >= WorkerProto::Version{1,26} with clientVersion.number >= WorkerProto::Version::Number{1,26}); update every occurrence in this file (including any conn.protoVersion checks) to use .number so you use the total ordering of Version::Number rather than the partial ordering of Version.

doc/manual/source/package-management/binary-cache-substituter.md (1)
42-44: ⚠️ Potential issue | 🟡 Minor

Fix the pluralization in the fallback sentence.
Line 43 says “another binary caches”, which reads oddly in the rendered manual.
📝 Suggested fix
 Thus, for any path in the closure of Firefox, Nix will first check if
-the path is available on the server `avalon` or another binary caches.
+the path is available on the server `avalon` or one of the other binary caches.
 If not, it will fall back to building from source.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/manual/source/package-management/binary-cache-substituter.md` around lines 42 - 44, The sentence currently reads "another binary caches" which is incorrect; update the wording in the paragraph containing "Thus, for any path in the closure of Firefox, Nix will first check..." to use the correct pluralization — either "another binary cache" if you mean one other cache or "other binary caches" if you mean multiple — for example replace "another binary caches" with "another binary cache" for singular or "other binary caches" for plural so the sentence reads smoothly.
♻️ Duplicate comments (6)
src/libfetchers/git.cc (1)
494-505: ⚠️ Potential issue | 🔴 Critical

Fix malformed designated initializer in `runProgram` options.

Line 497 uses `.args{ ... }`, which is invalid C++ designated-initializer syntax. This is a compile-time break; it must be `.args = { ... }`.

Proposed fix

 auto result = runProgram(
     RunOptions{
         .program = "git",
-        .args{
+        .args = {
             OS_STR("-C"),
             repoPath->native(),
             OS_STR("--git-dir"),
             string_to_os_string(repoInfo.gitDir),
             OS_STR("check-ignore"),
             OS_STR("--quiet"),
             string_to_os_string(std::string(path.rel())),
         },
     });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/git.cc` around lines 494 - 505, the RunOptions initializer passed to runProgram uses an invalid designated-initializer `.args{ ... }`; update the RunOptions construction (in the call to runProgram) to use `.args = { ... }` instead so the args member is correctly initialized; locate the RunOptions instance around runProgram, refer to the RunOptions struct and the args field, and adjust the line that builds the args list (which includes repoPath->native(), string_to_os_string(repoInfo.gitDir), and string_to_os_string(std::string(path.rel()))) to use the `=` form.

src/libfetchers/git-utils.cc (2)
712-721: ⚠️ Potential issue | 🔴 Critical

Fix malformed designated initializer in verify-commit `runProgram` call.

Line 714 uses `.args{ ... }`, which is invalid C++ designated-initializer syntax and will fail compilation.

Proposed fix

 auto [status, output] = runProgram({
     .program = "git",
-    .args{
+    .args = {
         OS_STR("-c"),
         OS_STR("gpg.ssh.allowedSignersFile=") + allowedSignersFile.native(),
         OS_STR("-C"),
         path.native(),
         OS_STR("verify-commit"),
         string_to_os_string(rev.gitRev()),
     },
     .mergeStderrToStdout = true,
 });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/git-utils.cc` around lines 712 - 721, The runProgram call in git-utils.cc uses an invalid designated initializer `.args{ ... }` which fails compilation; update the initializer for the args field in the call to runProgram (the call that passes .program = "git", gpg.ssh.allowedSignersFile=allowedSignersFile.native(), -C path.native(), verify-commit and rev.gitRev()) to valid C++ syntax—e.g., use `.args = { ... }` or construct the args vector before the call and pass it as `.args = argsVec`—so the args member is assigned correctly when invoking runProgram for the verify-commit invocation.
643-650: ⚠️ Potential issue | 🔴 Critical

Use OS-native literals in `OsStrings` initialization.

Line 643 initializes `OsStrings` with narrow literals; this is not portable for Windows wide-character `OsString` builds.

Proposed fix

-    OsStrings gitArgs{"-C", dir.native(), "--git-dir", ".", "fetch", "--progress", "--force"};
+    OsStrings gitArgs{
+        OS_STR("-C"),
+        dir.native(),
+        OS_STR("--git-dir"),
+        OS_STR("."),
+        OS_STR("fetch"),
+        OS_STR("--progress"),
+        OS_STR("--force"),
+    };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/git-utils.cc` around lines 643 - 650, the OsStrings initialization uses narrow string literals which won't work on Windows wide-character builds; update the initialization of gitArgs (type OsStrings) to use OS-native literals (e.g., wrap each literal with the OS_STR macro) for "-C", "--git-dir", ".", "fetch", "--progress", "--force" and the standalone "--", and ensure dir.native(), url and refspec are converted with string_to_os_string (or already-OS-native) consistently; also use OS_STR for the conditional depth literals pushed when shallow is true (the existing push_back(OS_STR(...)) pattern should be applied to the initial constructor arguments).

src/libstore/build/drv-output-substitution-goal.cc (1)
73-74: ⚠️ Potential issue | 🔴 Critical

Handle `TimedOut` as normal goal failure, not `unreachable()`.

`TimedOut` is part of the child-event API, so `unreachable()` can hard-abort on legitimate timeout/cancellation paths.

🐛 Proposed fix

-        } else if (std::get_if<TimedOut>(&event)) {
-            unreachable();
+        } else if (auto * timeout = std::get_if<TimedOut>(&event)) {
+            worker.childTerminated(this);
+            co_return doneFailure(std::move(*timeout));
         }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/build/drv-output-substitution-goal.cc` around lines 73 - 74, the TimedOut child event is currently treated with unreachable() which can abort on legitimate timeouts; instead handle TimedOut as a normal goal failure. Replace the branch "else if (std::get_if<TimedOut>(&event)) { unreachable(); }" with logic equivalent to the existing failure-handling path used for other child-failure events: record/emit the failure, mark the goal as failed (use the same method or state transition used elsewhere in this file), and propagate/return the failure result so it follows normal cancellation/timeout flows. Ensure you reference the TimedOut variant and remove the unreachable() call.

src/libstore/http-binary-cache-store.cc (1)
242-250: ⚠️ Potential issue | 🔴 Critical

Avoid capturing raw `this` in async transfer callback.

The callback can run after object destruction; calling `maybeDisable()` through raw `this` risks use-after-free.

Suggested fix

-    fileTransfer->enqueueFileTransfer(request, {[callbackPtr, this](std::future<FileTransferResult> result) {
+    auto self = ref{*this};
+    fileTransfer->enqueueFileTransfer(request, {[callbackPtr, self](std::future<FileTransferResult> result) {
         try {
             (*callbackPtr)(std::move(result.get().data));
         } catch (FileTransferError & e) {
             if (e.error == FileTransfer::NotFound || e.error == FileTransfer::Forbidden)
                 return (*callbackPtr)({});
-            maybeDisable();
+            self->maybeDisable();
             callbackPtr->rethrow();
         } catch (...) {
             callbackPtr->rethrow();
         }
     }});

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/http-binary-cache-store.cc` around lines 242 - 250, the async lambda passed to fileTransfer->enqueueFileTransfer must not capture raw this; replace the lambda capture with a weak_ptr to the owning object (acquire a weak_ptr via shared_from_this where the transfer is initiated) and inside the callback lock the weak_ptr before calling member methods; only call maybeDisable() (and any other non-static members) if the lock succeeds, otherwise handle the result path without touching this (e.g. invoke (*callbackPtr)({}) or return). Update the enqueueFileTransfer call site so the lambda captures callbackPtr and the weak_ptr (not this) and checks auto self = weak.lock() before calling self->maybeDisable().

doc/manual/source/protocols/json/schema/build-trace-entry-v2.yaml (1)
33-36: ⚠️ Potential issue | 🟠 Major

`additionalProperties` is modeled incorrectly for deprecation.

Line 33 uses `additionalProperties` as a property container, which does not express a deprecated known field and can loosen validation unexpectedly. Put `dependentRealisations` under `properties` (if still accepted) with `deprecated: true`, and keep `additionalProperties` as a boolean/schema.

Suggested schema fix

 properties:
   id: {}
   outPath: {}
   signatures: {}
+  dependentRealisations:
+    description: deprecated field
+    type: object
+    deprecated: true
-additionalProperties:
-  dependentRealisations:
-    description: deprecated field
-    type: object
+additionalProperties: false

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/manual/source/protocols/json/schema/build-trace-entry-v2.yaml` around lines 33 - 36, The schema incorrectly uses additionalProperties as a container for dependentRealisations; instead move dependentRealisations into the schema's properties block (i.e., add a properties entry for dependentRealisations) and mark it as deprecated: true, and restore additionalProperties to be a boolean or schema (not an object property) so validation isn't loosened; refer to the property name dependentRealisations and the schema keyword additionalProperties to locate and update the YAML.
🟠 Major comments (19)
src/libstore-tests/derivation-advanced-attrs.cc-8-8 (1)
8-8: ⚠️ Potential issue | 🟠 Major

Avoid global worker settings in these assertions.

Using `settings.getWorkerSettings()` at Line 196, Line 244, Line 336, and Line 401 reintroduces global state into this fixture, which conflicts with the concurrency-safety intent documented at Line 31-Line 34. This can cause order-dependent/flaky tests if global worker settings are modified elsewhere. Prefer a fixture-local worker settings object (explicitly initialized for these tests) and pass that to `substitutesAllowed(...)`.

Also applies to: 196-196, 244-244, 336-336, 401-401
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore-tests/derivation-advanced-attrs.cc` at line 8, the assertions currently call settings.getWorkerSettings() (used in substitutesAllowed(...)) which reintroduces global state; instead create a fixture-local WorkerSettings instance inside the test fixture (initialize it explicitly for these tests) and pass that local object into substitutesAllowed(...) at each call site that currently uses settings.getWorkerSettings() (the occurrences in derivation-advanced-attrs.cc). Replace direct use of the global settings.getWorkerSettings() with the local worker settings variable so the tests remain concurrency-safe and order-independent.

src/libflake/flakeref.cc-151-151 (1)
151-151:⚠️ Potential issue | 🟠 MajorUse proper cross-platform root path check instead of string comparison.
The comparison
path != "/"is platform-specific. On Windows,std::filesystem::pathrepresents root paths asC:\(or other drive letters), not/, causing the loop termination condition to fail. The codebase already provides a proper cross-platform abstraction viaCanonPath::isRoot()elsewhere; this code should be refactored to use it or equivalent platform-aware logic, especially since Windows support is actively being developed in this codebase.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libflake/flakeref.cc` at line 151, Replace the platform-specific string comparison "while (path != \"/\")" with a cross-platform root check: use the project's CanonPath::isRoot() utility (or, if unavailable here, compare path == path.root_path() via std::filesystem::path) to detect root directories on Windows and POSIX; update the loop condition referencing the local variable path (and any CanonPath wrapper if present) so the loop terminates correctly on drive-letter roots like "C:\\" as well as "/" on POSIX.
src/libmain/unix/stack.cc-75-77 (1)
75-77: ⚠️ Potential issue | 🟠 Major
Replace `strlen()` with a compile-time constant to maintain async-signal-safety in the `SIGSEGV` handler.

Line 76 calls `strlen()` inside a signal-handler context. Per POSIX, `strlen()` is not in the async-signal-safe function list, making it unsafe to call from a `SIGSEGV` handler. The `write()` and `_exit()` calls are safe, but `strlen()` is not.

Use `sizeof(msg) - 1` instead to evaluate the string length at compile time:

Proposed fix

```diff
 void defaultStackOverflowHandler(siginfo_t * info, void * ctx)
 {
     char msg[] = "error: stack overflow (possible infinite recursion)\n";
-    [[gnu::unused]] auto res = ::write(2, msg, strlen(msg));
+    [[gnu::unused]] auto res = ::write(2, msg, sizeof(msg) - 1);
     _exit(1); // maybe abort instead?
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libmain/unix/stack.cc` around lines 75 - 77, The signal handler uses strlen(msg) which is not async-signal-safe; replace the runtime call with a compile-time length by changing the write invocation to use sizeof(msg) - 1 (keep msg as the same char[]), e.g. call ::write(2, msg, sizeof(msg) - 1) in the handler where msg, ::write, strlen, and _exit are used so you avoid calling strlen from the SIGSEGV/signal context.
src/libfetchers/git-utils.cc-90-99 (1)
90-99: ⚠️ Potential issue | 🟠 Major
Guard `git_error_last()` before dereferencing in `GitError`.

Line 93 calls `git_error_last()`, which can return `nullptr`. With asserts disabled in Release builds, this dereferences null and crashes. Currently all callers pass error context immediately after failed git_* operations, but the constructor does not enforce this requirement. Add an explicit null check with a fallback error message to prevent crashes during error handling.
Verify each finding against the current code and only fix it if needed. In `@src/libfetchers/git-utils.cc` around lines 90 - 99, The GitError constructor currently dereferences git_error_last() unguarded; change the lambda in the GitError(Ts&&...) delegating ctor to check the pointer returned by git_error_last() and avoid dereference when it's null: if git_error_last() returns non-null use it, otherwise construct/use a fallback git_error-like object (or a static struct/std::string with a clear "unknown git error" message and code 0) and pass that into the existing GitError(...) delegating call so Release builds never dereference a null pointer; update references in the GitError constructor logic accordingly.
src/libstore/globals.cc-131-131 (1)
131-131: ⚠️ Potential issue | 🟠 Major
Avoid lossy path round-trip when loading user config files.

At line 147, `file->string()` converts a `std::filesystem::path` to a narrow string and then back to `path` via `applyConfigFile(...)`. On Windows, this is lossy for non-ASCII characters and breaks config discovery for affected users. Pass the path object directly, matching the pattern already used at line 139.

Suggested fix

```diff
-    for (auto file = files.rbegin(); file != files.rend(); file++) {
-        applyConfigFile(file->string());
+    for (auto file = files.rbegin(); file != files.rend(); file++) {
+        applyConfigFile(*file);
```
Verify each finding against the current code and only fix it if needed. In `@src/libstore/globals.cc` at line 131, The code is converting a std::filesystem::path to a narrow string then back to path which loses non-ASCII chars; update the call site that currently uses file->string() to pass the std::filesystem::path object directly into the applyConfigFile lambda (referencing applyConfigFile and the file variable) so it matches the existing usage pattern and preserves wide characters on Windows.
packaging/dev-shell.nix-145-147 (1)
145-147: ⚠️ Potential issue | 🟠 Major
Use defensive attribute access for `meta.broken` to prevent eval-time errors if a test derivation lacks `meta`.

While all current test packages include `meta`, accessing nested attributes without a guard can fail in Nix if the attribute is missing. Use the `or` operator to safely handle absent `meta` attributes:

Safe attribute access patch

```diff
-  ++ lib.concatMap (c: lib.filter (v: !v.meta.broken) (lib.attrValues (c.tests or { }))) (
+  ++ lib.concatMap (c: lib.filter (v: !(v.meta.broken or false)) (lib.attrValues (c.tests or { }))) (
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packaging/dev-shell.nix` around lines 145 - 147, The filter currently accesses v.meta.broken directly which will throw at eval time if a test derivation has no meta; update the predicate passed to lib.filter to defensively access meta (e.g. use (v.meta or {}) and then check .broken) so the expression becomes something like lib.filter (v: !((v.meta or {}).broken)) and keep this change in the same concatMap expression that iterates lib.attrValues (c.tests or { }) over availableComponents to ensure missing meta attributes don't cause evaluation failures.
ci/gha/tests/windows.nix-13-13 (1)
13-13: ⚠️ Potential issue | 🟠 Major
`nativeBuildInputs` append logic is missing in cases where `prev.nativeBuildInputs` exists.

The current expression on line 13 parses as `prev.nativeBuildInputs or ([ ] ++ [ pkgs.colorized-logs ])` due to Nix operator precedence: the `or` operator (attribute selection) binds tighter than `++` (list concatenation). This means when `prev.nativeBuildInputs` is already defined, `pkgs.colorized-logs` is never appended. Add parentheses to ensure the fallback is applied before concatenation:

Proposed fix

```diff
-  nativeBuildInputs = prev.nativeBuildInputs or [ ] ++ [ pkgs.colorized-logs ];
+  nativeBuildInputs = (prev.nativeBuildInputs or [ ]) ++ [ pkgs.colorized-logs ];
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ci/gha/tests/windows.nix` at line 13, The expression for nativeBuildInputs currently evaluates as prev.nativeBuildInputs or ([ ] ++ [ pkgs.colorized-logs ]) due to operator precedence, so when prev.nativeBuildInputs exists the pkgs.colorized-logs item is never appended; update the nativeBuildInputs assignment to evaluate the fallback first by wrapping the fallback in parentheses (so the result of prev.nativeBuildInputs or [] is concatenated with [ pkgs.colorized-logs ]); look for the nativeBuildInputs line and adjust the expression using prev.nativeBuildInputs and pkgs.colorized-logs accordingly.
ci/gha/tests/windows.nix-14-14 (1)
14-14: ⚠️ Potential issue | 🟠 Major
Preserve existing `env` instead of replacing it.

Line 14 uses `env.GTEST_COLOR = "no"`, which replaces the entire `env` attribute, potentially losing other environment variables from `prev.env`. The same override already uses `prev.nativeBuildInputs or [ ]` to preserve prior values; apply the same pattern to `env`:

Proposed fix

```diff
-  env.GTEST_COLOR = "no";
+  env = (prev.env or { }) // {
+    GTEST_COLOR = "no";
+  };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ci/gha/tests/windows.nix` at line 14, The assignment env.GTEST_COLOR = "no" overwrites the whole env attribute and can drop previously set environment variables; update the override to merge with the prior env (similar to how prev.nativeBuildInputs or [ ] is used) by preserving prev.env and setting/overriding GTEST_COLOR, e.g., buildAttrs or the attribute where env is defined should merge prev.env with the new key so that GTEST_COLOR is set without discarding other entries.
src/libstore/gc.cc-311-323 (1)
311-323: ⚠️ Potential issue | 🟠 Major
Guard per-entry parse failures from daemon responses.

Line 322 currently lets a single malformed record abort the whole runtime-root fetch path, which can fail GC root discovery entirely when daemon output is partially bad or protocol-skewed.

💡 Suggested hardening

```diff
 static Roots requestRuntimeRoots(const LocalStoreConfig & config, const std::filesystem::path & socketPath)
 {
     Roots roots;
     auto socket = connect(socketPath);
     auto socketSource = FdSource(socket.get());
-    while (1) {
+    while (true) {
         auto line = socketSource.readLine(true, '\0');
-        if (line == "")
+        if (line.empty())
             break;
-        roots[config.parseStorePath(line)].insert(censored);
+        try {
+            roots[config.parseStorePath(line)].insert(censored);
+        } catch (Error & e) {
+            printError("ignoring malformed runtime root from daemon: %s", e.msg());
+        }
     };
     return roots;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/gc.cc` around lines 311 - 323, The requestRuntimeRoots function is vulnerable because a malformed daemon line (read via socketSource.readLine) can cause config.parseStorePath to throw/abort and break the whole roots collection; update requestRuntimeRoots to guard each entry parse by wrapping the parse/insertion for each line in a try/catch (or check parse return) so that on parse failure you skip that line (optionally log the bad input) and continue collecting other roots, ensuring only successful parses are inserted into roots (the map used below) and the loop continues on malformed entries.
src/libcmd/command.cc-102-110 (1)
102-110: ⚠️ Potential issue | 🟠 Major
Honor the provided `storeConfig` in `StoreCommand::run`.

At Line 108, `getStore()` can create/use a store from memoized/default config, and Line 109 only checks identity via `assert` (disabled in release). That can silently run against a different store than the caller-provided `storeConfig`.

Proposed fix

```diff
 void StoreCommand::run(ref<StoreConfig> storeConfig)
 {
-    // We can either efficiently implement getStore/createStore with memoization,
-    // or use the StoreConfig passed in run.
-    // It's more efficient to memoize, especially since there are some direct users
-    // of getStore. The StoreConfig in both cases should be the same, though.
-    auto store = getStore();
-    assert(&*storeConfig == &store->config);
-    run(std::move(store));
+    if (!_store || &(_store->config) != &*storeConfig) {
+        auto store = storeConfig->openStore();
+        store->init();
+        _store = std::move(store);
+    }
+    run(ref<Store>(_store));
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libcmd/command.cc` around lines 102 - 110, The current StoreCommand::run(ref<StoreConfig> storeConfig) ignores the provided storeConfig by calling getStore() and only asserting identity, which can run against the wrong store in non-debug builds; replace that by constructing or retrieving a Store that is derived from the provided storeConfig (do not rely on getStore()'s memoized default), e.g. call the codepath that creates/returns a Store for the given StoreConfig (or explicitly construct a Store from storeConfig) and then call run(std::move(store)); remove the assert that checks pointer identity between storeConfig and store->config so the function always honors the caller-provided storeConfig.
src/libstore/build/derivation-building-goal.cc-1034-1037 (1)
1034-1037: ⚠️ Potential issue | 🟠 Major
Overwrite hook env vars instead of using `emplace`.

`emplace` keeps pre-existing values, so a pre-set `DRV_PATH`, `OUT_PATHS`, or `NIX_CONFIG` can leak wrong data into the hook.

🔧 Proposed fix

```diff
-    hookEnvironment.emplace(OS_STR("DRV_PATH"), string_to_os_string(store.printStorePath(drvPath)));
-    hookEnvironment.emplace(
-        OS_STR("OUT_PATHS"), string_to_os_string(chomp(concatStringsSep(" ", store.printStorePathSet(outputPaths)))));
-    hookEnvironment.emplace(OS_STR("NIX_CONFIG"), string_to_os_string(globalConfig.toKeyValue()));
+    hookEnvironment.insert_or_assign(OS_STR("DRV_PATH"), string_to_os_string(store.printStorePath(drvPath)));
+    hookEnvironment.insert_or_assign(
+        OS_STR("OUT_PATHS"), string_to_os_string(chomp(concatStringsSep(" ", store.printStorePathSet(outputPaths)))));
+    hookEnvironment.insert_or_assign(OS_STR("NIX_CONFIG"), string_to_os_string(globalConfig.toKeyValue()));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/build/derivation-building-goal.cc` around lines 1034 - 1037, The hookEnvironment lines currently call emplace which preserves existing keys and can leak pre-set DRV_PATH/OUT_PATHS/NIX_CONFIG; change these to overwrite existing entries by using insert_or_assign (or operator[] assignment) for the three keys used in build/derivation-building-goal.cc where you construct the values with string_to_os_string(store.printStorePath(drvPath)), string_to_os_string(chomp(concatStringsSep(" ", store.printStorePathSet(outputPaths)))), and string_to_os_string(globalConfig.toKeyValue()) so that DRV_PATH, OUT_PATHS and NIX_CONFIG are always replaced rather than left unchanged.
src/libstore/build/substitution-goal.cc-167-174 (1)
167-174: ⚠️ Potential issue | 🟠 Major
Don't collapse real substituter failures into `NoSubstituters`.

When `substituterFailed` is true, this returns `ecFailed` but still publishes `BuildResult::Failure::NoSubstituters` with the same message. Callers like `Store::ensurePath()` in `src/libstore/build/entry-points.cc`, Lines 107-110, will then tell users that no substituter exists even though one existed and errored. The "tried and failed" path needs its own failure status/message.
src/libstore/build/derivation-goal.cc-111-117 (1)
111-117: ⚠️ Potential issue | 🟠 Major
Pass `Repair` through the new drv-output substitution path.

Lines 111-112 always create the `PathSubstitutionGoal` with the default `NoRepair`, while the known-path branch on Lines 123-126 still honors `bmRepair`. For floating CA outputs, `nix-store --repair` can now short-circuit on a registered-but-corrupt path, skip the substitute repair, and then fall back to a source rebuild after we've already registered the drv-output.

Suggested fix

```diff
-    waitees.insert(upcast_goal(worker.makePathSubstitutionGoal(g->outputInfo->outPath)));
+    waitees.insert(upcast_goal(worker.makePathSubstitutionGoal(
+        g->outputInfo->outPath,
+        buildMode == bmRepair ? Repair : NoRepair)));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/build/derivation-goal.cc` around lines 111 - 117, The PathSubstitutionGoal is being created with the default NoRepair (via worker.makePathSubstitutionGoal(g->outputInfo->outPath)), causing repair behavior to be dropped; update the code that builds the substitution goal in the floating-CA branch to pass the actual repair value (bmRepair or g->outputInfo->bmRepair) through the call to worker.makePathSubstitutionGoal (or the appropriate overload), so both the floating and known-path branches construct the PathSubstitutionGoal with the same repair parameter before upcast_goal and awaiting, and then continue to call worker.store.registerDrvOutput({*g->outputInfo, id}) unchanged.
src/libstore/include/nix/store/build-result.hh-24-56 (1)
24-56: ⚠️ Potential issue | 🟠 Major
Include `<cstdint>` before using `uint8_t`.

This public header now depends on `uint8_t` for both status enums, but it never includes the standard header that defines it. That makes it compile only when some other include happens to pull it in first.

Minimal fix

```diff
+#include <cstdint>
 #include <string>
 #include <chrono>
 #include <optional>
```
Verify each finding against the current code and only fix it if needed. In `@src/libstore/include/nix/store/build-result.hh` around lines 24 - 56, The header uses uint8_t in the enum declarations BuildResultSuccessStatus and BuildResultFailureStatus but doesn't include the header that defines it; add an `#include` of the C++ fixed-width integer header (<cstdint>) at the top of this file (before these enum declarations) so uint8_t is defined for BuildResultSuccessStatus and BuildResultFailureStatus to avoid relying on transitive includes.
src/libcmd/unix/unix-socket-server.cc-82-115 (1)
82-115: ⚠️ Potential issue | 🟠 Major
`closeListeners` leaves stale poll descriptors and can break the serve loop.

The callback clears `listeningSockets` but not `fds`, so the loop may keep polling/accepting on invalid FDs after listeners are "closed".

Proposed fix

```diff
 [[noreturn]] void serveUnixSocket(const ServeUnixSocketOptions & options, UnixSocketHandler handler)
 {
     std::vector<AutoCloseFD> listeningSockets;
+    bool stopServing = false;
@@
-    while (1) {
+    while (!stopServing) {
@@
-            handler(std::move(remote), [&]() { listeningSockets.clear(); });
+            handler(std::move(remote), [&]() {
+                listeningSockets.clear();
+                fds.clear();
+                stopServing = true;
+            });
         }
@@
         }
     }
+    throw Interrupted();
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libcmd/unix/unix-socket-server.cc` around lines 82 - 115, The serve loop leaves stale poll descriptors in fds when the handler's close callback only clears listeningSockets, causing poll to operate on closed FDs; update the callback passed to handler (the lambda that currently does listeningSockets.clear()) to also clear or rebuild fds (or set a shared flag like running=false that causes the outer loop to break and fds to be reconstructed) so that after listeningSockets are closed the poll vector is updated before the next poll/accept; ensure references to fds, listeningSockets, handler, and the accept loop (where remote is accepted) are adjusted accordingly to avoid polling invalid file descriptors.
packaging/dependencies.nix-118-120 (1)
118-120: ⚠️ Potential issue | 🟠 Major
Do not clobber existing `buildInputs` in the override.

Line 118 replaces `buildInputs` entirely, which can drop required upstream dependencies under this branch.

Suggested fix

```diff
-  buildInputs = [
-    pkgs.llvmPackages.libunwind
-  ];
+  buildInputs = (attrs.buildInputs or [ ]) ++ [
+    pkgs.llvmPackages.libunwind
+  ];
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packaging/dependencies.nix` around lines 118 - 120, The current override replaces buildInputs with a new list containing pkgs.llvmPackages.libunwind which can drop upstream deps; modify the override so it appends pkgs.llvmPackages.libunwind to the existing buildInputs instead of replacing them (e.g. use super.buildInputs or the original buildInputs value and concatenate/append the new item), referencing buildInputs and pkgs.llvmPackages.libunwind in the change.
packaging/dependencies.nix-78-78 (1)
78-78: ⚠️ Potential issue | 🟠 Major
Use scoped `stdenv` for host-platform checks.

Line 78 uses `pkgs.stdenv.hostPlatform.isWindows` while the rest of the file uses the scoped `stdenv`. This can evaluate against a different platform in cross setups.

Suggested fix

```diff
-  http3Support = !pkgs.stdenv.hostPlatform.isWindows;
+  http3Support = !stdenv.hostPlatform.isWindows;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packaging/dependencies.nix` at line 78, The http3Support assignment uses pkgs.stdenv.hostPlatform.isWindows which can use the unscoped stdenv in cross-compilation contexts; update the expression to use the scoped stdenv used elsewhere (replace pkgs.stdenv.hostPlatform.isWindows with stdenv.hostPlatform.isWindows) so http3Support correctly evaluates against the intended host platform when setting http3Support.
src/libstore/http-binary-cache-store.cc-28-29 (1)
28-29: ⚠️ Potential issue | 🟠 Major
Authority validation condition is keyed off the wrong predicate.

Line 28 should branch on the current URL scheme (`cacheUri.scheme`), not whether `uriSchemes()` contains `"file"`. With `_NIX_FORCE_HTTP=1`, this can skip authority validation for non-file URLs.

Suggested fix

```diff
-    if (!uriSchemes().contains("file") && (!cacheUri.authority || cacheUri.authority->host.empty()))
+    if (cacheUri.scheme != "file" && (!cacheUri.authority || cacheUri.authority->host.empty()))
         throw UsageError("`%s` Store requires a non-empty authority in Store URL", cacheUri.scheme);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/http-binary-cache-store.cc` around lines 28 - 29, The authority check is using uriSchemes().contains("file") instead of the actual URL scheme; change the condition to inspect cacheUri.scheme so non-file URLs always require an authority. Replace the predicate !uriSchemes().contains("file") with cacheUri.scheme != "file" in the if statement that throws UsageError, keeping the existing authority/null/empty-host checks on cacheUri.authority.
src/libexpr/primops.cc-4887-4912 (1)
4887-4912: ⚠️ Potential issue | 🟠 Major
Bound the regex cache before it becomes an evaluator-state leak.

This retains one compiled `std::regex` per unique pattern for the lifetime of the `EvalState`. Because the key is expression-controlled, long-lived evaluators can now grow memory without limit just by seeing many distinct regexes through `builtins.match`/`builtins.split`.
Verify each finding against the current code and only fix it if needed. In `@src/libexpr/primops.cc` around lines 4887 - 4912, The RegexCache currently stores an unbounded number of compiled std::regex objects (RegexCache::cache, Entry, RegexCache::get), causing evaluator-state memory leaks; change the implementation to bound the cache size (e.g., add a max_entries setting) and evict old entries when the cap is reached (implement an LRU or simple FIFO eviction policy) so that try_emplace_and_cvisit still returns/sets the cached ref but insertion triggers eviction when cache.size() >= max_entries; ensure thread-safety around eviction and access (protect cache mutation or use a bounded concurrent structure) and expose/configure the max size via EvalState or a constant used by RegexCache.
🟡 Minor comments (12)
src/libstore-tests/register-valid-paths-bench.cc-67-71 (1)
67-71: ⚠️ Potential issue | 🟡 Minor
Unnecessary `ResumeTiming()` at end of cleanup introduces measurement noise.

Line 71 resumes timing at the end of cleanup, but the next iteration immediately pauses timing again at line 22. This causes the loop overhead between iterations to be included in the measured time. The timing should remain paused through cleanup until the next iteration's setup completes.
Proposed fix
```diff
     state.PauseTiming();
     localStore.reset();
     store.reset();
     std::filesystem::remove_all(tmpRoot);
-    state.ResumeTiming();
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore-tests/register-valid-paths-bench.cc` around lines 67 - 71, The cleanup block currently calls state.ResumeTiming() after resetting localStore and store and removing tmpRoot, which re-enables timing during the teardown and adds noise because the next iteration immediately pauses timing again; keep timing paused across cleanup by removing the state.ResumeTiming() call so that timing remains paused until the next iteration's setup completes (look for the sequence with state.PauseTiming(), localStore.reset(), store.reset(), std::filesystem::remove_all(tmpRoot), state.ResumeTiming()) and delete the final state.ResumeTiming() invocation.
src/libstore/builtins/fetchurl.cc-67-70 (1)
67-70: ⚠️ Potential issue | 🟡 Minor
Silent `chmod` failure may mask build issues.

The return value of `chmod` is discarded. If making the file executable fails (e.g., permission denied, unsupported filesystem), the build will continue but downstream steps expecting an executable may fail with confusing errors.

If the removal of error handling is intentional (e.g., for sandboxed environments), consider adding a debug log on failure. Otherwise, consider restoring error handling.
🛡️ Suggested fix with debug logging
```diff
     if (executable != ctx.drv.env.end() && executable->second == "1") {
-        chmod(storePath, 0755);
+        if (chmod(storePath, 0755) != 0)
+            debug("chmod failed for '%s': %s", storePath, strerror(errno));
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/builtins/fetchurl.cc` around lines 67 - 70, The silent chmod call on storePath inside the branch that checks ctx.drv.env for "executable" ignores errors; check chmod's return value and handle failures (e.g., inspect errno, call perror or use processLogger/error logging, and propagate an error or return failure from the function) so a permission or filesystem error doesn't silently continue; update the block that uses ctx.drv.env.find("executable") and the chmod(storePath, 0755) invocation to log the failure with context (storePath and errno/message) and return or surface an error to the caller instead of discarding it.
src/libexpr-tests/nix_api_external.cc-88-107 (1)
88-107: ⚠️ Potential issue | 🟡 Minor
Add `nix_gc_decref` for `val` to prevent a reference leak.

`nix_create_external_value` increments the refcount at line 184 of `nix_api_external.cc`, and the test never decrements it. The documentation for `nix_init_external` states the value "Will be GC-referenced by the value" but does not indicate ownership transfer. Following the pattern used for `result` and `toJsonFn` (explicitly decremented at lines 105-106), `val` must also be decremented before the test completes.

Fix

```diff
     ASSERT_EQ("42", json_str);
+    nix_gc_decref(ctx, val);
     nix_gc_decref(ctx, result);
     nix_gc_decref(ctx, toJsonFn);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libexpr-tests/nix_api_external.cc` around lines 88 - 107, The test leaks a reference because the ExternalValue created by nix_create_external_value (stored in val) is never decref'd; after creating val and calling nix_init_external(ctx, value, val) you must call nix_gc_decref(ctx, val) to match the explicit decrefs used for result and toJsonFn. Locate the creation of val in the test (nix_create_external_value / nix_init_external) and add a nix_gc_decref(ctx, val) before the function returns so the external value's refcount is properly released.
src/libexpr/include/nix/expr/value.hh-1277-1301 (1)
1277-1301: ⚠️ Potential issue | 🟡 Minor
Consider explicitly initializing or documenting the `tPending` and `tAwaited` entries in the lookup table for code clarity.

The lookup table entries at indices 2 and 3 (corresponding to `tPending` and `tAwaited`) are not explicitly initialized and default to `ValueType(0)`, which is `nThunk`. While this is correct and within array bounds, since `getInternalType()` can return both `tPending` and `tAwaited` values, explicitly initializing these entries or adding a clarifying comment would improve maintainability. Currently, the code relies on implicit zero-initialization behavior that isn't immediately obvious to readers.
Verify each finding against the current code and only fix it if needed. In `@src/libexpr/include/nix/expr/value.hh` around lines 1277 - 1301, The table lambda leaves entries for tPending and tAwaited unset (they default to ValueType(0) == nThunk), which is implicit and unclear; update the static constexpr auto table in value.hh to explicitly assign t[tPending] = nThunk and t[tAwaited] = nThunk (or add a clear comment next to the table explaining the intentional zero-initialization) so readers see the intent when getInternalType() returns tPending or tAwaited; reference the table, tPending, tAwaited, nThunk and getInternalType() to locate and fix this.
src/libstore/builtins/buildenv.cc-54-59 (1)
54-59: ⚠️ Potential issue | 🟡 Minor
Warning message references the wrong path.

When `maybeStat(srcFile.c_str())` returns `nullopt`, the warning incorrectly references `dstFile`. The dangling symlink is `srcFile` (the source being processed), not `dstFile` (which may not exist yet).

🐛 Proposed fix

```diff
 auto srcStOpt = maybeStat(srcFile.c_str());
 if (!srcStOpt) {
-    warn("skipping dangling symlink '%s'", dstFile);
+    warn("skipping dangling symlink '%s'", srcFile);
     continue;
 }
 auto & srcSt = *srcStOpt;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/builtins/buildenv.cc` around lines 54 - 59, The warning logged when maybeStat(srcFile.c_str()) returns nullopt references the wrong path; update the warn call in buildenv.cc (inside the loop where maybeStat is called) to mention srcFile instead of dstFile so the message correctly reports the dangling symlink being inspected; locate the block around maybeStat, the local variables srcFile and dstFile, and the warn(...) call and change the formatted string/argument to use srcFile.
src/libcmd/repl-interacter.cc-76-78 (1)
76-78: ⚠️ Potential issue | 🟡 Minor
Set `*match = 0` in the exception path for the completion callback.

On callback failure, returning `nullptr` without resetting `*match` can leave stale state for callers.

Suggested fix

```diff
 } catch (...) {
+    *match = 0;
     return nullptr;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libcmd/repl-interacter.cc` around lines 76 - 78, In the catch (...) path of the completion callback handler, reset the output match count before returning nullptr to avoid leaving stale state; modify the catch block around the completion callback (where it currently does "return nullptr;") to check if match is non-null and set *match = 0, then return nullptr.
src/libstore/include/nix/store/build/drv-output-substitution-goal.hh-23-24 (1)
23-24: ⚠️ Potential issue | 🟡 Minor
Fix typo in TODO rename target.

`BuidlTraceEntryGoal` appears to be a typo of `BuildTraceEntryGoal`, which can make future grep-based cleanup harder.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/include/nix/store/build/drv-output-substitution-goal.hh` around lines 23 - 24, The TODO comment contains a typo: change the misspelled identifier "BuidlTraceEntryGoal" to the correct "BuildTraceEntryGoal" in the TODO line (also verify the related reference to "Realisation" if present), so future grep-based refactors find the intended symbol; update the comment text in drv-output-substitution-goal.hh where the TODO appears to use the corrected name "BuildTraceEntryGoal".
src/libexpr/include/nix/expr/eval-settings.hh-387-387 (1)
387-387: ⚠️ Potential issue | 🟡 Minor
Fix malformed markdown in deprecation guidance string.

There is a stray backtick in the sentence; it currently renders awkwardly.

Proposed fix

```diff
-    Deprecated. Use [`lint-short-path-literals`](#conf-lint-short-path-literals)` = warn` instead.
+    Deprecated. Use [`lint-short-path-literals`](#conf-lint-short-path-literals) = `warn` instead.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libexpr/include/nix/expr/eval-settings.hh` at line 387, Fix the malformed Markdown in the deprecation guidance string in src/libexpr/include/nix/expr/eval-settings.hh by removing the stray backticks after the link and around the "= warn" text so it reads: "Deprecated. Use [`lint-short-path-literals`](#conf-lint-short-path-literals) = warn instead." Locate the deprecation string near the mention of lint-short-path-literals and update the punctuation accordingly.
src/libexpr/eval-settings.cc-18-22 (1)
18-22: ⚠️ Potential issue | 🟡 Minor
Handle unsupported append semantics explicitly instead of `assert`.

`appendOrSet()` should reject `append=true` with a normal error path rather than `assert(!append)`, which can abort in debug builds and be compiled out in release builds.

Proposed fix

```diff
 void DeprecatedWarnSetting::appendOrSet(bool newValue, bool append)
 {
-    assert(!append);
+    if (append)
+        throw Error("cannot append to deprecated boolean setting '%s'", name);
     assign(newValue);
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libexpr/eval-settings.cc` around lines 18 - 22, The function DeprecatedWarnSetting::appendOrSet currently uses assert(!append); replace this with explicit handling of the unsupported append semantic: if append is true, return a proper error path (e.g., throw a descriptive exception such as std::invalid_argument or std::runtime_error with a message like "append not supported for DeprecatedWarnSetting"), otherwise continue to call assign(newValue); ensure the chosen error mechanism matches the surrounding API/exception style used by other Setting classes so callers receive a recoverable error instead of an assert abort.doc/manual/source/protocols/json/schema/build-trace-entry-v2.yaml-15-19 (1)
15-19:⚠️ Potential issue | 🟡 MinorFix the version-history typo.
Line 15 says
Verision history; this should beVersion history.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/manual/source/protocols/json/schema/build-trace-entry-v2.yaml` around lines 15 - 19, Fix the typo in the YAML doc header: replace the string "Verision history" with the correct "Version history" in build-trace-entry-v2.yaml so the heading reads properly; locate the exact line that currently contains "Verision history" and update it to "Version history" (no other changes).doc/manual/source/protocols/nix-cache-info.md-11-13 (1)
11-13: ⚠️ Potential issue | 🟡 Minor

**Annotate the fenced examples with a language.**

All three blocks trigger MD040 on the new page. `text` works for each.

📝 Suggested fix

````diff
-```
+```text
 Key: value
@@
-```
+```text
 error: binary cache 'https://example.com' is for Nix stores with prefix '/nix/store', not '/home/user/nix/store'
@@
-```
+```text
 StoreDir: /nix/store
 WantMassQuery: 1
 Priority: 30
````

Also applies to: 27-29, 41-45

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/manual/source/protocols/nix-cache-info.md` around lines 11 - 13, annotate the fenced code blocks in the nix-cache-info.md examples with a language tag (use "text") so they no longer trigger MD040; specifically add the "text" info string to the opening fences of the blocks containing "Key: value", the error message starting "error: binary cache 'https://example.com' ...", and the block starting with "StoreDir: /nix/store" (the blocks at the three example locations noted in the comment).

doc/manual/source/protocols/nix32.md-9-11 (1)

9-11: ⚠️ Potential issue | 🟡 Minor

**Add a language tag to the alphabet block.**

This new page hits MD040 here. `text` is enough.

📝 Suggested fix

````diff
-```
+```text
 0 1 2 3 4 5 6 7 8 9 a b c d f g h i j k l m n p q r s v w x y z
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/manual/source/protocols/nix32.md` around lines 9 - 11, the code block containing the alphabet string "0 1 2 3 4 5 6 7 8 9 a b c d f g h i j k l m n p q r s v w x y z" needs a language tag to satisfy MD040; change the opening fence to use the "text" info string so the block reads as a fenced code block with the "text" language tag.
```cpp
auto tmpl = nix::absPath(nix::defaultTempDir() / "tests_nix-store.XXXXXX", nullptr, true);
nixDir = mkdtemp((char *) tmpl.c_str());
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "Looking for mkdtemp calls fed by c_str() casts..."
rg -nP 'mkdtemp\s*\(\s*\(\s*char\s*\*\s*\)\s*[^)]*c_str\s*\(\s*\)\s*\)' -C2
echo
echo "Looking for mkdtemp calls without nearby null-check (manual inspection needed)..."
rg -nP '\bmkdtemp\s*\(' -C3
```

Repository: DeterminateSystems/nix-src
Length of output: 1584

🏁 Script executed:

```bash
cat -n src/libstore-test-support/include/nix/store/tests/nix_api_store.hh | sed -n '40,70p'
```

Repository: DeterminateSystems/nix-src
Length of output: 1482

🏁 Script executed:

```bash
rg -n "std::filesystem::path.*absPath|absPath.*std::filesystem::path" -A2 -B2 --type cpp
```

Repository: DeterminateSystems/nix-src
Length of output: 6400
**Use a mutable buffer for `mkdtemp` instead of casting `path::c_str()`.**

`mkdtemp` mutates its input. Passing `(char *) tmpl.c_str()` casts away const from `std::filesystem::path::c_str()` and is undefined behavior. Additionally, `mkdtemp` can fail and return null; line 57 uses `nixDir` without checking for null, which would propagate the failure silently.
Proposed fix

```diff
- auto tmpl = nix::absPath(nix::defaultTempDir() / "tests_nix-store.XXXXXX", nullptr, true);
- nixDir = mkdtemp((char *) tmpl.c_str());
+ auto tmpl = nix::absPath(
+     nix::defaultTempDir() / "tests_nix-store.XXXXXX",
+     nullptr,
+     true).string();
+ std::vector<char> tmplBuf(tmpl.begin(), tmpl.end());
+ tmplBuf.push_back('\0');
+ char * created = mkdtemp(tmplBuf.data());
+ if (!created) throw std::runtime_error("mkdtemp failed");
+ nixDir = created;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```cpp
auto tmpl = nix::absPath(
    nix::defaultTempDir() / "tests_nix-store.XXXXXX",
    nullptr,
    true).string();
std::vector<char> tmplBuf(tmpl.begin(), tmpl.end());
tmplBuf.push_back('\0');
char * created = mkdtemp(tmplBuf.data());
if (!created) throw std::runtime_error("mkdtemp failed");
nixDir = created;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore-test-support/include/nix/store/tests/nix_api_store.hh` around
lines 53 - 54, The code passes tmpl.c_str() (from nix::absPath result) to
mkdtemp which mutates its argument and casts away const — instead create a
mutable buffer (e.g., std::string or std::vector<char>) from tmpl.string(),
ensure it is null-terminated, call mkdtemp with that buffer's data(), check the
returned pointer for nullptr and handle the error (log/throw/exit) before
assigning nixDir; update references to tmpl, nixDir, and the mkdtemp call in
this function accordingly.
```cpp
std::pair{
    "with-signature",
    [&] {
        auto r = simple;
        // FIXME actually sign properly
        r.signatures = {
            Signature{.keyName = "asdf", .sig = std::string(64, '\0')},
        };
        return r;
    }(),
```
🧩 Analysis chain
🏁 Script executed:

```bash
# Read the relevant section of the file with context
sed -n '60,85p' src/libstore-tests/realisation.cc
```

Repository: DeterminateSystems/nix-src
Length of output: 822

🏁 Script executed:

```bash
# Check the broader context to understand if the lambda is truly at namespace scope
sed -n '1,80p' src/libstore-tests/realisation.cc
```

Repository: DeterminateSystems/nix-src
Length of output: 2055

🏁 Script executed:

```bash
# Also check for any function scope this might be in
rg -B 20 "with-signature" src/libstore-tests/realisation.cc | head -50
```

Repository: DeterminateSystems/nix-src
Length of output: 616
🌐 Web query:
C++ lambda capture namespace scope ill-formed standard
💡 Result:
In the current C++ working draft, a lambda at namespace scope (i.e., whose innermost enclosing scope is a namespace scope, not a block scope) is ill-formed if its lambda-introducer contains either:

- a capture-default (`[=]`, `[&]`), or
- any simple-capture (e.g. `[x]`, `[&x]`, `[this]`, `[*this]`)

because the standard explicitly restricts those to lambdas whose innermost enclosing scope is a block scope, with limited exceptions for default member initializers and contract assertions:

“A lambda-expression shall not have a capture-default or simple-capture … unless its innermost enclosing scope is a block scope … or [the listed exceptions].” [1]

So, for example, at namespace scope:

```cpp
int x = 1;
auto a = [x]{ return x; };  // ill-formed (simple-capture at namespace scope)
auto b = [=]{ return x; };  // ill-formed (capture-default at namespace scope)
```

Note: this rule is about capture-default / simple-capture. An init-capture (e.g. `[y = 123]`) is not a simple-capture, so this specific prohibition does not forbid it. The key point is that you can’t capture enclosing local entities from namespace scope, because simple-captures/capture-defaults aren’t permitted there. [1]
Sources
[1] C++ working draft, [expr.prim.lambda.capture], p3 (captures restriction) https://eel.is/c++draft/expr.prim.lambda.capture
🌐 Web query:
C++ lambda with reference capture at namespace scope valid
💡 Result:
A lambda at namespace (global) scope may not use a capture-default (`[&]`, `[=]`) or any “simple-capture” (`[x]`, `[&x]`, `[this]`, etc.). Such a lambda is ill-formed unless the lambda’s innermost enclosing scope is a block scope (or it’s inside a default member initializer). [1]

So these are invalid at namespace scope:

```cpp
auto a = [&]{ };   // ill-formed
auto b = [&x]{ };  // ill-formed (also: x would have to be a local automatic anyway)
```

What is valid at namespace scope:

- No capture at all (and just refer to globals directly; globals don’t need capturing):

```cpp
int g = 0;
auto f = []{ return g; };  // OK
```

Non-local variables (static / namespace-scope) cannot be captured, but they can be used without capture. [2]

- Init-capture is allowed (the restriction is on capture-default / simple-capture, not init-capture), so you can create a reference member explicitly:

```cpp
int g = 0;
auto f = [&r = g]{ return ++r; };  // OK (r is a reference member bound to g)
```

Init-capture semantics (including by-reference init-capture) are defined separately from simple-capture. [2]
Sources:
[1] C++ working draft N4868, [expr.prim.lambda.capture]p3 (restriction on capture-default / simple-capture) (isocpp.org)
[2] cppreference: lambda captures (non-local variables cannot be captured; init-capture rules) (cppreference.com)
Citations:
- 1: https://isocpp.org/files/papers/N4868.pdf
- 2: https://www.cppreference.com/w/cpp/language/lambda.html
**Use a captureless lambda here.**

This IIFE is declared at namespace scope, so `[&]` is ill-formed per the C++ standard. `simple` is a namespace-scope variable and does not need to be captured; `[]` is sufficient and allows the file to compile.

🔧 Minimal fix

```diff
- [&] {
+ [] {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```cpp
std::pair{
    "with-signature",
    [] {
        auto r = simple;
        // FIXME actually sign properly
        r.signatures = {
            Signature{.keyName = "asdf", .sig = std::string(64, '\0')},
        };
        return r;
    }(),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore-tests/realisation.cc` around lines 68 - 77, The lambda used to
construct the "with-signature" test case incorrectly uses a captured lambda
`[&]` at namespace scope; change the lambda capture to a captureless `[]` so it
compiles—update the IIFE that returns `r` (the block that copies `simple`,
assigns `r.signatures = { Signature{.keyName = "asdf", .sig = std::string(64,
'\0')} }`, and returns `r`) to use `[] { ... }()` instead of `[&] { ... }()`.
```diff
 narInfo->compression = config.compression.to_string(); // FIXME: Make NarInfo use CompressionAlgo
 auto [fileHash, fileSize] = fileHashSink.finish();
 narInfo->fileHash = fileHash;
 narInfo->fileSize = fileSize;
 narInfo->url = "nar/" + narInfo->fileHash->to_string(HashFormat::Nix32, false) + ".nar"
-    + (config.compression == "xz" ? ".xz"
-       : config.compression == "bzip2" ? ".bz2"
-       : config.compression == "zstd" ? ".zst"
-       : config.compression == "lzip" ? ".lzip"
-       : config.compression == "lz4" ? ".lz4"
-       : config.compression == "br" ? ".br"
-       : "");
+    + (config.compression == CompressionAlgo::xz ? ".xz"
+       : config.compression == CompressionAlgo::bzip2 ? ".bz2"
+       : config.compression == CompressionAlgo::zstd ? ".zst"
+       : config.compression == CompressionAlgo::lzip ? ".lzip"
+       : config.compression == CompressionAlgo::lz4 ? ".lz4"
+       : config.compression == CompressionAlgo::brotli ? ".br"
+       : "");
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Description: Verify CompressionAlgo enum values and the "none" case
# Find CompressionAlgo enum definition
rg -n "enum.*CompressionAlgo|CompressionAlgo\s*\{" --type cpp --type-add 'cpp:*.hh' -A 15
# Check if there's a "none" variant
rg -n "CompressionAlgo::none|CompressionAlgo::None" --type cpp --type-add 'cpp:*.hh'
```

Repository: DeterminateSystems/nix-src
Length of output: 1812

🏁 Script executed:

```bash
#!/bin/bash
# Find the NIX_FOR_EACH_COMPRESSION_ALGO macro definition
rg -n "NIX_FOR_EACH_COMPRESSION_ALGO\(" --type cpp --type-add 'cpp:*.hh' -B 2 -A 20
# Also search for the actual compression algo values
rg -n "CompressionAlgo::" --type cpp --type-add 'cpp:*.hh' | head -20
```

Repository: DeterminateSystems/nix-src
Length of output: 8107
**Fix incomplete compression algorithm handling in URL generation.**

The code only explicitly handles 6 of the 13 defined compression algorithms (xz, bzip2, zstd, lzip, lz4, brotli). The default case silently produces an empty extension for unsupported methods including `CompressionAlgo::gzip`, `CompressionAlgo::lzma`, `CompressionAlgo::compress`, `CompressionAlgo::lrzip`, `CompressionAlgo::lzop`, and `CompressionAlgo::grzip`. This generates incorrect NAR file URLs with missing extensions.

While the FIXME on line 168 appropriately notes the type-safety issue (storing as string rather than enum), the more pressing concern is that the nested conditional at lines 173-179 should either handle all supported compression methods or explicitly reject unsupported ones, not silently treat them as uncompressed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/binary-cache-store.cc` around lines 168 - 179, The URL extension
generation silently omits extensions for several CompressionAlgo values; update
the block that sets narInfo->url (which uses config.compression and
narInfo->fileHash->to_string(HashFormat::Nix32, false)) to either map every
CompressionAlgo enum to the correct extension or explicitly fail (e.g.,
throw/log and return an error) for unsupported values instead of using the empty
default; ensure you cover gzip, lzma, compress, lrzip, lzop, grzip (and any
remaining enums) so narInfo->url is always correct or the operation aborts with
a clear error.
```cpp
void ChildEvents::pushChildEvent(ChildEOF event)
{
    if (childTimeout)
        return; // Already timed out, ignore
    assert(!childEOF);
    childEOF = std::move(event);
```
**Queue EOFs; a single optional breaks multi-FD children.**

src/libstore/build/worker.cc, lines 460-497, can call `handleEOF()` once per descriptor in `j->channels`. With `childEOF` as a single optional and `assert(!childEOF)`, closing stdout and stderr before the coroutine drains the first event asserts in debug builds and silently drops one EOF in release builds. EOFs need to be queued or tracked per-fd, just like `ChildOutput`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/build/goal.cc` around lines 24 - 29, ChildEvents::pushChildEvent
uses a single optional childEOF (and asserts it's empty) which drops or asserts
when multiple descriptors produce EOF; change storage to queue or per-FD
tracking (e.g., a std::deque<ChildEOF> or map keyed by fd similar to
ChildOutput) and remove the assert so each pushChildEvent(appending a ChildEOF)
always enqueues the EOF; update consumers (handleEOF / code that reads childEOF)
to drain and pop from the queue until empty, preserving order and allowing one
EOF per descriptor in j->channels to be handled without assertion or loss.
```cpp
struct curlFileTransfer : public FileTransfer
{
    CURLM * curlm = 0;
    const FileTransferSettings & settings;
```
**Avoid storing `FileTransferSettings` by reference in `curlFileTransfer`.**

`curlFileTransfer` keeps a reference to caller-owned settings, so `makeFileTransfer(settings)` can produce a transfer that outlives `settings` and dereferences a dangling object.

Suggested fix

```diff
-    const FileTransferSettings & settings;
+    FileTransferSettings settings;
@@
-    curlFileTransfer(const FileTransferSettings & settings)
-        : settings(settings)
+    curlFileTransfer(const FileTransferSettings & settings)
+        : settings(settings)
         , mt19937(rd())
```

Also applies to: 828-830, 1073-1075
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/filetransfer.cc` at line 77, The class curlFileTransfer
currently stores a caller-owned reference "const FileTransferSettings &
settings;", which can dangle when makeFileTransfer(settings) outlives the
original settings; change the member to own a copy (FileTransferSettings
settings;) and update curlFileTransfer's constructor/initializer to copy or move
the provided FileTransferSettings argument so the object owns its settings;
apply the same fix wherever a FileTransferSettings reference member is used
(e.g., other curlFileTransfer-like constructors/uses such as those flagged
around the other occurrences) to ensure no dangling references.
```cpp
///@file

#include "nix/util/signature/local-keys.hh"
#include "nix/util/compression-settings.hh"
```
**Build-blocking include path needs correction or export update.**

Line 4 includes `nix/util/compression-settings.hh`, and static analysis reports this header as missing. Please fix the include path or ensure the header is exported/available in the target include graph.
🧰 Tools
🪛 Clang (14.0.6)
[error] 4-4: 'nix/util/compression-settings.hh' file not found
(clang-diagnostic-error)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/include/nix/store/binary-cache-store.hh` at line 4, The include
"nix/util/compression-settings.hh" in binary-cache-store.hh is unresolved by the
build; either update the include to the correct header path/name or export the
header from the target so it is visible to this file. Locate the reference to
"nix/util/compression-settings.hh" in
src/libstore/include/nix/store/binary-cache-store.hh and (a) correct the include
to the actual header path provided by the project (or the correct filename) or
(b) add the header to the target's public includes / install/export list so the
compiler can find it during builds.
```cpp
Setting<AbsolutePath> secretKeyFile{
    this, "", "secret-key", "Path to the secret key used to sign the binary cache."};
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "== AbsolutePath definition and validation =="
rg -n --type=cpp '\b(class|struct)\s+AbsolutePath\b|using\s+AbsolutePath\b|typedef\s+.*AbsolutePath' src
rg -n --type=cpp 'AbsolutePath\s*\(|isAbsolute|must be absolute|absolute path|throw.*absolute' src/libutil src/libstore
echo "== Current Setting<AbsolutePath> usages/defaults =="
rg -n --type=cpp 'Setting<\s*AbsolutePath\s*>\s+\w+\s*\{' src
rg -n --type=cpp 'Setting<\s*AbsolutePath\s*>\s+\w+\s*\{[^}]*""' src
echo "== Optional AbsolutePath pattern =="
rg -n --type=cpp 'Setting<\s*std::optional<AbsolutePath>\s*>' src
```

Repository: DeterminateSystems/nix-src
Length of output: 2914
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "== parseAbsolutePath implementation =="
sed -n '486,530p' src/libutil/configuration.cc
echo -e "\n== Setting<AbsolutePath> declarations with full context =="
rg -n -A 2 'Setting<\s*AbsolutePath\s*>\s+\w+\s*\{' src/libstore/include/nix/store/binary-cache-store.hh
rg -n -A 2 'Setting<\s*AbsolutePath\s*>\s+\w+\s*\{' src/libstore/include/nix/store/filetransfer.hh | head -20
rg -n -A 2 'Setting<\s*AbsolutePath\s*>\s+\w+\s*\{' src/libstore/include/nix/store/common-ssh-store-config.hh
echo -e "\n== AbsolutePath constructor logic =="
sed -n '225,250p' src/libutil/include/nix/util/configuration.hh
```

Repository: DeterminateSystems/nix-src
Length of output: 2148
**Fix `Setting<AbsolutePath>` default value to use `std::optional` or a valid path.**

`Setting<AbsolutePath>` with a default of `""` will throw `UsageError("setting 'secret-key' is a path and paths cannot be empty")` during config initialization. The `parseAbsolutePath()` function explicitly rejects empty strings. Use `Setting<std::optional<AbsolutePath>>` with `std::nullopt` as the default to indicate an unset value, or provide a valid absolute path. This same issue affects `sshKey` at src/libstore/include/nix/store/common-ssh-store-config.hh:17.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/libstore/include/nix/store/binary-cache-store.hh` around lines 42 - 43,
The Setting<AbsolutePath> named secretKeyFile is initialized with an
empty-string default which parseAbsolutePath rejects; change it to use
Setting<std::optional<AbsolutePath>> secretKeyFile with default std::nullopt (or
supply a valid non-empty AbsolutePath) so an unset value is represented safely;
do the same fix for sshKey (Setting<AbsolutePath> sshKey ->
Setting<std::optional<AbsolutePath>> sshKey with std::nullopt) and update any
code that reads these settings to handle the optional (check has_value() /
value()).