How to use GitHub
- Please use the 👍 reaction to show that you are interested in the same feature.
- Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
- Subscribe to receive notifications on status change and new comments.
Feature request
Which Nextcloud Version are you currently using
33.0.1
Is your feature request related to a problem? Please describe.
It is noted in #218 (comment) that querying the storage statistics takes a long time. There is a background job that runs the query on an interval (three hours by default, per the cited link). The query seems to slow down my entire Nextcloud instance.
Describe the solution you'd like
An option somewhere to disable storage statistics, while still being able to access all other (non-expensive) server info.
Describe alternatives you've considered
It might be possible to refactor this functionality so that the results of the last query are cached, and the background job can just update the cache. This way, the server info generation should not take a long time since it should never trigger the expensive query. The stats object could add a last_run key to reflect how up-to-date the stats are:
```json
{
    "num_users": 2,
    "num_disabled_users": 0,
    "num_files": 2462162,
    "num_storages": 3,
    "num_storages_local": 1,
    "num_storages_home": 2,
    "num_storages_other": 0,
    "size_appdata_storage": -1,
    "num_files_appdata": 1927695,
    "last_run": "2026-03-27T00:35:01+00:00" // this could be added
}
```
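To make the idea concrete, here is a minimal sketch of the caching approach described above. It is illustrative Python, not Nextcloud's actual (PHP) code, and all names (`CACHE`, `expensive_storage_query`, `background_job_run`, `serverinfo_storage_stats`) are hypothetical: the point is that the expensive filecache query is only ever called from the background job, while the serverinfo endpoint reads the cached result and its `last_run` timestamp.

```python
import json
from datetime import datetime, timezone

# Stand-in for Nextcloud's cache/appconfig; all names here are
# hypothetical and only illustrate the proposed control flow.
CACHE = {}

def expensive_storage_query():
    # Placeholder for the slow query over the file cache tables.
    return {"num_files": 2462162, "num_storages": 3}

def background_job_run():
    # Runs on the background-job interval (~3 h by default) and is
    # the ONLY caller of the expensive query.
    stats = expensive_storage_query()
    stats["last_run"] = datetime.now(timezone.utc).isoformat()
    CACHE["storage_stats"] = json.dumps(stats)

def serverinfo_storage_stats():
    # Cheap read used when the serverinfo endpoint is requested;
    # never triggers the expensive query inline.
    raw = CACHE.get("storage_stats")
    if raw is None:
        # No run yet: return a sentinel instead of querying inline.
        return {"last_run": None}
    return json.loads(raw)
```

With this shape, a monitoring request that arrives while the job is running still gets the previous cached result instantly, and `last_run` tells the consumer how stale it is.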
Additional context
It's possible I might be wrong about some details of what's happening, but I am experiencing slowdowns of my Nextcloud instance that appear in my uptime monitoring as random failures due to HTTP timeout.
Scenario:
- Uptime Kuma requests https://cloud.trwnh.com/ocs/v2.php/apps/serverinfo/api/v1/info?format=json every 5 minutes. The request has a timeout of 48 seconds.
- Every once in a while (1-3 times per day, at inconsistent times of day), I get notified that the Nextcloud serverinfo healthcheck timed out.
- I have ruled out most other potential causes of slowdown -- no external storages, previews completely disabled, and so on. Running occ background-job:list shows the last_run timestamp of each background job, and whenever the timeouts occur, no other suspicious jobs are running within a nearby time window.
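For reproducing the symptom outside Uptime Kuma, the healthcheck can be approximated with a small probe that fetches the endpoint under the same 48-second timeout and reports the latency. This is a hedged sketch (the `probe` helper is hypothetical, not part of any monitoring tool):

```python
import time
import urllib.request

def probe(url, timeout=48.0):
    # Fetch the serverinfo endpoint once, mimicking the monitor's
    # timeout, and return (status, elapsed_seconds).
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
        return ("up", time.monotonic() - start)
    except Exception as exc:
        return ("down: %s" % exc, time.monotonic() - start)
```

Running such a probe every few minutes and logging the elapsed time would show whether the slow responses line up with the serverinfo background job's last_run timestamps.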