
fix: opt-in cursor-based pagination for playlists and folders #403

Open
rattboi wants to merge 1 commit into EbbLabs:main from rattboi:rattboi/playlist-pagination-cursor

Conversation


@rattboi rattboi commented Apr 13, 2026

The v2 my-collection/playlists/folders endpoint uses cursor-based pagination, but playlists() and playlist_folders() were using offset-based requests via map_request(). This caused all pages to return the same first 50 results, since the offset parameter is silently ignored by the API.

Note: I believe this means some of the existing tests that exercise offset-based calls to playlist() are passing in situations where they shouldn't. I left those tests as-is for now, but added one specifically to test cursor-based pagination.

I added a new defaulted parameter at the end of the function arguments to avoid breaking the existing API interfaces. Unfortunately, that meant hiding a little state in the class to remember the previous cursor from one pagination call to the next.

Switched to a raw request + map_json to preserve the cursor from the JSON response. Added an optional cursor parameter to playlists() and playlist_folders(). Rewrote playlists_paginated() to loop using the cursor instead of doing parallel offset-based fetching via get_items(). Unfortunately, because each call returns the next cursor, I don't believe a parallel fetch is possible anymore.
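As a rough sketch of what that loop ends up looking like (the function names and cursor handling here are assumptions based on the description above, not the actual diff):

```python
# Minimal sketch of cursor-based pagination: each response carries a
# `cursor` that must be fed into the next request, so pages have to be
# fetched sequentially rather than in parallel.

def fetch_page(cursor=None, limit=50, _data=tuple(range(120))):
    """Stand-in for the raw v2 endpoint: returns (items, next_cursor)."""
    start = int(cursor) if cursor is not None else 0
    items = list(_data[start : start + limit])
    next_cursor = str(start + limit) if start + limit < len(_data) else None
    return items, next_cursor

def playlists_paginated(limit=50):
    """Loop until the API stops returning a cursor."""
    results, cursor = [], None
    while True:
        items, cursor = fetch_page(cursor=cursor, limit=limit)
        results.extend(items)
        if cursor is None:
            return results

print(len(playlists_paginated()))  # 120 items gathered over three sequential pages
```

Because page N+1's request depends on page N's response, there is no way to issue the requests concurrently, which is what forces dropping the get_items() parallel fetch.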

Collaborator

tehkillerbee commented Apr 13, 2026

The "my-collection/playlist/folders" endpoint still accepts offset and limit as parameters, so I am surprised that it no longer works.

So you have confirmed that the API silently ignores these parameters and instead relies on cursor-based pagination? Will verify this later today.

EDIT: Thanks for the PR btw 👍

Author

rattboi commented Apr 13, 2026

I ran this test against the existing implementation (no cursor changes):

pytest tests/test_user.py::test_get_user_playlists_paginated                                                                                         
============================================================================================== test session starts ==============================================================================================
platform linux -- Python 3.13.12, pytest-7.4.4, pluggy-1.6.0
rootdir: /home/rattboi/code/mopidy/python-tidal
plugins: mock-3.15.1
collected 1 item                                                                                                                                                                                                

tests/test_user.py F                                                                                                                                                                                      [100%]

=================================================================================================== FAILURES ====================================================================================================
_______________________________________________________________________________________ test_get_user_playlists_paginated _______________________________________________________________________________________

session = <tidalapi.session.Session object at 0x7f0ead3a27b0>

    def test_get_user_playlists_paginated(session):
        expected_count = session.user.favorites.get_playlists_count()
        all_playlists = session.user.favorites.playlists_paginated()
        assert len(all_playlists) == expected_count
        unique_ids = set(x.id for x in all_playlists)
>       assert len(unique_ids) == expected_count
E       AssertionError: assert 50 == 189
E        +  where 50 = len({'010d319a-09ae-4f60-b334-3a316ec28916', '0aebe2ca-00be-4b5c-8796-8dad8320f7ab', '0afad4b2-b7f0-4832-b27d-aa4ddfbbcc6e...e844-31da-48bc-ae13-d0d9d641b9cf', '185c31e2-7e1f-4a0d-a42a-568a625f7150', '20e926e4-85ca-4859-9688-ef2f0f5fbd4f', ...})

tests/test_user.py:71: AssertionError
=============================================================================================== warnings summary ================================================================================================
tests/test_user.py::test_get_user_playlists_paginated
  /home/rattboi/code/mopidy/python-tidal/tidalapi/session.py:749: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
    self.expiry_time = datetime.datetime.utcnow() + datetime.timedelta(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============================================================================================ short test summary info ============================================================================================
FAILED tests/test_user.py::test_get_user_playlists_paginated - AssertionError: assert 50 == 189

Author

rattboi commented Apr 13, 2026

A thought I just had about testing these cases going forward:

If limit works as expected, perhaps the tests should use small limits just to exercise pagination, like limit=1. That would let the integration tests get by with adding/removing only one or two playlists.

To play well with get_items, you'd need to be able to pass through a chunk_size though, for the offset-based APIs.
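Something along these lines is what I have in mind (paginate, fake_fetch, and the chunk_size pass-through are all hypothetical names, just to illustrate the shape of such a test):

```python
# Hypothetical test shape: with a tiny chunk_size, only two playlists are
# enough to force multiple pages, and the uniqueness check catches an API
# that ignores pagination parameters.

def paginate(fetch, chunk_size=1):
    """Generic cursor loop that passes chunk_size through to each request."""
    items, cursor = [], None
    while True:
        page, cursor = fetch(cursor=cursor, limit=chunk_size)
        items.extend(page)
        if cursor is None:
            return items

def fake_fetch(cursor=None, limit=1, _ids=("a", "b")):
    """Stand-in for a correctly paginating endpoint with two playlists."""
    start = int(cursor or 0)
    page = list(_ids[start : start + limit])
    nxt = str(start + limit) if start + limit < len(_ids) else None
    return page, nxt

all_ids = paginate(fake_fetch, chunk_size=1)
assert all_ids == ["a", "b"]              # every playlist returned exactly once
assert len(set(all_ids)) == len(all_ids)  # unique IDs, not just a matching count
```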

Collaborator

tehkillerbee commented Apr 15, 2026

since the offset parameter is silently ignored by the API.

I just tested this. The limit / offset logic still works as intended and is not ignored. In my case, I have 52 playlists and all of them are returned - with pagination. I did try to change the chunk_size as an experiment, and it also works fine with smaller chunk sizes. But I believe they are not allowed to be larger than 50.

So I am not sure how this issue occurs for you.

Author

rattboi commented Apr 15, 2026

I'm really confused how we could be getting two different views on this same thing. Can you try my test I pasted above?

    def test_get_user_playlists_paginated(session):
        expected_count = session.user.favorites.get_playlists_count()
        all_playlists = session.user.favorites.playlists_paginated()
        assert len(all_playlists) == expected_count
        unique_ids = set(x.id for x in all_playlists) 
        assert len(unique_ids) == expected_count

In your case, I would expect it to show 50 != 52.

Author

rattboi commented Apr 15, 2026

To be very specific, with the current implementation for pagination, I would expect you to get 52 results back.

The thing is: items 51 and 52 in the list are repeats of items 1 and 2.
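Sketching that failure mode, assuming the endpoint ignores offset and the caller keeps only the two remaining items it still expects from the second page (my guess at why exactly 52 come back):

```python
# Failure mode: an API that silently ignores `offset` makes every page
# identical, so 52 playlists come back as 50 + 2 repeats of items 1 and 2.

def broken_fetch(offset=0, limit=50, _ids=tuple(f"pl{i}" for i in range(52))):
    return list(_ids[:limit])  # `offset` is ignored, mimicking the v2 endpoint

page1 = broken_fetch(offset=0)
page2 = broken_fetch(offset=50)[:2]  # caller keeps only the 2 items it expects
combined = page1 + page2

print(len(combined), len(set(combined)))  # 52 total, but only 50 unique
assert combined[50:] == combined[:2]      # items 51-52 repeat items 1-2
```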

@tehkillerbee
Collaborator

To be very specific, with the current implementation for pagination, I would expect you to get 52 results back.

The thing is: items 51 and 52 in the list are repeats of items 1 and 2.

Yes, I see what you mean now. It seems the offset is simply ignored and the same items are returned yet again.

Since the parallel fetch will not work until the offset field is fixed (if it will ever be), it might be a better idea to just remove the offset field and the pagination altogether (we can always add it back if tidal fixes the endpoint).

Integration tests should of course catch these kinds of issues in the future, by checking for unique IDs and not just the count, as you also point out.
