CI: testing python 3.14 and sphinx9 and docutils 0.22 #704
bsipocz wants to merge 14 commits into executablebooks:main
Conversation
OK, so we have some docutils 0.22 incompatibilities in the tests. I've just merged something for sphinx-tabs that may be a good enough workaround here, too, and I'm planning to get back to trying it next week.

It looks like this is done, except that some of the jobs are picking up a newer Pillow (12.x) or are not compatible with the one we pin (py3.14 and Pillow 11.0), so I'm a little bit stuck. @choldgraf, would you help out? Should I just xfail the few tests that are affected when it's not the correct Pillow version, or update the pin to 12.0.0 everywhere? If the latter, what is the way to update the expected outputs?
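The xfail option mentioned above could be keyed on the installed Pillow version. A minimal sketch (the helper name and the pinned version check are mine, illustrative only):

```python
# Hypothetical helper (name is mine): compare the installed Pillow version
# against the pin the reference images were generated with (11.0, per the CI pin).
def pillow_mismatch(installed: str, pinned: str = "11.0") -> bool:
    """True when the installed major.minor differs from the pinned one."""
    return installed.split(".")[:2] != pinned.split(".")[:2]

# Usage on an affected test (test name is illustrative):
#
# import PIL
# import pytest
#
# @pytest.mark.xfail(pillow_mismatch(PIL.__version__),
#                    reason="reference images were generated with Pillow 11.0")
# def test_image_output(): ...
```

Comparing only major.minor keeps patch releases (11.0.x) passing while still flagging 12.x.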
```python
if docutils.__version_info__ < (0, 22):
    data = data.replace('linenos="False"', 'linenos="0"')
    data = data.replace('nowrap="False"', 'nowrap="0"')
    data = data.replace('linenos="True"', 'linenos="1"')
    data = data.replace('internal="True"', 'internal="1"')
```
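Restated as a standalone helper for clarity (a sketch; the mapping mirrors the snippet under review, the names are mine):

```python
# docutils 0.22 serializes boolean attributes as "True"/"False", while older
# versions used "1"/"0". The expected XML files use the 0.22 spelling, and on
# older docutils the strings are mapped back before comparison.
_LEGACY_SPELLINGS = {
    'linenos="False"': 'linenos="0"',
    'nowrap="False"': 'nowrap="0"',
    'linenos="True"': 'linenos="1"',
    'internal="True"': 'internal="1"',
}

def downgrade_booleans(data: str) -> str:
    """Rewrite 0.22-style boolean attributes to the pre-0.22 spelling."""
    for new, old in _LEGACY_SPELLINGS.items():
        data = data.replace(new, old)
    return data
```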
Why not do it the other way around? Then you wouldn't need to change all the XML, and you could just remove this block once 0.22 is the lowest supported version.
Because this way the files are docutils 0.22 compatible, and this workaround can be removed once the older versions are dropped.

The problem with this PR no longer has anything to do with docutils; we are now stuck on the image tests.
Well, I'm not happy: I cannot reproduce the failing builds locally; the same version combo just passes all the tests. Could this be OSX vs Linux related?
I have a Mac; I can test it out this week. Would that help?
After changing the OS we're down to only 2 failing tests, as opposed to 6. I'm really on the verge of just ignoring those image outputs entirely; this is getting ridiculous.
IMO, ignore them and don't worry about it. This feels too defensive to be worth the hassle.

No, we need the opposite: a Linux box to generate the outputs, as I couldn't do that on my Mac (and frankly, some of these were always failing locally, so the opposite was kind of the status quo).
Got it. IMO we should not worry about this and just skip the tests on some platforms.
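The platform skip suggested here could be a reusable pytest marker. A sketch, assuming the reference images keep being generated on Linux CI (the marker and test names are mine):

```python
import sys

import pytest

# Sketch of a platform guard (marker name is mine): reference images are
# generated on Linux CI, and font rendering differs on other platforms.
requires_linux_images = pytest.mark.skipif(
    sys.platform != "linux",
    reason="reference images are generated on Linux; rendering differs elsewhere",
)

# Applied to an affected test (test name is illustrative):
#
# @requires_linux_images
# def test_image_output(): ...
```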
Yes, I'll do some skipping/rearranging, but it has to wait until tomorrow.
Yeah, this image stuff is a huge problem for our tests too, and we don't even test on multiple platforms. Matplotlib even has a …

I'd love to help if we get a conversation going in the community about how to fix this once and for all. (Maybe use a font renderer compiled to wasm in a fully reproducible setup, so we can use the same version for as long as we want and get pixel-perfect results.)
@flying-sheep - Well, other projects do use pytest-mpl, which gives the usual relative and absolute tolerance options, but I don't immediately see how that could be plugged into this package, and I certainly don't have enough time to make it happen.
pytest-mpl just wraps the APIs I mentioned; the tolerance options come from them. But that's exactly what I've been talking about: tolerance isn't very meaningful when everything shifting around by a pixel every few months forces you to bump the tolerance again and again until the test is meaningless. I don't think we can do anything here; just shouting out in case someone has a rigorous solution.
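For a sense of why the tolerance bumps snowball: such image comparisons typically score a root-mean-square pixel difference, and a one-pixel geometric shift of a sharp edge already produces a large score. A toy sketch (pure Python, no imaging dependency; the data is made up):

```python
import math

def rms_diff(a: list, b: list) -> float:
    """Root-mean-square difference between two equal-length pixel arrays."""
    assert len(a) == len(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# In a real 2D image, shifting a sharp edge by one pixel changes a whole
# column of values, so even tiny geometric drift between library versions
# blows past a tight tolerance; here one row of a toy edge illustrates it.
edge = [0, 0, 0, 255, 255, 255]
shifted = [0, 0, 255, 255, 255, 255]  # same edge, one pixel earlier
```

A single changed pixel out of six already yields an RMS around 104 on a 0-255 scale, which is why a fixed tolerance keeps needing to be raised as rendering drifts.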
This is to close #680, plus other cleanups.