
added a simple program to export files in .vdb format #148

Open
spyke7 wants to merge 23 commits into MDAnalysis:master from spyke7:add_openvdb

Conversation

@spyke7

@spyke7 spyke7 commented Dec 27, 2025

Hi @orbeckst
I have added OpenVDB.py inside gridData, which simply exports files in .vdb format. I have also added test_vdb.py inside tests, and it passes successfully.
Fixes #141

Required libraries:
openvdb

  • conda install -c conda-forge openvdb

There are many things that still need to be updated (docs, etc.), but I have provided the file and test so that you can review them and I can fix any problems. Please let me know if anything needs to be changed or updated.

@codecov

codecov bot commented Dec 27, 2025

Codecov Report

❌ Patch coverage is 90.19608% with 10 lines in your changes missing coverage. Please review.
✅ Project coverage is 88.23%. Comparing base (b29c1f4) to head (2c75469).

Files with missing lines   Patch %   Lines
gridData/OpenVDB.py        90.32%    5 Missing and 1 partial ⚠️
gridData/core.py           89.47%    3 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #148      +/-   ##
==========================================
+ Coverage   88.20%   88.23%   +0.02%     
==========================================
  Files           5        6       +1     
  Lines         814      884      +70     
  Branches      107      122      +15     
==========================================
+ Hits          718      780      +62     
- Misses         56       62       +6     
- Partials       40       42       +2     

☔ View full report in Codecov by Sentry.

@spyke7
Author

spyke7 commented Dec 27, 2025

@orbeckst, please review the OpenVDB.py file. After that, I will add some more tests covering all the missing parts.

@orbeckst
Member

orbeckst commented Dec 27, 2025 via email

Member

@orbeckst orbeckst left a comment

Thank you for your contribution. Before going further, can you please try your own code and demonstrate that it works? For instance, take some of the bundled test files such as 1jzv.ccp4 or nAChR_M2_water.plt, write it to OpenVDB, load it in blender, and show an image of the rendered density?

Once we know that it's working in principle, we'll need proper tests (you can look at PR #147 for a good example of minimal testing for writing functionality).

CHANGELOG Outdated
Comment on lines 24 to 26
Fixes

* Adding openVDB formats (Issue #141)
Member

Not a fix but an Enhancement – put it into the existing 1.1.0 section and add your name there.

Author

In the CHANGELOG, this PR and issue are in the 1.1.0 release, so should I add my name to the 1.1.0 release, or remove those lines and put them in the new section?

Member

Yes, now move it to the new section above since we released 1.1.0.

Comment on lines 183 to 188
for i in range(self.grid.shape[0]):
for j in range(self.grid.shape[1]):
for k in range(self.grid.shape[2]):
value = float(self.grid[i, j, k])
if abs(value) > threshold:
accessor.setValueOn((i, j, k), value)
Member

This looks really slow — iterating over a grid explicitly. For a start, you can find all cells above a threshold with numpy operations (np.abs(g) > threshold) and then ideally use it in a vectorized form to set the accessor.
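The vectorized approach suggested here can be sketched with plain numpy, without openvdb installed. The function name `sparse_voxels` is hypothetical; the real code would feed the resulting indices/values into the accessor (or use `copyFromArray`):

```python
import numpy as np

def sparse_voxels(grid, threshold):
    """Return (indices, values) for voxels whose magnitude exceeds threshold.

    Replaces the triple Python loop with a single vectorized pass:
    np.argwhere and boolean masking both iterate in C order, so the
    returned indices and values line up pairwise.
    """
    mask = np.abs(grid) > threshold
    indices = np.argwhere(mask)   # (N, 3) array of (i, j, k) index triples
    values = grid[mask]           # the matching voxel values, same order
    return indices, values

# Hypothetical example data, mimicking a mostly-empty density grid.
g = np.zeros((4, 4, 4), dtype=np.float32)
g[0, 0, 0] = -2.0
g[1, 2, 3] = 0.5
idx, vals = sparse_voxels(g, 0.1)
```

Setting each surviving voxel via `accessor.setValueOn(tuple(i), float(v))` in a loop over `zip(idx, vals)` already avoids visiting the (usually dominant) sub-threshold cells.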

@orbeckst orbeckst self-assigned this Jan 9, 2026
@spyke7 spyke7 requested a review from orbeckst January 18, 2026 06:48
@spyke7
Author

spyke7 commented Jan 18, 2026

Fixed the CHANGELOG and OpenVDB.py. I didn't get time to work on the Blender part due to exams. I will surely try to do it!

@spyke7
Author

spyke7 commented Jan 18, 2026

[Screenshots (125), (126), (128), (129)]

The first two are for nAChR_M2_water.vdb and the last two are for 1jzv.vdb.
Also, in OpenVDB.py the function should be transform.preTranslate, which I will fix with the new tests.
Can you please confirm that these are the correct renderings?
I can provide the .vdb files as well here.

@orbeckst
Member

Good that you're able to load something into Blender. From a first glance I don't recognize what I'd expect, but this may depend on how you render in Blender. As I already said on Discord: try to establish yourself what "correct" means. Load the original data in a program where you can reliably look at it. ChimeraX is probably the best for looking at densities; it can definitely read DX.

Btw, the M2 density should look similar to the blue "blobs" on the cover of https://sbcb.bioch.ox.ac.uk/users/oliver/download/Thesis/OB_thesis_2sided.pdf

@spyke7
Author

spyke7 commented Jan 19, 2026

[Screenshots (131), (132)]

The first one is for 1jzv.vdb and the second for nAChR_M2_water.vdb (shading/coloring not done yet).
I think the .vdb files generated by OpenVDB.py are now rendering correctly in Blender.

Can I proceed with the tests part?

@BradyAJohnston
Member

Mentioned in the Discord but also bringing it up here: in your current examples (most obvious with the pore) the axis is flipped, so that X is "up" compared to atomic coordinates, which would have Z as up.

@spyke7
Author

spyke7 commented Jan 19, 2026

Mentioned in the Discord but also bringing it up here: in your current examples (most obvious with the pore) the axis is flipped, so that X is "up" compared to atomic coordinates, which would have Z as up.

Thank you for the update! I will try to fix this.

@spyke7
Author

spyke7 commented Jan 19, 2026

[Screenshots (133), (136)]

I think this fixes the axis.

@BradyAJohnston
Member

Ideally we would see this alongside the atoms or density from MN as well, to double-check alignment, because you might also need to flip the X or Y axis.

@BradyAJohnston
Member

The scales might be different (larger or smaller by factors of 10), but you can just scale inside of Blender by that amount to align them; we do want to double-check alignment and axes, though.

@spyke7
Author

spyke7 commented Jan 20, 2026

Hi @BradyAJohnston
[Screenshots (138), (139)]

First of all, I added the MolecularNodes add-on as described at https://github.com/BradyAJohnston/MolecularNodes and imported 1jzv.pdb. After that I imported the .vdb file, and there was a difference in size between the two, so I scaled the .pdb up. The centers of both are the same, and I didn't flip any of the axes in the screenshots provided.

I wrote a small Blender Python script to compare the bounding boxes of the pdb and vdb objects to verify centroids, extents, and axis alignment:

import bpy
from mathutils import Vector

def bbox_world(obj):
    bbox = [obj.matrix_world @ Vector(c) for c in obj.bound_box]
    # mathutils.Vector expects a sequence, not a generator, so build tuples
    mn = Vector(tuple(min(p[i] for p in bbox) for i in range(3)))
    mx = Vector(tuple(max(p[i] for p in bbox) for i in range(3)))
    return mn, mx

def centroid_world(obj):
    mn, mx = bbox_world(obj)
    return (mn + mx) / 2.0

def size_world(obj):
    mn, mx = bbox_world(obj)
    return mx - mn

pdb = bpy.data.objects.get("1jzv.001")
vdb = bpy.data.objects.get("1jzv")

print("pdb centroid:", centroid_world(pdb))
print("pdb size:", size_world(pdb))
print("vdb centroid:", centroid_world(vdb))
print("vdb size:", size_world(vdb))

output -
pdb centroid: <Vector (7.6985, 23.7885, 76.0560)>
pdb size: <Vector (33.1410, 45.4170, 29.3960)>
vdb centroid: <Vector (8.7238, 23.4452, 76.7628)>
vdb size: <Vector (43.6190, 52.3429, 40.3425)>

The centroids are almost the same, so the data seems to be correctly aligned.

@BradyAJohnston BradyAJohnston self-assigned this Jan 20, 2026
@BradyAJohnston
Member

@spyke7 It's still not 100% clear from your screenshots - can you import with the pore instead as that is more clear? And when you are taking a screenshot it would be more helpful to have the imported density in the centre of the screen rather than mostly empty space.

@PardhavMaradani

Looks like you are attempting a standalone export to .vdb files from GridDataFormats. (If your end use case is to use this only within Blender, I'd strongly recommend using MolecularNodes to import various grid formats, as it already uses GridDataFormats internally and provides a lot of cool features such as varying iso values, different colors for positive and negative iso values, slicing along all three major axes, showing contours, centering, inverting, etc., both from the GUI and the API.) From a quick scan of the code, you seem to want to support both pyopenvdb (the older one) and openvdb (the newer one); note that there are some minor differences to take into account between them. You can take a look at the grid_to_vdb method from an earlier version of MN, which shows the differences and handles the export to .vdb within MolecularNodes. Hope this helps. Thanks

@BradyAJohnston
Member

If this functionality can be added directly to GDF then we can also take advantage of that in MN going forwards.

@PardhavMaradani

If this functionality can be added directly to GDF then we can also take advantage of that in MN going forwards.

Agreed. In addition to exporting to the .vdb format, we also add some additional metadata (currently, info about inversion and centering) that we use later. So as long as the metadata for Grids is carried over during export, we should be good. Thanks

@BradyAJohnston
Member

In addition to exporting to .vdb format, we also add some additional metadata

This is a good point and something to consider as well. As far as I am aware Blender / MN (and other 3D animation packages) might be the only ones who use .vdb as a format rather than any scientific packages / pipelines.

If there is anything out there that does take .vdb then we might want to consider if any relevant metadata should be saved. We might want to standardise on relevant metadata entries (we could either re-use from MN or update inside of MN to more general ones) so that GDF interactions with .vdb attempt to approach some kind of standard. This might be a larger question outside of scope for a simple read / write, but certainly functionality to pass in custom metadata like we do in MN would be ideal.

@orbeckst
Member

orbeckst commented Jan 29, 2026

Yes, sort of: you need to add explicit keyword arguments to the top-level export() method and then add the specific keywords to the _export_vdb() method; still keep the **kwargs, as this will swallow all other keywords that are not relevant for vdb.
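The forwarding pattern described here can be sketched as follows. The class and method bodies are illustrative only, not the actual gridData implementation; `scale`, `center`, and `dx_only_option` are placeholder names:

```python
# Sketch of the keyword-forwarding pattern: export() names the vdb-specific
# keywords explicitly, and _export_vdb() keeps **kwargs so that keywords meant
# for other formats are silently swallowed instead of raising TypeError.
class Grid:
    def export(self, filename, file_format="VDB", scale=1.0, center=False, **kwargs):
        if file_format.upper() == "VDB":
            return self._export_vdb(filename, scale=scale, center=center, **kwargs)
        raise NotImplementedError(file_format)

    def _export_vdb(self, filename, scale=1.0, center=False, **kwargs):
        # kwargs that only apply to other formats end up here and are ignored
        return {"filename": filename, "scale": scale, "center": center}

g = Grid()
# dx_only_option is irrelevant for vdb and gets absorbed by **kwargs
out = g.export("out.vdb", scale=2.0, center=True, dx_only_option=1)
```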

@orbeckst
Member

See #149 (comment) for a discussion for why we want to have explicit keywords.

@spyke7
Author

spyke7 commented Feb 2, 2026

@PardhavMaradani, can you please explain the center variable in more detail? I cannot understand what to do for this.

@PardhavMaradani

PardhavMaradani commented Feb 2, 2026

@PardhavMaradani, can you please explain the center variable in more detail? I cannot understand what to do for this.

The center param allows users to specify whether they want the imported volume object (in tools like Blender, etc.) to be centered around the world origin or not. This is a world-space transform that determines the positioning of the volume object in 3D space. When True, the center of the entire volume (box) is at the origin; this is useful for visualization cases that involve only the grid. When False, the volume object is positioned as per its origin info in the grid; this is useful for cases where one needs the grid data to align with a trajectory or molecule (like this example).

Here is an example of a density file (apbs.dx.gz) that is centered (left) and not (right):

[image: density-centered-vs-original]

Here is a front view of the above:

[image: density-centered-vs-original-fv]

Here is a snippet from the code I pointed out in a previous comment:

      if center:
          offset = -np.array(grid.shape) * 0.5 * gobj.delta
      else:
          offset = np.array(gobj.origin)

      # apply transformations
      vdb_grid.transform.preScale(np.array(gobj.delta) * world_scale)
      vdb_grid.transform.postTranslate(offset * world_scale)

If center is enabled, we translate based on the size of the grid so that the box center is at the origin. If not, we translate based on the origin info of the grid object. As you can see above, because this is a world transform, it goes hand in hand with the scale transform and the scale value. Hope this helps. Thanks
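The two offset conventions in the snippet can be checked with plain numpy. The shape, delta, and origin values below are made up for illustration, and world_scale is omitted:

```python
import numpy as np

# Hypothetical grid parameters to illustrate the two offset conventions.
shape = np.array([10, 20, 30])           # voxels per axis
delta = np.array([1.0, 1.0, 1.0])        # voxel size per axis
origin = np.array([5.0, -3.0, 12.0])     # grid origin in world space

# center=True: shift by half the box size so the box center lands at the origin
offset_centered = -shape * 0.5 * delta
# center=False: keep the grid's own origin so it aligns with a molecule/trajectory
offset_original = origin
```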

@PardhavMaradani

Thinking about this a bit more – @BradyAJohnston, given that the centering and scaling are just world transforms, do we really need to impose this upon GDF? We used openvdb for these transforms because we were exporting the file anyway and this was the easiest place to do it. Since our use of GDF, we now have a common way to access the underlying grid data in our density entity, so once we create the density object, we can just scale and translate our Blender object as needed. Our ask of GDF then reduces to having just metadata support in export. Your thoughts? Thanks

Member

@orbeckst orbeckst left a comment

Minor changes; please run black on the files to get all formatting consistent.

Regarding the transformations it actually looks reasonable to me, but I want to hear more from @BradyAJohnston and @PardhavMaradani .

assert tmpdir.join("auto.vdb").exists()

def test_write_vdb_with_metadata(self, tmpdir):
data = np.ones((3, 3, 3), dtype=np.float32)
Member

Could use grid345 and then add metadata.


class TestVDBWrite:
def test_write_vdb_from_grid(self, tmpdir, grid345):
data,g = grid345
Member

space after ,

got = acc.getValue((i, j, k))
assert got == pytest.approx(float(data[i, j, k]))

def test_write_vdb_default_grid_name(self, tmpdir):
Member

use fixture grid345?


voxel_size = grid_vdb.transform.voxelSize()

spacing=[voxel_size[0], voxel_size[1], voxel_size[2]]
Member

space around =

vdb_field.write(outfile)

grids, metadata = vdb.readAll(outfile)
assert grids[0].name == 'direct_test'
Member

assert shape/content


spacing = [voxel_size[0], voxel_size[1], voxel_size[2]]

assert_allclose(spacing, [1.0, 2.0, 3.0], rtol=1e-5)
Member

instead of [1.0, 2.0, 3.0] use the variable

Suggested change
assert_allclose(spacing, [1.0, 2.0, 3.0], rtol=1e-5)
assert_allclose(spacing, delta, rtol=1e-5)

Comment on lines 178 to 179
assert acc.getValue((2, 3, 4)) == pytest.approx(5.0)
assert acc.getValue((7, 8, 9)) == pytest.approx(10.0)
Member

instead of hard coding 5.0 and 10.0, access data

Suggested change
assert acc.getValue((2, 3, 4)) == pytest.approx(5.0)
assert acc.getValue((7, 8, 9)) == pytest.approx(10.0)
assert acc.getValue((2, 3, 4)) == pytest.approx(data[2, 3, 4])
assert acc.getValue((7, 8, 9)) == pytest.approx(data[7, 8, 9])

(and one could just make it a loop over index tuples if there were more than 2)

grid_vdb = grids[0]
acc = grid_vdb.getAccessor()

assert acc.getValue((1, 1, 1)) == pytest.approx(1.0)
Member

access data

]

vdb_grid.background = 0.0
vdb_grid.transform = vdb.createLinearTransform(matrix)
Member

I saw the tests (e.g. test_write_vdb_with_delta_matrix) that checked that reading the VDB file reproduces the original delta, and my understanding is that this works because of the transformations added here. I think it's quite important that we can roundtrip consistently, so I would leave the transformations as they are as a default. (Correct me if I am wrong, please.)

If MN/Blender needs to scale/shift then we should make this possible on top of the default.
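The roundtrip property can be illustrated without openvdb: building a diagonal index-to-world matrix from delta and reading the per-axis spacing back recovers delta. The 4x4 homogeneous layout below is an assumption mirroring a linear transform, not the library's internal representation:

```python
import numpy as np

delta = np.array([1.0, 2.0, 3.0])        # per-axis voxel spacing of the GDF grid
matrix = np.diag(np.append(delta, 1.0))  # 4x4 homogeneous scaling transform
spacing = np.diag(matrix)[:3]            # what a voxel-size query should report back

# roundtrip: the original delta survives the transform unchanged
assert np.allclose(spacing, delta)
```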

@spyke7
Author

spyke7 commented Feb 3, 2026

In this recent push, I have just applied the changes as requested in test_vdb.py. I will soon implement scale and center in core.py as well as OpenVDB.py.

@PardhavMaradani

Regarding the transformations it actually looks reasonable to me, but I want to hear more from @BradyAJohnston and @PardhavMaradani .

  • I am fine with the exporter having the center and scale support (i.e., ignore my previous comment). Not having it would add an additional step for MN. Given these don't change anything in index space, they should not impact the import-from-vdb support later.
  • The current transform in the code adds an additional offset of negative half delta - @spyke7, I presume you added this to make it cell-centered? I would leave the default of vertex-centered in OpenVDB as is. Blender already accounts for this (see Blender PR #138449). The current code would cause a tiny offset.
    • If a cell-centered transform is needed at all, it should probably be passed as an additional param and not hard-coded.
  • I see that the exporter only creates a float grid. Given that OpenVDB supports different grid types, maybe use the data type to determine the corresponding grid type?

@spyke7
Author

spyke7 commented Feb 3, 2026

  • The current transform in the code adds an additional offset of negative half delta - @spyke7, I presume you added this to make it cell-centered? I would leave the default of vertex-centered in OpenVDB as is. Blender already accounts for this (see Blender PR #138449). The current code would cause a tiny offset.

    • If a cell-centered transform is needed at all, it should probably be passed as an additional param and not hard-coded.
  • I see that the exporter only creates a float grid. Given that OpenVDB supports different grid types, maybe use the data type to determine the corresponding grid type?

Yeah, I added that 0.5 * delta offset because GDF uses a cell-centered convention. I will remove it, as it just creates an additional offset.

Member

@orbeckst orbeckst left a comment

This looks all really good to me.

From my perspective, we only need to decide if the transformations should stay.

EDIT: Only just saw #148 (comment) — so we're keeping the transformation but removing the offset.

@BradyAJohnston @PardhavMaradani want some way to tweak the exports. Could you please leave a (blocking) review describing what you need to have added so that MN can make best use of the functionality?

@BradyAJohnston
Member

Sorry, I should have time to look over this tomorrow. Adding the offset / centering on export is definitely something that could be handled by MN, but adding some transformation to the grid on export might still be useful more generally (or adding a transform to a Grid before export?). Will look over in more detail tomorrow.

@PardhavMaradani PardhavMaradani left a comment

Could you please leave a (blocking) review describing what you need to have added so that MN can make best use of the functionality?

Added what MN additionally needs (metadata support, apart from scale and center) and some general comments. Thanks

gridData/core.py Outdated
Comment on lines 717 to 719
grid_name = self.metadata.get('name', 'density')

vdb_field = OpenVDB.OpenVDBField(


Do these have to be passed as params to __init__? (They are also currently marked as required params.) This will have to be rewritten when import support is added, as that will not have any of these values. I think it is better to keep the interfaces clean from the beginning. You can take a look at the mrc support for how this is handled in both cases. The metadata will also need to be available in the exported file.


        grid_name = self.metadata.get('name', 'density')

        vdb_field = OpenVDB.OpenVDBField(
            grid=self.grid,
            origin=self.origin,
            delta=self.delta,
            name=grid_name
        )

The comment above is for the above lines...


"""

def __init__(self, grid, origin, delta, name="density", tolerance=1e-10):


See the previous comment about params here. About tolerance: shouldn't this be 0 by default so as to export the grid as is? How was this number determined, and is it a generic value? My understanding is that OpenVDB sets any values within this tolerance of the background to the background value. MN has a way to filter out the noise in a configurable way. I would try to avoid an arbitrary value as a default if possible. (If we knew more about why this was added, e.g. reducing the file size or noise seen after import, we could see if there is a better solution.)

Author

I added it mainly to reduce the file size; otherwise every value smaller than this would be stored. It can be removed, though. It's definitely better to avoid an arbitrary value.

Author

Well, should I remove the tolerance part completely from this?

For BoolGrid, copyFromArray() does not accept a tolerance, so prune(tolerance=False) would be needed there. So if I keep the tolerance part, then for FloatGrid we need to handle it differently. Any thoughts?

Member

Set tolerance=None by default and treat None as the case where nothing is done to the data. Users can change it if they want, and if tolerance is not None and tolerance != 0, then run the pruning.
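The suggested default handling boils down to a small predicate; a minimal sketch (the function name is hypothetical, and the real code would act on an openvdb grid via copyFromArray/prune):

```python
def should_prune(tolerance=None):
    """Return True only if the user explicitly asked for a non-zero tolerance.

    tolerance=None (the default) means the data is exported untouched;
    tolerance=0 is treated the same way, so there is no arbitrary default
    that silently modifies the grid.
    """
    return tolerance is not None and tolerance != 0
```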

if grid.ndim != 3:
raise ValueError(f"OpenVDB only supports 3D grids, got {grid.ndim}D")

self.grid = grid.astype(numpy.float32)


Is there a reason why everything is converted to float32? Both GDF and OpenVDB support different grid data types, so we should make this generic. We will lose precision when grids have double values and be less memory-efficient when we could use a half grid, etc.


"""

vdb_grid = vdb.FloatGrid()


Similar to a comment above. OpenVDB supports different grid types and we should probably create one that corresponds to the grid data type?

Author

openvdb.GridTypes gives this: [<class 'openvdb.FloatGrid'>, <class 'openvdb.BoolGrid'>, <class 'openvdb.Vec3SGrid'>]

So I guess FloatGrid, BoolGrid, and Vec3SGrid are the three available by default.

openvdb docs


From the same link above:

The Python module supports a fixed set of grid types. If the symbol PY_OPENVDB_WRAP_ALL_GRID_TYPES is defined at compile time, most of the grid types declared in openvdb.h are accessible in Python, otherwise only FloatGrid, BoolGrid and Vec3SGrid are accessible.

It looks like even the official module on conda-forge has only these fixed types:

>>> import openvdb                                                                                                               
>>> openvdb.LIBRARY_VERSION
(13, 0, 0)
>>> openvdb.GridTypes                                                                                                            
[<class 'openvdb.FloatGrid'>, <class 'openvdb.BoolGrid'>, <class 'openvdb.Vec3SGrid'>]                                           
>>> hasattr(openvdb, "Int32Grid") 
False

Blender packages its own version of openvdb and here is the output from Blender's Python Console:

>>> import openvdb
>>> openvdb.LIBRARY_VERSION
(12, 0, 0)
>>> openvdb.GridTypes
[<class 'openvdb.FloatGrid'>, <class 'openvdb.DoubleGrid'>, <class 'openvdb.BoolGrid'>, <class 'openvdb.Int32Grid'>, <class 'openvdb.Int64Grid'>, <class 'openvdb.Vec3SGrid'>, <class 'openvdb.Vec3IGrid'>, <class 'openvdb.Vec3DGrid'>, <class 'openvdb.PointDataGrid'>]
>>> hasattr(openvdb, "Int32Grid")
True

This is not ideal. MN has so far created the grids based on the corresponding data types (defaulting to float32 when there is no match), and this wasn't a problem because it runs within Blender. The hasattr checks are one way to test for availability. There could be files with very different data types (the test nAChR_M2_water.plt file is float64, for example). I will defer to others on how best to deal with this. Thanks

Member

We'll have to work with what's available and this seems to be FloatGrid, BoolGrid (and Vec3SGrid, which we don't care about because all our densities are scalar).

Let's add a check that selects BoolGrid if the input array is Python bool or numpy bool (numpy.bool_) and chooses FloatGrid for anything else. Add a note to the docs that limitations in OpenVDB can lead to loss of precision when input data is float64 (double), as FloatGrid stores float32 (single).
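The dtype check could look like the following sketch. It returns grid class names as strings so the example runs without openvdb installed; the real code would map these to openvdb.BoolGrid / openvdb.FloatGrid:

```python
import numpy as np

def vdb_grid_type(arr):
    """Pick the OpenVDB grid class for a numpy array's dtype (sketch).

    Boolean arrays map to BoolGrid; everything else falls back to FloatGrid,
    which stores float32, so float64 input loses precision on export.
    """
    if arr.dtype == np.bool_:
        return "BoolGrid"
    return "FloatGrid"
```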

Comment on lines 201 to 203
# this is an explicit linear transform using per-axis voxel sizes
# world = diag(delta) * index + corner_origin
corner_origin = self.origin - 0.5 * self.delta


Since only the official openvdb module is being supported (I don't see pyopenvdb imports), it is probably simpler to use the preScale and postTranslate transforms directly.

Author

done.

Author

Also, removed the corner_origin, as the matrix is not needed anymore.

Member

What's pyopenvdb – do you mean https://github.com/theNewFlesh/pyopenvdb ? It says it only supports Python 3.7 and 3.8 ... ???

Member

Inside of Blender, openvdb was previously available as a module via import pyopenvdb, but this changed in 4.5 to import openvdb.

vdb_grid.copyFromArray(self.grid, tolerance=self.tolerance)
vdb_grid.prune()

vdb.write(filename, grids=[vdb_grid])


MN would need metadata support in the exported file - either all the current grid metadata or something explicitly passed during the export.

Author

please check the metadata part as implemented in the recent push

@PardhavMaradani

This might be a bit too late, but some thoughts on the design (maybe for the future) after reviewing the current implementation:

The current OpenVDBField class seems to be export-focussed only; the openvdb grid object it creates, for example, is not accessible from outside. Instead, if the main class provided a way to take in an existing GDF Grid object and gave access to the corresponding openvdb grid representation, a lot of things could be simplified, and it would become a lot more extensible. It would allow users (say, those familiar with OpenVDB) to work on it as they please: to add additional transforms, metadata, or any of the many possible things we don't necessarily know about. The current scale/shift/metadata being added for MN could be dealt with in the same way. This would also not tie users down to requiring a GDF change for something they could easily extend themselves. For example, MN cannot use this exporter until there is a way to add metadata, but with access to the openvdb grid object it could have just added the metadata itself and not imposed this requirement on GDF. The main class could deal with the I/O (read/write) and any additional features and still plug into the existing core Grid framework for regular import/export. Thanks

@orbeckst
Member

orbeckst commented Feb 5, 2026

Are you looking for a workflow such as the following, @PardhavMaradani?

g = gdf.Grid("density.dx")

# make our VDB-like object that contains .vdb_grid as the VDB grid (eg FloatGrid)
gdf_vdb = gdf.OpenVDB.field(g)

# Then work with the VDB instance `vdb_grid`
gdf_vdb.vdb_grid.transform = createLinearTransform(matrix) 
...

If you provide code examples for how you would like to be able to use gdf then this would make things clear.

@PardhavMaradani

Are you looking for a workflow such as the following @PardhavMaradani

Hi @orbeckst , yes, something along those lines. Here are some examples:

Regular export from GDF:

g = gdf.Grid("density.dx")
g.export("density.vdb", ...)

The export options for the above are whatever is minimally required for basic functionality.

Regular import into GDF:

g = gdf.Grid("density.vdb")

For someone like MN or others who want to add additional transforms and metadata to the vdb grid:

g = gdf.Grid("density.dx")
vdb_grid = gdf.OpenVDB.grid_to_vdb(g)
vdb_grid.transform.preScale(...)
vdb_grid.transform.postTranslate(...)
vdb_grid["metadata_key_1"] = supported_type_value1
gdf.OpenVDB.write(vdb_grid, "/tmp/custom_grid.vdb", ...)

OpenVDB supports multiple grids within a single .vdb file. Here is a workflow for someone who wants to add multiple grids, which might not be supported natively by GDF:

import openvdb
g1 = gdf.Grid("density1.dx")
g2 = gdf.Grid("density2.ccp4")
vdb_grid1 = gdf.OpenVDB.grid_to_vdb(g1)
vdb_grid2 = gdf.OpenVDB.grid_to_vdb(g2)
openvdb.write("/tmp/multiple_grids.vdb", grids=[vdb_grid1, vdb_grid2])

The last two examples show how access to the openvdb grid can help with extensibility.

Based on the above use cases, gdf.OpenVDB could even be a simple wrapper module that provides the following functionality:

  • read(filename, grid_name=None, ...) -> gdf.Grid
  • write(grid: gdf.Grid | openvdb.GridBase, filename, grid_name=None, ...) -> None
  • grid_to_vdb(grid: gdf.Grid) -> openvdb.GridBase

The exporter could look something like:

    def _export_vdb(self, filename, ...):
        ...
        gdf.OpenVDB.write(self, filename, ...)

The importer could look like:

    def _load_vdb(self, filename, ...):
        ...
        g = gdf.OpenVDB.read(filename, ...)
        self._load(grid=g.grid, edges=g.edges, ...)

Others who require additional OpenVDB functionality can use the gdf.OpenVDB.grid_to_vdb to get access to an individual grid. Thanks

Comment on lines 59 to 62
from gridData import OpenVDB
vdb_field = OpenVDB.field('density')
vdb_field.populate(grid, origin, delta)
vdb_field.write('output.vdb')
Member


will need updating

Author

@spyke7 spyke7 Feb 7, 2026


please check

Comment on lines 93 to 96
vdb_field = OpenVDB.field('density')
vdb_field.populate(grid, origin, delta)
vdb_field.write('output.vdb')

Member


will need updating

@orbeckst
Member

orbeckst commented Feb 6, 2026

Thanks for the use cases @PardhavMaradani, that's very helpful to see.

We might be able to have gdf.OpenVDB contain simple functions and then introduce "converters" for API interoperability (similar to what MDAnalysis offers in its converters module). For instance,

g = gdf.Grid("density.dx") # -> gdf.Grid
v = g.convert_to("vdb")    # -> openvdb.GridBase

Once we have this functionality, export is just a matter of doing this conversion before calling openvdb.write(filename, grids=[v]). Power users can use v as they like.

We can then also consider extending the converters to MRC objects.

Eventually we could also add the functionality to drop OpenVDB or MRC objects into Grid() for full two-way API-level interoperability.

If we do a converter-style API then the gdf.OpenVDB module can be pretty light-weight because we don't really expect users to directly work with it. Does this sound like an interesting/clean way forward?
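To illustrate the converter-style API, here is a hypothetical sketch of a registry-based dispatch; convert_to comes from the snippet above, while register_converter and the stand-in Grid class are made up for illustration:

```python
# hypothetical converter registry backing Grid.convert_to("vdb"), etc.
_CONVERTERS = {}


def register_converter(fmt):
    """Decorator that registers a converter function for a format key."""
    def deco(func):
        _CONVERTERS[fmt.lower()] = func
        return func
    return deco


class Grid:
    """Stand-in for gridData.core.Grid with only the converter hook."""

    def __init__(self, grid, edges):
        self.grid = grid
        self.edges = edges

    def convert_to(self, fmt):
        try:
            convert = _CONVERTERS[fmt.lower()]
        except KeyError:
            raise ValueError(f"no converter registered for format {fmt!r}")
        return convert(self)


@register_converter("vdb")
def _to_vdb(grid):
    # real code would build and return an openvdb grid here;
    # this sketch returns a marker tuple instead
    return ("vdb", grid.grid, grid.edges)
```

Export then reduces to v = g.convert_to("vdb") followed by openvdb.write(filename, grids=[v]), and new targets (MRC, etc.) only need to register another converter.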

@orbeckst
Member

orbeckst commented Feb 6, 2026

@spyke7 I wanted to say that you're doing good work here! Don't be discouraged by the long discussions and the possibility that we want to change things again. You've demonstrated that the core of your code is working, now we can think about how this will best work long-term. Creating code that is actually used by people requires thought and discussion. The fact that we're having these discussions over your code means that this is something that we believe will have a long-term impact and is important enough to get right.

@spyke7
Author

spyke7 commented Feb 6, 2026

@spyke7 I wanted to say that you're doing good work here! Don't be discouraged by the long discussions and the possibility that we want to change things again. [...]

Yeah, of course, I will do my best to update the changes. And I can keep track of the messages and reviews! Thanks.

@PardhavMaradani

If we do a converter-style API then the gdf.OpenVDB module can be pretty light-weight because we don't really expect users to directly work with it. Does this sound like an interesting/clean way forward?

Using a generic convert_to approach is a great idea. It is definitely much cleaner and extensible. Power users who modify the vdb grid will need to write it back, but using the gdf.OpenVDB methods or openvdb directly (like in the last two use cases) in such cases should be ok.

@PardhavMaradani

We'll have to work with what's available and this seems to be FloatGrid, BoolGrid

Let's add a check that selects BoolGrid if the input array is a Python bool or numpy bool (numpy.bool_) and chooses FloatGrid for anything else.

Bringing this up from a review comment above to see if there is any possible way to address this as it seems a bit limiting.

openvdb is an optional dependency when installing GDF. When GDF is used in MN (in a Blender context), the openvdb package available there supports many more grid types; note that these are all standard OpenVDB grid types, not Blender-specific ones. Would it be bad design to check for these grid types dynamically with hasattr and use them when available? At a very high level, this seems no different from handling differences between module versions for backward compatibility etc., but I don't know if there are any pitfalls here.

We could even include an option to stick only to the default types, or better yet allow a type (numpy dtype) specification (in convert_to("vdb", ...)) to be more flexible. The most recent version of OpenVDB supports Half grids (float16), and I'm sure Blender will include it in its openvdb module going forward. We also have requirements in MN to support animation of densities, with each frame being a separate .vdb file. Given this, I just wanted to make sure we don't take an efficiency hit because of a limitation that can perhaps be addressed. Thanks
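The dynamic grid-type selection could be a small dtype lookup with a hasattr fallback. A sketch, assuming only that the installed openvdb module may or may not expose the extra grid classes; the dtype mapping and the select_grid_class name are illustrative:

```python
import numpy as np

# preferred openvdb grid class per numpy dtype; anything unmapped, or any
# class missing from the installed openvdb build, degrades to FloatGrid
_DTYPE_TO_GRID = {
    np.dtype(bool): "BoolGrid",
    np.dtype(np.float32): "FloatGrid",
    np.dtype(np.float64): "DoubleGrid",
    np.dtype(np.int32): "Int32Grid",
    np.dtype(np.int64): "Int64Grid",
}


def select_grid_class(vdb_module, dtype):
    """Pick the best available openvdb grid class for a numpy dtype."""
    wanted = _DTYPE_TO_GRID.get(np.dtype(dtype), "FloatGrid")
    if hasattr(vdb_module, wanted):
        return getattr(vdb_module, wanted)
    # build without this grid type (e.g. no DoubleGrid): fall back
    return vdb_module.FloatGrid
```

Because the check is against the passed-in module, the same code works unchanged with a minimal conda-forge build and with Blender's richer openvdb module.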

Development

Successfully merging this pull request may close these issues.

add OpenVDB format

4 participants