Add glTF draco support #2591
Conversation
Thanks, from a first look it seems solid.
We have the formatter configs here (they should work with Eclipse too). But don't worry about that. Is it ok if I push some changes to your branch for the gradle dependency and to fix the formatting?
Yes. About the GL constants: I can change/inline them. (Spoiler: That's the wrong value there 🙂 there's a reason why I once created https://javagl.github.io/GLConstantsTranslator/GLConstantsTranslator.html , but I think it's better to not have to use it.) Do you think that it could make sense to define the required constants as dedicated constants?
Yes, that or a dedicated "constants" class inside the loader package would be a good solution.
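To sketch what such a dedicated constants class could look like (the class name and the selection of constants here are assumptions, not code from this PR; the values are the standard OpenGL type enums that glTF uses as accessor componentType values):

```java
/**
 * Hypothetical sketch of a dedicated constants class for the loader package.
 * The glTF accessor componentType values are the standard OpenGL type enums.
 */
final class GltfConstants {

    // glTF accessor componentType values (identical to the GL type constants)
    static final int GL_BYTE           = 5120; // 0x1400
    static final int GL_UNSIGNED_BYTE  = 5121; // 0x1401
    static final int GL_SHORT          = 5122; // 0x1402
    static final int GL_UNSIGNED_SHORT = 5123; // 0x1403
    static final int GL_UNSIGNED_INT   = 5125; // 0x1405
    static final int GL_FLOAT          = 5126; // 0x1406

    private GltfConstants() {
        // Prevent instantiation
    }
}
```

Keeping the decimal values with the hex equivalents in comments makes them easy to cross-check against the glTF schema and the GL headers.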
I made some changes. The IDE should pick the formatter config up automatically.
/**
 * The default log level
 */
private static final Level level = Level.INFO;
REMINDER: probably should set this to FINE or lower
Yes - it is pulled out into a constant (as opposed to using logger.fine... everywhere) exactly to have it be INFO during test and development, and be able to quickly switch it to FINE later.
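The pattern being discussed, a single constant that all log statements reference so the level can be flipped in one place, could be sketched like this (a minimal standalone example with made-up names, not the PR code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * Sketch of the "single log level constant" pattern (illustrative only):
 * every log statement uses this one constant, so switching from INFO
 * (development) to FINE (release) is a one-line change.
 */
final class DracoLoggingSketch {

    private static final Logger logger = Logger.getLogger(DracoLoggingSketch.class.getName());

    /** The default log level - change to Level.FINE before merging */
    private static final Level level = Level.INFO;

    /** Exposes the configured level, e.g. for tests. */
    static Level currentLevel() {
        return level;
    }

    /** Logs a message at the shared level. */
    static void log(String message) {
        logger.log(level, message);
    }
}
```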
I don't know if this is related to the other issues, but the sample "BarramundiFish" seems to have half of the mesh (or texture?) offset compared to the non-Draco-compressed version.
 * The attribute data
 * @return The vertex buffer
 */
private static VertexBuffer createFloatAttributeVertexBuffer(JsonObject accessor,
Should this handle normalization too? Maybe it could repurpose GltfLoader.readAccessorData, made public static and moved into GltfUtils?
The Draco spec itself does not explicitly mention normalization. And I think that there is no asset where a normalized accessor was draco-encoded. But I can try to create some test data and see what happens.
A bit more generally: There might be snippets of functionality that overlap with GltfLoader or GltfUtils, or where minor changes on either side would allow functions to be used on the other side. I try to avoid redundancies, but don't have a perfect overview of all existing functions yet.
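Since the comment above touches on normalized accessors: the glTF specification defines fixed conversion rules per component type. A minimal, self-contained sketch of those rules (class and method names are made up for illustration; this is not code from the PR):

```java
/**
 * Sketch of the conversion rules that the glTF specification defines for
 * "normalized" accessors (names here are illustrative, not from the PR).
 */
final class AccessorNormalization {

    /** Converts a normalized unsigned 8-bit value to [0.0, 1.0]. */
    static float fromUnsignedByte(byte b) {
        return (b & 0xFF) / 255.0f;
    }

    /** Converts a normalized unsigned 16-bit value to [0.0, 1.0]. */
    static float fromUnsignedShort(short s) {
        return (s & 0xFFFF) / 65535.0f;
    }

    /** Converts a normalized signed 8-bit value to [-1.0, 1.0]. */
    static float fromByte(byte b) {
        // max() clamps the asymmetric minimum (-128/127) to exactly -1.0
        return Math.max(b / 127.0f, -1.0f);
    }

    /** Converts a normalized signed 16-bit value to [-1.0, 1.0]. */
    static float fromShort(short s) {
        return Math.max(s / 32767.0f, -1.0f);
    }

    private AccessorNormalization() {
    }
}
```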
Hah! 💡 Found the broken part: At https://github.com/openize-com/openize-drako-java/blob/4d99a762fbd2a167922bf3e27535b5651e114a30/src/utils/dev/fileformat/drako/Unsafe.java#L159 it is only decoding the first element of the float array (apparently, the quantization offsets here). When changing this line to decode all elements, it works. That's good. It's a trivial fix. As mentioned in that issue: I could open a PR for that. But maybe there will be further feedback in that issue, so that we can sort out whether the other issues can also be fixed quickly. (BTW: I'll probably do another pass and further work on this PR in the next days, e.g. during the weekend.)
@riccardobl I'm looking at that fork with a bit of scrutiny. It is changed completely (78 changed files). This fork can never be synced back to the original repo. And the actual change, literally a one-line fix, is buried among them.
Nice! So I've forked it at jMonkeyEngine/openize-drako-java, cleaned it up a bit, applied the fish-patch, and redeployed it to the GitHub registry. This seems to have also fixed the issue with the "Lantern" sample. I've updated this PR to use the forked artifact (hosted on a GitHub registry proxy without authentication); we are likely going to deploy it to Maven Central later. If you want to test with a locally built artifact, you can just build and install it locally and then replace these 3 lines accordingly.
^ There was a race condition in the comments. Again: I assume that this is a workaround for now, and maybe we can find a less disruptive solution here.
🤣
The original code is transpiled, and it was going to pull in too many dependencies on nio, javax and other stuff at compile time, likely breaking on transpilation to JavaScript etc. Most of the changes are just unused classes, dead code, and random/wrong interface usage being removed, so I expect merging upstream changes shouldn't be too hard. But I wouldn't expect a transpiled library to be maintained, tbh.
Yeah. I have seen some of the code, and ... it raised a few question marks for me. But let's skip the details for now, and say that it's good that there is a pure-Java solution for decoding Draco. How exactly that code was created is also not clear; there apparently is an auto-generated part. However, we could speculate and argue back and forth, but let's see whether there is any feedback in the issue, and what could be a sensible path forward based on that.
There is an "index file" at (Not in the scope of this PR, though...)
It looks like this inserted TABS in the
There are currently 16 appearances of (I'll probably post further updates a bit later today)
We usually track problems through GitHub issues. The TODO/FIXME comments are only meant to highlight areas that may need additional care or contain hacky or incomplete behavior, but which either don’t depend on us or are not inherently broken in their current state.
Good idea. This sounds like something better suited for a standalone model tester. We try to keep the examples fairly small, since people sometimes use them as reference.
The issue is likely on my side. VS Code didn't replace the tabs, so I might have some conflicting settings, or it doesn't like mixing tabs and spaces 🤷♂️ I'll investigate.
Concerning the comment above:
As I said: It may have to handle normalization. But I had a look at how normalization is currently handled:
The whole call chain is pretty convoluted, and assumes certain structures at every point.
However: I just added some functionality to dequantize the data. But... unfortunately, there seems to be another issue with the decoder. Here is a test asset that I created:
unitSquare11x11_unsignedShortTexCoords-draco.zip
Trying to load it will go through the path that does the dequantization, but given that the input data is all zeros, there is no reasonable way to actually test this. I added a comment at openize-com/openize-drako-java#5 (comment) with additional details.
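For context on the dequantization mentioned above: Draco-style quantization maps each component to an integer in [0, 2^bits - 1], and dequantization reverses this using a per-attribute minimum and range. A minimal sketch of that transform (illustrative names, not the PR code):

```java
/**
 * Sketch of a typical Draco-style dequantization: a quantized integer q in
 * [0, 2^bits - 1] is mapped back to minValue + q * range / (2^bits - 1).
 * (Illustrative only - not the code from this PR.)
 */
final class DequantizationSketch {

    static float dequantize(int q, float minValue, float range, int quantizationBits) {
        // The largest representable quantized value for the given bit count
        int maxQuantized = (1 << quantizationBits) - 1;
        return minValue + (q * range) / maxQuantized;
    }

    private DequantizationSketch() {
    }
}
```

For example, with minValue = -1, range = 2, and 8 quantization bits, q = 0 maps back to -1.0 and q = 255 maps back to 1.0.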
BTW: I locally changed the test class to add a small helper:

private void loadModelSample(String name, String type) {
    String path = "Models/" + name;
    String ext = "gltf";
    switch (type) {
        case "draco":
            path += "/glTF-Draco/";
            ext = "gltf";
            break;
        case "glb":
            path += "/glTF-Binary/";
            ext = "glb";
            break;
        default:
            path += "/glTF/";
            ext = "gltf";
            break;
    }
    path += name + "." + ext;
    loadModeFromPath(path);
}

private void loadModeFromPath(String path) {
    Spatial s = loadModel(path, new Vector3f(0, 0, 0), 1f);
    ...
}

...that can be called with the model name and the desired variant.
A draft for fixing the issues with the openize-drako-java library is in the works. (BTW: @riccardobl They regenerated the auto-generated files with two pull requests yesterday, so your fork is pretty much obsolete. They also changed the package name (which would mean that it's a different library now, or at least a new major version number...). But let's wait for the dust to settle here...)
The change does not yet fix the skinning issue, but it's not unlikely that I'm doing something wrong when reading the draco-decoded data and passing it to jME - will continue to investigate (some time next week, probably...)
Right now, it looks like the issues from openize-com/openize-drako-java#5 have been resolved with the latest (1.4.4) release. It may look a bit underwhelming, but ... everybody who knows the drill knows why I'm posting this here:
But ... (and that's a huge "but") the current state for creating this is probably a bit hacky. I'll skip a few details, but some of this revolves around one particular call. The point is: Trying to understand what is happening there, and why, could involve a considerable amount of reverse engineering. Or... I could simply accept that ~"I have to call that 🤷♂️". So now I called it somewhere in the draco decoding, and added some markers.
As already mentioned in the issue: There certainly is some "room for improvement". And I think that it could be worthwhile to restructure some aspects of the glTF loading (regardless of the Draco support). But maybe that should be done in a separate PR. (And I'd probably actually start doing that, now that I still have much of that in my "working memory".)
Awesome!
Don't worry too much about the inner workings of the engine; we know our glTF importer is a bit hacky, and some (most?) of the engine internals are undocumented. Is this PR ready for review?
No. The state with the fixed skinning is not pushed, and my local version of that is a mess, but I'll do a cleanup pass today and update this PR.
I'd prefer to not have too many code changes directly in the PR. You originally asked about things like adding the drako dependency and formatting, and even though these are details that don't change the code itself, the latter already caused some hassle. Changes like adding the dependency are fine, though.
Looking at some of the glTF-related code, I see long functions, many parameters, everything...
I have updated the PR with a cleanup.
Important for a review: There was a change in the Mesh class. There also are still a few TODO_DRACO markers in the code. Apart from that, I think that this could go through a first "more serious" review pass. (I do not yet know what the usual procedures are for jME - i.e. how much "iteration" is supposed to happen here, or when is the right time to actually mark it as 'Ready For Review'.)
Thank you! Looks good to me. I did an initial review and tested everything (with the regression to openize drako fixed) and it works great!
I think the change to Mesh is acceptable, most likely even an improvement over the current behavior, since it now lets you reset the bind pose more easily.
If you feel like addressing my two comments, I'll do another review afterward, and I think this is done.
        mesh.generateBindPose();
    }
}
What you actually need in the drako extension is just

GltfUtils.handleSkinningBuffers(mesh, skinBuffers);
mesh.generateBindPose();

since it needs to update the mesh with the decompressed weights.
The rest of the code here is just to prime some buffers that signal to the skinning and morph controls that the mesh supports hardware skinning; they are then used by the control to efficiently pass the indexes to the shader to look up the animation frame.
Note that this if-block was originally executed in processMeshPrimitive. The original block was
handleSkinningBuffers(mesh, skinBuffers);
if (mesh.getBuffer(VertexBuffer.Type.BoneIndex) != null) {
...
mesh.setBuffer(indicesHW);
mesh.generateBindPose();
}
i.e. it did include the setup of the buffers.
This is now moved into this postProcessSkinning function. And this function is called in processMeshPrimitive and after the Draco data was decoded.
So it's not directly possible to omit the creation of the indicesHW/weightsHW buffers here. It's true that these buffers would not have to be created (again) in the second call. But I tried to have that one function that ~"does whatever has to be done to get the skinning right".
If you prefer, this could be solved differently. For example, by pulling out the "buffer setup" into a function:
void postProcessSkinning(Mesh mesh) {
GltfUtils.handleSkinningBuffers(mesh, skinBuffers);
if (mesh.getBuffer(VertexBuffer.Type.BoneIndex) != null) { // Is this necessary?
mesh.generateBindPose();
}
}
void setupSkinningBuffers(Mesh mesh) {
if (mesh.getBuffer(VertexBuffer.Type.BoneIndex) != null) { // Is this necessary?
// the mesh has some skinning, let's create needed buffers for HW skinning
// creating empty buffers for HW skinning
// the buffers will be set up if ever used.
VertexBuffer weightsHW = new VertexBuffer(VertexBuffer.Type.HWBoneWeight);
VertexBuffer indicesHW = new VertexBuffer(VertexBuffer.Type.HWBoneIndex);
// setting usage to cpuOnly so that the buffer is not sent empty to the GPU
indicesHW.setUsage(VertexBuffer.Usage.CpuOnly);
weightsHW.setUsage(VertexBuffer.Usage.CpuOnly);
mesh.clearBuffer(weightsHW.getBufferType());
mesh.setBuffer(weightsHW);
mesh.clearBuffer(indicesHW.getBufferType());
mesh.setBuffer(indicesHW);
}
}

Usage:
// In processMeshPrimitives:
setupSkinningBuffers(mesh); // Before or after handleSkinningBuffers?
postProcessSkinning(mesh);
// After Draco decoding
// No additional "setupSkinningBuffers" here!
postProcessSkinning(mesh);
But... I think that this would make things unnecessarily complicated. Does the caller need to know whether mesh.getBuffer(VertexBuffer.Type.BoneIndex) != null? Under which conditions is this the case? Are there functions that have to be called before this one? Where does handleSkinningBuffers come into play? I think that the caller should not have to know the whole state space, and which buffers may, may not, or have to be present in the mesh.
As mentioned in the comment above: Functions like GltfUtils.handleSkinningBuffers are sorely lacking documentation. Sure, skinning is complicated, and even if this single function was documented extensively, it could not capture the complexity of the skinning implementation in jME as a whole. But I think that one aspect of keeping this complexity manageable is to make sure that a function like postProcessSkinning can be called, cluelessly, without having to care about the state and what it is doing. It should do whatever has to be done.
It is ok for a mesh to not have the indicesHW and weightsHW buffers in jME; it means it does not support hardware skinning. The glTF loader always assumes the mesh does, and so it creates those two empty buffers in readMeshPrimitives.
But we should not create these empty buffers from the drako extension if BoneIndex is always supposed to be available in readMeshPrimitives, since they are part of the initial mesh creation (nb. I personally don't know, since I am not sure if drako is supposed to fill in buffers or if it can also create new ones).
If there is a chance for BoneIndex to not be available in readMeshPrimitives, and to just appear after the drako extraction, then I understand your point.
> But we should not create these empty buffers from the drako extension if BoneIndex is supposed to be always available in readMeshPrimitives, since they are part of the initial mesh creation (nb. i personally don't know, since i am not sure if drako is supposed to fill-in buffers or if it can also create new ones).
There is another thing to consider: the new code is basically replacing the hw buffers from the extension. This is not something that would be expected by a jME dev; it might cause some unexpected behavior for other extensions in the future (e.g. if they pass a reference to them to a controller or something).
So if we can't be 100% sure that BoneIndex is available in readMeshPrimitives, the postProcessSkinning method should at least check if the hw buffers are already set.
jme3-plugins/src/gltf/java/com/jme3/scene/plugins/gltf/DracoMeshCompressionExtensionLoader.java
I moved two functions into GltfUtils. There are still two functions marked with TODO_DRACO.
The skinning buffer handling (inlined comment) certainly has a few degrees of freedom. Some of this might also be affected by that follow-up PR, but I'd probably have to read and test a few more things. The primary focus of the follow-up would be to reduce possible duplicate code, and "small fixes" like the buffer file extensions that I mentioned in the issue.
I think that this PR is basically in a shape that could be "Ready For Review". But I'd like to give the latest changes another pass first.
int indicesComponentType = getAsInt(indicesAccessor, "accessor " + indicesAccessorIndex,
        "componentType");
VertexBuffer indicesVertexBuffer = createIndicesVertexBuffer(loader, indicesComponentType, indices);
mesh.clearBuffer(VertexBuffer.Type.Index);
I think it would be better to use vb.setupData(...) here, instead of replacing the buffer.
Addresses #2118
This is a first draft of adding support for the KHR_draco_mesh_compression extension when loading glTF.

Summary

This adds a DracoMeshCompressionExtensionLoader class. It is registered as the extension loader for the KHR_draco_mesh_compression extension. When the main glTF loader is loading the mesh primitives in GltfLoader#readMeshPrimitives, it will call the DracoMeshCompressionExtensionLoader to decode the Draco data that was found via the extension object.

The Draco data is found in the bufferView that is defined in the extension object. The loader will fetch the data from this buffer view, and pass it to the Draco.decode function of https://github.com/openize-com/openize-drako-java . The result will be a DracoMesh that contains the decoded attribute data. This attribute data is then extracted and translated into the structure that is required for using it as jME mesh attributes. This decoded data will then replace the previous (empty) attributes of the mesh.

The TestGltfLoading class contains a small snippet to easily load the glTF sample assets that have a Draco version available. (This requires the respective glTF Sample Assets to be copied into the jme3-testdata project - instructions are in inlined comments.)

Remaining tasks
A pretty fundamental one: Where and how should the dependency to the draco decoding library be declared?
EDIT 2026-01-30, 12:46: This was resolved in the meantime. Riccardo added the dependency in 77d611f . There are a few open questions about issues in the openize-drako-java library, and in which version they will be fixed, but that will just be a version number update now. (Omitting some obsolete parts of my question here.)
There are a few TODO_DRACO markers in the code. These are mostly about functions that could be moved to different classes if they are considered to be more widely useful.

I found some hints about the code formatting preferences. Is there some magic gradle nowFormatEverythingProperly command for this? (For now, I used the default Eclipse formatting.)
Open issues
These have been described in more detail in a comment in the issue. Just to quickly summarize them here:
- The Sunglasses (Khronos) model cannot be loaded. The reason for that is likely also a bug in the Draco decoder, described in more detail in "Failure to decode some Draco data from glTF Sample Assets" openize-com/openize-drako-java#5
- The VirtualCity model cannot be loaded. I assume that the Draco-compressed version of this model is invalid. Details in "Draco-compressed version of VirtualCity might be invalid" KhronosGroup/glTF-Sample-Assets#264
- When a node has a translation, then the translation appears to be wrong.

EDIT 2026-01-29, 19:25: The last point is somewhat obsolete, as of #2591 (comment), and can be solved with a trivial fix in the openize-drako-java library. (Removing additional details that I had mentioned here about the state of the investigation.)

closes #2118