Draping Imagery on 3D Tiles #12567

Draft · wants to merge 17 commits into main
Conversation

@javagl (Contributor) commented Apr 11, 2025

🚧 This is a draft PR! 🚧

Description

Addresses #7591

The goal of this PR is to allow draping imagery over arbitrary 3D Tiles.

Currently, tiled image data (like Bing Maps and others) can be draped over the globe in CesiumJS. This is accomplished by defining an ImageryLayerCollection for the viewer, causing the "globe (geometry) tiles" to then be textured with the image data. The same should be possible for arbitrary 3D Tiles: Given a (textured or untextured!) 3D Tiles data set, it should be possible to assign imagery layers to that, and drape that imagery over the 3D Tiles geometry.

Description of the current state

The following is an attempt to summarize the current state at the time of writing. Several parts are incomplete or preliminary. Updates will be tracked in comments to this PR.

  • There is now a Cesium3DTileset.imageryLayers property that can be used to assign an ImageryLayerCollection to a tileset
  • The Model class has a private getter for imageryLayers that returns the imagery layers of the tileset, iff the model is part of a Model3DTileContent that is part of a tileset that has imagery layers
  • The model now contains an instance of the new ModelImagery class, which in turn, contains one ModelPrimitiveImagery for each primitive of the model
  • These classes both offer update functions and ready getters, following the pattern that is used for other parts of the model:
    • The update functions are called in Model.update, checking for changes that may affect the internal state/data structures, and updating them accordingly
    • The ready flags (getters) indicate whether all required resources have been loaded and all computations have been performed
  • The ModelImagery does not do very much: It mainly delegates the work for the update/ready handling to the respective ModelPrimitiveImagery instances
  • The ModelPrimitiveImagery does most of the work:
    • In its update function, it obtains the vertex positions of the primitive from the "POSITION" attribute data
    • These positions are converted into cartographic positions for the respective ellipsoid, and stored in a MappedPositions object. This contains the mapped positions and the cartographic bounding rectangle, and serves as the basis for most further imagery-related computations.
    • It computes ImageryCoverage objects from these cartographic positions. These describe the (x,y,level)-coordinates of the imagery tiles that are covered by the primitive, and that later have to be draped on the geometry data
    • It currently also processes the state machine of the corresponding Imagery objects, to make sure that they eventually reach the ImageryState.READY state, so that the whole ModelPrimitiveImagery can count as ready
    • When all these computations have been performed, the ModelPrimitiveImagery counts as ready, and can be used as the input for the draping process
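The update/ready pattern that these classes follow can be sketched in plain JavaScript. This is a minimal, hypothetical sketch: the internal flags and the fixed three-step progression are simplified stand-ins for illustration, not the actual implementation.

```javascript
// Minimal sketch of the update/ready delegation pattern described above.
// (Hypothetical, simplified stand-ins for the actual ModelImagery and
// ModelPrimitiveImagery classes.)
class ModelPrimitiveImagery {
  constructor() {
    this._positionsMapped = false;
    this._coveragesComputed = false;
    this._imageryReady = false;
  }
  // Called from the owning ModelImagery.update: advance whatever part
  // of the internal state has not been established yet.
  update() {
    if (!this._positionsMapped) {
      this._positionsMapped = true;
      return;
    }
    if (!this._coveragesComputed) {
      this._coveragesComputed = true;
      return;
    }
    this._imageryReady = true;
  }
  get ready() {
    return (
      this._positionsMapped && this._coveragesComputed && this._imageryReady
    );
  }
}

class ModelImagery {
  constructor(primitiveCount) {
    this._primitiveImageries = Array.from(
      { length: primitiveCount },
      () => new ModelPrimitiveImagery(),
    );
  }
  // Mainly delegates to the per-primitive instances.
  update() {
    for (const p of this._primitiveImageries) {
      p.update();
    }
  }
  // Ready only when ALL primitive imageries are ready.
  get ready() {
    return this._primitiveImageries.every((p) => p.ready);
  }
}
```

With this shape, Model.update only has to call modelImagery.update() each frame and can defer the draping until modelImagery.ready reports true.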

The draping itself is implemented as a pipeline stage. To summarize a few things (that should probably be documented at a higher level somewhere):

  • When a Model is created, it creates a ModelSceneGraph internally
  • For each 'primitive' of the model, a ModelRuntimePrimitive is created
  • The ModelRuntimePrimitive contains a list of "pipeline stages"
  • When the draw commands are built (skipping a lot here...), these pipeline stages are executed, and fill PrimitiveRenderResources with the data from the primitive, which essentially means "copying the data to the GPU, so that it can be used for rendering"

So now there is an ImageryPipelineStage that is inserted when the model is part of a tileset that contains imagery data.

  • It creates additional texture coordinate attributes for that primitive - namely, the texture coordinates that will be used for draping the imagery textures on that geometry
  • It prepares the shader that will be used for the draping computations: The shader receives the additional texture coordinate attributes, and a bunch of uniforms that define things like the actual textures as sampler2D, and things like the alpha value that should be used. All this is then combined in a sampleAndBlend function in the fragment shader that takes the previous pixel color, and mixes it with the pixel color that was obtained from the imagery texture
  • The actual input for these uniforms is (not yet a proper class, but just) a structure that is referred to as imageryInput for now: It contains the Texture itself, a rectangle that indicates which part of the imagery texture has to be mapped to that primitive, as well as translation/scale values that are applied to the imagery data so that this rectangle is filled with the right part of the imagery texture
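The role of these translation/scale values can be illustrated with a small sketch (a hypothetical helper for illustration only; the actual computation in this branch may differ). It maps texture coordinates that span the primitive's cartographic rectangle into the [0,1] range of one imagery tile's rectangle:

```javascript
// Hypothetical sketch: compute the translation/scale that maps texture
// coordinates defined over the primitive's cartographic rectangle into
// the [0,1]x[0,1] space of one imagery tile's rectangle.
// Rectangles are given as { west, south, east, north }.
function computeTranslationAndScale(primitiveRectangle, imageryRectangle) {
  const imageryWidth = imageryRectangle.east - imageryRectangle.west;
  const imageryHeight = imageryRectangle.north - imageryRectangle.south;
  return {
    translationX:
      (primitiveRectangle.west - imageryRectangle.west) / imageryWidth,
    translationY:
      (primitiveRectangle.south - imageryRectangle.south) / imageryHeight,
    scaleX: (primitiveRectangle.east - primitiveRectangle.west) / imageryWidth,
    scaleY:
      (primitiveRectangle.north - primitiveRectangle.south) / imageryHeight,
  };
}
```

For a primitive covering the eastern half of an imagery tile, this yields translationX = 0.5 and scaleX = 0.5, i.e. a shader would sample imageryU = translationX + scaleX * primitiveU, picking the right part of the imagery texture.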

Issue number and link

#7591

Testing plan

There are no specs yet. One difficulty is that a considerable part of the work here consisted of breaking some 700-line functions from the globe rendering down into 20 smaller functions, which means that there could now be many specs. But right now, the testing is still at the level of "(How) does it work?".

The following is a very basic tileset that I've been using for some of these tests:

testLevels2.zip

It is a simple tileset with a unit square, with two levels of detail. The first is a plane with 10x10 vertices. The second consists of 4 planes, with 10x10 vertices each.

And this is a Sandcastle for testing this branch on localhost (this might be removed from here, and/or updated in comments as necessary):

http://localhost:8080/Apps/Sandcastle/#c=vVhZbxs3EP4rhPvQNSqv5Ct1FNmor6QC7CSwnNQuBBTcXUpiw10uSK5ttch/7/DaQ1rbiVL5QbZIDuf4ZjiHYp5Jhe4ouScCHaKM3KNTImmRhp/NXjDeiM36lGcK04yI8cbmm3E2zrpddCoIVgSpGXwoI5IoRDOztAzHWWzY+8NDtx/KmGQkzAVNqaJ3RIY4SYJxhhC+x1R5Dey/3bNrez2cCJ5+EgxUmimV97tdxmPMZlyq/kGvt9t1YsK/Jc/GGx30r+aIUEKiYjqa8fsTXmQJzaafOStS0kdKFKRjaVL8QNMiHcWCkGyU45icC8FFH22/MhRfN8dZafaHXFGeYQbGypzEigs0gY+RMwX+FSCa3JlMHhTJksCZZjcbBg49s0v6QI2wbtfCV4rRLrnkCWEVknErC8PfUHqd39YV7CMNh1GTpnhKxBxFGhssKJGIWxdOGY+I92CJ4dDSa3En1ZVDgyXIohMUPE27ad3i1HfiL/CcCBMF7s5Hwe9oosPPuKcWlprbKecCHAmxJxfpnc+Rxh8sHcUzol1dY/AHiS6JiDHAdF2jCTZdKGhXI6Tx/7oc5kFhIKYTSpLNEjxm1AcCrPQFOeMFS1BEEM5zBpRI8fojGWdYzrMYTYos1pGEYiPAhfmwDkng4Gq8owZF89HWj045Y8QICGzoIgTGnJEJLphCwYmO1EucS2NvG+vH/dF4plb7P7hgnvpYm1e5Qqo5Ax94FXlWpx3pw/D4/Gp4fLHkAafz9wdvibSHrgzKM4FzkjwWxhPMpI5jhBqR/MSlTZ9mvg/BlWL6B6Lao2pxhZX+oz+CqEJkrdrXnsCxlHSaNUD3Qb8Q2i5lLYSoDZin4hyE6SQaMF1E4EbvDfwbNBmFjGRTNYOTX36pPQwOAcT4NGjSTokK6KZ7x740LOpFF+w1xo5wlsRYKkbQpyGCa0Xu7bqrZWAj/+bm5q+zq+OPw/fv/ro4/3x+ASXjwGCOsykjZwTitWfWEmoVuYHj2vK2XGKWzzCszCISdDpTGZHS74B0JUAlv54VpOSLwYFYP3N/OMVp6nh99TZ9yhOfw1Kt/yVWgj4gPql7r4MiLCFdcXtHn3grulZ9++/W39NwWH7gPZ/NCiPKefmyEtbMZZ4xAPm+SCMI8xLb0J/ZUK3RX+EE6F3IA9dZqDjsUZzJoO2SVbpNhD1Zor19lPbW5yNLrYG6FiAXgjatdCq3ZEjAXe+5ULNP+TV/Sx9I8lZgeJH2FfoWBwt48KD/ruluQH1oQGSw9et+uL2/s9c76KDd1+HrvVevf93voO1GYqyp7bxZhwbWe4bnSBMs19FKcmCx6DgEQEzY82mjdsHxDFp0EFyZALQku4t6WNuuHNGfvoY4hz4mafd5Se0Ge0FOzoJy32VY/aksyYoZz0iwsDk8O39/Pby+tZG1cJhC4aU5mwc1vp1mKHXqMr+DSdPKVbnUgmmJRa1LCJu41Fa1klHLOHBF6W6DS8hzrK1tQlgQlJgqay43y3iVl57LOr6psALL9kkX87ZKgg4PodKUJdzWwqo+2iAwai7WClNeeh4XD65JF0sloS2nLBF5VkZaaMpBa2rUB9ajlrKqFW3k1Wn9jq8mbTf8WZ0eqk0bKWzXqapC1JpEy9P6HVOq2sjNQaMJh8iBELifkcUgQPEMcokethaCwXB6CxlBz00+FB6vTm+q46UwqikyKiIZCwrdvWt96uHoAuFLxuMvvFAhQBl/qczSfFwrzzmLsA6rhMcwjWZKR9Q5I/rryXwI4/CGo7ED9yJnPVrMoYdPjIKlhI7nXDZUVl4GpUcP55Uq1cuoeRTLD/cZNJ45EWoe6Eub5etY0MD3lVrvD5Ek4g5HUGhqmpjrJZn0uAUt3imbUoPykz4yTviT83Sh9eygewovGiOZ6iTDJxP9k4Pk5ViGoAOlEqYuxuaAg6SgrneHo27MUr8TrMH9SFU8u9I
RFjTzaL352NrZCfdt0XyWYKfXC3v25wQ3Bf8D1lzzoDTEaqMJNjobAzM+Hembv9E0h44CFYIFYdhVJM2ZHhi6URF/gbQc+2f+kw8v57oIonAq9MzSR2Ia4WBvp4P8pxce2FsI5TCvmBFrL39wWxGMJURsCbCjkNXB14YYmuWF8sLuIHYoFJItzGBm6KOUJgkjTQlbiud9tFNK8dsRV4qn1QmIGXQ9AIOE3iGaHLb8JIVihqWEE+3dEf2HjDeOBl2gb1xj3Hj0A2gI6UeTzLaPLuxmGIaDLiyXb5XP0PhgoHSYH1m1ByriyfzIx/hAiaNqVhuo5GgpzQ+6sNukqVawtkiqeU5AstBRB8bdYVbo9faB/lkrpZn+DvspfoBvO/vwVSqSu114O3grgswAa3OzvzyldCxPW6P76Gcj9mdnYrsyijwoLQjAhdX+NwpqsGwYDwvxBHLHrp1fCTALUa+EaPdV71mM/PywLmg8/5URMY38zQ/gUYXMdu95PPxUsB40LPcfw+L2ZbG4XSsWt6u/FN0S/i/PZLsCAkrUziNxrMWt7ZFo5isjcVJ2u/9P1vgWOKoOe12YVBJWBubUNfUvB4sfI9YFiue/MiS/F+Tl0IBJaV1AAOvVs2g5k70cFNUcuLZ0WkpYGZh3evp8OUzMsLsuOAzzb0ICvpbtLHx3fa5viP8D

It offers some basic functions for manipulating the geometry and the imagery settings, targeted at basic tests. A short screencap of that current state:

[Screencap: Cesium Draping PR 0001]

(Yes, sometimes parts of the texture are missing. I so much wanted to fix this before opening this PR 😆 but I did not yet have time to review where this is coming from)

To do

There are several loose ends. Some examples:

  • The decision about how to pick the right level (of detail) for the imagery is currently still open. There's that XXX_DRAPING_LEVEL slider in the test, which is a hack for testing different levels. But the actual decision will likely have to happen somewhere here in ModelPrimitiveImagery
  • The "missing textures" that can be seen in the screencap. This is likely caused by one of the parts that had been taken from a modified state of the globe-based imagery, which I tried to disassemble into building blocks that could then be be-assembled, but some parts are still a bit ... "opaque". It should be relatively minor, though...
  • The function that obtains the vertex positions from the primitive currently copies the attribute data to the CPU. And this does not properly handle quantized data. This might be easy to generalize (if things like Draco or Meshopt do not have to be handled manually). But it might involve shuffling some larger code blocks from existing classes around.
  • The processing of the "state machine" of the Imagery has to be reviewed. Part of that is the Imagery.addReference function, which is currently not called at all; that "reference counting" will likely have to be done somewhere
  • How to handle the case that the draping involves more than 16 texture units. This is largely open. It may involve that ominous "upsampling", or maybe could be handled by combining several imagery parts to reduce the necessary number of texture units...?
  • Several more (usually minor ones, marked with TODO_DRAPING or XXX_DRAPING in the code, accordingly)

Notes:

The commit history may look odd: At some point, I started doing the development on a local branch, with fine(r)-grained commits, but for the first time in my life, used the --squash function. There just had been too many unknowns and experimental, intermediate steps, and it just wouldn't make sense to track them in all detail here...

Author checklist

  • I have submitted a Contributor License Agreement
  • I have added my name to CONTRIBUTORS.md
  • I have updated CHANGES.md with a short summary of my change
  • I have added or updated unit tests to ensure consistent code coverage
  • I have updated the inline documentation, and included code examples where relevant
  • I have performed a self-review of my code


Thank you for the pull request, @javagl!

✅ We can confirm we have a CLA on file for you.

@javagl (Contributor, Author) commented Apr 13, 2025

The last commits

  • The computation of the textures that are required and their corresponding texture coordinate rectangle has been fixed. (In fact, the "missing textures" had been there, but ... their texture coordinate rectangle was "empty"...). It may still need review and further testing, though.
  • The transform of the primitives within the model had not been taken into account. So now, it is passing the ModelRuntimeNode to the ModelPrimitiveImagery as well, so that it can integrate this transform for the computation of the primitive positions that are supposed to be mapped to the ellipsoid
  • The function that obtains the primitive positions did not handle quantized attributes. Basic support for quantized positions is implemented now (and I tested this with a draco-compressed/quantized tileset), but it might have to be reviewed in terms of possible ~"other forms of quantization"

There seems to be an issue related to the computation of the offset/scale for the imagery that is mapped to the primitives. It does work for the test tileset, but not for another ("real-world") one. This might somehow be connected to that quantization, but is hopefully only a detail 🤞 - to be investigated.

@ggetz (Contributor) commented Apr 18, 2025

Thanks for opening this @javagl! I'm looking through the code, but I also have a few higher-level thoughts as I do so:

The decision about how to pick the right level (of detail) for the imagery is currently still open. There's that XXX_DRAPING_LEVEL slider in the test, which is a hack for testing different levels.

If I understand correctly, this is driven by a maximumScreenSpaceError tolerance in the existing terrain and imagery engine. Can we do something similar here? Something along the lines of refining to the level such that:

minTileDimensionInMeters / minTileImageDimensionInPixels < tileset.maximumScreenSpaceError

?
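Taking the inequality at face value, the level selection could be sketched like this (purely hypothetical code: the function name and the assumption that the imagery tile dimension halves with each level are mine, and the units of the comparison would still need to be sorted out):

```javascript
// Hypothetical sketch: pick the smallest imagery level whose
// meters-per-pixel value satisfies the suggested tolerance.
// Assumes the tile dimension in meters halves with each imagery level.
function pickImageryLevel(
  level0TileDimensionInMeters,
  tileImageDimensionInPixels, // e.g. 256
  maximumScreenSpaceError, // tileset.maximumScreenSpaceError
  maximumLevel,
) {
  for (let level = 0; level <= maximumLevel; level++) {
    const tileDimensionInMeters = level0TileDimensionInMeters / 2 ** level;
    if (
      tileDimensionInMeters / tileImageDimensionInPixels <
      maximumScreenSpaceError
    ) {
      return level;
    }
  }
  return maximumLevel;
}
```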

The function that obtains the vertex positions from the primitive currently copies the attribute data to the CPU. And this does not properly handle quantized data. This might be easy to generalize (if things like Draco or Meshopt do not have to be handled manually). But it might involve shuffling some larger code blocks from existing classes around.

Certainly open to a small refactor to make this easier. If needed, maybe consider a separate PR to keep this scope of this constrained as much as possible.

How to handle the case that the draping involves more than 16 texture units. This is largely open. It may involve that ominous "upsampling", or maybe could be handled by combining several imagery parts to reduce the necessary number of texture units...?

Am I correct in my understanding that this can be addressed either by:

  1. Implementing upsampling: Load higher LODs of geometry such that we don't have multiple tiles of one imagery layer covering a single geometry tile.

or

  2. Merging multiple textures for a geometry tile into one single texture

?

@javagl (Contributor, Author) commented Apr 18, 2025

If I understand correctly, this is driven by a maximumScreenSpaceError tolerance in the existing terrain and imagery engine. Can we do something similar here? Something along the lines of refining to the level such that: ...

There are two different possible approaches. In CesiumJS, that level is computed here (and I did not yet go through the math and/or justifications there). In cesium-native, it is computed with computeDesiredScreenPixels (implementation), which carries a few implementation notes (and is invoked in some non-trivial contexts, including the translation of these "screen pixels" to the "level").

How all this could be translated to the draping is not yet clear. But when you mention the tileset maximumScreenSpaceError: Note that in cesium-native, the decision is based on the RasterOverlayOptions.maximumScreenSpaceError, which apparently can be configured independently. Whether or not this can simply be "replaced" with the tileset maximumScreenSpaceError in CesiumJS (and maybe throwing in some / 8.0 at the right place) has to be examined...

The function that obtains the vertex positions [...] does not properly handle quantized data.

Certainly open to a small refactor to make this easier.

This turned out not to be such a huge code block, and has already been implemented by now. The AttributeCompression.dequantize is 'static' and could just be called. What's left is the offset/scale handling. It would be nice to have that encapsulated similarly, because ... my initial implementation was wrong, and fixed in a subsequent commit: There are quite a few quantizedVolume....-variables floating around, and reading what they are takes orders of magnitude more time than just calling some nice, opaque, convenient nowReallyDequantizeThatStuff-function...
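For illustration, such an encapsulated helper might look like the following sketch (the function name and the offset/stepSize convention here are made up; the actual code goes through AttributeCompression.dequantize and the quantizedVolume... values):

```javascript
// Hypothetical sketch of the kind of "just dequantize it" helper wished
// for above: map a quantized position back into model space via
//   dequantized[i] = quantized[i] * stepSize[i] + offset[i]
function dequantizePosition(
  quantized,
  quantizedVolumeOffset,
  quantizedVolumeStepSize,
) {
  return quantized.map(
    (q, i) => q * quantizedVolumeStepSize[i] + quantizedVolumeOffset[i],
  );
}
```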

Am I correct in my understanding that this can be addressed either by:

  • Implementing upsampling: Load higher LODs of geometry such that we don't have multiple tiles of one imagery layer covering a single geometry tile.
  • Merging multiple textures for a geometry tile into one single texture

These options had been my initial thought. But based on the recent discussion and comments by Kevin, the second one would not be viable. So apparently, there has to be some sort of upsampling. Unfortunately, this is not only about "loading higher LODs", but ... about creating higher LODs (i.e. slicing-and-dicing the actual geometry). This raises many questions, but I'll pragmatically start by trying to pick some code fragments from cesium-native (which had been ported from CesiumJS) and port them back to CesiumJS ... 😬 ... and see whether I can use that, for example, to split the unit square into 4 parts.
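As a purely illustrative first building block (hypothetical code, not taken from cesium-native), splitting a tile's 2D rectangle into its four child quadrants could look like this; the hard part, clipping the actual triangles against the midlines and interpolating all attributes, is not shown:

```javascript
// Hypothetical sketch: split an axis-aligned rectangle into its four
// child quadrants, as a first step toward "slicing-and-dicing" a tile.
function subdivideRectangle(rectangle) {
  const midX = (rectangle.minX + rectangle.maxX) / 2;
  const midY = (rectangle.minY + rectangle.maxY) / 2;
  return [
    { minX: rectangle.minX, minY: rectangle.minY, maxX: midX, maxY: midY },
    { minX: midX, minY: rectangle.minY, maxX: rectangle.maxX, maxY: midY },
    { minX: rectangle.minX, minY: midY, maxX: midX, maxY: rectangle.maxY },
    { minX: midX, minY: midY, maxX: rectangle.maxX, maxY: rectangle.maxY },
  ];
}
```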

@ggetz (Contributor) commented Apr 22, 2025

How all this could be translated to the draping is not yet clear. But when you mention the tileset maximumScreenSpaceError: Note that in cesium-native, ... [this] can be configured independently. Whether or not this can simply be "replaced" with the tileset maximumScreenSpaceError in CesiumJS has to be examined...

Gotcha. From native, it looks like the tileset's geometric error and maximum SSE are factors here, but as you said, ultimately there is a separate SSE option used to drive the imagery refinement (see comments below). So that boils down to

desiredScreenPixels = projectedRectangleSizeInMeters * tilesetMaximumScreenSpaceError / tileGeometricError

and we are then looking for the imagery level such that

desiredScreenPixels <= imageryMaximumScreenSpaceError
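In code form, the arithmetic of the two formulas above is simply (a sketch only; the helper names are made up, the variable names follow the formulas):

```javascript
// Sketch of the arithmetic from the two formulas above.
function computeDesiredScreenPixels(
  projectedRectangleSizeInMeters,
  tilesetMaximumScreenSpaceError,
  tileGeometricError,
) {
  return (
    (projectedRectangleSizeInMeters * tilesetMaximumScreenSpaceError) /
    tileGeometricError
  );
}

// The refinement condition from the second formula:
function imageryLevelIsSufficient(
  desiredScreenPixels,
  imageryMaximumScreenSpaceError,
) {
  return desiredScreenPixels <= imageryMaximumScreenSpaceError;
}
```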

I am open to having a separate SSE factor for the raster imagery layer, or a modifier that can be applied to the tileset's maximumScreenSpaceError, but if there's already a precedent with the former, that might be best for consistency's sake.


From native:

   * In 3D Tiles, we can only get so close to a model before it switches to the
   * next higher level-of-detail by showing its children instead. The switch
   * distance is controlled by the `geometric error` of the tile, as well as by
   * the `maximum screen space error` of the tileset. So this method requires
   * both of those parameters.
  // It works like this:
  // * Estimate the size of the projected rectangle in world coordinates.
  // * Compute the distance at which tile will switch to its children, based on
  // its geometric error and the tileset SSE.
  // * Compute the on-screen size of the projected rectangle at that distance.
  //
  // For the two compute steps, we use the usual perspective projection SSE
  // equation:
  // screenSize = (realSize * viewportHeight) / (distance * 2 * tan(0.5 * fovY))
  //
  // Conveniently a bunch of terms cancel out, so the screen pixel size at the
  // switch distance is not actually dependent on the screen dimensions or
  // field-of-view angle.
   * The `target screen pixels` returned here may be further modified by the
   * raster overlay's {@link RasterOverlayTileProvider::getTile} method. In particular, it
   * will usually be divided by the raster overlay's `maximum screen space
   * error` of the raster overlay (not to be confused with the `maximum screen
   * space error` of the tileset, mentioned above).

@ggetz (Contributor) left a comment

A few comments on the code so far...

@@ -2811,7 +2830,8 @@ function addTileDebugLabel(tile, tileset, position) {
let attributes = 0;

if (tileset.debugShowGeometricError) {
labelString += `\nGeometric error: ${tile.geometricError}`;
// XXX_DRAPING Show SSE as well....
ggetz (Contributor):

Out of scope for this PR, but would it be helpful to add this functionality to the tileset inspector? I could certainly see value.

javagl (Contributor, Author):

I'll leave this open, and during a "cleanup pass" where all the _DRAPING comments are handled, this can be moved into an issue (there might be other "minor TODOs" ... and some larger ones as well ... coming out of all this)

// `tileset._imageryLayersDirty` flag that is passed to the
// models, so that the models can be updated for new imagery
get: function () {
return this._imageryLayers;
ggetz (Contributor):

Does it make sense to initialize the value as an empty ImageryLayerCollection? That would make the API a bit cleaner, i.e.

const imageryProvider = await ...;
const layer = new Cesium.ImageryLayer(imageryProvider);
tileset.imageryLayers.add(layer);

javagl (Contributor, Author) · Apr 22, 2025:

There could be certain patterns or "ways that things are usually done" here (and maybe there are such patterns and I overlooked them...).

Here, I think that this could make sense. It will require some fixes, with changes from
if (defined(imageryLayers)) doSomethingCostly()
to
if (defined(imageryLayers) && imageryLayers.length > 0) doSomethingCostly();

If we do this, then I'd rather mark the imageryLayers as readonly (i.e. remove the setter). This way, the "wiring" of the listeners that is mentioned there could be done once, and we wouldn't have to handle someone doing a tileset.imageryLayers = completelyNewOne.

Maybe throwing in some clear()/removeAll() and addAll(...) functions for convenience...

// TODO_DRAPING These are short-circuiting to `number` instead
// of `boolean` here. Ignore that or do that clumsy `x = x && y`
// or some `!!x` trickery...?
imageryFlags.alpha |= imageryLayer.alpha !== 1.0;
ggetz (Contributor):

Bitwise or | and the bitwise or assignment |= operators return number, not truthy, values. Are you looking for logical assignment operators, like ||= or &&=?
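The difference can be demonstrated in isolation (plain JavaScript, independent of this PR's code):

```javascript
// |= coerces its operands to numbers, so the flag silently becomes 0 or 1:
const imageryFlags = { alpha: false };
imageryFlags.alpha |= 0.5 !== 1.0;
console.log(typeof imageryFlags.alpha); // "number" (value 1)

// ||= keeps the boolean type (ES2021 logical assignment):
const imageryFlags2 = { alpha: false };
imageryFlags2.alpha ||= 0.5 !== 1.0;
console.log(typeof imageryFlags2.alpha); // "boolean" (value true)
```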

javagl (Contributor, Author):

For me, eslint is complaining about those.

Beyond that, this was an attempt to do what is done with the apply... flags around uniformMapProperties.dayTextureBrightness[numberOfDayTextures]: there, the values are 1. set in the uniformMapProperties and 2. then fetched from the uniformMapProperties, to check whether they had the default value (which looks kinda clumsy...)

javagl (Contributor, Author):

(Been digging a bit):

I think that this would require ecmaVersion: 2021 at https://github.com/CesiumGS/eslint-config-cesium/blob/82963f233e0d28e49422622f4dc18ee78c174e44/browser.js#L11 (or maybe some special rule somewhere, to allow that explicitly).

With this version, "logical-assignment-operators": ["error", "always"], could be enabled.

This would complain about thousands of places where the defaultValue was recently changed, suggesting to change things like
options.contextOptions.webgl = options.contextOptions.webgl ?? {};
to
options.contextOptions.webgl ??= {};
but I think this would be nice (decrease duplication).

Most other places are then about the 'destroy' pattern, where
primitive = primitive && !primitive.isDestroyed() && primitive.destroy();
would become
primitive &&= !primitive.isDestroyed() && primitive.destroy();

attribute.componentDatatype,
count * componentsPerAttribute,
);
buffer.getBufferData(typedArray);
ggetz (Contributor):

Do we need to account for interleaved positions?

javagl (Contributor, Author) · Apr 22, 2025:

Probably yes.

I should not have removed that "Some generic reader would be nice" comment that I originally added there. Some generic reader could avoid a lot of headaches...

ggetz (Contributor):

I generally agree about a generic reader. Though this may be something else out of scope for the PR. I don't think we have an issue open. Would you mind creating one?

// dequantized data
const p = new Cartesian3();

// XXX_DRAPING: There's also some quantizedVolumeScale floating around in
ggetz (Contributor):

If it isn't already on your mind, let's be sure we add test cases covering the various permutations of attributes data.

javagl (Contributor, Author):

Beyond the general TODO of "Tests, tests, tests (including interleaved data :D )", I'm not sure what "permutations" refers to here. This specific part should directly access the positions (and only these)...

ggetz (Contributor):

Ah, I meant things like interleaved buffers, quantized vs un-quantized, etc...

@javagl (Contributor, Author) commented Apr 30, 2025

The last commit (which is, again, squashed from a local state) aims at working towards a state that allows for some basic tests. Some details are summarized below.


About "upsampling"

There has been some internal discussion about the process of "upsampling" the geometry data. I did start some experiments and created some building blocks, but the upsampling is not enabled in the current state: there are still too many building blocks missing.

This is roughly related to some of the inlined review comments:

javagl: "I should not have removed that "Some generic reader would be nice" comment that I originally added there."
ggetz: "I generally agree about a generic reader. Though this may be something else out of scope for the PR. I don't think we have an issue open. Would you mind creating one?"
ggetz: "Ah, I meant things like interleaved buffers, quantized vs un-quantized, etc..."

The point of such a "generic reader" would be exactly to provide some form of abstraction over all these variants. Certain tasks that are required for the upsampling are pretty simple, like "Give me the position/normal/texCoord of a vertex". But there is no infrastructure for that. It will have to fetch the data from the GPU, and take into account whether the data is normalized, quantized, oct-encoded, draco-compressed, or stored in separate or interleaved buffers (and for indices, whether they are TRIANGLES or TRIANGLE_STRIP). Eventually, all (float) attributes of the data (regardless of their representation) will have to be read, subdivided, interpolated, and assembled into "new geometry data". There are many questions related to that. So while there is a draft for some ModelAttributeReader in the current state, it is not fully wired in. I locally started creating test data for that, with files like

unitSquare11x11_draco.glb
unitSquare11x11_draco_interleaved.glb
unitSquare11x11_meshopt.glb
unitSquare11x11_meshopt_interleaved.glb
unitSquare11x11_plain.glb
unitSquare11x11_plain_interleaved.glb
unitSquare11x11_unsignedShortTexCoords.glb
unitSquare11x11_unsignedShortTexCoords_interleaved.glb
unitSquare11x2_triangleStrip_plain.glb
...

But even if this was already tested and implemented properly, it would only be a tiny part of the upsampling. So for now, upsampling is omitted.
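To make the idea of such a "generic reader" more concrete, its core might be sketched as follows (hypothetical code; the ModelAttributeReader draft in this branch may look different). It only covers the simplest case, reading a float attribute from a possibly interleaved buffer:

```javascript
// Hypothetical sketch of a generic attribute reader over a (possibly
// interleaved) float buffer. Quantization, oct-encoding, and
// Draco/Meshopt decoding would have to be layered on top of this.
class Float32AttributeReader {
  constructor(buffer, componentsPerAttribute, byteOffset, byteStride) {
    this._buffer = buffer; // an ArrayBuffer
    this._components = componentsPerAttribute;
    this._byteOffset = byteOffset;
    // A stride of 0 means "tightly packed"
    this._byteStride =
      byteStride === 0
        ? componentsPerAttribute * Float32Array.BYTES_PER_ELEMENT
        : byteStride;
  }
  // "Give me the position/normal/texCoord of a vertex":
  read(vertexIndex) {
    const byteStart = this._byteOffset + vertexIndex * this._byteStride;
    const view = new Float32Array(this._buffer, byteStart, this._components);
    return Array.from(view);
  }
}
```

Quantized, oct-encoded, or Draco/Meshopt-compressed data would then be handled by decoding layers on top of such a reader.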


About the current state:

Instead of "upsampling", the process currently computes a "matching" imagery level for the geometry at hand, and drapes this over the geometry. So it essentially tries to always map one imagery texture to a certain primitive. This does not work for all possible cases. Specifically, it assumes that the geometry data itself provides a sufficient level-of-detail structure. (So when the tileset contains a single LOD with a 1000km x 1000km square consisting of 4 vertices, this won't work well...). But it allows for some basic tests, e.g. with Google Photorealistic 3D Tiles.

The following is a sandcastle for such basic tests when running the current state on localhost:

http://localhost:8080/Apps/Sandcastle/#c=vVZbb9s2FP4rRPZQGbDluxV7TrDEXdsA7RrU7friF0qiJSIUKZCUE6/If98hJUqy6wVFsOTBhshz5XeukeBKox0l90SiC8TJPVoRRYvM/9veeZuzyJ5XgmtMOZGbsy76seEIJUyEZIG2mCnS3fDHzu8bHll1mjKiiAZ9+B5T7TRGkmBN3guRMHKbCi3gzKjSNBq//VqKeFaz4Gz/TVGefKc6LfnfExGJmMgF0rKozZV++yoinPi5pBnVdEeUj+PYq5wwbBve76OVtY50SpBXcJWTiG4piTuIZjghco8Y3hOpgAFrK6BSUbAYhQThPGfAirSw4pXmDcdqzyO0LXikqeCofF/1kptS60er1OuUkB3Ac8BxiH2btBKMEWvAK9+CEDj3lmxxwTTyrgEm9AnnqmMo1go6acGAUl3cSrGjsQmu5UanwvRdSOb4r8w7vR+OGyGl9wxC77wVvM29NkT/6s8vN1cfu07msVN+mSfAqX7HOyFRTMIiSeAdC7ROxb3F2AUlFAWPsaREQVYcou/gtOJG8K3EOXFemDBcN8IXZZ5a63SLvF8R6jwXzlYkjcaVEDKmHEBVxxItTMEIQLCOUpIBtC0V30n4icgIQ718bfF4nafBlUQXkp/0Hdgeq6q4Uoom/AByVwdHyV6CTY+StsycpzIfjFU6/GNheuSU9WiNeRxhpRlB324QiBW5M27K/RN0AQayFjnM8hQv0NAiEUqapJoTpdwNSGkJqtw5LQDZgf1UGNDBpqocMcFZVul6dL58y2PXM8ARDdhDGm7BLDvVNRCWBMU2n6z0YcJ2UYgVNJHq1rwFZeYxG153kMLac3VUWXTNw6TtAWI+IzzRKbq4uECDOlnLuLtMcEVi3TyG3E+g4w5cV7EsvoUUGP8qshAStIa8JJQZVnI2eJ9ib6htGReRUxKO1uaHiJ1ihes2VxPMU8wNtS1jw32K3RI6rRJZQbghbvcpOY4cilLME6J+iqDV9E6KzAxRF7//CG7L0roIVSQpTJyq+NpJUnWDOy6iO1FoH7CK7hq/W9NXCBZiE+xYREVGuDZx/pMR83m9v4lhplc8mzMjdqzZjLs9zJXYOlhb6DrNRmYLfdsr7XGcQfPgTXW28rUVMqw+33NofDmReu8ZoU6ds0ceuL5m/P4cKiJ3OGSk7YkVr9mUw807AX/dFB3Kt4JybeGNQIvECJenctHIGXk4WixKNh9q2IbT+hwT2Ft41UBazXqFJbR5ivm4mga90Ww+DuaBP50OB7PBJAiqtt2bjObBcDz1p8NgMJ8Eo2lFGJ9PJ8Fg4AfwPxlOxrOJuS+7vYC5BHvYT2Y/EGyidUt1lH6BhaEyPvFn0+kA7M5H0+FwPq9t9Ab+6Hw2Pp9P4H84Pj8fzyrK0B9PZ8MAPB1Ng2A8DMakFzgH7NJ11j1b2gXg0lz/QbNcSI0KyTzf72uS5cxMun5YRHfQ8yPXAH5zeVnFPIT0TaQZtgskkxB7k1EXud/APy+lEMph0NrdYJI/VFchzFMiexLeXKiG8HhghvK80M7YDpKORpj1YOFMALqMxjEjhxZ6WuQLNKqtuOtQaC2yhgJmln0HwDKmO0TjixNrMooYVgoo24KxNf2HbM4ul33gPxBjwkbuM3gIjcmwpMPLj+Wl7/vLPhx/lqrr18ZgqU19XJZuL3Uo4v2lK46llpfNkrHU8eWVaeTLPnwd3jcnOJfo6X1OwJo0XQ4elFEOp4H5wg/wNYQvpUluLv3BCE5QfbgXQuuAqx1mZt7audFF9lTO0wV6Y9W/qdw/bVSTB230A3Bwmj6h/EDNwcPgIJ9A4rqeUf8LHONfgaOZiy+FSWPh2cCsqlH8erC44f9SoDj9z4bkQ0FeDw3Yb14KCFD9bAzW9Sb1elA029tLIdJYeDYw783O+HqY2BX1peCwyn8JCfisRw18VzPIDat/AQ

It uses Google Photorealistic 3D Tiles, drapes Bing Aerial maps over them, and offers some basic imagery layer manipulation UI for testing. A short screencap:

[Screencap: Cesium Draping P3DT preview]


Given that there are many commented-out blocks related to ~"upsampling experiments" in the current state, it does not make sense to review this state on a code level. But maybe someone wants to try it out.
