feat: complete the alumina digital twin system monitoring dashboard

commit a48babc68d
2026-04-08 21:44:08 +08:00
67606 changed files with 3337335 additions and 0 deletions

node_modules/meshoptimizer/LICENSE.md generated vendored Normal file

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2016-2025 Arseny Kapoulkine

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

node_modules/meshoptimizer/README.md generated vendored Normal file

@@ -0,0 +1,283 @@
# meshoptimizer.js
This folder contains JavaScript/WebAssembly modules that can be used to access parts of the functionality of the meshoptimizer library. While normally these would be used internally by glTF loaders, processors and other Web optimization tools, they can also be used directly if needed. The modules are available as an [NPM package](https://www.npmjs.com/package/meshoptimizer) but can also be redistributed individually on a file-by-file basis.
When using the NPM package, package exports can be used to import individual components (e.g. `meshoptimizer/decoder`) as well as the entire package (`meshoptimizer`).
## Structure
Each component comes in a separate file, `meshopt_component.js`, which uses ES6 module exports and can be imported from another ES6 module. The export name is `MeshoptComponent`; it is an object that has two fields:
- `supported` is a boolean that can be checked to see if the component is supported by the current execution environment; it will generally be `false` when WebAssembly is not supported or enabled. To use these components in browsers without WebAssembly, a polyfill library is recommended.
- `ready` is a Promise that is resolved when WebAssembly compilation and initialization finishes; any functions are unsafe to call before that happens.
In addition to that, each component exposes a set of specific functions documented below.
## Decoder
`MeshoptDecoder` (`meshopt_decoder.mjs`) implements high performance decompression of attribute and index buffers encoded using meshopt compression. This can be used to decompress glTF buffers encoded with `EXT_meshopt_compression` extension or for custom geometry compression pipelines. The module contains two implementations, scalar and SIMD, with the best performing implementation selected automatically. When SIMD is available, the decoders run at 1-3 GB/s on modern desktop computers.
> Note: for maximum compatibility, MeshoptDecoder is also available as a CommonJS module via `meshopt_decoder.cjs`; it can be used by a wide variety of JavaScript module loaders, including node.js require() and AMD, and can also be loaded into a web page directly via a `<script>` tag, which exposes the module as a global variable `MeshoptDecoder`. The ESM version uses the `.mjs` file extension, unlike other components, to avoid compatibility issues with prior versions.
To decode a buffer, one of the decoding functions should be called:
```ts
decodeVertexBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array, filter?: string) => void;
decodeIndexBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array) => void;
decodeIndexSequence: (target: Uint8Array, count: number, size: number, source: Uint8Array) => void;
```
The `source` should contain the data encoded using meshopt codecs; `count` represents the number of elements (attributes or indices); `size` represents the size of each element and should be divisible by 4 for `decodeVertexBuffer` and equal to 2 or 4 for the index decoders. `target` must be `count * size` bytes.
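The sizing rules above can be sketched as follows; the counts and strides here are hypothetical, chosen only to illustrate how `target` is allocated before calling `decodeVertexBuffer`:

```ts
// Hypothetical vertex stream: 1024 elements, 16 bytes each.
const count = 1024;
const size = 16; // must be divisible by 4 for vertex decoding
const target = new Uint8Array(count * size); // exactly count * size bytes

// target would then be passed as the first argument to
// MeshoptDecoder.decodeVertexBuffer(target, count, size, source)
```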
Given a valid encoded buffer and the correct input parameters, these functions always succeed; they only fail if the input data is malformed.
When decoding attribute (vertex) data, additionally one of the decoding filters can be applied to further post-process the decoded data. `filter` must be equal to `"OCTAHEDRAL"`, `"QUATERNION"` or `"EXPONENTIAL"` to activate this extra step. The description of filters can be found in [the specification for EXT_meshopt_compression](https://github.com/KhronosGroup/glTF/blob/master/extensions/2.0/Vendor/EXT_meshopt_compression/README.md).
To simplify the decoding further, a wrapper function is provided that automatically calls the correct version of the decoding based on `mode` - which should be `"ATTRIBUTES"`, `"TRIANGLES"` or `"INDICES"`. The difference in terminology is due to the fact that the JavaScript API uses the terms established in the glTF extension, whereas the function names match that of the meshoptimizer C++ API.
```ts
decodeGltfBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array, mode: string, filter?: string) => void;
```
Note that all functions above run synchronously; sometimes decoding large buffers takes time, so this library provides support for asynchronous decoding
using WebWorkers via the following API; `useWorkers` must be called once at startup to create the desired number of workers:
```ts
useWorkers: (count: number) => void;
decodeGltfBufferAsync: (count: number, size: number, source: Uint8Array, mode: string, filter?: string) => Promise<Uint8Array>;
```
## Encoder
`MeshoptEncoder` (`meshopt_encoder.js`) implements data preprocessing and compression of attribute and index buffers. It can be used to compress data that can be decompressed using the decoder module - note that the encoding process is more complicated and nuanced. It is typically split into three steps:
1. Pre-process the mesh to improve index and vertex locality which increases compression ratio
2. Quantize the data, either manually using integer or normalized integer format as a target, or using filter encoders
3. Encode the data
Step 1 is optional but highly recommended for triangle meshes; it can be omitted when compressing data with a predefined order such as animation keyframes.
Step 2 is the only lossy step in this process; without step 2, encoding will retain all semantics of the input exactly which can result in compressed data that is too large.
To reverse the process, the decoder is used to reverse step 3 and (optionally) step 2; the resulting data can typically be fed directly to the GPU. Note that the output of step 3 can also be further compressed in transport using a general-purpose compression algorithm such as Deflate.
To pre-process the mesh, the following function should be called with the input index buffer:
```ts
reorderMesh: (indices: Uint32Array, triangles: boolean, optsize: boolean) => [Uint32Array, number];
```
The function optimizes the input array for locality of reference (make sure to pass `triangles=true` for triangle lists, and `false` otherwise). `optsize` chooses whether the order should be optimal for transmission size (recommended for the Web) or for GPU rendering performance. The function changes the `indices` array in place and returns a remap array along with the total number of unique vertices.
After this function returns, to maintain correct rendering the application should reorder all vertex streams - including morph targets if applicable - according to the remap array. For each original index, remap array contains the new location for that index (or `0xffffffff` if the value is unused), so the remapping pseudocode looks like this:
```ts
let newvertices = new VertexArray(unique); // unique is returned by reorderMesh
for (let i = 0; i < oldvertices.length; ++i)
if (remap[i] != 0xffffffff)
newvertices[remap[i]] = oldvertices[i];
```
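A runnable version of the pseudocode above for a single `Float32Array` position stream (3 floats per vertex; the input values are illustrative) might look like this:

```ts
// Applies the remap array returned by reorderMesh to one vertex stream.
// For each original vertex i, remap[i] is its new slot (or 0xffffffff if unused).
function remapVertices(oldVertices: Float32Array, remap: Uint32Array, unique: number, stride: number): Float32Array {
	const newVertices = new Float32Array(unique * stride);
	for (let i = 0; i < remap.length; ++i) {
		if (remap[i] != 0xffffffff) {
			newVertices.set(oldVertices.subarray(i * stride, i * stride + stride), remap[i] * stride);
		}
	}
	return newVertices;
}

// Three input vertices; vertex 1 is unused after reordering.
const positions = new Float32Array([0, 0, 0, 9, 9, 9, 1, 1, 1]);
const remap = new Uint32Array([0, 0xffffffff, 1]);
const result = remapVertices(positions, remap, 2, 3);
// result contains [0, 0, 0, 1, 1, 1]
```

The same function would be applied to every vertex stream, including morph targets.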
When the input is a point cloud rather than a triangle mesh, it is recommended to reorder the points using a specialized function that performs spatial sorting; this can significantly improve the compression ratio achieved by the subsequent processing:
```ts
reorderPoints: (positions: Float32Array, positions_stride: number) => Uint32Array;
```
This function returns a remap array just like `reorderMesh`, so the vertices need to be reordered accordingly for every vertex stream - the `positions` input is not modified. Note that it assumes no index buffer is provided, as it is redundant for point clouds.
To quantize the attribute data (whether it represents a mesh component or something else like a rotation quaternion for a bone), typically some data-specific analysis should be performed to determine the optimal quantization strategy. For linear data such as positions or texture coordinates remapping the input range to 0..1 and quantizing the resulting integer using fixed-point encoding with a given number of bits stored in a 16-bit or 8-bit integer is recommended; however, this is not always best for compression ratio for data with complex cross-component dependencies.
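A minimal sketch of the manual quantization described above, remapping linear data to `0..1` and storing it as 16-bit fixed-point integers (the input values, range, and bit count are illustrative):

```ts
// Quantizes each component to `bits` of precision stored in a Uint16Array.
// Assumes the caller has computed min and extent (extent > 0) over the data.
function quantizePositions(positions: Float32Array, min: number, extent: number, bits: number): Uint16Array {
	const scale = (1 << bits) - 1;
	const out = new Uint16Array(positions.length);
	for (let i = 0; i < positions.length; ++i) {
		const normalized = (positions[i] - min) / extent; // remap to 0..1
		out[i] = Math.round(normalized * scale);
	}
	return out;
}

const q = quantizePositions(new Float32Array([0, 5, 10]), 0, 10, 16);
// q contains [0, 32768, 65535]
```

The quantized result can then be padded and passed to `encodeVertexBuffer` as described later in this document's encoding section.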
To that end, four filter encoders are provided: octahedral (optimal for normal or tangent data), quaternion (optimal for unit-length quaternions), exponential (optimal for compressing floating-point vectors) and color (optimal for RGBA vertex colors). The quaternion and exponential filters are recommended for animation data, and the exponential filter can additionally be used to quantize any floating-point vertex attribute for which integer quantization is not sufficiently precise.
```ts
encodeFilterOct: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
encodeFilterQuat: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
encodeFilterExp: (source: Float32Array, count: number, stride: number, bits: number, mode?: string) => Uint8Array;
encodeFilterColor: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
```
All these functions take a source floating point buffer as an input, and perform a complex transformation that, when reversed by a decoder, results in an optimally quantized decompressed output. Because of this these functions assume specific configuration of input and output data:
- `encodeFilterOct` takes each 4 floats from the source array (for a total of `count` 4-vectors), treats them as a unit vector (XYZ) plus a fourth component in -1..1 (W), and encodes them into `stride` bytes in a way that, when decoded, the result is stored as a normalized signed 4-vector. `stride` must be 4 (in which case the round-trip result is 4 8-bit normalized values) or 8 (in which case the round-trip result is 4 16-bit normalized values). This encoding is recommended for normals (with stride=4 for medium quality and 8 for high quality output) and tangents (with stride=4 providing enough quality in all cases; note that the fourth component is preserved in case it stores tangent frame winding). `bits` represents the desired precision of each component and must be in `[2..8]` range if `stride=4` and `[2..16]` range if `stride=8`.
- `encodeFilterQuat` takes each 4 floats from the source array (for a total of `count` 4-vectors), treats them as a unit quaternion, and encodes them into `stride` bytes in a way that, when decoded, the result is stored as a normalized signed 4-vector representing the same rotation as the source quaternion. `stride` must be 8 (the round-trip result is 4 16-bit normalized values). `bits` represents the desired precision of each component and must be in `[4..16]` range, although using fewer than 9-10 bits is likely to lead to significant deviation in rotations.
- `encodeFilterExp` takes each K floats from the source array (where `K=stride/4`, for a total of `count` K-vectors), and encodes them into `stride` bytes in a way that, when decoded, the result is stored as K single-precision floating point values. This may seem redundant but it allows trading some precision for a higher compression ratio, due to the reduced precision of stored components (controlled by `bits`, which must be in `[1..24]` range) and a shared exponent encoding used by the function.
The `mode` parameter influences the exponent sharing and provides a tradeoff between compressed size and quality for various use cases; it can be one of `'Separate'`, `'SharedVector'`, `'SharedComponent'` and `'Clamped'` (defaulting to `'SharedVector'`).
- `encodeFilterColor` takes each 4 floats from the source array (for a total of `count` 4-vectors), treats them as an RGBA color with each component from 0..1, and encodes them into `stride` bytes in a way that, when decoded, the result is stored as a normalized unsigned 4-vector. `stride` must be 4 (in which case the round-trip result is 4 8-bit normalized values) or 8 (in which case the round-trip result is 4 16-bit normalized values). This encoding is recommended for colors (with stride=4 for medium quality and 8 for high quality output). `bits` represents the desired precision of each component and must be in `[2..8]` range if `stride=4` and `[2..16]` range if `stride=8`.
Note that in all cases, using the highest `bits` value allowed by the output `stride` won't change the size of the output array (which is always `count * stride` bytes), but it *will* reduce compression efficiency, so using the lowest acceptable `bits` value is recommended. When multiple parts of the data require different levels of precision, the encode filters can be called multiple times, and the output of the same filter called with the same `stride` can be concatenated even if `bits` differ.
After data is quantized using filter encoding or manual quantization, the result should be compressed using one of the following functions that mirror the interface of the decoding functions described above:
```ts
encodeVertexBuffer: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeVertexBufferLevel: (source: Uint8Array, count: number, size: number, level: number, version?: number) => Uint8Array;
encodeIndexBuffer: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeIndexSequence: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeGltfBuffer: (source: Uint8Array, count: number, size: number, mode: string) => Uint8Array;
```
`size` is the size of each element (vertex or index) in bytes; it must be divisible by 4 for attribute/vertex encoding and must be equal to 2 or 4 for index encoding; additionally, index buffer encoding assumes triangle lists as an input, so `count` must be divisible by 3.
Note that the source is specified as a byte array; for example, to compress a position stream quantized to 16-bit integers with 5 vertices, `source` must have a length of `5 * 8 = 40` bytes (8 bytes for each position: 3\*2 bytes of data and 2 bytes of padding to conform to alignment requirements), `count` must be 5 and `size` must be 8. When padding data to the alignment boundary, make sure to use 0 as padding bytes for optimal compression.
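The 16-bit position example above can be sketched as follows (the position values are illustrative):

```ts
// 5 vertices, each stored as 3 16-bit components plus 2 bytes of zero padding.
const vertexCount = 5;
const vertexSize = 8; // 3*2 bytes of data + 2 bytes of padding
const source = new Uint8Array(vertexCount * vertexSize); // zero-initialized, so padding is already 0
const view = new Int16Array(source.buffer);
for (let v = 0; v < vertexCount; ++v) {
	view[v * 4 + 0] = 100 * v; // x
	view[v * 4 + 1] = 200 * v; // y
	view[v * 4 + 2] = 300 * v; // z
	// view[v * 4 + 3] stays 0 (padding)
}
// source would then be passed to encodeVertexBuffer(source, 5, 8)
```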
When interleaved vertex data is compressed, `encodeVertexBuffer` can be called with the full size of a single interleaved vertex; however, when compressing deinterleaved data, note that `encodeVertexBuffer` should be called on each component individually if the strides of different streams are different.
By default, `encodeVertexBuffer` uses v1 version of the encoding; this encoding is *not* compatible with `EXT_meshopt_compression` glTF extension but results in higher compression ratios. To encode data compatible with `EXT_meshopt_compression`, use `encodeVertexBufferLevel` with version=0, or - preferably - `encodeGltfBuffer`, which defaults to v0 (but can also be used to encode v1 content by passing version=1).
## Simplifier
`MeshoptSimplifier` (`meshopt_simplifier.js`) implements mesh simplification, producing a mesh with fewer triangles/points that resembles the original mesh in its appearance. The simplification algorithms are lossy and may result in significant change in appearance, but can often be used without visible visual degradation on high-poly input meshes or for distant level-of-detail variants.
To simplify the mesh, the following function needs to be called first:
```ts
simplify(indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number, target_index_count: number, target_error: number, flags?: Flags[]) => [Uint32Array, number];
```
Given an input triangle mesh represented by an index buffer and a position buffer, the algorithm tries to simplify the mesh down to the target index count while maintaining the appearance. For meshes with inconsistent topology or many seams, such as faceted meshes, this can result in the simplifier getting "stuck" and being unable to simplify the mesh fully. Therefore it's critical that identical vertices are "welded" together, that is, the input vertex buffer does not contain duplicates. Additionally, it may be possible to preprocess the index buffer to discard any vertex attributes that aren't critical and can be rebuilt later.
Target error is an approximate measure of the deviation from the original mesh using distance normalized to `[0..1]` range (e.g. `1e-2` means that the simplifier will try to keep the error below 1% of the mesh extents). Note that the simplifier attempts to produce the requested number of indices at minimal error, but because of topological restrictions and the error limit it is not guaranteed to reach the target index count and can stop earlier.
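To make the relative error concrete: the scaling factor is what `getScale` (described later in this section) returns; the sketch below approximates it as the largest axis extent of the bounding box, which is a simplification of the library's computation, with illustrative input data:

```ts
// Approximates the mesh scale as the largest extent of the axis-aligned
// bounding box, to convert a relative error into world units.
function approximateScale(positions: Float32Array, stride: number): number {
	const min = [Infinity, Infinity, Infinity];
	const max = [-Infinity, -Infinity, -Infinity];
	for (let i = 0; i + 2 < positions.length; i += stride) {
		for (let k = 0; k < 3; ++k) {
			min[k] = Math.min(min[k], positions[i + k]);
			max[k] = Math.max(max[k], positions[i + k]);
		}
	}
	return Math.max(max[0] - min[0], max[1] - min[1], max[2] - min[2]);
}

const positions = new Float32Array([0, 0, 0, 10, 2, 1]);
const scale = approximateScale(positions, 3); // 10
const targetError = 1e-2;
// the simplifier aims to keep deviation below targetError * scale = 0.1 world units
```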
The algorithm uses position data stored in a strided array; `vertex_positions_stride` represents the distance between subsequent positions in `Float32` units and should typically be set to 3. If the input position data is quantized, it's necessary to dequantize it so that the algorithm can estimate the position error correctly. While the algorithm doesn't use other attributes like normals/texture coordinates, it automatically recognizes and preserves attribute discontinuities based on index data. Because of this, for the algorithm to function well, the mesh vertices should be unique (de-duplicated).
Upon completion, the function returns the new index buffer as well as the resulting appearance error. The index buffer can be used to render the simplified mesh with the same vertex buffer(s) as the original one, including non-positional attributes. For example, `simplify` can be called multiple times with different target counts/errors, and the application can select the appropriate index buffer to render for the mesh at runtime to implement level of detail.
To control behavior of the algorithm more precisely, `flags` may specify an array of strings that enable various additional options:
- `'LockBorder'` locks the vertices that lie on the topological border of the mesh in place such that they don't move during simplification. This can be valuable to simplify independent chunks of a mesh, for example terrain, to ensure that individual levels of detail can be stitched together later without gaps.
- `'ErrorAbsolute'` changes the error metric from relative to absolute both for the input error limit as well as for the resulting error. This can be used instead of `getScale`.
- `'Sparse'` improves simplification performance assuming input indices are a sparse subset of the mesh. This can be useful when simplifying small mesh subsets independently. For consistency, it is recommended to use absolute errors when sparse simplification is desired.
- `'Prune'` allows removal of isolated components regardless of the topological restrictions inside the component. This is generally recommended for full-mesh simplification as it can improve quality and reduce triangle count; note that with this option, triangles connected to locked vertices may be removed as part of their component.
In addition to the `Prune` flag, you can explicitly prune isolated components under a target threshold by calling the `simplifyPrune` function:
```ts
simplifyPrune: (indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number, target_error: number) => Uint32Array;
```
This can be done before regular simplification or as the only step, which is useful for scenarios like isosurface cleanup.
While `simplify` is aware of attribute discontinuities by default (and infers them through the supplied index buffer) and tries to preserve them, it can be useful to provide information about attribute values. This allows the simplifier to take attribute error into account which can improve shading (by using vertex normals), texture deformation (by using texture coordinates), and may be necessary to preserve vertex colors when textures are not used in the first place. This can be done by using a variant of the simplification function that takes attribute values and weight factors, `simplifyWithAttributes`:
```ts
simplifyWithAttributes: (indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number, vertex_attributes: Float32Array, vertex_attributes_stride: number, attribute_weights: number[], vertex_lock: Uint8Array | null, target_index_count: number, target_error: number, flags?: Flags[]) => [Uint32Array, number];
```
This function takes an additional `vertex_attributes` buffer that contains all the attributes to be used. The `attribute_weights` array contains a weight for each attribute, which is used to balance the importance of each attribute during simplification. For normalized attributes like normals and vertex colors, a weight around 1.0 is usually appropriate; internally, a change of `1/weight` in attribute value over a distance `d` is approximately equivalent to a change of `d` in position. Using higher weights may be appropriate to preserve attribute quality at the cost of position quality. If the attribute has a different scale (e.g. unnormalized vertex colors in [0..255] range), the weight should be divided by the scaling factor (1/255 in this example).
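A hypothetical weight setup following the guidance above, for vertices carrying a normal (normalized) and an unnormalized color in the `[0..255]` range; one weight is supplied per attribute component:

```ts
// Normals are already normalized, so a weight of 1.0 is appropriate;
// colors in [0..255] get their weight divided by the scaling factor (255).
const normalWeight = 1.0;
const colorWeight = 1.0 / 255;
const attribute_weights = [normalWeight, normalWeight, normalWeight, colorWeight, colorWeight, colorWeight];
// vertex_attributes would then interleave [nx, ny, nz, r, g, b] per vertex,
// with vertex_attributes_stride = 6
```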
The optional `vertex_lock` parameter can be used to lock some vertices in place, preventing them from being moved during simplification. This is a binary array of the same length as the number of vertices, where `1` means that the vertex is locked and `0` means that it is free to move. This can be used to preserve seams or other important features of the mesh.
When the resulting mesh is stored, it might be desirable to remove the redundant vertices from the attribute buffers instead of simply using the original vertex data with the smaller index buffer. For that purpose, the simplifier module provides the `compactMesh` function, which is similar to the `reorderMesh` function that the encoder provides, but doesn't perform extra optimizations and merely prepares a new vertex order that can be used to create new, smaller, vertex buffers:
```ts
compactMesh: (indices: Uint32Array) => [Uint32Array, number];
```
The simplification algorithm uses relative errors for input and output; to convert these errors to absolute units, they need to be multiplied by the scaling factor which depends on the mesh geometry and can be computed by calling the following function with the position data:
```ts
getScale: (vertex_positions: Float32Array, vertex_positions_stride: number) => number;
```
The algorithms `simplify` and `simplifyWithAttributes` work on triangle meshes. `MeshoptSimplifier` additionally provides an algorithm to simplify point clouds, with optional per-point color support:
```ts
simplifyPoints: (vertex_positions: Float32Array, vertex_positions_stride: number, target_vertex_count: number, vertex_colors?: Float32Array, vertex_colors_stride?: number, color_weight?: number) => Uint32Array;
```
`vertex_colors` is an optional buffer containing RGB colors, with 3 values per point; `color_weight` can be used to balance the importance of color preservation with position preservation, and can be set to `1.0` if the input colors are in `[0..1]` range.
The resulting indices can be used to render the simplified point cloud; similarly to triangle simplification, to reduce the memory footprint, the point cloud can be reindexed using the remap table returned by `compactMesh`.
## Clusterizer
`MeshoptClusterizer` (`meshopt_clusterizer.js`) implements meshlet generation and optimization.
To split a triangle mesh into clusters, call `buildMeshlets`, which tries to balance topological efficiency (by maximizing vertex reuse inside meshlets) with culling efficiency.
```ts
buildMeshlets(indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number, max_vertices: number, max_triangles: number, cone_weight?: number) => MeshletBuffers;
```
The algorithm uses position data stored in a strided array; `vertex_positions_stride` represents the distance between subsequent positions in `Float32` units.
The maximum number of triangles and vertices per meshlet can be controlled via the `max_triangles` and `max_vertices` parameters; `max_vertices` must not be greater than 255 and `max_triangles` must not be greater than 512.
Additionally, if cluster cone culling is to be used, `buildMeshlets` allows specifying a `cone_weight` as a value between 0 and 1 to balance cone culling efficiency against other forms of culling. By default, `cone_weight` is set to 0.
For finer control over triangle counts, use `buildMeshletsFlex`, which accepts minimum and maximum triangle limits and an optional `split_factor` to nudge large clusters to split sooner.
```ts
buildMeshletsFlex(indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number, max_vertices: number, min_triangles: number, max_triangles: number, cone_weight?: number, split_factor?: number) => MeshletBuffers;
```
To favor spatial splits for ray tracing, `buildMeshletsSpatial` keeps the same controls but replaces cone weighting with `fill_weight` to trade off cluster fullness against SAH cost.
```ts
buildMeshletsSpatial(indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number, max_vertices: number, min_triangles: number, max_triangles: number, fill_weight?: number) => MeshletBuffers;
```
All meshlets produced by these builders are implicitly optimized for better triangle and vertex locality.
The algorithm returns the meshlet data as packed buffers:
```ts
const buffers = MeshoptClusterizer.buildMeshlets(indices, positions, stride, /* args */);
console.log(buffers.meshlets); // prints the raw packed Uint32Array containing the meshlet data, i.e., the indices into the vertices and triangles array
console.log(buffers.vertices); // prints the raw packed Uint32Array containing the indices into the original mesh's vertices
console.log(buffers.triangles); // prints the raw packed Uint8Array containing the indices into the vertices array
console.log(buffers.meshletCount); // prints the number of meshlets - this is not the same as buffers.meshlets.length because each meshlet consists of 4 unsigned 32-bit integers
```
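The packed layout can be illustrated with a small sketch that reads one meshlet header out of the `meshlets` array, assuming the 4-uint layout used by the C API (vertex offset, triangle offset, vertex count, triangle count); `extractMeshlet` performs this bookkeeping for you, and the packed values below are fabricated for illustration:

```ts
// Reads the 4-uint header of meshlet `index` from the packed meshlets array.
function readMeshletHeader(meshlets: Uint32Array, index: number) {
	const base = index * 4;
	return {
		vertexOffset: meshlets[base + 0],   // start within buffers.vertices
		triangleOffset: meshlets[base + 1], // start within buffers.triangles
		vertexCount: meshlets[base + 2],
		triangleCount: meshlets[base + 3],
	};
}

// Hypothetical packed data for two meshlets.
const packed = new Uint32Array([0, 0, 64, 124, 64, 372, 58, 110]);
const second = readMeshletHeader(packed, 1);
// second.vertexOffset is 64, second.triangleCount is 110
```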
Individual meshlets can be extracted from the packed buffers using `extractMeshlet`. The memory of the returned `Meshlet` object's `vertices` and `triangles` arrays is backed by the `MeshletBuffers` object.
```ts
const buffers = MeshoptClusterizer.buildMeshlets(indices, positions, stride, /* args */);
const meshlet = MeshoptClusterizer.extractMeshlet(buffers, 0);
console.log(meshlet.vertices); // prints the packed Uint32Array of the first meshlet's vertex indices, i.e., indices into the original mesh's vertex buffer
console.log(meshlet.triangles); // prints the packed Uint8Array of the first meshlet's indices into its own vertices array
console.log(MeshoptClusterizer.extractMeshlet(buffers, 0).triangles[0] === meshlet.triangles[0]) // prints true
meshlet.triangles.set([123], 0);
console.log(MeshoptClusterizer.extractMeshlet(buffers, 0).triangles[0] === meshlet.triangles[0]) // still prints true
```
After generating the meshlet data, it's also possible to generate extra culling data for one or more meshlets:
```ts
computeMeshletBounds(buffers: MeshletBuffers, vertex_positions: Float32Array, vertex_positions_stride: number) => Bounds | Bounds[];
```
If `buffers` contains more than one meshlet, `computeMeshletBounds` returns an array of `Bounds`. Otherwise, a single `Bounds` object is returned.
```ts
const buffers = MeshoptClusterizer.buildMeshlets(indices, positions, stride, /* args */);
const bounds = MeshoptClusterizer.computeMeshletBounds(buffers, positions, stride);
console.log(bounds[0].centerX, bounds[0].centerY, bounds[0].centerZ); // prints the center of the first meshlet's bounding sphere
console.log(bounds[0].radius); // prints the radius of the first meshlet's bounding sphere
console.log(bounds[0].coneApexX, bounds[0].coneApexY, bounds[0].coneApexZ); // prints the apex of the first meshlet's normal cone
console.log(bounds[0].coneAxisX, bounds[0].coneAxisY, bounds[0].coneAxisZ); // prints the axis of the first meshlet's normal cone
console.log(bounds[0].coneCutoff); // prints the cutoff angle of the first meshlet's normal cone
```
It is also possible to compute bounds of a vertex cluster that is not generated by `MeshoptClusterizer` using `computeClusterBounds`. Like `buildMeshlets`, this algorithm takes vertex indices and a strided vertex positions array with a vertex stride in `Float32` units as input.
```ts
computeClusterBounds(indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number) => Bounds;
```
Finally, it is possible to compute spherical bounds of an arbitrary set of points, which can be useful to compute bounds for arbitrary mesh subsets. Each point can have an optional radius; this can be used to merge the spherical bounds of multiple clusters. The inputs are provided as strided arrays with the stride in `Float32` units.
```ts
computeSphereBounds: (positions: Float32Array, positions_stride: number, radii?: Float32Array, radii_stride?: number) => Bounds;
```
## License
This library is available to anybody free of charge, under the terms of the MIT License (see LICENSE.md).

node_modules/meshoptimizer/benchmark.js generated vendored Normal file

@@ -0,0 +1,150 @@
import { MeshoptEncoder as encoder } from './meshopt_encoder.js';
import { MeshoptDecoder as decoder } from './meshopt_decoder.mjs';
import { performance } from 'perf_hooks';
process.on('unhandledRejection', (error) => {
console.log('unhandledRejection', error);
process.exit(1);
});
function bytes(view) {
return new Uint8Array(view.buffer, view.byteOffset, view.byteLength);
}
var tests = {
roundtripVertexBuffer: function () {
var N = 1024 * 1024;
var data = new Uint8Array(N * 16);
var lcg = 1;
for (var i = 0; i < N * 16; ++i) {
// minstd_rand
lcg = (lcg * 48271) % 2147483647;
var k = i % 16;
if (k <= 8) data[i] = lcg & ((1 << k) - 1);
else data[i] = i & ((1 << (k - 8)) - 1);
}
var decoded = new Uint8Array(N * 16);
var t0 = performance.now();
var encoded = encoder.encodeVertexBuffer(data, N, 16);
var t1 = performance.now();
decoder.decodeVertexBuffer(decoded, N, 16, encoded);
var t2 = performance.now();
return { encodeVertex: t1 - t0, decodeVertex: t2 - t1, bytes: N * 16 };
},
roundtripIndexBuffer: function () {
var N = 1024 * 1024;
var data = new Uint32Array(N * 3);
for (var i = 0; i < N * 3; i += 6) {
var v = i / 6;
data[i + 0] = v;
data[i + 1] = v + 1;
data[i + 2] = v + 2;
data[i + 3] = v + 2;
data[i + 4] = v + 1;
data[i + 5] = v + 3;
}
var decoded = new Uint32Array(data.length);
var t0 = performance.now();
var encoded = encoder.encodeIndexBuffer(bytes(data), data.length, 4);
var t1 = performance.now();
decoder.decodeIndexBuffer(bytes(decoded), data.length, 4, encoded);
var t2 = performance.now();
return { encodeIndex: t1 - t0, decodeIndex: t2 - t1, bytes: N * 12 };
},
decodeGltf: function () {
var N = 1024 * 1024;
var data = new Uint8Array(N * 16);
for (var i = 0; i < N * 16; i += 4) {
data[i + 0] = 0;
data[i + 1] = (i % 16) * 1;
data[i + 2] = (i % 16) * 2;
data[i + 3] = (i % 16) * 8;
}
var decoded = new Uint8Array(N * 16);
var filters = [
{ name: 'none', filter: 'NONE', stride: 16 },
{ name: 'oct8', filter: 'OCTAHEDRAL', stride: 4 },
{ name: 'oct12', filter: 'OCTAHEDRAL', stride: 8 },
{ name: 'quat12', filter: 'QUATERNION', stride: 8 },
{ name: 'col8', filter: 'COLOR', stride: 4 },
{ name: 'col12', filter: 'COLOR', stride: 8 },
{ name: 'exp', filter: 'EXPONENTIAL', stride: 16 },
];
var results = { bytes: N * 16 };
for (var i = 0; i < filters.length; ++i) {
var f = filters[i];
var encoded = encoder.encodeVertexBuffer(data, (N * 16) / f.stride, f.stride);
var t0 = performance.now();
decoder.decodeGltfBuffer(decoded, (N * 16) / f.stride, f.stride, encoded, 'ATTRIBUTES', f.filter);
var t1 = performance.now();
results[f.name] = t1 - t0;
}
return results;
},
};
Promise.all([encoder.ready, decoder.ready]).then(() => {
var reps = 10;
var data = {};
for (var key in tests) {
data[key] = tests[key]();
}
for (var i = 1; i < reps; ++i) {
for (var key in tests) {
var nd = tests[key]();
var od = data[key];
for (var idx in nd) {
od[idx] = Math.min(od[idx], nd[idx]);
}
}
}
for (var key in tests) {
var rep = key;
rep += ':\n';
for (var idx in data[key]) {
if (idx != 'bytes') {
rep += idx;
rep += ' ';
rep += data[key][idx].toFixed(3);
rep += ' ms (';
rep += ((data[key].bytes / 1e9 / data[key][idx]) * 1000).toFixed(3);
rep += ' GB/s)';
if (key == 'decodeGltf' && idx != 'none') {
rep += '; filter ';
rep += ((data[key].bytes / 1e9 / (data[key][idx] - data[key]['none'])) * 1000).toFixed(3);
rep += ' GB/s';
}
rep += '\n';
}
}
console.log(rep);
}
});

4
node_modules/meshoptimizer/index.d.ts generated vendored Normal file
View File

@@ -0,0 +1,4 @@
export * from './meshopt_encoder.js';
export * from './meshopt_decoder.js';
export * from './meshopt_simplifier.js';
export * from './meshopt_clusterizer.js';

4
node_modules/meshoptimizer/index.js generated vendored Normal file
View File

@@ -0,0 +1,4 @@
export * from './meshopt_encoder.js';
export * from './meshopt_decoder.mjs';
export * from './meshopt_simplifier.js';
export * from './meshopt_clusterizer.js';

69
node_modules/meshoptimizer/meshopt_clusterizer.d.ts generated vendored Normal file
View File

@@ -0,0 +1,69 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2025, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
export class Bounds {
centerX: number;
centerY: number;
centerZ: number;
radius: number;
coneApexX: number;
coneApexY: number;
coneApexZ: number;
coneAxisX: number;
coneAxisY: number;
coneAxisZ: number;
coneCutoff: number;
}
export class MeshletBuffers {
meshlets: Uint32Array;
vertices: Uint32Array;
triangles: Uint8Array;
meshletCount: number;
}
export class Meshlet {
vertices: Uint32Array;
triangles: Uint8Array;
}
export const MeshoptClusterizer: {
supported: boolean;
ready: Promise<void>;
buildMeshlets: (
indices: Uint32Array,
vertex_positions: Float32Array,
vertex_positions_stride: number,
max_vertices: number,
max_triangles: number,
cone_weight?: number
) => MeshletBuffers;
buildMeshletsFlex: (
indices: Uint32Array,
vertex_positions: Float32Array,
vertex_positions_stride: number,
max_vertices: number,
min_triangles: number,
max_triangles: number,
cone_weight?: number,
split_factor?: number
) => MeshletBuffers;
buildMeshletsSpatial: (
indices: Uint32Array,
vertex_positions: Float32Array,
vertex_positions_stride: number,
max_vertices: number,
min_triangles: number,
max_triangles: number,
fill_weight?: number
) => MeshletBuffers;
extractMeshlet: (buffers: MeshletBuffers, index: number) => Meshlet;
computeClusterBounds: (indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number) => Bounds;
computeMeshletBounds: (buffers: MeshletBuffers, vertex_positions: Float32Array, vertex_positions_stride: number) => Bounds[];
computeSphereBounds: (positions: Float32Array, positions_stride: number, radii?: Float32Array, radii_stride?: number) => Bounds;
};

387
node_modules/meshoptimizer/meshopt_clusterizer.js generated vendored Normal file

File diff suppressed because one or more lines are too long

135
node_modules/meshoptimizer/meshopt_clusterizer.test.js generated vendored Normal file
View File

@@ -0,0 +1,135 @@
import assert from 'assert/strict';
import { MeshoptClusterizer as clusterizer } from './meshopt_clusterizer.js';
process.on('unhandledRejection', (error) => {
console.log('unhandledRejection', error);
process.exit(1);
});
const cubeWithNormals = {
vertices: new Float32Array([
// n = (0, 0, 1)
-1.0, -1.0, 1.0, 0.0, 0.0, 1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -1.0, 1.0, 1.0, 0.0, 0.0, 1.0,
// n = (0, 0, -1)
-1.0, 1.0, -1.0, 0.0, 0.0, -1.0, 1.0, 1.0, -1.0, 0.0, 0.0, -1.0, 1.0, -1.0, -1.0, 0.0, 0.0, -1.0, -1.0, -1.0, -1.0, 0.0, 0.0, -1.0,
// n = (1, 0, 0)
1.0, -1.0, -1.0, 1.0, 0.0, 0.0, 1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, -1.0, 1.0, 1.0, 0.0, 0.0,
// n = (-1, 0, 0)
-1.0, -1.0, 1.0, -1.0, 0.0, 0.0, -1.0, 1.0, 1.0, -1.0, 0.0, 0.0, -1.0, 1.0, -1.0, -1.0, 0.0, 0.0, -1.0, -1.0, -1.0, -1.0, 0.0, 0.0,
// n = (0, 1, 0)
1.0, 1.0, -1.0, 0.0, 1.0, 0.0, -1.0, 1.0, -1.0, 0.0, 1.0, 0.0, -1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0,
// n = (0, -1, 0)
1.0, -1.0, 1.0, 0.0, -1.0, 0.0, -1.0, -1.0, 1.0, 0.0, -1.0, 0.0, -1.0, -1.0, -1.0, 0.0, -1.0, 0.0, 1.0, -1.0, -1.0, 0.0, -1.0, 0.0,
]),
indices: new Uint32Array([
// n = (0, 0, 1)
0, 1, 2, 2, 3, 0,
// n = (0, 0, -1)
4, 5, 6, 6, 7, 4,
// n = (1, 0, 0)
8, 9, 10, 10, 11, 8,
// n = (-1, 0, 0)
12, 13, 14, 14, 15, 12,
// n = (0, 1, 0)
16, 17, 18, 18, 19, 16,
// n = (0, -1, 0)
20, 21, 22, 22, 23, 20,
]),
vertexStride: 6, // in floats
};
const tests = {
buildMeshlets: function () {
const maxVertices = 4;
const buffers = clusterizer.buildMeshlets(cubeWithNormals.indices, cubeWithNormals.vertices, cubeWithNormals.vertexStride, maxVertices, 512);
const expectedVertices = [
new Uint32Array([6, 7, 4, 5]),
new Uint32Array([14, 15, 12, 13]),
new Uint32Array([2, 3, 0, 1]),
new Uint32Array([20, 21, 22, 23]),
new Uint32Array([10, 11, 8, 9]),
new Uint32Array([18, 19, 16, 17]),
];
const expectedTriangles = new Uint8Array([0, 1, 2, 2, 3, 0]);
assert.equal(buffers.meshletCount, 6);
for (let i = 0; i < buffers.meshletCount; ++i) {
const m = clusterizer.extractMeshlet(buffers, i);
assert.deepStrictEqual(m.vertices, expectedVertices[i]);
assert.deepStrictEqual(m.triangles, expectedTriangles);
}
},
computeClusterBounds: function () {
for (let i = 0; i < 6; ++i) {
const indexOffset = i * 6;
const normalOffset = i * 4 * cubeWithNormals.vertexStride;
const bounds = clusterizer.computeClusterBounds(
cubeWithNormals.indices.subarray(indexOffset, 6 + indexOffset),
cubeWithNormals.vertices,
cubeWithNormals.vertexStride
);
assert.deepStrictEqual(
new Int32Array([bounds.coneAxisX, bounds.coneAxisY, bounds.coneAxisZ]),
new Int32Array(cubeWithNormals.vertices.subarray(3 + normalOffset, 6 + normalOffset))
);
}
},
computeMeshletBounds: function () {
const maxVertices = 4;
const buffers = clusterizer.buildMeshlets(cubeWithNormals.indices, cubeWithNormals.vertices, cubeWithNormals.vertexStride, maxVertices, 512);
const expectedNormals = [
new Int32Array([0, 0, -1]),
new Int32Array([-1, 0, 0]),
new Int32Array([0, 0, 1]),
new Int32Array([0, -1, 0]),
new Int32Array([1, 0, 0]),
new Int32Array([0, 1, 0]),
];
const bounds = clusterizer.computeMeshletBounds(buffers, cubeWithNormals.vertices, cubeWithNormals.vertexStride);
assert(bounds.length === 6);
assert(bounds.length === buffers.meshletCount);
bounds.forEach((b, i) => {
const normal = new Int32Array([b.coneAxisX, b.coneAxisY, b.coneAxisZ]);
assert.deepStrictEqual(normal, expectedNormals[i]);
});
},
computeSphereBounds: function () {
// positions without per-point radii (tetrahedron)
const positions = new Float32Array([0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1]);
const bp = clusterizer.computeSphereBounds(positions, 3);
assert(Math.abs(bp.centerX - 0.5) < 1e-3);
assert(Math.abs(bp.centerY - 0.5) < 1e-3);
assert(Math.abs(bp.centerZ - 0.5) < 1e-3);
assert(bp.radius < 0.87);
// per-point radii passed as a separate array with stride 1 (last point has radius=3, enveloping the others)
const radii = new Float32Array([0, 1, 2, 3]);
const br = clusterizer.computeSphereBounds(positions, 3, radii, 1);
assert(Math.abs(br.centerX - 1.0) < 1e-3);
assert(Math.abs(br.centerY - 0.0) < 1e-3);
assert(Math.abs(br.centerZ - 1.0) < 1e-3);
assert(Math.abs(br.radius - 3.0) < 1e-3);
},
};
clusterizer.ready.then(() => {
var count = 0;
for (var key in tests) {
tests[key]();
count++;
}
console.log(count, 'tests passed');
});

199
node_modules/meshoptimizer/meshopt_decoder.cjs generated vendored Normal file

File diff suppressed because one or more lines are too long

15
node_modules/meshoptimizer/meshopt_decoder.d.ts generated vendored Normal file
View File

@@ -0,0 +1,15 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2025, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
export const MeshoptDecoder: {
supported: boolean;
ready: Promise<void>;
decodeVertexBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array, filter?: string) => void;
decodeIndexBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array) => void;
decodeIndexSequence: (target: Uint8Array, count: number, size: number, source: Uint8Array) => void;
decodeGltfBuffer: (target: Uint8Array, count: number, size: number, source: Uint8Array, mode: string, filter?: string) => void;
useWorkers: (count: number) => void;
decodeGltfBufferAsync: (count: number, size: number, source: Uint8Array, mode: string, filter?: string) => Promise<Uint8Array>;
};

196
node_modules/meshoptimizer/meshopt_decoder.mjs generated vendored Normal file

File diff suppressed because one or more lines are too long

320
node_modules/meshoptimizer/meshopt_decoder.test.js generated vendored Normal file
View File

@@ -0,0 +1,320 @@
import assert from 'assert/strict';
import { MeshoptDecoder as decoder } from './meshopt_decoder.mjs';
process.on('unhandledRejection', (error) => {
console.log('unhandledRejection', error);
process.exit(1);
});
var tests = {
decodeVertexBuffer: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x58, 0x57, 0x58, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00, 0x00, 0x58, 0x01, 0x08, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x17, 0x18, 0x17, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00,
0x00, 0x17, 0x01, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 44, 1, 44, 1, 0, 0, 0,
0, 244, 1, 244, 1,
]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(result, 4, 12, encoded);
assert.deepStrictEqual(result, expected);
},
decodeVertexBuffer_More: function () {
var encoded = new Uint8Array([
0xa0, 0x00, 0x01, 0x2a, 0xaa, 0xaa, 0xaa, 0x02, 0x04, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x44, 0x03, 0x00, 0x10, 0x10, 0x10, 0x10, 0x10,
0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 1, 2, 8, 0, 2, 4, 16, 0, 3, 6, 24, 0, 4, 8, 32, 0, 5, 10, 40, 0, 6, 12, 48, 0, 7, 14, 56, 0, 8, 16, 64, 0, 9, 18, 72, 0,
10, 20, 80, 0, 11, 22, 88, 0, 12, 24, 96, 0, 13, 26, 104, 0, 14, 28, 112, 0, 15, 30, 120,
]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(result, 16, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeVertexBuffer_Mode2: function () {
var encoded = new Uint8Array([
0xa0, 0x02, 0x08, 0x88, 0x88, 0x88, 0x88, 0x88, 0x88, 0x88, 0x02, 0x0a, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0x02, 0x0c, 0xcc, 0xcc,
0xcc, 0xcc, 0xcc, 0xcc, 0xcc, 0x02, 0x0e, 0xee, 0xee, 0xee, 0xee, 0xee, 0xee, 0xee, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 4, 5, 6, 7, 8, 10, 12, 14, 12, 15, 18, 21, 16, 20, 24, 28, 20, 25, 30, 35, 24, 30, 36, 42, 28, 35, 42, 49, 32, 40, 48, 56, 36,
45, 54, 63, 40, 50, 60, 70, 44, 55, 66, 77, 48, 60, 72, 84, 52, 65, 78, 91, 56, 70, 84, 98, 60, 75, 90, 105,
]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(result, 16, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeVertexBufferV1: function () {
var encoded = new Uint8Array([
0xa1, 0xee, 0xaa, 0xee, 0x00, 0x4b, 0x4b, 0x4b, 0x00, 0x00, 0x4b, 0x00, 0x00, 0x7d, 0x7d, 0x7d, 0x00, 0x00, 0x7d, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x62, 0x00, 0x62,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 44, 1, 44, 1, 0, 0, 0,
0, 244, 1, 244, 1,
]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(result, 4, 12, encoded);
assert.deepStrictEqual(result, expected);
},
decodeVertexBufferV1_Custom: function () {
var encoded = new Uint8Array([
0xa1, 0xd4, 0x94, 0xd4, 0x01, 0x0e, 0x00, 0x58, 0x57, 0x58, 0x02, 0x02, 0x12, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x58,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x7d, 0x7d,
0x7d, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x7d, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x62,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 44, 1, 44, 1, 0, 0, 0,
0, 244, 1, 244, 1,
]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(result, 4, 12, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexBuffer16: function () {
var encoded = new Uint8Array([
0xe0, 0xf0, 0x10, 0xfe, 0xff, 0xf0, 0x0c, 0xff, 0x02, 0x02, 0x02, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98,
0x01, 0x69, 0x00, 0x00,
]);
var expected = new Uint16Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var result = new Uint16Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 12, 2, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexBuffer32: function () {
var encoded = new Uint8Array([
0xe0, 0xf0, 0x10, 0xfe, 0xff, 0xf0, 0x0c, 0xff, 0x02, 0x02, 0x02, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98,
0x01, 0x69, 0x00, 0x00,
]);
var expected = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 12, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexBufferV1: function () {
var encoded = new Uint8Array([
0xe1, 0xf0, 0x10, 0xfe, 0x1f, 0x3d, 0x00, 0x0a, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98, 0x01, 0x69, 0x00,
0x00,
]);
var expected = new Uint32Array([0, 1, 2, 2, 1, 3, 0, 1, 2, 2, 1, 5, 2, 1, 4]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 15, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexBufferV1_More: function () {
var encoded = new Uint8Array([
0xe1, 0xf0, 0x10, 0xfe, 0xff, 0xf0, 0x0c, 0xff, 0x02, 0x02, 0x02, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98,
0x01, 0x69, 0x00, 0x00,
]);
var expected = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 12, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexBufferV1_3Edges: function () {
var encoded = new Uint8Array([
0xe1, 0xf0, 0x20, 0x30, 0x40, 0x00, 0x76, 0x87, 0x56, 0x67, 0x78, 0xa9, 0x86, 0x65, 0x89, 0x68, 0x98, 0x01, 0x69, 0x00, 0x00,
]);
var expected = new Uint32Array([0, 1, 2, 1, 0, 3, 2, 1, 4, 0, 2, 5]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexBuffer(new Uint8Array(result.buffer), 12, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeIndexSequence: function () {
var encoded = new Uint8Array([0xd1, 0x00, 0x04, 0xcd, 0x01, 0x04, 0x07, 0x98, 0x1f, 0x00, 0x00, 0x00, 0x00]);
var expected = new Uint32Array([0, 1, 51, 2, 49, 1000]);
var result = new Uint32Array(expected.length);
decoder.decodeIndexSequence(new Uint8Array(result.buffer), 6, 4, encoded);
assert.deepStrictEqual(result, expected);
},
decodeFilterOct8: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x07, 0x00, 0x00, 0x00, 0x1e, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x8b, 0x8c, 0xfd, 0x00, 0x01, 0x26, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x01, 0x7f, 0x00,
]);
var expected = new Uint8Array([0, 1, 127, 0, 0, 159, 82, 1, 255, 1, 127, 0, 1, 130, 241, 1]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 4, 4, encoded, /* filter= */ 'OCTAHEDRAL');
assert.deepStrictEqual(result, expected);
},
decodeFilterOct12: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x0f, 0x00, 0x00, 0x00, 0x3d, 0x5a, 0x01, 0x0f, 0x00, 0x00, 0x00, 0x0e, 0x0d, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x9a, 0x99, 0x26,
0x01, 0x3f, 0x00, 0x00, 0x00, 0x0e, 0x0d, 0x0a, 0x00, 0x00, 0x01, 0x26, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0xff, 0x07,
0x00, 0x00,
]);
var expected = new Uint16Array([0, 16, 32767, 0, 0, 32621, 3088, 1, 32764, 16, 471, 0, 307, 28541, 16093, 1]);
var result = new Uint16Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 4, 8, encoded, /* filter= */ 'OCTAHEDRAL');
assert.deepStrictEqual(result, expected);
},
decodeFilterQuat12: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x0f, 0x00, 0x00, 0x00, 0x3d, 0x5a, 0x01, 0x0f, 0x00, 0x00, 0x00, 0x0e, 0x0d, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x9a, 0x99, 0x26,
0x01, 0x3f, 0x00, 0x00, 0x00, 0x0e, 0x0d, 0x0a, 0x00, 0x00, 0x01, 0x2a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0xfc, 0x07,
]);
var expected = new Uint16Array([32767, 0, 11, 0, 0, 25013, 0, 21166, 11, 0, 23504, 22830, 158, 14715, 0, 29277]);
var result = new Uint16Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 4, 8, encoded, /* filter= */ 'QUATERNION');
assert.deepStrictEqual(result, expected);
},
decodeFilterExp: function () {
var encoded = new Uint8Array([
0xa0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0xff, 0xf7, 0xff, 0xff, 0x02, 0xff,
0xff, 0x7f, 0xfe,
]);
var expected = new Uint32Array([0, 0x3fc00000, 0xc2100000, 0x49fffffe]);
var result = new Uint32Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 1, 16, encoded, /* filter= */ 'EXPONENTIAL');
assert.deepStrictEqual(result, expected);
},
decodeFilterColor8: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x7e, 0x7d, 0x4c, 0x01, 0x3f, 0x00, 0x00, 0x00, 0xfd, 0xfd, 0xfe, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x83,
0x82, 0x80, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x7d, 0x3f, 0x7e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x7f, 0xc1, 0xff,
]);
var expected = new Uint8Array([254, 1, 0, 255, 0, 254, 0, 128, 1, 0, 255, 64, 102, 102, 102, 191]);
var result = new Uint8Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 4, 4, encoded, /* filter= */ 'COLOR');
assert.deepStrictEqual(result, expected);
},
decodeFilterColor12: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x1b, 0x00, 0x00, 0x00, 0xcc, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x06, 0x05, 0x04, 0x01, 0x29, 0x00, 0x00, 0x00, 0x01, 0x3f, 0x00,
0x00, 0x00, 0x0d, 0x0f, 0x10, 0x01, 0x38, 0x00, 0x00, 0x00, 0x03, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x16, 0x15, 0x08, 0x01, 0x21, 0x00, 0x00,
0x00, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x05, 0x03, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0xff, 0x07, 0x01, 0xfc, 0xff, 0x0f,
]);
var expected = new Uint16Array([65519, 16, 0, 65535, 0, 65519, 0, 32776, 16, 0, 65535, 16388, 26214, 26214, 26214, 49147]);
var result = new Uint16Array(expected.length);
decoder.decodeVertexBuffer(new Uint8Array(result.buffer), 4, 8, encoded, /* filter= */ 'COLOR');
assert.deepStrictEqual(result, expected);
},
decodeGltfBuffer: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x58, 0x57, 0x58, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00, 0x00, 0x58, 0x01, 0x08, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x17, 0x18, 0x17, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00,
0x00, 0x17, 0x01, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 44, 1, 44, 1, 0, 0, 0,
0, 244, 1, 244, 1,
]);
var result = new Uint8Array(expected.length);
decoder.decodeGltfBuffer(result, 4, 12, encoded, /* mode= */ 'ATTRIBUTES');
assert.deepStrictEqual(result, expected);
},
decodeGltfBufferAsync: function () {
var encoded = new Uint8Array([
0xa0, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x58, 0x57, 0x58, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00, 0x00, 0x58, 0x01, 0x08, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x3f, 0x00, 0x00, 0x00, 0x17, 0x18, 0x17, 0x01, 0x26, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00,
0x00, 0x17, 0x01, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]);
var expected = new Uint8Array([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 0, 0, 0, 0, 44, 1, 0, 0, 0, 0, 0, 0, 244, 1, 44, 1, 44, 1, 0, 0, 0,
0, 244, 1, 244, 1,
]);
decoder.decodeGltfBufferAsync(4, 12, encoded, /* mode= */ 'ATTRIBUTES').then(function (result) {
assert.deepStrictEqual(result, expected);
});
},
};
decoder.ready.then(() => {
var count = 0;
for (var key in tests) {
tests[key]();
count++;
}
console.log(count, 'tests passed');
});

460
node_modules/meshoptimizer/meshopt_decoder_reference.js generated vendored Normal file
View File

@@ -0,0 +1,460 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2025, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
// This is the reference decoder implementation by Jasper St. Pierre.
// It follows the decoder interface and should be a drop-in replacement for the actual decoder from the meshopt_decoder module.
// It is provided for educational value and is not recommended for use in production because it's not performance-optimized.
const MeshoptDecoder = {};
MeshoptDecoder.supported = true;
MeshoptDecoder.ready = Promise.resolve();
function assert(cond) {
if (!cond) {
throw new Error('Assertion failed');
}
}
function dezig(v) {
return (v & 1) !== 0 ? ~(v >>> 1) : v >>> 1;
}
MeshoptDecoder.decodeVertexBuffer = (target, elementCount, byteStride, source, filter) => {
assert(source[0] === 0xa0 || source[0] === 0xa1);
const version = source[0] & 0x0f;
const maxBlockElements = Math.min((0x2000 / byteStride) & ~0x000f, 0x100);
const deltas = new Uint8Array(maxBlockElements * byteStride);
const tailSize = version === 0 ? byteStride : byteStride + byteStride / 4;
const tailDataOffs = source.length - tailSize;
// What deltas are stored relative to
const tempData = source.slice(tailDataOffs, tailDataOffs + byteStride);
// Channel modes for v1
const channels = version === 0 ? null : source.slice(tailDataOffs + byteStride, tailDataOffs + tailSize);
let srcOffs = 1; // Skip header byte
const headerModes = [
[0, 2, 4, 8], // v0
[0, 1, 2, 4], // v1, when control is 0
[1, 2, 4, 8], // v1, when control is 1
];
// Attribute blocks
for (let dstElemBase = 0; dstElemBase < elementCount; dstElemBase += maxBlockElements) {
const attrBlockElementCount = Math.min(elementCount - dstElemBase, maxBlockElements);
const groupCount = ((attrBlockElementCount + 0x0f) & ~0x0f) >>> 4;
const headerByteCount = ((groupCount + 0x03) & ~0x03) >>> 2;
// Control modes for v1
const controlBitsOffs = srcOffs;
srcOffs += version === 0 ? 0 : byteStride / 4;
// Zero out deltas to simplify logic
deltas.fill(0x00);
// Data blocks
for (let byte = 0; byte < byteStride; byte++) {
const deltaBase = byte * attrBlockElementCount;
// Control mode for current byte for v1
const controlMode = version === 0 ? 0 : (source[controlBitsOffs + (byte >>> 2)] >>> ((byte & 0x03) << 1)) & 0x03;
if (controlMode === 2) {
// All byte deltas are 0; no data is stored for this byte
continue;
} else if (controlMode === 3) {
// Byte deltas are stored uncompressed with no header bits
deltas.set(source.subarray(srcOffs, srcOffs + attrBlockElementCount), deltaBase);
srcOffs += attrBlockElementCount;
continue;
}
// Header bits are omitted for v1 when using control modes 2/3
const headerBitsOffs = srcOffs;
srcOffs += headerByteCount;
for (let group = 0; group < groupCount; group++) {
const mode = (source[headerBitsOffs + (group >>> 2)] >>> ((group & 0x03) << 1)) & 0x03;
const modeBits = headerModes[version === 0 ? 0 : controlMode + 1][mode];
const deltaOffs = deltaBase + (group << 4);
if (modeBits === 0) {
// All 16 byte deltas are 0; the size of the encoded block is 0 bytes
} else if (modeBits === 1) {
// Deltas are using 1-bit sentinel encoding; the size of the encoded block is [2..18] bytes
const srcBase = srcOffs;
srcOffs += 0x02;
for (let m = 0; m < 0x10; m++) {
// Bits are stored from least significant to most significant for 1-bit encoding
const shift = m & 0x07;
let delta = (source[srcBase + (m >>> 3)] >>> shift) & 0x01;
if (delta === 1) delta = source[srcOffs++];
deltas[deltaOffs + m] = delta;
}
} else if (modeBits === 2) {
// Deltas are using 2-bit sentinel encoding; the size of the encoded block is [4..20] bytes
const srcBase = srcOffs;
srcOffs += 0x04;
for (let m = 0; m < 0x10; m++) {
// 0 = >>> 6, 1 = >>> 4, 2 = >>> 2, 3 = >>> 0
const shift = 6 - ((m & 0x03) << 1);
let delta = (source[srcBase + (m >>> 2)] >>> shift) & 0x03;
if (delta === 3) delta = source[srcOffs++];
deltas[deltaOffs + m] = delta;
}
} else if (modeBits === 4) {
// Deltas are using 4-bit sentinel encoding; the size of the encoded block is [8..24] bytes
const srcBase = srcOffs;
srcOffs += 0x08;
for (let m = 0; m < 0x10; m++) {
// 0 = >>> 4, 1 = >>> 0
const shift = 4 - ((m & 0x01) << 2);
let delta = (source[srcBase + (m >>> 1)] >>> shift) & 0x0f;
if (delta === 0xf) delta = source[srcOffs++];
deltas[deltaOffs + m] = delta;
}
} else {
// All 16 byte deltas are stored verbatim; the size of the encoded block is 16 bytes
deltas.set(source.subarray(srcOffs, srcOffs + 0x10), deltaOffs);
srcOffs += 0x10;
}
}
}
// Go through and apply deltas to data
for (let elem = 0; elem < attrBlockElementCount; elem++) {
const dstElem = dstElemBase + elem;
for (let byteGroup = 0; byteGroup < byteStride; byteGroup += 4) {
let channelMode = version === 0 ? 0 : channels[byteGroup >>> 2] & 0x03;
assert(channelMode !== 0x03);
if (channelMode === 0) {
// Channel 0 (byte deltas): Byte deltas are stored as zigzag-encoded differences between the byte values of the element and the byte values of the previous element in the same position.
for (let byte = byteGroup; byte < byteGroup + 4; byte++) {
const delta = dezig(deltas[byte * attrBlockElementCount + elem]);
const temp = (tempData[byte] + delta) & 0xff; // wrap around
const dstOffs = dstElem * byteStride + byte;
target[dstOffs] = tempData[byte] = temp;
}
} else if (channelMode === 1) {
// Channel 1 (2-byte deltas): 2-byte deltas are computed as zigzag-encoded differences between 16-bit values of the element and the previous element in the same position.
for (let byte = byteGroup; byte < byteGroup + 4; byte += 2) {
const delta = dezig(deltas[byte * attrBlockElementCount + elem] + (deltas[(byte + 1) * attrBlockElementCount + elem] << 8));
let temp = tempData[byte] + (tempData[byte + 1] << 8);
temp = (temp + delta) & 0xffff; // wrap around
const dstOffs = dstElem * byteStride + byte;
target[dstOffs] = tempData[byte] = temp & 0xff;
target[dstOffs + 1] = tempData[byte + 1] = temp >>> 8;
}
} else if (channelMode === 2) {
// Channel 2 (4-byte XOR deltas): 4-byte deltas are computed as XOR between 32-bit values of the element and the previous element in the same position, with an additional rotation applied based on the high 4 bits of the channel mode byte.
const byte = byteGroup;
const delta =
deltas[byte * attrBlockElementCount + elem] +
(deltas[(byte + 1) * attrBlockElementCount + elem] << 8) +
(deltas[(byte + 2) * attrBlockElementCount + elem] << 16) +
(deltas[(byte + 3) * attrBlockElementCount + elem] << 24);
let temp = tempData[byte] + (tempData[byte + 1] << 8) + (tempData[byte + 2] << 16) + (tempData[byte + 3] << 24);
const rot = channels[byteGroup >>> 2] >>> 4;
temp = temp ^ ((delta >>> rot) | (delta << (32 - rot))); // rotate and XOR
const dstOffs = dstElem * byteStride + byte;
target[dstOffs] = tempData[byte] = temp & 0xff;
target[dstOffs + 1] = tempData[byte + 1] = (temp >>> 8) & 0xff;
target[dstOffs + 2] = tempData[byte + 2] = (temp >>> 16) & 0xff;
target[dstOffs + 3] = tempData[byte + 3] = temp >>> 24;
}
}
}
}
const tailSizePadded = Math.max(tailSize, version === 0 ? 32 : 24);
assert(srcOffs == source.length - tailSizePadded);
// Filters - only applied if filter isn't undefined or NONE
if (filter === 'OCTAHEDRAL') {
assert(byteStride === 4 || byteStride === 8);
const dst = byteStride === 4 ? new Int8Array(target.buffer) : new Int16Array(target.buffer);
const maxInt = byteStride === 4 ? 127 : 32767;
for (let i = 0; i < 4 * elementCount; i += 4) {
let x = dst[i + 0],
y = dst[i + 1],
one = dst[i + 2];
x /= one;
y /= one;
const z = 1.0 - Math.abs(x) - Math.abs(y);
const t = Math.max(-z, 0.0);
x -= x >= 0 ? t : -t;
y -= y >= 0 ? t : -t;
const h = maxInt / Math.hypot(x, y, z);
dst[i + 0] = Math.round(x * h);
dst[i + 1] = Math.round(y * h);
dst[i + 2] = Math.round(z * h);
// keep dst[i + 3] as is
}
} else if (filter === 'QUATERNION') {
assert(byteStride === 8);
const dst = new Int16Array(target.buffer);
for (let i = 0; i < 4 * elementCount; i += 4) {
const inputW = dst[i + 3];
const maxComponent = inputW & 0x03;
const s = Math.SQRT1_2 / (inputW | 0x03);
let x = dst[i + 0] * s;
let y = dst[i + 1] * s;
let z = dst[i + 2] * s;
let w = Math.sqrt(Math.max(0.0, 1.0 - x ** 2 - y ** 2 - z ** 2));
dst[i + ((maxComponent + 1) % 4)] = Math.round(x * 32767);
dst[i + ((maxComponent + 2) % 4)] = Math.round(y * 32767);
dst[i + ((maxComponent + 3) % 4)] = Math.round(z * 32767);
dst[i + ((maxComponent + 0) % 4)] = Math.round(w * 32767);
}
} else if (filter === 'EXPONENTIAL') {
assert((byteStride & 0x03) === 0x00);
const src = new Int32Array(target.buffer);
const dst = new Float32Array(target.buffer);
for (let i = 0; i < (byteStride * elementCount) / 4; i++) {
const v = src[i],
exp = v >> 24,
mantissa = (v << 8) >> 8;
dst[i] = 2.0 ** exp * mantissa;
}
} else if (filter === 'COLOR') {
assert(byteStride === 4 || byteStride === 8);
const maxInt = (1 << (byteStride * 2)) - 1;
const data = byteStride === 4 ? new Uint8Array(target.buffer) : new Uint16Array(target.buffer, 0, elementCount * 4);
const dataSigned = byteStride === 4 ? new Int8Array(target.buffer) : new Int16Array(target.buffer, 0, elementCount * 4);
for (let i = 0; i < elementCount * 4; i += 4) {
const y = data[i + 0];
const co = dataSigned[i + 1];
const cg = dataSigned[i + 2];
const alphaInput = data[i + 3];
// Recover the value range from alpha: its highest set bit marks the encoded bit width
const alphaBit = 31 - Math.clz32(alphaInput);
const as = (1 << (alphaBit + 1)) - 1;
// YCoCg to RGB conversion
const r = y + co - cg;
const g = y + cg;
const b = y - co - cg;
// Expand alpha by one bit, replicating the lowest bit
let a = alphaInput & (as >> 1);
a = (a << 1) | (a & 1);
// Scale to full range
const ss = maxInt / as;
// Store result
data[i + 0] = Math.round(r * ss);
data[i + 1] = Math.round(g * ss);
data[i + 2] = Math.round(b * ss);
data[i + 3] = Math.round(a * ss);
}
}
};
function pushfifo(fifo, n) {
for (let i = fifo.length - 1; i > 0; i--) fifo[i] = fifo[i - 1];
fifo[0] = n;
}
MeshoptDecoder.decodeIndexBuffer = (target, count, byteStride, source) => {
assert(source[0] === 0xe1);
assert(count % 3 === 0);
assert(byteStride === 2 || byteStride === 4);
let dst;
if (byteStride === 2) dst = new Uint16Array(target.buffer);
else dst = new Uint32Array(target.buffer);
const triCount = count / 3;
let codeOffs = 0x01;
let dataOffs = codeOffs + triCount;
let codeauxOffs = source.length - 0x10;
function readLEB128() {
let n = 0;
for (let i = 0; ; i += 7) {
const b = source[dataOffs++];
n |= (b & 0x7f) << i;
if (b < 0x80) return n;
}
}
let next = 0,
last = 0;
const edgefifo = new Uint32Array(32);
const vertexfifo = new Uint32Array(16);
function decodeIndex(v) {
return (last += dezig(v));
}
let dstOffs = 0;
for (let i = 0; i < triCount; i++) {
const code = source[codeOffs++];
const b0 = code >>> 4,
b1 = code & 0x0f;
if (b0 < 0x0f) {
const a = edgefifo[(b0 << 1) + 0],
b = edgefifo[(b0 << 1) + 1];
let c = -1;
if (b1 === 0x00) {
c = next++;
pushfifo(vertexfifo, c);
} else if (b1 < 0x0d) {
c = vertexfifo[b1];
} else if (b1 === 0x0d) {
c = --last;
pushfifo(vertexfifo, c);
} else if (b1 === 0x0e) {
c = ++last;
pushfifo(vertexfifo, c);
} else if (b1 === 0x0f) {
const v = readLEB128();
c = decodeIndex(v);
pushfifo(vertexfifo, c);
}
// pushfifo prepends one slot at a time, so each edge (x, y) is pushed as y then x; net effect: push edges (c, b) and (a, c)
pushfifo(edgefifo, b);
pushfifo(edgefifo, c);
pushfifo(edgefifo, c);
pushfifo(edgefifo, a);
dst[dstOffs++] = a;
dst[dstOffs++] = b;
dst[dstOffs++] = c;
} else {
// b0 === 0x0f
let a = -1,
b = -1,
c = -1;
if (b1 < 0x0e) {
const e = source[codeauxOffs + b1];
const z = e >>> 4,
w = e & 0x0f;
a = next++;
if (z === 0x00) b = next++;
else b = vertexfifo[z - 1];
if (w === 0x00) c = next++;
else c = vertexfifo[w - 1];
pushfifo(vertexfifo, a);
if (z === 0x00) pushfifo(vertexfifo, b);
if (w === 0x00) pushfifo(vertexfifo, c);
} else {
const e = source[dataOffs++];
if (e === 0x00) next = 0;
const z = e >>> 4,
w = e & 0x0f;
if (b1 === 0x0e) a = next++;
else a = decodeIndex(readLEB128());
if (z === 0x00) b = next++;
else if (z === 0x0f) b = decodeIndex(readLEB128());
else b = vertexfifo[z - 1];
if (w === 0x00) c = next++;
else if (w === 0x0f) c = decodeIndex(readLEB128());
else c = vertexfifo[w - 1];
pushfifo(vertexfifo, a);
if (z === 0x00 || z === 0x0f) pushfifo(vertexfifo, b);
if (w === 0x00 || w === 0x0f) pushfifo(vertexfifo, c);
}
pushfifo(edgefifo, a);
pushfifo(edgefifo, b);
pushfifo(edgefifo, b);
pushfifo(edgefifo, c);
pushfifo(edgefifo, c);
pushfifo(edgefifo, a);
dst[dstOffs++] = a;
dst[dstOffs++] = b;
dst[dstOffs++] = c;
}
}
};
MeshoptDecoder.decodeIndexSequence = (target, count, byteStride, source) => {
assert(source[0] === 0xd1);
assert(byteStride === 2 || byteStride === 4);
let dst;
if (byteStride === 2) dst = new Uint16Array(target.buffer);
else dst = new Uint32Array(target.buffer);
let dataOffs = 0x01;
function readLEB128() {
let n = 0;
for (let i = 0; ; i += 7) {
const b = source[dataOffs++];
n |= (b & 0x7f) << i;
if (b < 0x80) return n;
}
}
const last = new Uint32Array(2);
for (let i = 0; i < count; i++) {
const v = readLEB128();
const b = v & 0x01;
const delta = dezig(v >>> 1);
dst[i] = last[b] += delta;
}
};
MeshoptDecoder.decodeGltfBuffer = (target, count, size, source, mode, filter) => {
const table = {
ATTRIBUTES: MeshoptDecoder.decodeVertexBuffer,
TRIANGLES: MeshoptDecoder.decodeIndexBuffer,
INDICES: MeshoptDecoder.decodeIndexSequence,
};
assert(table[mode] !== undefined);
table[mode](target, count, size, source, filter);
};
MeshoptDecoder.decodeGltfBufferAsync = (count, size, source, mode, filter) => {
const target = new Uint8Array(count * size);
MeshoptDecoder.decodeGltfBuffer(target, count, size, source, mode, filter);
return Promise.resolve(target);
};
// node.js interface:
// for (let k in MeshoptDecoder) exports[k] = MeshoptDecoder[k];
export { MeshoptDecoder };
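Both index decoders above combine unsigned LEB128 varints with the module's `dezig` zigzag helper (defined earlier in the file). A minimal standalone sketch of those two primitives; the byte sequence is the textbook LEB128 example, not a meshopt stream:

```javascript
// Unsigned LEB128: 7 payload bits per byte; the high bit marks continuation.
function readLEB128(bytes, offs) {
	let n = 0;
	for (let i = 0; ; i += 7) {
		const b = bytes[offs++];
		n |= (b & 0x7f) << i;
		if (b < 0x80) return n;
	}
}
// Zigzag maps signed deltas to unsigned values: 0, -1, 1, -2, ... -> 0, 1, 2, 3, ...
function zig(n) {
	return ((n << 1) ^ (n >> 31)) >>> 0;
}
function dezig(v) {
	return (v >>> 1) ^ -(v & 1);
}
const value = readLEB128(new Uint8Array([0xe5, 0x8e, 0x26]), 0); // 624485
const delta = dezig(zig(-3)); // -3 survives the roundtrip
```

`decodeIndex` above applies `dezig` to each varint to accumulate signed deltas into absolute indices.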

23
node_modules/meshoptimizer/meshopt_encoder.d.ts generated vendored Normal file

@@ -0,0 +1,23 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2025, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
export type ExpMode = 'Separate' | 'SharedVector' | 'SharedComponent' | 'Clamped';
export const MeshoptEncoder: {
supported: boolean;
ready: Promise<void>;
reorderMesh: (indices: Uint32Array, triangles: boolean, optsize: boolean) => [Uint32Array, number];
reorderPoints: (positions: Float32Array, positions_stride: number) => Uint32Array;
encodeVertexBuffer: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeVertexBufferLevel: (source: Uint8Array, count: number, size: number, level: number, version?: number) => Uint8Array;
encodeIndexBuffer: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeIndexSequence: (source: Uint8Array, count: number, size: number) => Uint8Array;
encodeGltfBuffer: (source: Uint8Array, count: number, size: number, mode: string, version?: number) => Uint8Array;
encodeFilterOct: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
encodeFilterQuat: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
encodeFilterExp: (source: Float32Array, count: number, stride: number, bits: number, mode?: ExpMode) => Uint8Array;
encodeFilterColor: (source: Float32Array, count: number, stride: number, bits: number) => Uint8Array;
};
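The streams produced by `encodeFilterExp` and `encodeFilterColor` are inverted by the decoder's EXPONENTIAL and COLOR branches. A standalone sketch of that inverse arithmetic; 0xf7000200 is taken from the encodeFilterExp test vectors, the YCoCg triple is made up:

```javascript
// EXPONENTIAL: each 32-bit word packs a signed 8-bit exponent and a signed 24-bit mantissa.
const v = 0xf7000200 | 0;
const exp = v >> 24; // sign-extended exponent: -9
const mantissa = (v << 8) >> 8; // sign-extended mantissa: 512
const decoded = 2 ** exp * mantissa; // 512 / 512 = 1

// COLOR: luma/chroma (YCoCg-style) back to RGB, matching the decoder's COLOR branch.
const y = 100, co = 20, cg = -10; // illustrative values, not from a real stream
const r = y + co - cg; // 130
const g = y + cg; // 90
const b = y - co - cg; // 90
```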

217
node_modules/meshoptimizer/meshopt_encoder.js generated vendored Normal file

File diff suppressed because one or more lines are too long

243
node_modules/meshoptimizer/meshopt_encoder.test.js generated vendored Normal file

@@ -0,0 +1,243 @@
import assert from 'assert/strict';
import { MeshoptEncoder as encoder } from './meshopt_encoder.js';
import { MeshoptDecoder as decoder } from './meshopt_decoder.mjs';
process.on('unhandledRejection', (error) => {
console.log('unhandledRejection', error);
process.exit(1);
});
function bytes(view) {
return new Uint8Array(view.buffer, view.byteOffset, view.byteLength);
}
var tests = {
reorderMesh: function () {
var indices = new Uint32Array([4, 2, 5, 3, 1, 4, 0, 1, 3, 1, 2, 4]);
var expected = new Uint32Array([0, 1, 2, 3, 1, 0, 4, 3, 0, 5, 3, 4]);
var remap = new Uint32Array([5, 3, 1, 4, 0, 2]);
var res = encoder.reorderMesh(indices, /* triangles= */ true, /* optsize= */ true);
assert.deepEqual(indices, expected);
assert.deepEqual(res[0], remap);
assert.equal(res[1], 6); // unique
},
reorderPoints: function () {
var points = new Float32Array([1, 1, 1, 11, 11, 11, 2, 2, 2, 12, 12, 12]);
var expected = new Uint32Array([0, 2, 1, 3]);
var remap = encoder.reorderPoints(points, 3);
assert.deepEqual(remap, expected);
},
roundtripVertexBuffer: function () {
var data = new Uint8Array(16 * 4);
// this tests 0/2/4/8 bit groups in one stream
for (var i = 0; i < 16; ++i) {
data[i * 4 + 0] = 0;
data[i * 4 + 1] = i * 1;
data[i * 4 + 2] = i * 2;
data[i * 4 + 3] = i * 8;
}
var encoded = encoder.encodeVertexBuffer(data, 16, 4);
assert.equal(encoded[0], 0xa1);
var decoded = new Uint8Array(16 * 4);
decoder.decodeVertexBuffer(decoded, 16, 4, encoded);
assert.deepEqual(decoded, data);
},
roundtripVertexBufferV1: function () {
var data = new Uint8Array(16 * 4);
// this tests 0/2/4/8 bit groups in one stream
for (var i = 0; i < 16; ++i) {
data[i * 4 + 0] = 0;
data[i * 4 + 1] = i * 1;
data[i * 4 + 2] = i * 2;
data[i * 4 + 3] = i * 8;
}
var encoded = encoder.encodeVertexBufferLevel(data, 16, 4, 3, /* version= */ 1);
assert.equal(encoded[0], 0xa1);
var decoded = new Uint8Array(16 * 4);
decoder.decodeVertexBuffer(decoded, 16, 4, encoded);
assert.deepEqual(decoded, data);
},
roundtripIndexBuffer: function () {
var data = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var encoded = encoder.encodeIndexBuffer(bytes(data), data.length, 4);
var decoded = new Uint32Array(data.length);
decoder.decodeIndexBuffer(bytes(decoded), data.length, 4, encoded);
assert.deepEqual(decoded, data);
},
roundtripIndexBuffer16: function () {
var data = new Uint16Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var encoded = encoder.encodeIndexBuffer(bytes(data), data.length, 2);
var decoded = new Uint16Array(data.length);
decoder.decodeIndexBuffer(bytes(decoded), data.length, 2, encoded);
assert.deepEqual(decoded, data);
},
roundtripIndexSequence: function () {
var data = new Uint32Array([0, 1, 51, 2, 49, 1000]);
var encoded = encoder.encodeIndexSequence(bytes(data), data.length, 4);
var decoded = new Uint32Array(data.length);
decoder.decodeIndexSequence(bytes(decoded), data.length, 4, encoded);
assert.deepEqual(decoded, data);
},
roundtripIndexSequence16: function () {
var data = new Uint16Array([0, 1, 51, 2, 49, 1000]);
var encoded = encoder.encodeIndexSequence(bytes(data), data.length, 2);
var decoded = new Uint16Array(data.length);
decoder.decodeIndexSequence(bytes(decoded), data.length, 2, encoded);
assert.deepEqual(decoded, data);
},
encodeFilterOct8: function () {
var data = new Float32Array([1, 0, 0, 0, 0, -1, 0, 0, 0.7071068, 0, 0.707168, 1, -0.7071068, 0, -0.707168, 1]);
var expected = new Uint8Array([0x7f, 0, 0x7f, 0, 0, 0x81, 0x7f, 0, 0x3f, 0, 0x7f, 0x7f, 0x81, 0x40, 0x7f, 0x7f]);
// 4 vectors, encode each vector into 4 bytes with 8 bits of precision/component
var encoded = encoder.encodeFilterOct(data, 4, 4, 8);
assert.deepEqual(encoded, expected);
},
encodeFilterOct12: function () {
var data = new Float32Array([1, 0, 0, 0, 0, -1, 0, 0, 0.7071068, 0, 0.707168, 1, -0.7071068, 0, -0.707168, 1]);
var expected = new Uint16Array([0x7ff, 0, 0x7ff, 0, 0x0, 0xf801, 0x7ff, 0, 0x3ff, 0, 0x7ff, 0x7fff, 0xf801, 0x400, 0x7ff, 0x7fff]);
// 4 vectors, encode each vector into 8 bytes with 12 bits of precision/component
var encoded = encoder.encodeFilterOct(data, 4, 8, 12);
assert.deepEqual(encoded, bytes(expected));
},
encodeFilterQuat12: function () {
var data = new Float32Array([1, 0, 0, 0, 0, -1, 0, 0, 0.7071068, 0, 0, 0.707168, -0.7071068, 0, 0, -0.707168]);
var expected = new Uint16Array([0, 0, 0, 0x7fc, 0, 0, 0, 0x7fd, 0x7ff, 0, 0, 0x7ff, 0x7ff, 0, 0, 0x7ff]);
// 4 quaternions, encode each quaternion into 8 bytes with 12 bits of precision/component
var encoded = encoder.encodeFilterQuat(data, 4, 8, 12);
assert.deepEqual(encoded, bytes(expected));
},
encodeFilterExp: function () {
var data = new Float32Array([1, -23.4, -0.1]);
var expected = new Uint32Array([0xf7000200, 0xf7ffd133, 0xf7ffffcd]);
// 1 vector with 3 components (12 bytes), encode each vector into 12 bytes with 15 bits of precision/component
var encoded = encoder.encodeFilterExp(data, 1, 12, 15);
assert.deepEqual(encoded, bytes(expected));
},
encodeFilterExpMode: function () {
var data = new Float32Array([1, -23.4, -0.1, 11.0]);
var expected = new Uint32Array([0xf3002000, 0xf7ffd133, 0xf3fffccd, 0xf7001600]);
// 2 vectors with 2 components (8 bytes), encode each vector into 8 bytes with 15 bits of precision/component
var encoded = encoder.encodeFilterExp(data, 2, 8, 15, 'SharedComponent');
assert.deepEqual(encoded, bytes(expected));
},
encodeFilterExpClamp: function () {
var data = new Float32Array([1, -23.4, -0.1]);
var expected = new Uint32Array([0xf3002000, 0xf7ffd133, 0xf2fff99a]);
// 1 vector with 3 components (12 bytes), encode each vector into 12 bytes with 15 bits of precision/component
// exponents are separate but clamped to 0
var encoded = encoder.encodeFilterExp(data, 1, 12, 15, 'Clamped');
assert.deepEqual(encoded, bytes(expected));
},
encodeFilterColor8: function () {
var data = new Float32Array([1, 0, 0, 1, 0, 1, 0, 0.5, 0, 0, 1, 0.25, 0.4, 0.4, 0.4, 0.75]);
var expected = new Uint8Array([0x40, 0x7f, 0xc1, 0xff, 0x7f, 0x00, 0x7f, 0xc0, 0x40, 0x81, 0xc0, 0xa0, 0x66, 0x00, 0x00, 0xdf]);
// 4 vectors, encode each vector into 4 bytes with 8 bits of precision/component
var encoded = encoder.encodeFilterColor(data, 4, 4, 8);
assert.deepEqual(encoded, expected);
},
encodeFilterColor12: function () {
var data = new Float32Array([1, 0, 0, 1, 0, 1, 0, 0.5, 0, 0, 1, 0.25, 0.4, 0.4, 0.4, 0.75]);
var expected = new Uint16Array([
0x0400, 0x07ff, 0xfc01, 0x0fff, 0x07ff, 0x0000, 0x07ff, 0x0c00, 0x0400, 0xf801, 0xfc00, 0x0a00, 0x0666, 0x0000, 0x0000, 0x0dff,
]);
// 4 vectors, encode each vector into 8 bytes with 12 bits of precision/component
var encoded = encoder.encodeFilterColor(data, 4, 8, 12);
assert.deepEqual(encoded, bytes(expected));
},
encodeGltfBuffer: function () {
var data = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var encoded = encoder.encodeGltfBuffer(bytes(data), data.length, 4, 'TRIANGLES');
var decoded = new Uint32Array(data.length);
decoder.decodeGltfBuffer(bytes(decoded), data.length, 4, encoded, 'TRIANGLES');
assert.equal(encoded[0], 0xe1);
assert.deepEqual(decoded, data);
},
encodeGltfBufferAttribute: function () {
var data = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var encoded = encoder.encodeGltfBuffer(bytes(data), data.length, 4, 'ATTRIBUTES');
var decoded = new Uint32Array(data.length);
decoder.decodeGltfBuffer(bytes(decoded), data.length, 4, encoded, 'ATTRIBUTES');
assert.equal(encoded[0], 0xa0);
assert.deepEqual(decoded, data);
},
encodeGltfBufferAttributeV1: function () {
var data = new Uint32Array([0, 1, 2, 2, 1, 3, 4, 6, 5, 7, 8, 9]);
var encoded = encoder.encodeGltfBuffer(bytes(data), data.length, 4, 'ATTRIBUTES', 1);
var decoded = new Uint32Array(data.length);
decoder.decodeGltfBuffer(bytes(decoded), data.length, 4, encoded, 'ATTRIBUTES');
assert.equal(encoded[0], 0xa1);
assert.deepEqual(decoded, data);
},
};
Promise.all([encoder.ready, decoder.ready]).then(() => {
var count = 0;
for (var key in tests) {
tests[key]();
count++;
}
console.log(count, 'tests passed');
});
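The encodeFilterOct8 expectations above can be cross-checked by running the decoder's octahedral inverse by hand. A standalone re-implementation of that branch (8-bit case, mirroring meshopt_decoder); only the first two, exactly representable vectors are checked:

```javascript
// Decode one octahedrally encoded normal stored as signed bytes (x, y, one, pad).
function decodeOct8(enc) {
	let x = enc[0] / enc[2];
	let y = enc[1] / enc[2];
	const z = 1.0 - Math.abs(x) - Math.abs(y);
	const t = Math.max(-z, 0.0);
	x -= x >= 0 ? t : -t;
	y -= y >= 0 ? t : -t;
	const h = 127 / Math.hypot(x, y, z);
	return [Math.round(x * h), Math.round(y * h), Math.round(z * h)];
}
const n0 = decodeOct8(new Int8Array([0x7f, 0, 0x7f, 0])); // (1, 0, 0) -> [127, 0, 0]
const n1 = decodeOct8(new Int8Array([0, 0x81, 0x7f, 0])); // (0, -1, 0) -> [0, -127, 0]
```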

67
node_modules/meshoptimizer/meshopt_simplifier.d.ts generated vendored Normal file

@@ -0,0 +1,67 @@
// This file is part of meshoptimizer library and is distributed under the terms of MIT License.
// Copyright (C) 2016-2025, by Arseny Kapoulkine (arseny.kapoulkine@gmail.com)
export type Flags = 'LockBorder' | 'Sparse' | 'ErrorAbsolute' | 'Prune' | 'Regularize' | 'Permissive';
export const MeshoptSimplifier: {
supported: boolean;
ready: Promise<void>;
compactMesh: (indices: Uint32Array) => [Uint32Array, number];
simplify: (
indices: Uint32Array,
vertex_positions: Float32Array,
vertex_positions_stride: number,
target_index_count: number,
target_error: number,
flags?: Flags[]
) => [Uint32Array, number];
simplifyWithAttributes: (
indices: Uint32Array,
vertex_positions: Float32Array,
vertex_positions_stride: number,
vertex_attributes: Float32Array,
vertex_attributes_stride: number,
attribute_weights: number[],
vertex_lock: Uint8Array | null,
target_index_count: number,
target_error: number,
flags?: Flags[]
) => [Uint32Array, number];
simplifyWithUpdate: (
indices: Uint32Array,
vertex_positions: Float32Array,
vertex_positions_stride: number,
vertex_attributes: Float32Array,
vertex_attributes_stride: number,
attribute_weights: number[],
vertex_lock: Uint8Array | null,
target_index_count: number,
target_error: number,
flags?: Flags[]
) => [number, number];
simplifySloppy: (
indices: Uint32Array,
vertex_positions: Float32Array,
vertex_positions_stride: number,
vertex_lock: Uint8Array | null,
target_index_count: number,
target_error: number
) => [Uint32Array, number];
getScale: (vertex_positions: Float32Array, vertex_positions_stride: number) => number;
simplifyPoints: (
vertex_positions: Float32Array,
vertex_positions_stride: number,
target_vertex_count: number,
vertex_colors?: Float32Array,
vertex_colors_stride?: number,
color_weight?: number
) => Uint32Array;
simplifyPrune: (indices: Uint32Array, vertex_positions: Float32Array, vertex_positions_stride: number, target_error: number) => Uint32Array;
};

620
node_modules/meshoptimizer/meshopt_simplifier.js generated vendored Normal file

File diff suppressed because one or more lines are too long

203
node_modules/meshoptimizer/meshopt_simplifier.test.js generated vendored Normal file

@@ -0,0 +1,203 @@
import assert from 'assert/strict';
import { MeshoptSimplifier as simplifier } from './meshopt_simplifier.js';
process.on('unhandledRejection', (error) => {
console.log('unhandledRejection', error);
process.exit(1);
});
var tests = {
compactMesh: function () {
var indices = new Uint32Array([0, 1, 3, 3, 1, 5]);
var expected = new Uint32Array([0, 1, 2, 2, 1, 3]);
var missing = 2 ** 32 - 1;
var remap = new Uint32Array([0, 1, missing, 2, missing, 3]);
var res = simplifier.compactMesh(indices);
assert.deepEqual(indices, expected);
assert.deepEqual(res[0], remap);
assert.equal(res[1], 4); // unique
},
simplify: function () {
// 0
// 1 2
// 3 4 5
var indices = new Uint32Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
var positions = new Float32Array([0, 4, 0, 0, 1, 0, 2, 2, 0, 0, 0, 0, 1, 0, 0, 4, 0, 0]);
var res = simplifier.simplify(indices, positions, 3, /* target indices */ 3, /* target error */ 0.01);
var expected = new Uint32Array([0, 5, 3]);
assert.deepEqual(res[0], expected);
assert(res[1] < 1e-4); // error
},
simplify16: function () {
// 0
// 1 2
// 3 4 5
var indices = new Uint16Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
var positions = new Float32Array([0, 4, 0, 0, 1, 0, 2, 2, 0, 0, 0, 0, 1, 0, 0, 4, 0, 0]);
var res = simplifier.simplify(indices, positions, 3, /* target indices */ 3, /* target error */ 0.01);
var expected = new Uint16Array([0, 5, 3]);
assert.deepEqual(res[0], expected);
assert(res[1] < 1e-4); // error
},
simplifyLockBorder: function () {
// 0
// 1 2
// 3 4 5
var indices = new Uint32Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
var positions = new Float32Array([0, 2, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0]);
var res = simplifier.simplify(indices, positions, 3, /* target indices */ 3, /* target error */ 0.01, ['LockBorder']);
var expected = new Uint32Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
assert.deepEqual(res[0], expected);
assert(res[1] < 1e-4); // error
},
simplifyAttr: function () {
var vb_pos = new Float32Array(8 * 3 * 3);
var vb_att = new Float32Array(8 * 3 * 3);
for (var y = 0; y < 8; ++y) {
// first four rows are a blue gradient, next four rows are a yellow gradient
var r = y < 4 ? 0.8 + y * 0.05 : 0;
var g = y < 4 ? 0.8 + y * 0.05 : 0;
var b = y < 4 ? 0 : 0.8 + (7 - y) * 0.05;
for (var x = 0; x < 3; ++x) {
vb_pos[(y * 3 + x) * 3 + 0] = x;
vb_pos[(y * 3 + x) * 3 + 1] = y;
vb_pos[(y * 3 + x) * 3 + 2] = 0.03 * x + 0.028 * (y % 2) + (x == 2 && y == 7 ? 1 : 0) * 0.03;
vb_att[(y * 3 + x) * 3 + 0] = r;
vb_att[(y * 3 + x) * 3 + 1] = g;
vb_att[(y * 3 + x) * 3 + 2] = b;
}
}
var ib = new Uint32Array(7 * 2 * 6);
for (var y = 0; y < 7; ++y) {
for (var x = 0; x < 2; ++x) {
ib[(y * 2 + x) * 6 + 0] = (y + 0) * 3 + (x + 0);
ib[(y * 2 + x) * 6 + 1] = (y + 0) * 3 + (x + 1);
ib[(y * 2 + x) * 6 + 2] = (y + 1) * 3 + (x + 0);
ib[(y * 2 + x) * 6 + 3] = (y + 1) * 3 + (x + 0);
ib[(y * 2 + x) * 6 + 4] = (y + 0) * 3 + (x + 1);
ib[(y * 2 + x) * 6 + 5] = (y + 1) * 3 + (x + 1);
}
}
var attr_weights = [0.5, 0.5, 0.5];
var res = simplifier.simplifyWithAttributes(ib, vb_pos, 3, vb_att, 3, attr_weights, null, 6 * 3, 1e-2);
var expected = new Uint32Array([0, 2, 11, 0, 11, 9, 9, 11, 12, 12, 11, 14, 12, 14, 23, 12, 23, 21]);
assert.deepEqual(res[0], expected);
},
simplifyUpdate: function () {
var indices = new Uint32Array([0, 1, 3, 3, 1, 4, 4, 1, 2, 0, 3, 2, 3, 4, 2]);
var positions = new Float32Array([0, 0, 0, 1, 1, 0, 2, 0, 0, 0.9, 0.2, 0.1, 1.1, 0.2, 0.1]);
var attributes = new Float32Array([0, 0, 0, 0.2, 0.1]);
var res = simplifier.simplifyWithUpdate(indices, positions, 3, attributes, 1, [1], null, 9, 1);
var expected = new Uint32Array([0, 1, 3, 3, 1, 2, 0, 3, 2]);
assert.equal(res[0], expected.length);
assert.deepEqual(indices.subarray(0, expected.length), expected);
// border vertices haven't moved but may have small floating point drift
for (var i = 0; i < 3; ++i) {
assert(Math.abs(attributes[i]) < 1e-6);
}
// center vertex got updated
assert(Math.abs(positions[3 * 3 + 0] - 0.88) < 1e-2);
assert(Math.abs(positions[3 * 3 + 1] - 0.19) < 1e-2);
assert(Math.abs(positions[3 * 3 + 2] - 0.11) < 1e-2);
assert(Math.abs(attributes[3] - 0.18) < 1e-2);
},
simplifyLockFlags: function () {
// 0
// 1 2
// 3 4 5
var indices = new Uint32Array([0, 2, 1, 1, 2, 3, 3, 2, 4, 2, 5, 4]);
var positions = new Float32Array([0, 2, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0]);
var locks = new Uint8Array([1, 1, 1, 1, 0, 1]); // only vertex 4 can move
var res = simplifier.simplifyWithAttributes(indices, positions, 3, new Float32Array(), 0, [], locks, 3, 0.01);
var expected = new Uint32Array([0, 2, 1, 1, 2, 3, 2, 5, 3]);
assert.deepEqual(res[0], expected);
assert(res[1] < 1e-4); // error
},
getScale: function () {
var positions = new Float32Array([0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 3]);
assert(simplifier.getScale(positions, 3) == 3.0);
},
simplifyPoints: function () {
var positions = new Float32Array([0, 0, 0, 100, 0, 0, 100, 1, 1, 110, 0, 0]);
var colors = new Float32Array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]);
var expected = new Uint32Array([0, 1]);
var expectedC = new Uint32Array([0, 2]);
var res = simplifier.simplifyPoints(positions, 3, 2);
assert.deepEqual(res, expected);
// note: recommended value for color_weight is 1e-2 but here we push color weight to be very high to bias candidate selection for testing
var resC1 = simplifier.simplifyPoints(positions, 3, 2, colors, 3, 1e-1);
assert.deepEqual(resC1, expectedC);
var resC2 = simplifier.simplifyPoints(positions, 3, 2, colors, 3, 1e-2);
assert.deepEqual(resC2, expected);
},
simplifyPrune: function () {
var indices = new Uint32Array([0, 1, 2, 3, 4, 5, 6, 7, 8]);
var positions = new Float32Array([0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 2, 1, 2, 0, 1, 0, 0, 2, 0, 4, 2, 4, 0, 2]);
var expected = new Uint32Array([6, 7, 8]);
var res = simplifier.simplifyPrune(indices, positions, 3, 0.5);
assert.deepEqual(res, expected);
},
};
Promise.all([simplifier.ready]).then(() => {
var count = 0;
for (var key in tests) {
tests[key]();
count++;
}
console.log(count, 'tests passed');
});
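For illustration, a sketch of how the remap array returned by `compactMesh` (see the compactMesh test above) could be applied to a vertex stream; the single-float "attributes" are made up:

```javascript
// compactMesh yields remap[oldVertex] = newVertex, or 2^32 - 1 for unused vertices.
const missing = 2 ** 32 - 1;
const remap = new Uint32Array([0, 1, missing, 2, missing, 3]); // from the test above
const unique = 4;
const oldData = new Float32Array([10, 11, 12, 13, 14, 15]); // one float per old vertex
const newData = new Float32Array(unique);
for (let v = 0; v < remap.length; ++v) {
	if (remap[v] !== missing) newData[remap[v]] = oldData[v];
}
// newData is now [10, 11, 13, 15]: vertices 2 and 4 were dropped
```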

55
node_modules/meshoptimizer/package.json generated vendored Normal file

@@ -0,0 +1,55 @@
{
"name": "meshoptimizer",
"version": "1.0.1",
"description": "Mesh optimization library that makes meshes smaller and faster to render",
"author": "Arseny Kapoulkine",
"license": "MIT",
"bugs": "https://github.com/zeux/meshoptimizer/issues",
"homepage": "https://github.com/zeux/meshoptimizer",
"keywords": [
"compression",
"mesh"
],
"repository": {
"type": "git",
"url": "https://github.com/zeux/meshoptimizer"
},
"files": [
"*.cjs",
"*.mjs",
"*.js",
"*.ts"
],
"type": "module",
"main": "index.js",
"types": "index.d.ts",
"exports": {
".": {
"types": "./index.d.ts",
"default": "./index.js"
},
"./encoder": {
"types": "./meshopt_encoder.d.ts",
"default": "./meshopt_encoder.js"
},
"./decoder": {
"types": "./meshopt_decoder.d.ts",
"default": "./meshopt_decoder.mjs"
},
"./simplifier": {
"types": "./meshopt_simplifier.d.ts",
"default": "./meshopt_simplifier.js"
},
"./clusterizer": {
"types": "./meshopt_clusterizer.d.ts",
"default": "./meshopt_clusterizer.js"
},
"./decoder.cjs": {
"require": "./meshopt_decoder.cjs"
}
},
"scripts": {
"test": "node meshopt_encoder.test.js && node meshopt_decoder.test.js && node meshopt_simplifier.test.js && node meshopt_clusterizer.test.js",
"prepublishOnly": "npm test"
}
}

54
node_modules/meshoptimizer/wasi_trace.js generated vendored Normal file

@@ -0,0 +1,54 @@
// Usage:
// 1. import { wasi_trace } from './wasi_trace.js';
// 2. Pass wasi_trace as an import object to WebAssembly.instantiate
// 3. Call wasi_trace.init(instance) after instantiation
var instance;
var wasi_snapshot_preview1 = {
fd_close: function () {
return 8;
},
fd_seek: function () {
return 8;
},
fd_fdstat_get: function (fd, stat) {
// needed for isatty() to enable line buffering for stdout
var heap = new DataView(instance.exports.memory.buffer);
heap.setUint8(stat, 2);
for (var i = 1; i < 24; ++i) heap.setUint8(stat + i, 0);
return 0;
},
fd_write: function (fd, iovs, iovs_len, nwritten) {
var heap = new DataView(instance.exports.memory.buffer);
var written = 0;
var str = '';
for (var i = 0; i < iovs_len; ++i) {
var buf = heap.getUint32(iovs + 8 * i + 0, true);
var buf_len = heap.getUint32(iovs + 8 * i + 4, true);
var buf_data = new Uint8Array(heap.buffer, buf, buf_len);
for (var j = 0; j < buf_data.length; ++j) {
str += String.fromCharCode(buf_data[j]);
}
written += buf_len;
}
console.log(str);
heap.setUint32(nwritten, written, true);
return 0;
},
};
var wasi_trace = {
wasi_snapshot_preview1,
init: function (inst) {
instance = inst;
},
};
export { wasi_trace };
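`fd_write` above walks a list of WASI ciovec structs in linear memory. A self-contained sketch of that layout, using a plain ArrayBuffer in place of wasm memory (offsets and the "hi" payload are made up):

```javascript
// WASI ciovec: two little-endian u32 fields per entry: buf (pointer) and buf_len.
const memory = new ArrayBuffer(64);
const heap = new DataView(memory);
new Uint8Array(memory).set([104, 105], 32); // "hi" stored at offset 32
heap.setUint32(0, 32, true); // iovec[0].buf
heap.setUint32(4, 2, true); // iovec[0].buf_len
// Walk the iovec list the same way fd_write does.
const iovs = 0;
const iovs_len = 1;
let str = '';
let written = 0;
for (let i = 0; i < iovs_len; ++i) {
	const buf = heap.getUint32(iovs + 8 * i + 0, true);
	const buf_len = heap.getUint32(iovs + 8 * i + 4, true);
	const buf_data = new Uint8Array(memory, buf, buf_len);
	for (let j = 0; j < buf_data.length; ++j) str += String.fromCharCode(buf_data[j]);
	written += buf_len;
}
// str === 'hi', written === 2
```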