vault backup: 2025-03-25 00:34:23
The `two_macro` function is *2.4x* faster than the `one_macro` function.
What the heck is going on? How does *adding* an entire second macro function *improve* performance??
It turns out that the clever and convenient `one_macro.array[string:$(keyword)]` triggers iteration to filter the array. Since the iteration is triggered by a macro, it runs directly in Java code, so it's still much faster than iterating in mcfunction, but the cost is still O(n). In contrast, the `two_macro` approach accesses values directly by `key` and `index`, which are O(1) operations. This was confirmed by **Nicoder**. While I haven't tested it, this means that, when run on a larger dataset, the gap between `two_macro` and `one_macro` should continue to widen.
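To make the contrast concrete, here's a rough analogy in plain Python (not mcfunction — the names and data shape are made up for illustration): the `one_macro` lookup behaves like filtering a list for a matching entry, while the `two_macro` lookup behaves like resolving a key to an index once and then reading by position.

```python
# Hypothetical dataset standing in for the NBT array.
records = [{"key": f"word{i}", "value": i} for i in range(1000)]

# one_macro-style: scan the array for a matching entry -> O(n)
def lookup_by_filter(keyword):
    return next(r["value"] for r in records if r["key"] == keyword)

# two_macro-style: a prebuilt key -> index map makes the read O(1)
index_of = {r["key"]: i for i, r in enumerate(records)}

def lookup_by_index(keyword):
    return records[index_of[keyword]]["value"]

# Both return the same value; only the cost differs.
assert lookup_by_filter("word999") == lookup_by_index("word999") == 999
```

Even when the O(n) scan runs in fast native code, it still touches every element in the worst case, while the indexed read touches one.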
# takeaways
Indexing is cool. If you find yourself working with moderate-to-large arrays and can build an index ahead of querying the data, it's absolutely worth it from a query performance standpoint.
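As a quick sanity check of the "gap widens with size" claim, here's a small Python timing sketch (again an analogy, not mcfunction): the linear scan slows down as the dataset grows, while the indexed read stays roughly flat.

```python
import timeit

def scan(records, keyword):
    # Linear filter: cost grows with len(records).
    return next(v for k, v in records if k == keyword)

for n in (1_000, 10_000):
    records = [(f"w{i}", i) for i in range(n)]
    # Index built once, in advance of querying.
    index = {k: i for i, (k, _) in enumerate(records)}
    worst = f"w{n - 1}"  # worst case for the scan: last element
    t_scan = timeit.timeit(lambda: scan(records, worst), number=200)
    t_index = timeit.timeit(lambda: records[index[worst]][1], number=200)
    print(f"n={n}: scan {t_scan:.4f}s vs indexed {t_index:.4f}s")
```

The absolute numbers depend on the machine, but the ratio between the two lookups should grow roughly linearly with `n`.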