Trim the viable points set

Due to re-sampling and the merging of points from repeated re-discovery of the same projections, the viable points set for a single projection can grow well beyond the nfeval limit. For instance, in the claude-decoding example with enumLevel=2, the sets grow from the target sample size of 100 points up to 3000. This makes the getViableProjections step very slow, since in the worst case every coupled parameter set has to be tested against every point in the viable set.

Implement trimming of the viable set (in prepareViablePointsForProjection), e.g. by running k-means with nfeval clusters and picking one point from each cluster, either at random or the one closest to the projection values (the centroid itself is likely a poor choice for projection, since it is generally not an actual viable point); see the sketch below. (Nota bene: trimming will also reduce the sampling overshoots from HYPERSPACE, which happen quite often for small sample sizes.)
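
A minimal TypeScript sketch of the idea, assuming points are plain number arrays. The names `trimViablePoints` and `projectionTarget`, and the inline k-means, are illustrative only, not the project's actual API; the distance to the target could equally be computed only over the projected coordinates.

```ts
type Point = number[];

function dist2(a: Point, b: Point): number {
  let s = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    s += d * d;
  }
  return s;
}

function kMeans(points: Point[], k: number, iters = 20): number[] {
  // Seed centroids with the first k points; good enough for a trimming
  // heuristic (k-means++ would be an easy upgrade, duplicates tolerated).
  const centroids = points.slice(0, k).map(p => p.slice());
  const assign = new Array<number>(points.length).fill(0);
  for (let it = 0; it < iters; it++) {
    // Assignment step: attach each point to its nearest centroid.
    for (let i = 0; i < points.length; i++) {
      let best = 0, bestD = Infinity;
      for (let c = 0; c < k; c++) {
        const d = dist2(points[i], centroids[c]);
        if (d < bestD) { bestD = d; best = c; }
      }
      assign[i] = best;
    }
    // Update step: recompute each centroid as the mean of its cluster.
    const sums = centroids.map(c => c.map(() => 0));
    const counts = new Array<number>(k).fill(0);
    for (let i = 0; i < points.length; i++) {
      const c = assign[i];
      counts[c]++;
      for (let d = 0; d < points[i].length; d++) sums[c][d] += points[i][d];
    }
    for (let c = 0; c < k; c++) {
      if (counts[c] > 0) centroids[c] = sums[c].map(s => s / counts[c]);
    }
  }
  return assign;
}

// Trim a viable set down to at most nfeval representatives: cluster with
// k-means, then keep from each cluster the actual member closest to the
// projection target (not the centroid, which need not be viable).
function trimViablePoints(
  points: Point[],
  nfeval: number,
  projectionTarget: Point,
): Point[] {
  if (points.length <= nfeval) return points;
  const assign = kMeans(points, nfeval);
  const picked: (Point | null)[] = new Array(nfeval).fill(null);
  const pickedD = new Array<number>(nfeval).fill(Infinity);
  for (let i = 0; i < points.length; i++) {
    const c = assign[i];
    const d = dist2(points[i], projectionTarget);
    if (d < pickedD[c]) { pickedD[c] = d; picked[c] = points[i]; }
  }
  return picked.filter((p): p is Point => p !== null);
}
```

Returning an actual cluster member rather than the centroid keeps every point in the trimmed set inside the viable region, which is the property the projection step relies on.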