Utils¶
utils¶
Shared utility functions and helpers for psyphy.
This subpackage provides:

- bootstrap : frequentist confidence intervals via resampling.
- candidates : functions for generating candidate stimulus pools.
- diagnostics : parameter summaries and threshold uncertainty estimation.
- math : mathematical utilities (basis functions, distances, kernels).
- rng : random number handling for reproducibility.
MVP implementation
- bootstrap: prediction CIs, model comparison, arbitrary statistics.
- candidates: grid, Sobol, custom pools.
- diagnostics: parameter summaries, threshold uncertainty.
- math: Chebyshev basis, Mahalanobis distance, RBF kernel.
- rng: seed() and split() for JAX PRNG keys.
Full WPPM mode
- candidates: adaptive refinement around posterior uncertainty.
- diagnostics: parameter sensitivity analysis, model comparison.
- math: richer kernels and basis expansions for Wishart processes.
- rng: experiment-wide RNG registry.
Functions:
| Name | Description |
|---|---|
| bootstrap_compare_models | Bootstrap comparison of two models' predictive performance. |
| bootstrap_predictions | Bootstrap confidence intervals for model predictions. |
| bootstrap_statistic | Bootstrap confidence interval for any model-derived statistic. |
| chebyshev_basis | Construct the Chebyshev polynomial basis matrix T_0..T_degree evaluated at x. |
| custom_candidates | Wrap a user-defined list of probes into candidate pairs. |
| estimate_threshold_contour_uncertainty | Estimate threshold contour and its uncertainty around a reference point. |
| estimate_threshold_uncertainty | Estimate threshold location and uncertainty via parameter sampling. |
| grid_candidates | Generate grid-based candidate probes around a reference. |
| mahalanobis_distance | Compute squared Mahalanobis distance between x and mean. |
| parameter_summary | Compute summary statistics for all model parameters. |
| print_parameter_summary | Print a human-readable parameter summary. |
| rbf_kernel | Radial Basis Function (RBF) kernel between two sets of points. |
| seed | Create a new PRNG key from an integer seed. |
| sobol_candidates | Generate Sobol quasi-random candidates within bounds. |
| split | Split a PRNG key into multiple independent keys. |
bootstrap_compare_models
¶
bootstrap_compare_models(
model1: Model,
model2: Model,
X_train: ndarray,
y_train: ndarray,
X_test: ndarray,
y_test: ndarray,
*,
metric_fn: Callable[[ndarray, ndarray], float]
| None = None,
n_bootstrap: int = 100,
confidence_level: float = 0.95,
probes: ndarray | None = None,
inference: str = "map",
inference_config: dict[str, Any] | None = None,
key: Any,
) -> tuple[float, float, float, bool]
Bootstrap comparison of two models' predictive performance.
Tests whether model1 performs significantly better/worse than model2 by computing confidence intervals on the performance difference.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model1 | Model | First unfitted model to compare | required |
| model2 | Model | Second unfitted model to compare | required |
| X_train | ndarray | Training stimuli | required |
| y_train | ndarray | Training responses | required |
| X_test | ndarray | Test stimuli for evaluation | required |
| y_test | ndarray | Test responses for evaluation | required |
| metric_fn | callable | Function that takes (y_true, y_pred) and returns a scalar. Default: accuracy for binary classification | None |
| n_bootstrap | int | Number of bootstrap samples | 100 |
| confidence_level | float | Confidence level | 0.95 |
| probes | ndarray, optional | Test probes for discrimination tasks | None |
| inference | str | Inference method | 'map' |
| inference_config | dict | Inference configuration | None |
| key | KeyArray | Random key | required |

Returns:

| Name | Type | Description |
|---|---|---|
| diff_estimate | float | Estimated performance difference (model1 - model2); positive means model1 is better |
| ci_lower | float | Lower bound on the difference |
| ci_upper | float | Upper bound on the difference |
| is_significant | bool | True if the difference is statistically significant (the confidence interval excludes zero) |
Notes
This function performs paired bootstrap comparison: for each bootstrap sample, both models are fit on the same resampled training data and evaluated on the same test data. This controls for data sampling variability.
The null hypothesis is that the two models have equal performance; we reject it if the CI on the difference excludes zero.
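The paired scheme can be sketched with plain NumPy. This is a simplified illustration, not the psyphy implementation: it resamples test trials for two fixed prediction vectors rather than refitting both models on resampled training data, but the pairing (same resampled indices for both models) and the CI-excludes-zero decision rule are the same.

```python
import numpy as np

def paired_bootstrap_diff(y_test, pred1, pred2, n_bootstrap=1000,
                          confidence_level=0.95, seed=0):
    """Paired bootstrap CI on the accuracy difference between two predictors."""
    rng = np.random.default_rng(seed)
    n = len(y_test)
    diffs = np.empty(n_bootstrap)
    for b in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)           # resample trials with replacement
        acc1 = np.mean(pred1[idx] == y_test[idx])  # same indices for both models (paired)
        acc2 = np.mean(pred2[idx] == y_test[idx])
        diffs[b] = acc1 - acc2
    alpha = 1.0 - confidence_level
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    significant = (lo > 0) or (hi < 0)             # CI excludes zero
    return diffs.mean(), lo, hi, significant

y = np.array([0, 1, 1, 0, 1, 1, 0, 1])
p1 = np.array([0, 1, 1, 0, 1, 1, 0, 1])  # no errors
p2 = np.array([1, 0, 1, 0, 1, 1, 0, 1])  # two errors
est, lo, hi, sig = paired_bootstrap_diff(y, p1, p2)
```

Because both models are scored on the identical resample, between-sample variability cancels out of the difference, which is what makes the paired comparison more sensitive than two independent bootstraps.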
Source code in src/psyphy/utils/bootstrap.py
bootstrap_predictions
¶
bootstrap_predictions(
model: Model,
X_train: ndarray,
y_train: ndarray,
X_test: ndarray,
*,
n_bootstrap: int = 100,
probes: ndarray | None = None,
confidence_level: float = 0.95,
inference: str = "map",
inference_config: dict[str, Any] | None = None,
key: Any,
) -> tuple[ndarray, ndarray, ndarray]
Bootstrap confidence intervals for model predictions.
Resamples training data with replacement, refits model N times, and computes prediction quantiles at test points.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | Model | Unfitted model instance (will be cloned for each bootstrap sample) | required |
| X_train | ndarray, shape (n_train, ...) | Training stimuli | required |
| y_train | ndarray, shape (n_train,) | Training responses | required |
| X_test | ndarray, shape (n_test, ...) | Test points for predictions | required |
| n_bootstrap | int | Number of bootstrap samples. Typical values: 100 (quick), 1000 (publication quality) | 100 |
| probes | ndarray | Test probes for discrimination tasks | None |
| confidence_level | float | Confidence level (e.g., 0.95 for 95% CI, 0.99 for 99% CI) | 0.95 |
| inference | str | Inference method for each bootstrap fit | "map" |
| inference_config | dict | Configuration for inference engine | None |
| key | Any | JAX random key for reproducibility | required |

Returns:

| Name | Type | Description |
|---|---|---|
| mean_estimate | ndarray, shape (n_test,) | Average prediction across bootstrap samples |
| ci_lower | ndarray, shape (n_test,) | Lower confidence bound at each test point |
| ci_upper | ndarray, shape (n_test,) | Upper confidence bound at each test point |
Notes
Computational cost:

- Each bootstrap sample requires a full model refit
- Total time ≈ n_bootstrap × (time per fit)
- For MAP with 100 samples: typically 10-100 seconds

Assumptions:

- Training data are IID (independent and identically distributed)
- For sequential data, consider block bootstrap instead

The bootstrap estimates sampling uncertainty (how stable are predictions if we collected different data?), not model uncertainty (what is the range of plausible predictions given the data?). For model uncertainty, use the Bayesian posterior's variance instead.
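The resample/refit/quantile pipeline described above can be sketched with a toy stand-in for the model (here, "fitting" just means taking the mean of the responses); names and the constant-prediction model are illustrative, not the psyphy API.

```python
import numpy as np

def bootstrap_mean_prediction(y_train, n_bootstrap=500,
                              confidence_level=0.95, seed=0):
    """Resample -> 'refit' (here: take the mean) -> percentile CI."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    preds = np.empty(n_bootstrap)
    for b in range(n_bootstrap):
        sample = y_train[rng.integers(0, n, size=n)]  # resample with replacement
        preds[b] = sample.mean()                      # stand-in for a full model refit
    alpha = 1.0 - confidence_level
    lo, hi = np.quantile(preds, [alpha / 2, 1 - alpha / 2])
    return preds.mean(), lo, hi

y = np.array([0, 1, 1, 1, 0, 1, 1, 0, 1, 1])
mean_est, ci_lo, ci_hi = bootstrap_mean_prediction(y)
```

In the real function, each "refit" is a full inference run on the resampled training set, and the quantiles are taken per test point across the n_bootstrap prediction vectors.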
Source code in src/psyphy/utils/bootstrap.py
bootstrap_statistic
¶
bootstrap_statistic(
model: Model,
X: ndarray,
y: ndarray,
statistic_fn: Callable[[Model], float | ndarray],
*,
n_bootstrap: int = 100,
confidence_level: float = 0.95,
inference: str = "map",
inference_config: dict[str, Any] | None = None,
key: Any,
) -> tuple[
float | ndarray, float | ndarray, float | ndarray
]
Bootstrap confidence interval for any model-derived statistic.
Resamples data, refits model, and computes statistic for each bootstrap sample. Returns point estimate and confidence interval.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | Model | Unfitted model instance | required |
| X | ndarray, shape (n_trials, ...) | Training stimuli | required |
| y | ndarray, shape (n_trials,) | Training responses | required |
| statistic_fn | callable | Function that takes a fitted Model and returns a scalar or array, e.g. `lambda m: m.estimate_threshold(criterion=0.75)`, `lambda m: m.posterior(X_test).mean`, or `lambda m: jnp.linalg.norm(m._posterior.params["lengthscales"])` | required |
| n_bootstrap | int | Number of bootstrap samples | 100 |
| confidence_level | float | Confidence level for interval | 0.95 |
| inference | str | Inference method | "map" |
| inference_config | dict | Inference configuration | None |
| key | KeyArray | Random key | required |

Returns:

| Name | Type | Description |
|---|---|---|
| estimate | float or ndarray | Point estimate (mean across bootstrap samples) |
| ci_lower | float or ndarray | Lower confidence bound |
| ci_upper | float or ndarray | Upper confidence bound |
Notes
This is a general-purpose function for any statistic you can compute from a fitted model. The statistic_fn should:

- Take a fitted Model as input
- Return a scalar or array (shape must be consistent across samples)
- Not modify the model

For vector-valued statistics, confidence intervals are computed element-wise.
Source code in src/psyphy/utils/bootstrap.py
chebyshev_basis
¶
chebyshev_basis(x: ndarray, degree: int) -> ndarray
Construct the Chebyshev polynomial basis matrix T_0..T_degree evaluated at x.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | ndarray | Input points of shape (N,). For best numerical properties, values should lie in [-1, 1]. | required |
| degree | int | Maximum polynomial degree (>= 0). The output includes columns for T_0 through T_degree. | required |

Returns:

| Type | Description |
|---|---|
| ndarray | Array of shape (N, degree + 1) where column j contains T_j(x). |

Raises:

| Type | Description |
|---|---|
| ValueError | If `degree` is negative |
Notes
Uses the three-term recurrence:

    T_0(x) = 1
    T_1(x) = x
    T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)

The Chebyshev polynomials are orthogonal on [-1, 1] with weight 1 / sqrt(1 - x^2).
Examples:
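The recurrence above can be sketched in plain jax.numpy; this is an illustrative re-implementation of the idea, not the psyphy source:

```python
import jax.numpy as jnp

def chebyshev_basis_sketch(x, degree):
    """Columns T_0(x)..T_degree(x) built with the three-term recurrence."""
    cols = [jnp.ones_like(x)]                     # T_0 = 1
    if degree >= 1:
        cols.append(x)                            # T_1 = x
    for _ in range(2, degree + 1):
        cols.append(2 * x * cols[-1] - cols[-2])  # T_{n+1} = 2 x T_n - T_{n-1}
    return jnp.stack(cols, axis=-1)

x = jnp.array([-1.0, 0.0, 0.5, 1.0])
B = chebyshev_basis_sketch(x, degree=3)  # shape (4, 4)
```

Column 2 is T_2(x) = 2x^2 - 1, so B[2, 2] = T_2(0.5) = -0.5, matching the closed form.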
Source code in src/psyphy/utils/math.py
custom_candidates
¶
Wrap a user-defined list of probes into candidate pairs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| reference | ndarray, shape (D,) | Reference stimulus. | required |
| probe_list | list of jnp.ndarray | Explicitly chosen probe vectors. | required |

Returns:

| Type | Description |
|---|---|
| list of Stimulus | Candidate (reference, probe) pairs. |
Notes
- Useful when hardware constraints (monitor gamut, auditory frequencies) restrict the set of valid stimuli.
- Full WPPM mode: this pool could be pruned or expanded dynamically depending on posterior fit quality.
Source code in src/psyphy/utils/candidates.py
estimate_threshold_contour_uncertainty
¶
estimate_threshold_contour_uncertainty(
model: Model,
reference: ndarray,
n_angles: int = 16,
max_distance: float = 0.5,
n_grid_points: int = 100,
probe_offset: float = 0.05,
threshold_criterion: float = 0.75,
n_samples: int = 100,
*,
key: Any,
) -> dict[str, Any]
Estimate threshold contour and its uncertainty around a reference point.
Searches radially in multiple directions to find threshold locations and their uncertainty.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | Model | Fitted model | required |
| reference | ndarray, shape (input_dim,) | Reference stimulus (center of contour) | required |
| n_angles | int | Number of directions to search | 16 |
| max_distance | float | Maximum search distance from reference | 0.5 |
| n_grid_points | int | Grid resolution per direction | 100 |
| probe_offset | float | Probe offset for discrimination | 0.05 |
| threshold_criterion | float | Target accuracy level | 0.75 |
| n_samples | int | Parameter samples for uncertainty estimation | 100 |
| key | KeyArray | JAX random key | required |

Returns:

| Name | Type | Description |
|---|---|---|
| results | dict | Dictionary with keys "angles" (n_angles,) angles in radians; "threshold_mean" (n_angles, input_dim) mean threshold coordinates; "threshold_std" (n_angles,) standard deviation of threshold distance; "threshold_samples" (n_angles, n_samples) threshold sample indices |
Source code in src/psyphy/utils/diagnostics.py
estimate_threshold_uncertainty
¶
estimate_threshold_uncertainty(
model: Model,
X_grid: ndarray,
probes: ndarray,
threshold_criterion: float = 0.75,
n_samples: int = 100,
*,
key: Any,
) -> tuple[ndarray, float, float]
Estimate threshold location and uncertainty via parameter sampling.
For each parameter sample θᵢ ~ p(θ | data), finds where the model predicts threshold_criterion accuracy. The distribution of these threshold locations gives us uncertainty about the threshold.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | Model | Fitted model (must support predict_with_params) | required |
| X_grid | ndarray, shape (n_grid, input_dim) | Grid of test points to search over (e.g., line through stimulus space) | required |
| probes | ndarray, shape (n_grid, input_dim) | Probe at each grid point | required |
| threshold_criterion | float | Target accuracy level (e.g., 0.75 for 75% correct threshold) | 0.75 |
| n_samples | int | Number of parameter samples for Monte Carlo estimation | 100 |
| key | KeyArray | Random key for parameter sampling | required |

Returns:

| Name | Type | Description |
|---|---|---|
| threshold_locations | ndarray, shape (n_samples,) | Grid index of threshold for each parameter sample |
| threshold_mean | float | Mean threshold location (as grid index) |
| threshold_std | float | Standard deviation of threshold location (quantifies uncertainty) |
Notes
This function quantifies threshold uncertainty: how uncertain we are about the threshold location given the observed data.
This is different from prediction uncertainty at a fixed location:

- pred_post.variance tells you: "uncertainty about p(correct) at X"
- estimate_threshold_uncertainty tells you: "uncertainty about where the threshold is"

Use this for:

- Reporting threshold estimates with confidence intervals
- Visualizing threshold contour uncertainty
- Experimental design (test near uncertain thresholds)
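For each parameter sample, the search step amounts to finding the grid point whose predicted accuracy is closest to the criterion. A minimal sketch of that step, with a made-up sigmoid curve standing in for the model's per-sample predictions:

```python
import jax.numpy as jnp

def threshold_index(p_correct, criterion=0.75):
    """Index of the grid point whose predicted accuracy is nearest the criterion."""
    return int(jnp.argmin(jnp.abs(p_correct - criterion)))

# Illustrative psychometric curve over a 1D grid (not a psyphy prediction):
# accuracy rises from ~0.5 to ~1.0, crossing 0.75 at grid value 0.4.
grid = jnp.linspace(0.0, 1.0, 101)
p_correct = 0.5 + 0.5 / (1.0 + jnp.exp(-12.0 * (grid - 0.4)))
idx = threshold_index(p_correct, criterion=0.75)  # grid index of the 75% point
```

Repeating this over n_samples parameter draws yields the distribution of threshold indices whose mean and standard deviation the function returns.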
Source code in src/psyphy/utils/diagnostics.py
grid_candidates
¶
Generate grid-based candidate probes around a reference.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| reference | ndarray, shape (D,) | Reference stimulus in model space. | required |
| radii | list of float | Distances from reference to probe. | required |
| directions | int | Number of angular directions. | 16 |

Returns:

| Type | Description |
|---|---|
| list of Stimulus | Candidate (reference, probe) pairs. |
Notes
- MVP: probes lie on concentric circles around reference.
- Full WPPM mode: could adaptively refine grid around regions of high posterior uncertainty.
Source code in src/psyphy/utils/candidates.py
mahalanobis_distance
¶
Compute squared Mahalanobis distance between x and mean.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | ndarray | Data vector, shape (D,). | required |
| mean | ndarray | Mean vector, shape (D,). | required |
| cov_inv | ndarray | Inverse covariance matrix, shape (D, D). | required |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar squared Mahalanobis distance. |
Notes
- Formula: d^2 = (x - mean)^T Σ^{-1} (x - mean)
- Used in WPPM discriminability calculations.
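The formula is a one-liner in jax.numpy; this sketch re-implements it for illustration (not the psyphy source):

```python
import jax.numpy as jnp

def mahalanobis_sq_sketch(x, mean, cov_inv):
    """d^2 = (x - mean)^T Sigma^{-1} (x - mean)."""
    d = x - mean
    return d @ cov_inv @ d

# Diagonal covariance: distances are scaled per dimension by 1/variance.
cov = jnp.array([[2.0, 0.0],
                 [0.0, 0.5]])
cov_inv = jnp.linalg.inv(cov)
d2 = mahalanobis_sq_sketch(jnp.array([1.0, 1.0]), jnp.zeros(2), cov_inv)
```

With this diagonal covariance, d^2 = 1/2 + 1/0.5 = 2.5: the low-variance dimension contributes more to discriminability.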
Source code in src/psyphy/utils/math.py
parameter_summary
¶
parameter_summary(
param_posterior: ParameterPosterior,
n_samples: int = 1000,
*,
key: Any | None = None,
quantiles: tuple[float, ...] = (
0.025,
0.25,
0.5,
0.75,
0.975,
),
) -> dict[str, dict[str, ndarray]]
Compute summary statistics for all model parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| param_posterior | ParameterPosterior | Parameter posterior to summarize | required |
| n_samples | int | Number of Monte Carlo samples | 1000 |
| key | JAX PRNGKey | Random key for sampling (auto-generated if None) | None |
| quantiles | tuple of floats | Quantiles to compute | (0.025, 0.25, 0.5, 0.75, 0.975) |

Returns:

| Name | Type | Description |
|---|---|---|
| summary | dict[str, dict[str, ndarray]] | One entry per parameter; each value is a dict with "mean" (mean of posterior samples), "std" (standard deviation), and "quantiles" (dict mapping quantile to value) |
Examples:
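The per-parameter summary is straightforward once you have posterior samples; a minimal sketch of the mean/std/quantiles computation (illustrative, not the psyphy source, and using a deterministic array as a stand-in for posterior draws):

```python
import jax.numpy as jnp

def summarize_samples(samples, quantiles=(0.025, 0.25, 0.5, 0.75, 0.975)):
    """Mean / std / quantiles for one parameter's posterior samples."""
    return {
        "mean": jnp.mean(samples, axis=0),
        "std": jnp.std(samples, axis=0),
        "quantiles": {q: jnp.quantile(samples, q, axis=0) for q in quantiles},
    }

samples = jnp.linspace(-1.0, 1.0, 1001)  # stand-in for posterior draws
s = summarize_samples(samples)
```

The real function draws n_samples from the ParameterPosterior with the given key and applies this summary to every parameter in the model.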
Source code in src/psyphy/utils/diagnostics.py
print_parameter_summary
¶
print_parameter_summary(
param_posterior: ParameterPosterior,
n_samples: int = 1000,
*,
key: Any | None = None,
) -> None
Print a human-readable parameter summary.
Examples:

    log_diag:
      Mean: [0.12, -0.03]
      Std:  [0.05, 0.02]
Source code in src/psyphy/utils/diagnostics.py
rbf_kernel
¶
rbf_kernel(
x1: ndarray, x2: ndarray, lengthscale: float = 1.0
) -> ndarray
Radial Basis Function (RBF) kernel between two sets of points.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x1 | ndarray | First set of points, shape (N, D). | required |
| x2 | ndarray | Second set of points, shape (M, D). | required |
| lengthscale | float | Length-scale parameter controlling smoothness. | 1.0 |

Returns:

| Type | Description |
|---|---|
| ndarray | Kernel matrix of shape (N, M). |
Notes
- RBF kernel: k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))
- Default kernel choice for Gaussian-process-style smooth covariance priors in Full WPPM mode.
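The kernel can be sketched with broadcasting in jax.numpy; this is an illustrative re-implementation of the formula above, not the psyphy source:

```python
import jax.numpy as jnp

def rbf_kernel_sketch(x1, x2, lengthscale=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2)), shape (N, M)."""
    # (N, 1, D) - (1, M, D) -> (N, M, D), then sum squares over D.
    sq = jnp.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-sq / (2.0 * lengthscale ** 2))

x1 = jnp.array([[0.0, 0.0],
                [1.0, 0.0]])
K = rbf_kernel_sketch(x1, x1)
```

The diagonal is 1 (zero distance) and off-diagonal entries decay with squared distance; larger lengthscales give slower decay and smoother functions.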
Source code in src/psyphy/utils/math.py
seed
¶
Create a new PRNG key from an integer seed.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| seed_value | int | Seed for random number generation. | required |

Returns:

| Type | Description |
|---|---|
| KeyArray | New PRNG key. |
Source code in src/psyphy/utils/rng.py
sobol_candidates
¶
sobol_candidates(
reference: ndarray,
n: int,
bounds: list[tuple[float, float]],
seed: int = 0,
) -> list[Stimulus]
Generate Sobol quasi-random candidates within bounds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| reference | ndarray, shape (D,) | Reference stimulus. | required |
| n | int | Number of candidates to generate. | required |
| bounds | list of (low, high) | Bounds per dimension. | required |
| seed | int | Random seed. | 0 |

Returns:

| Type | Description |
|---|---|
| list of Stimulus | Candidate (reference, probe) pairs. |
Notes
- MVP: uniform coverage of space using low-discrepancy Sobol sequence.
- Full WPPM mode: Sobol could be used for initialization, then hand off to posterior-aware strategies.
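Whatever the internal implementation, a Sobol probe pool like this can be sketched with SciPy's quasi-Monte Carlo module; the reference, bounds, and pairing below are illustrative, not the psyphy API:

```python
import numpy as np
from scipy.stats import qmc

reference = np.array([0.5, 0.5])
bounds = [(0.0, 1.0), (0.0, 1.0)]

sampler = qmc.Sobol(d=len(bounds), scramble=True, seed=0)
unit = sampler.random(n=8)               # 8 low-discrepancy points in [0, 1)^2
low, high = zip(*bounds)
probes = qmc.scale(unit, low, high)      # rescale to the per-dimension bounds
candidates = [(reference, probe) for probe in probes]
```

Powers of two for n preserve the balance properties of the Sobol sequence, which is why 8 rather than, say, 10 is used here.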
Source code in src/psyphy/utils/candidates.py
split
¶
Split a PRNG key into multiple independent keys.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| key | KeyArray | RNG key to split. | required |
| num | int | Number of new keys to return. | 2 |

Returns:

| Type | Description |
|---|---|
| tuple of jax.random.KeyArray | Independent new PRNG keys. |
Source code in src/psyphy/utils/rng.py
RNG¶
rng
¶
rng.py
Random number utilities for psyphy.
This module standardizes RNG handling across the package, especially important when mixing NumPy and JAX.
MVP implementation:

- Wrappers around JAX PRNG keys.
- Helpers for reproducibility.

Future extensions:

- Experiment-wide RNG registry.
- Splitting strategies for parallel adaptive placement.
Examples:
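The underlying JAX pattern that seed() and split() wrap looks like this (plain jax.random calls, shown for orientation; the psyphy wrappers add the naming and reproducibility conventions on top):

```python
import jax

# seed(0) conceptually wraps creating a key from an integer seed.
key = jax.random.PRNGKey(0)

# split(key) derives independent keys; never reuse a key after splitting.
k1, k2 = jax.random.split(key)

# Each key drives its own deterministic stream of draws.
x = jax.random.normal(k1, shape=(3,))
```

Splitting rather than reusing keys is what keeps parallel draws independent and every run exactly reproducible from the original seed.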
Functions:
| Name | Description |
|---|---|
| seed | Create a new PRNG key from an integer seed. |
| split | Split a PRNG key into multiple independent keys. |
Math¶
math
¶
math.py
Math utilities for psyphy.
Includes:

- chebyshev_basis : compute Chebyshev polynomial basis.
- mahalanobis_distance : discriminability metric used in WPPM MVP.
- rbf_kernel : kernel function, useful in Full WPPM mode covariance priors.
All functions use JAX (jax.numpy) for compatibility with autodiff.
Notes
- math.chebyshev_basis is relevant when implementing Full WPPM mode, where covariance fields are expressed in a basis expansion.
- math.mahalanobis_distance is directly used in WPPM MVP discriminability.
- math.rbf_kernel is a placeholder for Gaussian-process-style covariance priors.
Functions:
| Name | Description |
|---|---|
| chebyshev_basis | Construct the Chebyshev polynomial basis matrix T_0..T_degree evaluated at x. |
| mahalanobis_distance | Compute squared Mahalanobis distance between x and mean. |
| rbf_kernel | Radial Basis Function (RBF) kernel between two sets of points. |
Stimulus candidates¶
candidates
¶
candidates.py
Utilities for generating candidate stimulus pools.
Definition
A candidate pool is the set of all possible (reference, probe) pairs that an adaptive placement strategy may select from.
Separation of concerns
- Candidate generation (this module) defines what stimuli are possible.
- Trial placement strategies (e.g., GreedyMAPPlacement, InfoGainPlacement) define which of those candidates to present next.
Why this matters
- Researchers: think of the candidate pool as the "menu" of allowable trials.
- Developers: placement strategies should not generate candidates but only select from a given pool.
MVP implementation
- Grid-based candidates (probes on circles around a reference).
- Sobol sequence candidates (low-discrepancy exploration).
- Custom user-defined candidate pools.
Full WPPM mode
- Candidate generation could adaptively refine itself based on posterior uncertainty (e.g., dynamic grids).
- Candidate pools could be constrained by device gamut or subject-specific calibration.
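The MVP grid pool (probes on concentric circles around a reference) can be sketched as follows; the function name and (reference, probe) tuple representation are illustrative, not the psyphy API:

```python
import jax.numpy as jnp

def grid_candidates_sketch(reference, radii, directions=16):
    """Probes on concentric circles around a 2D reference point."""
    angles = jnp.linspace(0.0, 2.0 * jnp.pi, directions, endpoint=False)
    pool = []
    for r in radii:
        # Offsets at distance r in each angular direction.
        offsets = r * jnp.stack([jnp.cos(angles), jnp.sin(angles)], axis=-1)
        pool.extend((reference, reference + off) for off in offsets)
    return pool

pool = grid_candidates_sketch(jnp.array([0.0, 0.0]),
                              radii=[0.1, 0.2], directions=8)
```

A placement strategy then only ever selects from this fixed "menu" of pairs, keeping candidate generation and trial selection cleanly separated.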
Functions:
| Name | Description |
|---|---|
| custom_candidates | Wrap a user-defined list of probes into candidate pairs. |
| grid_candidates | Generate grid-based candidate probes around a reference. |
| sobol_candidates | Generate Sobol quasi-random candidates within bounds. |
Attributes:

| Name | Type | Description |
|---|---|---|
| Stimulus | | |