psyphy
Psychophysical modeling and adaptive trial placement.
This package implements the Wishart Process Psychophysical Model (WPPM) with modular components for priors, task likelihoods, and noise models. The model can be fitted to incoming subject data and used to adaptively select the trials presented to the subject next, which enables efficient estimation of psychophysical parameters (e.g., threshold contours) from a minimal number of trials.
Core design
- WPPM (model/wppm.py):
  - Structural definition of the psychophysical model.
  - Maintains the parameterization of local covariance fields.
  - Computes discriminability between stimuli.
  - Delegates trial likelihoods and predictions to the task.
- Prior (model/prior.py):
  - Defines the distribution over model parameters.
  - MVP: Gaussian prior over diagonal log-variances.
  - Full WPPM mode: structured prior over basis weights and lengthscale-controlled covariance fields.
- TaskLikelihood (model/task.py):
  - Encodes the psychophysical decision rule.
  - MVP: OddityTask (3AFC) and TwoAFC with sigmoid mappings.
  - Full WPPM mode: loglik and predict implemented via Monte Carlo observer simulations, using the noise model explicitly.
- NoiseModel (model/noise.py):
  - Defines the distribution of internal representation noise.
  - MVP: GaussianNoise (zero mean, isotropic).
  - Full WPPM mode: adds a StudentTNoise option and beyond.
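These components compose explicitly. A minimal composition sketch using the constructors documented in the class reference below (argument values are illustrative):

```python
import jax.random as jr

from psyphy import WPPM, Prior, OddityTask, GaussianNoise

# Assemble the modular pieces: prior over parameters, task decision rule,
# and internal noise model, all attached to one WPPM instance.
prior = Prior(input_dim=2, scale=0.5)   # MVP: Gaussian prior on log-variances
task = OddityTask(slope=1.5)            # 3AFC oddity decision rule
noise = GaussianNoise(sigma=1.0)        # isotropic internal noise

model = WPPM(input_dim=2, prior=prior, task=task, noise=noise)

# Parameters are initialized by sampling from the prior.
params = model.init_params(jr.PRNGKey(0))
```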
Unified import style
Top-level (core models + session):

```python
from psyphy import WPPM, Prior, OddityTask, GaussianNoise, MAPOptimizer
from psyphy import ExperimentSession, ResponseData, TrialBatch
```

Subpackages:

```python
from psyphy.model import WPPM, Prior, OddityTask, TwoAFC, GaussianNoise, StudentTNoise
from psyphy.inference import MAPOptimizer, LangevinSampler, LaplaceApproximation
from psyphy.posterior import Posterior, effective_sample_size, rhat
from psyphy.acquisition import expected_improvement, upper_confidence_bound, mutual_information
from psyphy.acquisition import optimize_acqf, optimize_acqf_discrete, optimize_acqf_random
from psyphy.trial_placement import GridPlacement, SobolPlacement, StaircasePlacement
from psyphy.utils import grid_candidates, sobol_candidates, custom_candidates, chebyshev_basis
from psyphy.utils import bootstrap_predictions, bootstrap_statistic, bootstrap_compare_models
```
Data flow
- A ResponseData object (psyphy.data) contains trial stimuli and responses.
- WPPM.init_params(key) samples initial parameters from the prior.
- Inference engines optimize the log posterior: log_posterior = task.loglik(params, data, model=WPPM, noise=NoiseModel) + prior.log_prob(params)
- Posterior predictions (p(correct), threshold ellipses) are always obtained through WPPM delegating to TaskLikelihood.
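End to end, the MVP data flow looks roughly like the following sketch (it assumes ResponseData() constructs an empty container; the other calls are documented below):

```python
import numpy as np

from psyphy import WPPM, Prior, OddityTask, GaussianNoise, MAPOptimizer, ResponseData

prior = Prior(input_dim=2)
model = WPPM(input_dim=2, prior=prior, task=OddityTask(), noise=GaussianNoise())

# Collect trials into a ResponseData container.
data = ResponseData()
data.add_trial(ref=np.array([0.5, 0.5]), comparison=np.array([0.55, 0.5]), resp=1)

# The inference engine maximizes task.loglik(...) + prior.log_prob(...).
posterior = MAPOptimizer(steps=500).fit(model, data)
```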
Extensibility
- To add a new task: subclass TaskLikelihood, implement predict/loglik.
- To add a new noise model: subclass NoiseModel, implement logpdf/sample.
- To upgrade from MVP -> Full WPPM mode: replace local_covariance and discriminability with basis-expansion Wishart process + MC simulation.
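For the first item, a new task might look like this sketch. The import path is assumed from the module layout above (model/task.py), and YesNoTask is a hypothetical example, not part of the package:

```python
import jax.numpy as jnp

from psyphy.model.task import TaskLikelihood  # assumed import path


class YesNoTask(TaskLikelihood):
    """Hypothetical yes/no detection task (illustration only)."""

    def predict(self, params, stimulus, model, noise):
        # Map model discriminability to P("yes") with a logistic link.
        d = model.discriminability(params, stimulus)
        return 1.0 / (1.0 + jnp.exp(-d))

    def loglik(self, params, data, model, noise):
        # Bernoulli log-likelihood over all recorded trials.
        refs, comparisons, responses = data.to_numpy()
        p = jnp.stack([
            self.predict(params, (r, c), model, noise)
            for r, c in zip(refs, comparisons)
        ])
        y = jnp.asarray(responses)
        return jnp.sum(y * jnp.log(p) + (1 - y) * jnp.log(1 - p))
```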
MVP vs Full WPPM mode
- MVP is a diagonal-covariance, closed-form scaffold that runs out of the box.
- Full WPPM mode matches the published research model:
- Smooth covariance fields (Wishart process priors).
- Monte Carlo likelihood evaluation.
- Explicit noise model in predictions.
Classes:

| Name | Description |
|---|---|
| ExperimentSession | High-level experiment orchestrator. |
| GaussianNoise | |
| LangevinSampler | Langevin sampler (stub). |
| LaplaceApproximation | Laplace approximation around MAP estimate. |
| MAPOptimizer | MAP (Maximum A Posteriori) optimizer. |
| OddityTask | Three-alternative forced-choice oddity task. |
| Prior | Prior distribution over WPPM parameters. |
| ResponseData | Container for psychophysical trial data. |
| StudentTNoise | |
| TrialBatch | Container for a proposed batch of trials. |
| TwoAFC | 2-alternative forced-choice task (MVP placeholder). |
| WPPM | Wishart Process Psychophysical Model (WPPM). |

Attributes:

| Name | Type | Description |
|---|---|---|
| Posterior | | |
ExperimentSession
High-level experiment orchestrator.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | (Psychophysical) model instance. | required |
| inference | InferenceEngine | Inference engine (MAP, Langevin, etc.). | required |
| placement | TrialPlacement | Adaptive trial placement strategy. | required |
| init_placement | TrialPlacement | Initial placement strategy (e.g., Sobol exploration). | None |

Attributes:

| Name | Type | Description |
|---|---|---|
| data | ResponseData | Stores all collected trials. |
| posterior | Posterior or None | Current posterior estimate (None before initialization). |

Methods:

| Name | Description |
|---|---|
| initialize | Fit an initial posterior before any adaptive placement. |
| next_batch | Propose the next batch of trials. |
| update | Refit posterior with accumulated data. |

Source code in src/psyphy/session/experiment_session.py
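A typical session loop, sketched with the constructor arguments above. The placement constructors are shown without arguments (see psyphy.trial_placement for their options), and run_trials stands in for your own experiment code:

```python
from psyphy import (
    WPPM, Prior, OddityTask, GaussianNoise, MAPOptimizer, ExperimentSession,
)
from psyphy.trial_placement import GridPlacement, SobolPlacement

model = WPPM(input_dim=2, prior=Prior(input_dim=2),
             task=OddityTask(), noise=GaussianNoise())

session = ExperimentSession(
    model=model,
    inference=MAPOptimizer(steps=500),
    placement=GridPlacement(),        # adaptive strategy (constructor args omitted)
    init_placement=SobolPlacement(),  # initial exploration (constructor args omitted)
)

session.initialize()                  # prior-only posterior before any data
for _ in range(10):                   # e.g., ten blocks of 20 trials
    batch = session.next_batch(batch_size=20)
    responses = run_trials(batch)     # your experiment loop, not part of psyphy
    session.data.add_batch(responses, batch)
    posterior = session.update()
```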
initialize
Fit an initial posterior before any adaptive placement.

Returns:

| Type | Description |
|---|---|
| Posterior | Posterior object wrapping fitted parameters. |

Notes
MVP: the posterior is fitted to empty data (prior only). Full WPPM mode: could use pilot data or pre-collected trials along a grid, etc.
Source code in src/psyphy/session/experiment_session.py
next_batch
next_batch(batch_size: int)
Propose the next batch of trials.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch_size | int | Number of trials to propose. | required |

Returns:

| Type | Description |
|---|---|
| TrialBatch | Batch of proposed (reference, probe) stimuli. |

Notes
MVP: always calls placement.propose() on the current posterior. Full WPPM mode: could support hybrid placement (init strategy -> adaptive strategy).
Source code in src/psyphy/session/experiment_session.py
update
Refit posterior with accumulated data.

Returns:

| Type | Description |
|---|---|
| Posterior | Updated posterior. |

Notes
MVP: re-optimizes from scratch using all data. Full WPPM mode: could support warm-start or online parameter updates.
Source code in src/psyphy/session/experiment_session.py
GaussianNoise
GaussianNoise(sigma: float = 1.0)
Isotropic Gaussian internal noise model (zero mean); see NoiseModel in the core design above.
LangevinSampler
Langevin sampler (stub).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| steps | int | Number of Langevin steps. | 1000 |
| step_size | float | Integration step size. | 1e-3 |
| temperature | float | Noise scale (temperature). | 1.0 |

Methods:

| Name | Description |
|---|---|
| fit | Fit model parameters with Langevin dynamics (stub). |

Attributes:

| Name | Type | Description |
|---|---|---|
| step_size | | |
| steps | | |
| temperature | | |

Source code in src/psyphy/inference/langevin.py
fit
fit(model, data) -> Posterior
Fit model parameters with Langevin dynamics (stub).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | Model instance. | required |
| data | ResponseData | Observed trials. | required |

Returns:

| Type | Description |
|---|---|
| Posterior | Posterior wrapper (MVP: params from init). |

Source code in src/psyphy/inference/langevin.py
LaplaceApproximation
Laplace approximation around MAP estimate.

Methods:

| Name | Description |
|---|---|
| from_map | Construct a Gaussian approximation centered at MAP. |
MAPOptimizer
MAPOptimizer(
    steps: int = 500,
    learning_rate: float = 5e-05,
    momentum: float = 0.9,
    optimizer: GradientTransformation | None = None,
    *,
    track_history: bool = False,
    log_every: int = 10,
)
Bases: InferenceEngine
MAP (Maximum A Posteriori) optimizer.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| steps | int | Number of optimization steps. | 500 |
| optimizer | GradientTransformation \| None | Optax optimizer to use. Default: SGD with momentum. | None |
| learning_rate | float | Learning rate for the default optimizer (SGD with momentum). | 5e-05 |
| momentum | float | Momentum for the default optimizer (SGD with momentum). | 0.9 |
| track_history | bool | When True, record loss history during fitting for plotting. | False |
| log_every | int | Record every N steps (also records the last step). | 10 |

Notes
- Loss function = negative log posterior.
- Gradients computed with jax.grad.

Methods:

| Name | Description |
|---|---|
| fit | Fit model parameters with MAP optimization. |
| get_history | Return (steps, losses) recorded during the last fit when tracking was enabled. |

Attributes:

| Name | Type | Description |
|---|---|---|
| log_every | | |
| loss_history | list[float] | |
| loss_steps | list[int] | |
| optimizer | | |
| steps | | |
| track_history | | |

Source code in src/psyphy/inference/map_optimizer.py
fit
fit(
    model,
    data,
    init_params: dict | None = None,
    seed: int | None = None,
) -> MAPPosterior
Fit model parameters with MAP optimization.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | WPPM | Model instance. | required |
| data | ResponseData | Observed trials. | required |
| init_params | dict \| None | Initial parameter PyTree to start optimization from. If provided, this takes precedence over the seed. | None |
| seed | int \| None | PRNG seed used to draw initial parameters from the model's prior when init_params is not provided. If None, defaults to 0. | None |

Returns:

| Type | Description |
|---|---|
| MAPPosterior | Posterior wrapper around MAP params and model. |

Source code in src/psyphy/inference/map_optimizer.py
get_history
Return (steps, losses) recorded during the last fit when tracking was enabled.
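A sketch of fitting with loss tracking, with model and data as constructed in the earlier examples:

```python
import matplotlib.pyplot as plt

from psyphy import MAPOptimizer

opt = MAPOptimizer(steps=1000, learning_rate=5e-5,
                   track_history=True, log_every=10)
posterior = opt.fit(model, data, seed=0)   # model: WPPM, data: ResponseData

# Inspect convergence of the negative log posterior.
steps, losses = opt.get_history()
plt.plot(steps, losses)
plt.xlabel("step")
plt.ylabel("loss (negative log posterior)")
```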
OddityTask
OddityTask(slope: float = 1.5)
Bases: TaskLikelihood
Three-alternative forced-choice oddity task.
In an oddity task, the observer is presented with three stimuli: two identical references and one comparison. The task is to identify which stimulus is the "odd one out" (the comparison). Performance depends on the discriminability between reference and comparison.
This class provides two likelihood computation methods:
- Analytical approximation (MVP mode):
  - predict(): maps discriminability to P(correct) via tanh
  - loglik(): Bernoulli likelihood using analytical predictions
  - Fast, differentiable, suitable for gradient-based optimization
- Monte Carlo simulation (Full WPPM mode):
  - loglik_mc(): simulates the full 3-stimulus oddity task
  - Samples three internal representations per trial (z0, z1, z2)
  - Uses the proper oddity decision rule with three pairwise distances
  - More accurate for complex covariance structures
  - Suitable for validation and benchmarking

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| slope | float | Slope parameter for the analytical tanh mapping in predict(). Controls the steepness of the discriminability -> performance relationship. | 1.5 |

Attributes:

| Name | Type | Description |
|---|---|---|
| chance_level | float | Chance performance for the oddity task (1/3). |
| performance_range | float | Range from chance to perfect performance (2/3). |

Notes
The analytical approximation in predict() uses:
P(correct) = 1/3 + 2/3 * (1 + tanh(slope * d)) / 2
MC simulation in loglik_mc() (full 3-stimulus oddity):
1. Sample three internal representations: z_ref, z_refprime ~ N(ref, Σ_ref) and z_comparison ~ N(comparison, Σ_comparison).
2. Compute the average covariance: Σ_avg = (2/3) Σ_ref + (1/3) Σ_comparison.
3. Compute three pairwise Mahalanobis distances:
   - d²(z_ref, z_refprime): distance between the two reference samples
   - d²(z_ref, z_comparison): distance from ref to comparison
   - d²(z_refprime, z_comparison): distance from reference_prime to comparison
4. Apply the oddity decision rule: delta = min(d²(z_ref, z_comparison), d²(z_refprime, z_comparison)) - d²(z_ref, z_refprime).
5. Logistic smoothing: P(correct) ≈ logistic.cdf(delta / bandwidth).
6. Average over samples.

Methods:

| Name | Description |
|---|---|
| loglik | Compute log-likelihood using analytical predictions. |
| loglik_mc | Compute log-likelihood via Monte Carlo observer simulation. |
| predict | Predict probability of correct response using analytical approximation. |

Source code in src/psyphy/model/task.py
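A sketch of the analytical path, with model, params, and data as in the earlier examples:

```python
import jax.numpy as jnp

from psyphy import OddityTask

task = OddityTask(slope=1.5)

ref = jnp.array([0.5, 0.5])
comparison = jnp.array([0.6, 0.5])

# Analytical P(correct) for one (reference, comparison) pair;
# bounded below by chance (1/3) and above by 1.
p = task.predict(params, (ref, comparison), model=model, noise=model.noise)

# Bernoulli log-likelihood over all trials in `data`.
ll = task.loglik(params, data, model=model, noise=model.noise)
```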
loglik
Compute log-likelihood using analytical predictions.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| data | ResponseData | Trial data containing refs, comparisons, responses. | required |
| model | WPPM | Model instance. | required |
| noise | NoiseModel | Observer noise model. | required |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar sum of log-likelihoods over all trials. |

Notes
Uses the Bernoulli log-likelihood LL = Σ [y * log(p) + (1-y) * log(1-p)], where p comes from predict() (analytical approximation).
Source code in src/psyphy/model/task.py
loglik_mc
loglik_mc(
    params: Any,
    data: Any,
    model: Any,
    noise: Any,
    num_samples: int = 1000,
    bandwidth: float = 0.01,
    key: Any = None,
) -> ndarray
Compute log-likelihood via Monte Carlo observer simulation.
This method implements the full 3-stimulus oddity task. Instead of using an analytical approximation, we:
1. Sample three internal noisy representations per trial: z_ref, z_refprime ~ N(ref, Σ_ref) (two samples from the reference) and z_comparison ~ N(comparison, Σ_comparison) (one sample from the comparison).
2. Compute three pairwise Mahalanobis distances.
3. Apply the oddity decision rule: the comparison is odd if it is farther from BOTH z_ref and z_refprime.
4. Apply logistic smoothing to approximate P(correct).
5. Average over MC samples.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters (must contain 'W' for WPPM basis coefficients). | required |
| data | ResponseData | Trial data with refs, comparisons, and responses. | required |
| model | WPPM | Model instance providing compute_U() for covariance computation. | required |
| noise | NoiseModel | Observer noise model (provides sigma for the diagonal noise term). | required |
| num_samples | int | Number of Monte Carlo samples per trial. Use 1000-5000 for accurate likelihood estimation; larger values reduce MC variance but increase compute time. | 1000 |
| bandwidth | float | Smoothing parameter for the logistic CDF approximation. Smaller values -> sharper transition (closer to a step function); larger values -> smoother approximation. Typical range: [1e-3, 5e-2]. | 1e-2 |
| key | PRNGKey | Random key for reproducible sampling. If None, uses PRNGKey(0) (deterministic but not recommended for production). | None |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar sum of log-likelihoods over all trials. Same shape and interpretation as loglik(). |

Raises:

| Type | Description |
|---|---|
| ValueError | If num_samples <= 0. |

Notes
Full 3-stimulus oddity task algorithm. For each trial (ref, comparison, response):
1. Compute covariances:
   - Σ_ref = U_ref @ U_ref.T + σ² I
   - Σ_comparison = U_comparison @ U_comparison.T + σ² I
   - Σ_avg = (2/3) Σ_ref + (1/3) Σ_comparison (weighted by stimulus frequency)
2. Sample three internal representations:
   - z_ref, z_refprime ~ N(ref, Σ_ref) (two samples from the reference, num_samples times each)
   - z_comparison ~ N(comparison, Σ_comparison) (one sample from the comparison, num_samples times)
3. Compute three pairwise Mahalanobis distances:
   - d²(z_ref, z_refprime) = (z_ref - z_refprime)ᵀ Σ_avg⁻¹ (z_ref - z_refprime)
   - d²(z_ref, z_comparison) = (z_ref - z_comparison)ᵀ Σ_avg⁻¹ (z_ref - z_comparison)
   - d²(z_refprime, z_comparison) = (z_refprime - z_comparison)ᵀ Σ_avg⁻¹ (z_refprime - z_comparison)
4. Apply the oddity decision rule:
   - delta = min(d²(z_ref, z_comparison), d²(z_refprime, z_comparison)) - d²(z_ref, z_refprime)
   - delta > 0 means the comparison is farther from BOTH z_ref and z_refprime -> correct identification
5. Apply logistic smoothing: P(correct) ≈ mean(logistic.cdf(delta / bandwidth)).
6. Bernoulli log-likelihood: LL = Σ [y * log(p) + (1-y) * log(1-p)].

Performance:
- Time complexity: O(n_trials * num_samples * input_dim³)
- Memory: O(num_samples * input_dim) per trial
- Vectorized across trials using jax.vmap for GPU acceleration
- Can be JIT-compiled for additional speed (future optimization)

Comparison to analytical:
- MC implements the full 3-stimulus oddity task (more realistic)
- MC is more accurate for complex Σ(x) structures
- Analytical is faster and differentiable
- Use MC for validation and benchmarking, analytical for optimization

See Also
loglik : Analytical log-likelihood (faster, differentiable). predict : Analytical prediction for a single trial.
Source code in src/psyphy/model/task.py
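A sketch of the MC path. A Wishart-mode model is assumed, since loglik_mc expects 'W' in params; data is as in the earlier examples:

```python
import jax.random as jr

from psyphy import WPPM, Prior, OddityTask, GaussianNoise

# Wishart mode: basis_degree set on the Prior, so params contain "W".
prior = Prior(input_dim=2, basis_degree=3)
model = WPPM(input_dim=2, prior=prior, task=OddityTask(), noise=GaussianNoise())
params = model.init_params(jr.PRNGKey(0))

ll = model.task.loglik_mc(
    params, data, model, model.noise,
    num_samples=2000,      # MC samples per trial
    bandwidth=1e-2,        # logistic smoothing of the decision rule
    key=jr.PRNGKey(1),     # explicit key for reproducibility
)
```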
predict
Predict probability of correct response using analytical approximation.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters (e.g., W for WPPM). | required |
| stimuli | tuple[ndarray, ndarray] | (reference, comparison) stimulus pair. | required |
| model | WPPM | Model instance providing discriminability(). | required |
| noise | NoiseModel | Observer noise model (currently unused in the analytical version). | required |

Returns:

| Type | Description |
|---|---|
| ndarray | Scalar probability of correct response, in the range [1/3, 1]. |

Notes
Uses a sigmoidal mapping P(correct) = 1/3 + 2/3 * sigmoid(slope * d), where d is the discriminability from model.discriminability().
Source code in src/psyphy/model/task.py
Prior
Prior(
    input_dim: int,
    scale: float = 0.5,
    basis_degree: int | None = None,
    variance_scale: float = 1.0,
    decay_rate: float = 0.5,
    lengthscale: float = 1.0,
    extra_embedding_dims: int = 0,
)
Prior distribution over WPPM parameters.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | int | Dimensionality of the model space (same as WPPM.input_dim). | required |
| scale | float | Stddev of the Gaussian prior for log_diag entries (MVP only). | 0.5 |
| basis_degree | int \| None | Degree of the Chebyshev basis for the Wishart process. If None, uses MVP mode with log_diag parameters. If set, uses Wishart mode with W coefficients. | None |
| variance_scale | float | Prior variance for the degree-0 (constant) coefficient in Wishart mode. Controls the overall scale of covariances. | 1.0 |
| decay_rate | float | Geometric decay rate for the prior variance over higher-degree coefficients. Prior variance for a degree-d coefficient = variance_scale * decay_rate^d. Smaller decay_rate → stronger smoothness prior. | 0.5 |
| lengthscale | float | Alias for decay_rate (kept for backward compatibility). If both are specified, decay_rate takes precedence. | 1.0 |
| extra_embedding_dims | int | Additional latent dimensions in U matrices beyond the input dimensions. Allows richer ellipsoid shapes in Wishart mode. | 0 |

Methods:

| Name | Description |
|---|---|
| default | Convenience constructor with MVP defaults. |
| log_prob | Compute log prior density (up to a constant). |
| sample_params | Sample initial parameters from the prior. |

Attributes:

| Name | Type | Description |
|---|---|---|
| basis_degree | int \| None | |
| decay_rate | float | |
| extra_embedding_dims | int | |
| input_dim | int | |
| lengthscale | float | |
| scale | float | |
| variance_scale | float | |
default
Convenience constructor with MVP defaults.
log_prob
log_prob(params: Params) -> ndarray
Compute log prior density (up to a constant).
MVP mode: isotropic Gaussian on log_diag.
Wishart mode: Gaussian prior on W with smoothness via decay_rate: log p(W) = Σ_ij log N(W_ij | 0, σ_ij²), where σ_ij² is the prior variance.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Parameter dictionary. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| log_prob | float | Log prior probability (up to a normalizing constant). |

Source code in src/psyphy/model/prior.py
sample_params
Sample initial parameters from the prior.
MVP mode (basis_degree=None): returns {"log_diag": shape (input_dim,)}.
Wishart mode (basis_degree set): returns {"W": shape (degree+1, degree+1, input_dim, embedding_dim)} for 2D, where embedding_dim = input_dim + extra_embedding_dims.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| key | JAX random key | | required |

Returns:

| Name | Type | Description |
|---|---|---|
| params | dict | Parameter dictionary. |

Source code in src/psyphy/model/prior.py
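A sketch contrasting the two modes, with shapes as documented above:

```python
import jax.random as jr

from psyphy import Prior

key = jr.PRNGKey(0)

# MVP mode: diagonal log-variances.
mvp_prior = Prior(input_dim=2, scale=0.5)
params = mvp_prior.sample_params(key)        # {"log_diag": shape (2,)}

# Wishart mode: basis coefficients W for a smooth covariance field.
wishart_prior = Prior(input_dim=2, basis_degree=3, decay_rate=0.5)
params_w = wishart_prior.sample_params(key)  # {"W": (4, 4, 2, 2)} with extra_embedding_dims=0
```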
ResponseData
Container for psychophysical trial data.

Attributes:

| Name | Type | Description |
|---|---|---|
| refs | List[Any] | List of reference stimuli. |
| comparisons | List[Any] | List of comparison stimuli. |
| responses | List[int] | List of subject responses (e.g., 0/1 or categorical). |

Methods:

| Name | Description |
|---|---|
| add_batch | Append responses for a batch of trials. |
| add_trial | Append a single trial. |
| copy | Create a deep copy of this dataset. |
| from_arrays | Construct ResponseData from arrays. |
| merge | Merge another dataset into this one (in-place). |
| tail | Return the last n trials as a new ResponseData. |
| to_numpy | Return refs, comparisons, responses as numpy arrays. |

Source code in src/psyphy/data/dataset.py
trials
add_batch
add_batch(responses: list[int], trial_batch: TrialBatch) -> None
Append responses for a batch of trials.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| responses | List[int] | Responses corresponding to each (ref, comparison) in the trial batch. | required |
| trial_batch | TrialBatch | The batch of proposed trials. | required |

Source code in src/psyphy/data/dataset.py
add_trial
Append a single trial.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| ref | Any | Reference stimulus (numpy array, list, etc.). | required |
| comparison | Any | Probe stimulus. | required |
| resp | int | Subject response (binary or categorical). | required |

Source code in src/psyphy/data/dataset.py
copy
copy() -> ResponseData
Create a deep copy of this dataset.

Returns:

| Type | Description |
|---|---|
| ResponseData | New dataset with copied data. |

Source code in src/psyphy/data/dataset.py
from_arrays
from_arrays(
    X: ndarray,
    y: ndarray,
    *,
    comparisons: ndarray | None = None,
) -> ResponseData
Construct ResponseData from arrays.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | array, shape (n_trials, 2, input_dim) or (n_trials, input_dim) | Stimuli. If 3D, the second axis is [reference, comparison]. If 2D, comparisons must be provided separately. | required |
| y | array, shape (n_trials,) | Responses. | required |
| comparisons | array, shape (n_trials, input_dim) | Probe stimuli. Only needed if X is 2D. | None |

Returns:

| Type | Description |
|---|---|
| ResponseData | Data container. |

Source code in src/psyphy/data/dataset.py
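A sketch of both input layouts, with shapes as documented:

```python
import numpy as np

from psyphy import ResponseData

# 3D layout: X[:, 0] are references, X[:, 1] are comparisons.
X = np.random.rand(100, 2, 2)      # (n_trials, 2, input_dim)
y = np.random.randint(0, 2, 100)   # (n_trials,)
data = ResponseData.from_arrays(X, y)

# 2D layout: references only, comparisons passed separately.
refs = np.random.rand(100, 2)
comps = refs + 0.05
data2 = ResponseData.from_arrays(refs, y, comparisons=comps)
```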
merge
merge(other: ResponseData) -> None
Merge another dataset into this one (in-place).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| other | ResponseData | Dataset to merge. | required |

Source code in src/psyphy/data/dataset.py
tail
tail(n: int) -> ResponseData
Return the last n trials as a new ResponseData.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| n | int | Number of trials to keep. | required |

Returns:

| Type | Description |
|---|---|
| ResponseData | New dataset with the last n trials. |

Source code in src/psyphy/data/dataset.py
StudentTNoise
Student-t internal noise model (Full WPPM mode option); see NoiseModel in the core design above.
TrialBatch
Container for a proposed batch of trials.

Attributes:

| Name | Type | Description |
|---|---|---|
| stimuli | List[Tuple[Any, Any]] | Each trial is a (reference, comparison) tuple. |

Methods:

| Name | Description |
|---|---|
| from_stimuli | Construct a TrialBatch from a list of (ref, comparison) stimulus pairs. |

Source code in src/psyphy/data/dataset.py
TwoAFC
TwoAFC(slope: float = 2.0)
Bases: TaskLikelihood
2-alternative forced-choice task (MVP placeholder).

Methods:

| Name | Description |
|---|---|
| loglik | |
| predict | |

Attributes:

| Name | Type | Description |
|---|---|---|
| chance_level | float | |
| performance_range | float | |
| slope | | |

Source code in src/psyphy/model/task.py
loglik
Source code in src/psyphy/model/task.py
predict
WPPM
WPPM(
    input_dim: int,
    prior: Prior,
    task: TaskLikelihood,
    noise: Any | None = None,
    *,
    extra_dims: int = 0,
    variance_scale: float = 1.0,
    lengthscale: float = 1.0,
    diag_term: float = 1e-06,
    **kwargs,
)
Bases: Model
Wishart Process Psychophysical Model (WPPM).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_dim | int | Dimensionality of the input stimulus space (e.g., 2 for an isoluminant plane, 3 for RGB). Both reference and probe live in R^{input_dim}. | required |
| prior | Prior | Prior distribution over model parameters. Controls basis_degree for Wishart mode (basis expansion) vs. MVP mode (diagonal covariance). The WPPM delegates to prior.basis_degree to ensure consistency between parameter sampling and basis evaluation. | required |
| task | TaskLikelihood | Psychophysical task mapping that defines how discriminability translates to p(correct) and how the log-likelihood of responses is computed (e.g., OddityTask, TwoAFC). | required |
| noise | Any | Noise model describing internal representation noise (e.g., GaussianNoise). Not used in the MVP mapping but passed to the task interface for future MC simulations. | None |

Forward-compatible hyperparameters
- extra_dims (int, default=0): Additional embedding dimensions for basis expansions (beyond input_dim). In Wishart mode, embedding_dim = input_dim + extra_dims.
- variance_scale (float, default=1.0): Global scaling factor for covariance magnitude (unused in MVP).
- lengthscale (float, default=1.0): Smoothness/length-scale for spatial covariance variation (unused in MVP). Formerly "decay_rate".
- diag_term (float, default=1e-6): Small positive value added to the covariance diagonal for numerical stability. MVP uses this in matrix solves; the research model will also use it.

Methods:

| Name | Description |
|---|---|
| condition_on_observations | Update model with new observations (online learning). |
| discriminability | Compute scalar discriminability d >= 0 for a (reference, probe) pair. |
| fit | Fit model to data. |
| init_params | Sample initial parameters from the prior. |
| local_covariance | Return local covariance Σ(x) at stimulus location x. |
| log_likelihood | Compute the log-likelihood for arrays of trials. |
| log_likelihood_from_data | Compute log-likelihood directly from a ResponseData object. |
| log_posterior_from_data | Convenience helper for the log posterior in one call (MVP). |
| posterior | Return posterior distribution. |
| predict_prob | Predict probability of a correct response for a single stimulus. |
| predict_with_params | Evaluate model at specific parameter values (no marginalization). |

Attributes:

| Name | Type | Description |
|---|---|---|
| basis_degree | int \| None | Chebyshev polynomial degree for Wishart process basis expansion. |
| diag_term | | |
| embedding_dim | int | Dimension of the embedding space (perceptual space). |
| extra_dims | | |
| input_dim | | |
| lengthscale | | |
| noise | | |
| online_config | | |
| prior | | |
| task | | |
| variance_scale | | |

Source code in src/psyphy/model/wppm.py
basis_degree
basis_degree: int | None
Chebyshev polynomial degree for Wishart process basis expansion.
This property delegates to self.prior.basis_degree to ensure consistency between parameter sampling and basis evaluation.

Returns:

| Type | Description |
|---|---|
| int \| None | Degree of the Chebyshev polynomial basis (0 = constant, 1 = linear, etc.). None indicates MVP mode (no basis expansion). |

Notes
WPPM gets its basis_degree parameter from Prior.basis_degree.
embedding_dim
embedding_dim: int
Dimension of the embedding space (perceptual space).
embedding_dim = input_dim + extra_dims. This represents the full perceptual space, where the first input_dim dimensions correspond to observable stimulus features and the remaining extra_dims are latent dimensions.

Returns:

| Type | Description |
|---|---|
| int | input_dim + extra_dims (in Wishart mode); input_dim (in MVP mode, extra_dims ignored). |

Notes
This is a computed property, not a constructor parameter.
condition_on_observations
condition_on_observations(X: ndarray, y: ndarray) -> Model
Update model with new observations (online learning).
Behavior depends on self.online_config.strategy:
- "full": accumulate all data, refit periodically
- "sliding_window": keep only the most recent window_size trials
- "reservoir": random sampling of window_size trials
- "none": refit from scratch (no caching)
Returns a NEW model instance (immutable update).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray | New stimuli. | required |
| y | ndarray | New responses. | required |

Returns:

| Type | Description |
|---|---|
| Model | Updated model (new instance). |

Source code in src/psyphy/model/base.py
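A sketch of the immutable online update, with model as in the earlier examples and X, y shaped as in fit below:

```python
import numpy as np

# New block of trials: (n, 2, input_dim) stimuli and (n,) responses.
X_new = np.random.rand(20, 2, 2)
y_new = np.random.randint(0, 2, 20)

# Returns a new model instance; the original is left untouched.
model2 = model.condition_on_observations(X_new, y_new)
```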
discriminability
Compute scalar discriminability d >= 0 for a (reference, probe) pair.
MVP mode: d = sqrt( (probe - ref)^T Σ(ref)^{-1} (probe - ref) ), with Σ(ref) the local covariance at the reference in stimulus space.
Wishart mode (rectangular U design, if extra_dims > 0): d = sqrt( (probe - ref)^T Σ(ref)^{-1} (probe - ref) ), where Σ(ref) is computed directly in stimulus space (input_dim, input_dim) via U(x) @ U(x)^T with U rectangular.
The discrimination task depends only on observable stimulus dimensions. The rectangular U design means local_covariance() already returns the stimulus covariance; no block extraction is needed.
Future (full WPPM mode): d is implicit via Monte Carlo simulation of internal noisy responses under the task's decision rule (no closed form). In that case, tasks will directly implement predict/loglik with MC, and this method may be used only for diagnostics.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| stimulus | tuple | (reference, probe) arrays of shape (input_dim,). | required |

Returns:

| Name | Type | Description |
|---|---|---|
| d | ndarray | Nonnegative scalar discriminability. |

Source code in src/psyphy/model/wppm.py
fit
fit(
    X: ndarray,
    y: ndarray,
    *,
    inference: InferenceEngine | str = "laplace",
    inference_config: dict | None = None,
) -> Model
Fit model to data.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray | Stimuli, shape (n_trials, 2, input_dim) for (ref, probe) pairs or (n_trials, input_dim) for references only. | required |
| y | ndarray | Responses, shape (n_trials,). | required |
| inference | InferenceEngine \| str | Inference engine or string key ("map", "laplace", "langevin"). | "laplace" |
| inference_config | dict \| None | Hyperparameters for string-based inference, e.g. {"steps": 500, "lr": 1e-3} for MAP. | None |

Returns:

| Type | Description |
|---|---|
| Model | Self, for method chaining. |

Source code in src/psyphy/model/base.py
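A sketch of the array-based fit path, with inference_config keys as in the table above:

```python
import numpy as np

X = np.random.rand(200, 2, 2)      # (n_trials, 2, input_dim): (ref, probe) pairs
y = np.random.randint(0, 2, 200)   # (n_trials,) binary responses

# String-keyed inference with config; returns self for chaining.
model = model.fit(X, y, inference="map",
                  inference_config={"steps": 500, "lr": 1e-3})
```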
init_params
init_params(key: KeyArray) -> Params
Sample initial parameters from the prior.
local_covariance
local_covariance(params: Params, x: ndarray) -> ndarray
Return local covariance Σ(x) at stimulus location x.
MVP mode (basis_degree=None): Σ(x) = diag(exp(log_diag)), constant across x. Positive-definite because exp(log_diag) > 0.
Wishart mode (basis_degree set): Σ(x) = U(x) @ U(x)^T + diag_term * I, where U(x) is rectangular (input_dim, embedding_dim) if extra_dims > 0. Varies smoothly with x, is guaranteed positive-definite, and is returned directly as the stimulus covariance (input_dim, input_dim).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. MVP: {"log_diag": (input_dim,)}. Wishart: {"W": (degree+1, ..., input_dim, embedding_dim)}. | required |
| x | ndarray, shape (input_dim,) | Stimulus location. | required |

Returns:

| Type | Description |
|---|---|
| Σ : jnp.ndarray, shape (input_dim, input_dim) | Covariance matrix in stimulus space. |

Source code in src/psyphy/model/wppm.py
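A sketch evaluating the covariance field at one location and using it for discriminability (MVP-mode model from the earlier examples):

```python
import jax.numpy as jnp
import jax.random as jr

params = model.init_params(jr.PRNGKey(0))

x = jnp.array([0.5, 0.5])
Sigma = model.local_covariance(params, x)   # (input_dim, input_dim), positive-definite

# Discriminability of a nearby probe under this local metric.
probe = x + jnp.array([0.05, 0.0])
d = model.discriminability(params, (x, probe))
```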
log_likelihood
log_likelihood(
    params: Params,
    refs: ndarray,
    probes: ndarray,
    responses: ndarray,
) -> ndarray
Compute the log-likelihood for arrays of trials.
IMPORTANT: this delegates to the TaskLikelihood to avoid duplicating Bernoulli (MVP) or MC likelihood logic in multiple places.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| refs | ndarray, shape (N, input_dim) | | required |
| probes | ndarray, shape (N, input_dim) | | required |
| responses | ndarray, shape (N,) | Typically 0/1; the task may support richer encodings. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| loglik | ndarray | Scalar log-likelihood (task-only; add the prior outside if needed). |

Source code in src/psyphy/model/wppm.py
log_likelihood_from_data
Compute log-likelihood directly from a ResponseData object.
Why delegate to the task? The task knows the decision rule (oddity, 2AFC, ...), it can use the model (this WPPM) to fetch discriminabilities, and it can use the noise model if it needs MC simulation.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Model parameters. | required |
| data | ResponseData | Collected trial data. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| loglik | ndarray | Scalar log-likelihood (task-only; add the prior outside if needed). |

Source code in src/psyphy/model/wppm.py
log_posterior_from_data
Convenience helper for computing the log posterior in one call (MVP).
This simply adds the prior log-probability to the task log-likelihood. Inference engines (e.g., the MAP optimizer) typically optimize this quantity.

Returns:

| Type | Description |
|---|---|
| jnp.ndarray | Scalar log posterior = loglik(params \| data) + log_prior(params). |

Source code in src/psyphy/model/wppm.py
posterior
posterior(
    X: ndarray | None = None,
    *,
    probes: ndarray | None = None,
    kind: str = "predictive",
) -> PredictivePosterior | ParameterPosterior
Return posterior distribution.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray \| None | Test stimuli (references), shape (n_test, input_dim). Required for predictive posteriors, optional for parameter posteriors. | None |
| probes | ndarray \| None | Test probes, shape (n_test, input_dim). Required for predictive posteriors. | None |
| kind | "predictive" or "parameter" | Type of posterior to return: "predictive" gives a PredictivePosterior over f(X*) (for acquisitions); "parameter" gives a ParameterPosterior over θ (for diagnostics). | "predictive" |

Returns:

| Type | Description |
|---|---|
| PredictivePosterior \| ParameterPosterior | Posterior distribution. |

Raises:

| Type | Description |
|---|---|
| RuntimeError | If the model has not been fit yet. |

Source code in src/psyphy/model/base.py
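A sketch of querying the two posterior kinds after fitting:

```python
import numpy as np

X_test = np.random.rand(50, 2)   # references
probes = X_test + 0.05           # matched probes

# Predictive posterior over f(X*): used by acquisition functions.
pred = model.posterior(X_test, probes=probes, kind="predictive")

# Parameter posterior over θ: used for diagnostics.
param_post = model.posterior(kind="parameter")
```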
predict_prob
Predict probability of a correct response for a single stimulus.
Design choice: WPPM computes discriminability and covariance; the TASK defines how that translates to performance. We therefore delegate to task.predict(params, stimulus, model=self, noise=self.noise).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | | required |
| stimulus | (reference, probe) | | required |

Returns:

| Name | Type | Description |
|---|---|---|
| p_correct | ndarray | |

Source code in src/psyphy/model/wppm.py
predict_with_params
Evaluate model at specific parameter values (no marginalization).
This is useful for:
- Threshold uncertainty estimation (evaluate at sampled parameters)
- Parameter sensitivity analysis
- Debugging and diagnostics
NOT for making predictions (use .posterior() instead, which marginalizes over parameter uncertainty).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | ndarray, shape (n_test, input_dim) | Test stimuli (references). | required |
| probes | ndarray, shape (n_test, input_dim) | Probe stimuli (for discrimination tasks). | required |
| params | dict[str, ndarray] | Specific parameter values to evaluate at. Keys and shapes depend on the model (e.g., WPPM has "W", "noise_scale", etc.). | required |

Returns:

| Name | Type | Description |
|---|---|---|
| predictions | ndarray, shape (n_test,) | Predicted probabilities at each test point, given these parameters. |

Notes
This bypasses posterior marginalization. For acquisition functions, always use .posterior(), which properly accounts for parameter uncertainty.
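A sketch of evaluating at several parameter draws to gauge threshold uncertainty. Here the draws come from the prior purely for illustration; in practice they would come from a fitted parameter posterior:

```python
import numpy as np
import jax.random as jr

X_test = np.random.rand(50, 2)
probes = X_test + 0.05

# Evaluate predictions at several parameter draws to see their spread.
preds = [
    model.predict_with_params(X_test, probes, model.init_params(jr.PRNGKey(i)))
    for i in range(8)
]
spread = np.std(np.stack(preds), axis=0)   # per-point predictive variability
```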