psyphy

Psychophysical modeling and adaptive trial placement.

This package implements the Wishart Process Psychophysical Model (WPPM) with modular components for priors, task likelihoods, and noise models. The model can be fitted to incoming subject data and used to adaptively select which trials to present next, enabling efficient estimation of psychophysical parameters (e.g., threshold contours) from minimal trials.


Workflow
Core design
  1. WPPM (model/wppm.py):
      • Structural definition of the psychophysical model.
      • Maintains the parameterization of local covariance fields.
      • Computes discriminability between stimuli.
      • Delegates trial likelihoods and predictions to the task.
  2. Prior (model/prior.py):
      • Defines the distribution over model parameters.
      • MVP: Gaussian prior over diagonal log-variances.
      • Full WPPM mode: structured prior over basis weights and lengthscale-controlled covariance fields.
  3. TaskLikelihood (model/task.py):
      • Encodes the psychophysical decision rule.
      • MVP: OddityTask (3AFC) and TwoAFC with sigmoid mappings.
      • Full WPPM mode: loglik and predict implemented via Monte Carlo observer simulations, using the noise model explicitly.
  4. NoiseModel (model/noise.py):
      • Defines the distribution of internal representation noise.
      • MVP: GaussianNoise (zero mean, isotropic).
      • Full WPPM mode: adds a StudentTNoise option and beyond.
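These components compose directly. A minimal sketch assembling the MVP pieces, using the constructor signatures documented below (the sigma and scale values are illustrative defaults):

    import jax.random as jr
    from psyphy import WPPM, Prior, OddityTask, GaussianNoise

    # Assemble the MVP model: diagonal-covariance prior, 3AFC oddity task,
    # isotropic Gaussian noise (passed through to the task interface).
    prior = Prior.default(input_dim=2, scale=0.5)
    model = WPPM(input_dim=2, prior=prior, task=OddityTask(), noise=GaussianNoise(sigma=1.0))

    # Parameters are a PyTree sampled from the prior (MVP: {"log_diag": shape (2,)}).
    params = model.init_params(jr.PRNGKey(0))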
Unified import style

Top-level (core models + session):

    from psyphy import WPPM, Prior, OddityTask, GaussianNoise, MAPOptimizer
    from psyphy import ExperimentSession, ResponseData, TrialBatch

Subpackages:

    from psyphy.model import WPPM, Prior, OddityTask, TwoAFC, GaussianNoise, StudentTNoise
    from psyphy.inference import MAPOptimizer, LangevinSampler, LaplaceApproximation
    from psyphy.posterior import Posterior, effective_sample_size, rhat
    from psyphy.trial_placement import GridPlacement, GreedyMAPPlacement, InfoGainPlacement, SobolPlacement, StaircasePlacement
    from psyphy.utils import grid_candidates, sobol_candidates, custom_candidates, chebyshev_basis

Data flow
  • A ResponseData object (psyphy.data) contains trial stimuli and responses.
  • WPPM.init_params(prior) samples parameter initialization.
  • Inference engines optimize the log posterior: log_posterior = task.loglik(params, data, model=WPPM, noise=NoiseModel) + prior.log_prob(params)
  • Posterior predictions (p(correct), threshold ellipses) are always obtained through WPPM delegating to TaskLikelihood.
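A concrete sketch of this flow with the MVP components (stimulus values illustrative):

    import jax.random as jr
    from psyphy import WPPM, Prior, OddityTask, ResponseData
    from psyphy.inference import MAPOptimizer

    prior = Prior.default(input_dim=2)
    model = WPPM(input_dim=2, prior=prior, task=OddityTask())

    data = ResponseData()
    data.add_trial([0.0, 0.0], [0.1, 0.0], 1)          # (reference, probe, response)

    params = model.init_params(jr.PRNGKey(0))
    lp = model.log_posterior_from_data(params, data)   # task loglik + prior.log_prob

    posterior = MAPOptimizer(steps=100).fit(model, data)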
Extensibility
  • To add a new task: subclass TaskLikelihood and implement predict/loglik (see the sketch after this list).
  • To add a new noise model: subclass NoiseModel and implement logpdf/sample.
  • To upgrade from MVP -> Full WPPM mode: replace local_covariance and discriminability with a basis-expansion Wishart process + MC simulation.
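A sketch of a new task, mirroring the MVP tasks documented below. It assumes the TaskLikelihood base class is importable from psyphy.model.task; YesNoTask and its exponential mapping are hypothetical, for illustration only:

    import jax.numpy as jnp
    from psyphy.model.task import TaskLikelihood  # assumed import path (model/task.py)

    class YesNoTask(TaskLikelihood):
        """Hypothetical yes/no detection task: chance level 0.5."""

        def predict(self, params, stimuli, model, noise):
            d = model.discriminability(params, stimuli)
            return 0.5 + 0.5 * (1.0 - jnp.exp(-d))   # 0.5 at d=0, saturates toward 1

        def loglik(self, params, data, model, noise):
            refs, probes, responses = data.to_numpy()
            ps = jnp.array([self.predict(params, (r, p), model, noise)
                            for r, p in zip(refs, probes)])
            eps = 1e-9
            return jnp.sum(jnp.where(responses == 1,
                                     jnp.log(ps + eps), jnp.log(1.0 - ps + eps)))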
MVP vs Full WPPM mode
  • MVP is a diagonal-covariance, closed-form scaffold that runs out of the box.
  • Full WPPM mode matches the published research model:
      • Smooth covariance fields (Wishart process priors).
      • Monte Carlo likelihood evaluation.
      • Explicit noise model in predictions.

Classes:

ExperimentSession
    High-level experiment orchestrator.
GaussianNoise
LangevinSampler
    Langevin sampler (stub).
LaplaceApproximation
    Laplace approximation around MAP estimate.
MAPOptimizer
    MAP (Maximum A Posteriori) optimizer.
OddityTask
    Three-alternative forced-choice oddity task ("pick the odd one out"; MVP placeholder).
Posterior
    MVP Posterior (MAP only).
Prior
    Prior distribution over WPPM parameters.
ResponseData
    Container for psychophysical trial data.
StudentTNoise
TrialBatch
    Container for a proposed batch of trials.
TwoAFC
    2-alternative forced-choice task (MVP placeholder).
WPPM
    Wishart Process Psychophysical Model (WPPM).

ExperimentSession

ExperimentSession(
    model, inference, placement, init_placement=None
)

High-level experiment orchestrator.

Parameters:

model : WPPM
    Psychophysical model instance. (required)
inference : InferenceEngine
    Inference engine (MAP, Langevin, etc.). (required)
placement : TrialPlacement
    Adaptive trial placement strategy. (required)
init_placement : TrialPlacement, default=None
    Initial placement strategy (e.g., Sobol exploration).

Attributes:

data : ResponseData
    Stores all collected trials.
posterior : Posterior or None
    Current posterior estimate (None before initialization).

Methods:

initialize
    Fit an initial posterior before any adaptive placement.
next_batch
    Propose the next batch of trials.
update
    Refit posterior with accumulated data.

Source code in src/psyphy/session/experiment_session.py
def __init__(self, model, inference, placement, init_placement=None):
    self.model = model
    self.inference = inference
    self.placement = placement
    self.init_placement = init_placement

    # Data store starts empty
    self.data = ResponseData()

    # Posterior will be set after initialize() or update()
    self.posterior = None

data

data = ResponseData()

inference

inference = inference

init_placement

init_placement = init_placement

model

model = model

placement

placement = placement

posterior

posterior = None

initialize

initialize()

Fit an initial posterior before any adaptive placement.

Returns:

Posterior
    Posterior object wrapping fitted parameters.

Notes

MVP: Posterior is fitted to empty data (prior only).
Full WPPM mode: Could use pilot data or pre-collected trials along a grid, etc.

Source code in src/psyphy/session/experiment_session.py
def initialize(self):
    """
    Fit an initial posterior before any adaptive placement.

    Returns
    -------
    Posterior
        Posterior object wrapping fitted parameters.

    Notes
    -----
    MVP:
        Posterior is fitted to empty data (prior only).
    Full WPPM mode:
        Could use pilot data or pre-collected trials along grid etc.
    """
    self.posterior = self.inference.fit(self.model, self.data)
    return self.posterior

next_batch

next_batch(batch_size: int)

Propose the next batch of trials.

Parameters:

batch_size : int
    Number of trials to propose. (required)

Returns:

TrialBatch
    Batch of proposed (reference, probe) stimuli.

Notes

MVP: Always calls placement.propose() on the current posterior.
Full WPPM mode: Could support hybrid placement (init strategy -> adaptive strategy).

Source code in src/psyphy/session/experiment_session.py
def next_batch(self, batch_size: int):
    """
    Propose the next batch of trials.

    Parameters
    ----------
    batch_size : int
        Number of trials to propose.

    Returns
    -------
    TrialBatch
        Batch of proposed (reference, probe) stimuli.

    Notes
    -----
    MVP:
        Always calls placement.propose() on current posterior.
    Full WPPM mode:
        Could support hybrid placement (init strategy -> adaptive strategy).
    """
    if self.posterior is None:
        raise RuntimeError("Posterior not initialized. Call initialize() first.")
    return self.placement.propose(self.posterior, batch_size)

update

update()

Refit posterior with accumulated data.

Returns:

Posterior
    Updated posterior.

Notes

MVP: Re-optimizes from scratch using all data.
Full WPPM mode: Could support warm-start or online parameter updates.

Source code in src/psyphy/session/experiment_session.py
def update(self):
    """
    Refit posterior with accumulated data.

    Returns
    -------
    Posterior
        Updated posterior.

    Notes
    -----
    MVP:
        Re-optimizes from scratch using all data.
    Full WPPM mode:
        Could support warm-start or online parameter updates.
    """
    self.posterior = self.inference.fit(self.model, self.data)
    return self.posterior
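Putting it together, a typical run might look like the following sketch. Here model is the WPPM constructed earlier, placement is any TrialPlacement instance from psyphy.trial_placement (constructor arguments are not documented on this page, so they are elided), and run_trials is a hypothetical function that presents a batch and returns 0/1 responses:

    from psyphy import ExperimentSession
    from psyphy.inference import MAPOptimizer

    session = ExperimentSession(model=model, inference=MAPOptimizer(steps=200),
                                placement=placement)

    session.initialize()                          # prior-only posterior (MVP)
    for _ in range(5):                            # five adaptive batches
        batch = session.next_batch(batch_size=10)
        responses = run_trials(batch)             # hypothetical data collection
        session.data.add_batch(responses, batch)
        session.update()                          # refit with all accumulated data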

GaussianNoise

GaussianNoise(sigma: float = 1.0)

Methods:

log_prob

Attributes:

sigma : float, default=1.0

log_prob

log_prob(residual: float) -> float
Source code in src/psyphy/model/noise.py
def log_prob(self, residual: float) -> float:
    _ = residual
    return -0.5

LangevinSampler

LangevinSampler(
    steps: int = 1000,
    step_size: float = 0.001,
    temperature: float = 1.0,
)

Langevin sampler (stub).

Parameters:

steps : int, default=1000
    Number of Langevin steps.
step_size : float, default=1e-3
    Integration step size.
temperature : float, default=1.0
    Noise scale (temperature).

Methods:

fit
    Fit model parameters with Langevin dynamics (stub).

Attributes:

step_size
steps
temperature
Source code in src/psyphy/inference/langevin.py
def __init__(self, steps: int = 1000, step_size: float = 1e-3, temperature: float = 1.0):
    self.steps = steps
    self.step_size = step_size
    self.temperature = temperature

step_size

step_size = step_size

steps

steps = steps

temperature

temperature = temperature

fit

fit(model, data) -> Posterior

Fit model parameters with Langevin dynamics (stub).

Parameters:

model : WPPM
    Model instance. (required)
data : ResponseData
    Observed trials. (required)

Returns:

Posterior
    Posterior wrapper (MVP: params from init).

Source code in src/psyphy/inference/langevin.py
def fit(self, model, data) -> Posterior:
    """
    Fit model parameters with Langevin dynamics (stub).

    Parameters
    ----------
    model : WPPM
        Model instance.
    data : ResponseData
        Observed trials.

    Returns
    -------
    Posterior
        Posterior wrapper (MVP: params from init).
    """
    return Posterior(params=model.init_params(None), model=model)

LaplaceApproximation

Laplace approximation around MAP estimate.

Methods:

from_map
    Construct a Gaussian approximation centered at MAP.

from_map

from_map(map_posterior: Posterior) -> Posterior

Return posterior approximation from MAP.

Parameters:

map_posterior : Posterior
    Posterior object from MAP optimization. (required)

Returns:

Posterior
    Same posterior object (MVP).

Source code in src/psyphy/inference/laplace.py
def from_map(self, map_posterior: Posterior) -> Posterior:
    """
    Return posterior approximation from MAP.

    Parameters
    ----------
    map_posterior : Posterior
        Posterior object from MAP optimization.

    Returns
    -------
    Posterior
        Same posterior object (MVP).
    """
    return map_posterior

MAPOptimizer

MAPOptimizer(
    steps: int = 500,
    learning_rate: float = 5e-05,
    momentum: float = 0.9,
    optimizer: GradientTransformation | None = None,
    *,
    track_history: bool = False,
    log_every: int = 10
)

Bases: InferenceEngine

MAP (Maximum A Posteriori) optimizer.

Parameters:

steps : int, default=500
    Number of optimization steps.
optimizer : GradientTransformation, default=None
    Optax optimizer to use. Default: SGD with momentum.

Notes
  • Loss function = negative log posterior.
  • Gradients computed with jax.grad.

Create a MAP optimizer.

Parameters:

steps : int, default=500
    Number of optimization steps.
optimizer : GradientTransformation | None, default=None
    Optax optimizer to use.
learning_rate : float, default=5e-05
    Learning rate for the default optimizer (SGD with momentum).
momentum : float, default=0.9
    Momentum for the default optimizer (SGD with momentum).
track_history : bool, default=False
    When True, record loss history during fitting for plotting.
log_every : int, default=10
    Record every N steps (also records the last step).

Methods:

fit
    Fit model parameters with MAP optimization.
get_history
    Return (steps, losses) recorded during the last fit when tracking was enabled.

Attributes:

log_every
loss_history : list[float]
loss_steps : list[int]
optimizer
steps
track_history
Source code in src/psyphy/inference/map_optimizer.py
def __init__(
    self,
    steps: int = 500,
    learning_rate: float = 5e-5,
    momentum: float = 0.9,
    optimizer: optax.GradientTransformation | None = None,
    *,
    track_history: bool = False,
    log_every: int = 10,
):
    """Create a MAP optimizer.

    Parameters
    ----------
    steps : int
        Number of optimization steps.
    optimizer : optax.GradientTransformation | None
        Optax optimizer to use.
    learning_rate : float, optional
        Learning rate for the default optimizer (SGD with momentum).
    momentum : float, optional
        Momentum for the default optimizer (SGD with momentum).
    track_history : bool, optional
        When True, record loss history during fitting for plotting.
    log_every : int, optional
        Record every N steps (also records the last step).
    """
    self.steps = steps
    self.optimizer = optimizer or optax.sgd(learning_rate=learning_rate, momentum=momentum)
    self.track_history = track_history
    self.log_every = max(1, int(log_every))
    # Exposed after fit() when tracking is enabled
    self.loss_steps: list[int] = []
    self.loss_history: list[float] = []

log_every

log_every = max(1, int(log_every))

loss_history

loss_history: list[float] = []

loss_steps

loss_steps: list[int] = []

optimizer

optimizer = optimizer or sgd(
    learning_rate=learning_rate, momentum=momentum
)

steps

steps = steps

track_history

track_history = track_history

fit

fit(
    model,
    data,
    init_params: dict | None = None,
    seed: int | None = None,
) -> Posterior

Fit model parameters with MAP optimization.

Parameters:

model : WPPM
    Model instance. (required)
data : ResponseData
    Observed trials. (required)
init_params : dict | None, default=None
    Initial parameter PyTree to start optimization from. If provided, this takes precedence over the seed.
seed : int | None, default=None
    PRNG seed used to draw initial parameters from the model's prior when init_params is not provided. If None, defaults to 0.

Returns:

Posterior
    Posterior wrapper around MAP params and model.

Source code in src/psyphy/inference/map_optimizer.py
def fit(self, model, data, init_params: dict | None = None, seed: int | None = None) -> Posterior:
    """
    Fit model parameters with MAP optimization.

    Parameters
    ----------
    model : WPPM
        Model instance.
    data : ResponseData
        Observed trials.
    init_params : dict | None, optional
        Initial parameter PyTree to start optimization from. If provided,
        this takes precedence over the seed.
    seed : int | None, optional
        PRNG seed used to draw initial parameters from the model's prior
        when init_params is not provided. If None, defaults to 0.

    Returns
    -------
    Posterior
        Posterior wrapper around MAP params and model.
    """

    def loss_fn(params):
        return -model.log_posterior_from_data(params, data)

    # Initialize parameters
    if init_params is not None:
        params = init_params
    else:
        rng_seed = 0 if seed is None else int(seed)
        params = model.init_params(jax.random.PRNGKey(rng_seed))
    opt_state = self.optimizer.init(params)

    @jax.jit
    def step(params, opt_state):
        # Ensure params and opt_state are JAX PyTrees for JIT compatibility
        loss, grads = jax.value_and_grad(loss_fn)(params)  # auto-diff
        updates, opt_state = self.optimizer.update(grads, opt_state, params)  # optimizer update
        params = optax.apply_updates(params, updates)  # apply updates
        # Only return JAX-compatible types (PyTrees of arrays, scalars)
        return params, opt_state, loss

    # clear any previous history
    if self.track_history:
        self.loss_steps.clear()
        self.loss_history.clear()

    for i in range(self.steps):
        params, opt_state, loss = step(params, opt_state)
        if self.track_history and ((i % self.log_every == 0) or (i == self.steps - 1)):
            # Pull scalar to host and record
            try:
                self.loss_steps.append(i)
                self.loss_history.append(float(loss))
            except Exception:
                # Best-effort; do not break fitting if logging fails
                pass

    return Posterior(params=params, model=model)

get_history

get_history() -> tuple[list[int], list[float]]

Return (steps, losses) recorded during the last fit when tracking was enabled.

Source code in src/psyphy/inference/map_optimizer.py
def get_history(self) -> tuple[list[int], list[float]]:
    """Return (steps, losses) recorded during the last fit when tracking was enabled."""
    return self.loss_steps, self.loss_history
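A usage sketch with history tracking enabled (model and data as in the earlier sketches):

    from psyphy.inference import MAPOptimizer

    opt = MAPOptimizer(steps=300, learning_rate=5e-5, track_history=True, log_every=25)
    posterior = opt.fit(model, data, seed=0)

    steps, losses = opt.get_history()   # e.g. plt.plot(steps, losses) to inspect convergence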

OddityTask

OddityTask(slope: float = 1.5)

Bases: TaskLikelihood

Three-alternative forced-choice oddity task ("pick the odd one out"; MVP placeholder).

Methods:

loglik
predict

Attributes:

chance_level : float
performance_range : float
slope
Source code in src/psyphy/model/task.py
def __init__(self, slope: float = 1.5) -> None:
    self.slope = float(slope)
    self.chance_level: float = 1.0 / 3.0
    self.performance_range: float = 1.0 - self.chance_level

chance_level

chance_level: float = 1.0 / 3.0

performance_range

performance_range: float = 1.0 - chance_level

slope

slope = float(slope)

loglik

loglik(
    params: Any, data: Any, model: Any, noise: Any
) -> ndarray
Source code in src/psyphy/model/task.py
def loglik(self, params: Any, data: Any, model: Any, noise: Any) -> jnp.ndarray:
    refs, probes, responses = data.to_numpy()
    ps = jnp.array([self.predict(params, (r, p), model, noise) for r, p in zip(refs, probes)])
    eps = 1e-9
    return jnp.sum(jnp.where(responses == 1, jnp.log(ps + eps), jnp.log(1.0 - ps + eps)))

predict

predict(
    params: Any, stimuli: Stimulus, model: Any, noise: Any
) -> ndarray
Source code in src/psyphy/model/task.py
def predict(self, params: Any, stimuli: Stimulus, model: Any, noise: Any) -> jnp.ndarray:
    d = model.discriminability(params, stimuli)
    g = 0.5 * (jnp.tanh(self.slope * d) + 1.0)
    return self.chance_level + self.performance_range * g
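Under this mapping, p(correct) = 1/3 + (2/3) * 0.5 * (tanh(slope * d) + 1). Note that this MVP placeholder returns 2/3 (not chance) at d = 0 and saturates toward 1 as d grows. A quick numeric check:

    import jax.numpy as jnp

    slope = 1.5
    for d in [0.0, 0.5, 2.0]:
        g = 0.5 * (jnp.tanh(slope * d) + 1.0)
        print(d, 1.0 / 3.0 + (2.0 / 3.0) * g)   # ~0.667, ~0.878, ~0.998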

Posterior

Posterior(params, model)

Bases: BasePosterior

MVP Posterior (MAP only).

Parameters:

params : dict
    MAP parameter dictionary. (required)
model : WPPM
    Model instance used for predictions. (required)

Notes
  • This is effectively a MAPPosterior.
  • Future subclasses (LaplacePosterior, MCMCPosterior) will extend BasePosterior with real sampling logic.

Methods:

MAP_params
    Return the MAP parameters.
predict_prob
    Predict probability of correct response for a stimulus.
predict_thresholds
    Predict discrimination threshold contour around a reference stimulus.
sample
    Draw parameter samples from the posterior.

Attributes:

model
params
Source code in src/psyphy/posterior/posterior.py
def __init__(self, params, model):
    self.params = params
    self.model = model

model

model = model

params

params = params

MAP_params

MAP_params()

Return the MAP parameters.

Returns:

dict
    Parameter dictionary.

Source code in src/psyphy/posterior/posterior.py
def MAP_params(self):
    """
    Return the MAP parameters.

    Returns
    -------
    dict
        Parameter dictionary.
    """
    return self.params

predict_prob

predict_prob(stimulus)

Predict probability of correct response for a stimulus.

Parameters:

stimulus : tuple
    (reference, probe). (required)

Returns:

ndarray
    Probability of correct response.

Notes

Delegates to WPPM.predict_prob(). This is not recursion: Posterior calls WPPM’s method with stored params.

Source code in src/psyphy/posterior/posterior.py
def predict_prob(self, stimulus):
    """
    Predict probability of correct response for a stimulus.

    Parameters
    ----------
    stimulus : tuple
        (reference, probe).

    Returns
    -------
    jnp.ndarray
        Probability of correct response.

    Notes
    -----
    Delegates to WPPM.predict_prob().
    This is not recursion: Posterior calls WPPM’s method with stored params.
    """
    return self.model.predict_prob(self.params, stimulus)

predict_thresholds

predict_thresholds(
    reference,
    criterion: float = 0.667,
    directions: int = 16,
)

Predict discrimination threshold contour around a reference stimulus.

Parameters:

reference : ndarray
    Reference point in model space. (required)
criterion : float, default=0.667
    Target performance (e.g., 2/3 for oddity).
directions : int, default=16
    Number of directions to probe.

Returns:

ndarray
    Contour points (MVP: unit circle).

MVP

Returns a placeholder unit circle.

Future
  • Search outward in each direction until performance crosses criterion.
  • Average over posterior samples (Laplace, MCMC) to get credible intervals.
Source code in src/psyphy/posterior/posterior.py
def predict_thresholds(self, reference, criterion: float = 0.667, directions: int = 16):
    """
    Predict discrimination threshold contour around a reference stimulus.

    Parameters
    ----------
    reference : jnp.ndarray
        Reference point in model space.
    criterion : float, default=0.667
        Target performance (e.g., 2/3 for oddity).
    directions : int, default=16
        Number of directions to probe.

    Returns
    -------
    jnp.ndarray
        Contour points (MVP: unit circle).

    MVP
    ---
    Returns a placeholder unit circle.

    Future
    ------
    - Search outward in each direction until performance crosses criterion.
    - Average over posterior samples (Laplace, MCMC) to get credible intervals.
    """
    angles = jnp.linspace(0, 2 * jnp.pi, directions, endpoint=False)
    contour = jnp.stack([reference + jnp.array([jnp.cos(a), jnp.sin(a)]) for a in angles])
    return contour

sample

sample(num_samples: int = 1)

Draw parameter samples from the posterior.

Parameters:

num_samples : int, default=1
    Number of samples.

Returns:

list of dict
    Parameter sets.

MVP

Returns MAP params repeated n times.

Future
  • LaplacePosterior: draw from N(mean, cov).
  • MCMCPosterior: return stored samples.
Source code in src/psyphy/posterior/posterior.py
def sample(self, num_samples: int = 1):
    """
    Draw parameter samples from the posterior.

    Parameters
    ----------
    num_samples : int, default=1
        Number of samples.

    Returns
    -------
    list of dict
        Parameter sets.

    MVP
    ---
    Returns MAP params repeated n times.

    Future
    ------
    - LaplacePosterior: draw from N(mean, cov).
    - MCMCPosterior: return stored samples.
    """
    return [self.params] * num_samples
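A usage sketch (posterior as returned by an inference engine's fit()):

    import jax.numpy as jnp

    params = posterior.MAP_params()
    p = posterior.predict_prob((jnp.zeros(2), jnp.array([0.1, 0.0])))

    # MVP: placeholder unit circle of 16 points around the reference.
    contour = posterior.predict_thresholds(jnp.zeros(2), criterion=2/3, directions=16)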

Prior

Prior(
    input_dim: int,
    scale: float = 0.5,
    variance_scale: float = 1.0,
    lengthscale: float = 1.0,
    extra_embedding_dims: int = 0,
)

Prior distribution over WPPM parameters.

Parameters:

input_dim : int
    Dimensionality of the model space (same as WPPM.input_dim). (required)
scale : float, default=0.5
    Stddev of Gaussian prior for log_diag entries (MVP only).
variance_scale : float, default=1.0
    Forward-compatible stub for Full WPPM mode. Will scale covariance magnitudes.
lengthscale : float, default=1.0
    Forward-compatible stub for Full WPPM mode; controls smoothness of the covariance field: a small lengthscale means rapid variation across space; a large lengthscale means a smoother field with long-range correlations.
extra_embedding_dims : int, default=0
    Forward-compatible stub for Full WPPM mode. Will expand the embedding space.

Methods:

default
    Convenience constructor with MVP defaults.
log_prob
    Compute log prior density (up to a constant).
sample_params
    Sample initial parameters from the prior.

Attributes:

extra_embedding_dims : int
input_dim : int
lengthscale : float
scale : float
variance_scale : float

extra_embedding_dims

extra_embedding_dims: int = 0

input_dim

input_dim: int

lengthscale

lengthscale: float = 1.0

scale

scale: float = 0.5

variance_scale

variance_scale: float = 1.0

default

default(input_dim: int, scale: float = 0.5) -> 'Prior'

Convenience constructor with MVP defaults.

Source code in src/psyphy/model/prior.py
@classmethod
def default(cls, input_dim: int, scale: float = 0.5) -> "Prior":
    """Convenience constructor with MVP defaults."""
    return cls(input_dim=input_dim, scale=scale)

log_prob

log_prob(params: Params) -> ndarray

Compute log prior density (up to a constant).

MVP: Isotropic Gaussian on log_diag.
Full WPPM mode: Will implement a structured prior over basis weights and lengthscale-regularized covariance fields.

Source code in src/psyphy/model/prior.py
def log_prob(self, params: Params) -> jnp.ndarray:
    """
    Compute log prior density (up to a constant)

    MVP:
        Isotropic Gaussian on log_diag
    Full WPPM mode:
        Will implement structured prior over basis weights and
        lengthscale-regularized covariance fields
    """
    log_diag = params["log_diag"]
    var = self.scale**2
    return -0.5 * jnp.sum((log_diag**2) / var)

sample_params

sample_params(key: KeyArray) -> Params

Sample initial parameters from the prior.

MVP: Returns {"log_diag": shape (input_dim,)}.
Full WPPM mode: Will also include basis weights, structured covariance params, and hyperparameters for GP (variance_scale, lengthscale).

Source code in src/psyphy/model/prior.py
def sample_params(self, key: jr.KeyArray) -> Params:
    """
    Sample initial parameters from the prior.

    MVP:
        Returns {"log_diag": shape (input_dim,)}.
    Full WPPM mode:
        Will also include basis weights, structured covariance params,
        and hyperparameters for GP (variance_scale, lengthscale).
    """
    log_diag = jr.normal(key, shape=(self.input_dim,)) * self.scale
    return {"log_diag": log_diag}

ResponseData

ResponseData()

Container for psychophysical trial data.

Attributes:

refs : List[Any]
    List of reference stimuli.
probes : List[Any]
    List of probe stimuli.
responses : List[int]
    List of subject responses (e.g., 0/1 or categorical).

Methods:

add_batch
    Append responses for a batch of trials.
add_trial
    Append a single trial.
to_numpy
    Return refs, probes, responses as numpy arrays.

Source code in src/psyphy/data/dataset.py
def __init__(self) -> None:
    self.refs: List[Any] = []
    self.probes: List[Any] = []
    self.responses: List[int] = []

probes

probes: List[Any] = []

refs

refs: List[Any] = []

responses

responses: List[int] = []

add_batch

add_batch(
    responses: List[int], trial_batch: TrialBatch
) -> None

Append responses for a batch of trials.

Parameters:

responses : List[int]
    Responses corresponding to each (ref, probe) in the trial batch. (required)
trial_batch : TrialBatch
    The batch of proposed trials. (required)
Source code in src/psyphy/data/dataset.py
def add_batch(self, responses: List[int], trial_batch: TrialBatch) -> None:
    """
    Append responses for a batch of trials.

    Parameters
    ----------
    responses : List[int]
        Responses corresponding to each (ref, probe) in the trial batch.
    trial_batch : TrialBatch
        The batch of proposed trials.
    """
    for (ref, probe), resp in zip(trial_batch.stimuli, responses):
        self.add_trial(ref, probe, resp)

add_trial

add_trial(ref: Any, probe: Any, resp: int) -> None

Append a single trial.

Parameters:

ref : Any
    Reference stimulus (numpy array, list, etc.). (required)
probe : Any
    Probe stimulus. (required)
resp : int
    Subject response (binary or categorical). (required)
Source code in src/psyphy/data/dataset.py
def add_trial(self, ref: Any, probe: Any, resp: int) -> None:
    """
    Append a single trial.

    Parameters
    ----------
    ref : Any
        Reference stimulus (numpy array, list, etc.)
    probe : Any
        Probe stimulus
    resp : int
        Subject response (binary or categorical)
    """
    self.refs.append(ref)
    self.probes.append(probe)
    self.responses.append(resp)

to_numpy

to_numpy() -> Tuple[ndarray, ndarray, ndarray]

Return refs, probes, responses as numpy arrays.

Returns:

refs : ndarray
probes : ndarray
responses : ndarray
Source code in src/psyphy/data/dataset.py
def to_numpy(self) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    """
    Return refs, probes, responses as numpy arrays.

    Returns
    -------
    refs : np.ndarray
    probes : np.ndarray
    responses : np.ndarray
    """
    return (
        np.array(self.refs),
        np.array(self.probes),
        np.array(self.responses),
    )
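A usage sketch pairing ResponseData with TrialBatch:

    import numpy as np
    from psyphy import ResponseData, TrialBatch

    batch = TrialBatch.from_stimuli([(np.zeros(2), np.array([0.1, 0.0])),
                                     (np.zeros(2), np.array([0.0, 0.1]))])

    data = ResponseData()
    data.add_batch([1, 0], batch)                # one response per (ref, probe) pair
    refs, probes, responses = data.to_numpy()    # shapes (2, 2), (2, 2), (2,)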

StudentTNoise

StudentTNoise(df: float = 3.0, scale: float = 1.0)

Methods:

log_prob

Attributes:

df : float, default=3.0
scale : float, default=1.0

log_prob

log_prob(residual: float) -> float
Source code in src/psyphy/model/noise.py
def log_prob(self, residual: float) -> float:
    _ = residual
    return -0.5

TrialBatch

TrialBatch(stimuli: List[Tuple[Any, Any]])

Container for a proposed batch of trials.

Attributes:

stimuli : List[Tuple[Any, Any]]
    Each trial is a (reference, probe) tuple.

Methods:

from_stimuli
    Construct a TrialBatch from a list of (ref, probe) stimulus pairs.

Source code in src/psyphy/data/dataset.py
def __init__(self, stimuli: List[Tuple[Any, Any]]) -> None:
    self.stimuli = list(stimuli)

stimuli

stimuli = list(stimuli)

from_stimuli

from_stimuli(pairs: List[Tuple[Any, Any]]) -> TrialBatch

Construct a TrialBatch from a list of stimuli (ref, probe) pairs.

Source code in src/psyphy/data/dataset.py
@classmethod
def from_stimuli(cls, pairs: List[Tuple[Any, Any]]) -> TrialBatch:
    """
    Construct a TrialBatch from a list of stimuli (ref, probe) pairs.
    """
    return cls(pairs)

TwoAFC

TwoAFC(slope: float = 2.0)

Bases: TaskLikelihood

2-alternative forced-choice task (MVP placeholder).

Methods:

loglik
predict

Attributes:

chance_level : float
performance_range : float
slope
Source code in src/psyphy/model/task.py
def __init__(self, slope: float = 2.0) -> None:
    self.slope = float(slope)
    self.chance_level: float = 0.5
    self.performance_range: float = 1.0 - self.chance_level

chance_level

chance_level: float = 0.5

performance_range

performance_range: float = 1.0 - chance_level

slope

slope = float(slope)

loglik

loglik(
    params: Any, data: Any, model: Any, noise: Any
) -> ndarray
Source code in src/psyphy/model/task.py
def loglik(self, params: Any, data: Any, model: Any, noise: Any) -> jnp.ndarray:
    refs, probes, responses = data.to_numpy()
    ps = jnp.array([self.predict(params, (r, p), model, noise) for r, p in zip(refs, probes)])
    eps = 1e-9
    return jnp.sum(jnp.where(responses == 1, jnp.log(ps + eps), jnp.log(1.0 - ps + eps)))

predict

predict(
    params: Any, stimuli: Stimulus, model: Any, noise: Any
) -> ndarray
Source code in src/psyphy/model/task.py
def predict(self, params: Any, stimuli: Stimulus, model: Any, noise: Any) -> jnp.ndarray:
    d = model.discriminability(params, stimuli)
    return self.chance_level + self.performance_range * jnp.tanh(self.slope * d)

WPPM

WPPM(
    input_dim: int,
    prior: Prior,
    task: TaskLikelihood,
    noise: Any | None = None,
    *,
    extra_dims: int = 0,
    variance_scale: float = 1.0,
    lengthscale: float = 1.0,
    diag_term: float = 1e-06
)

Wishart Process Psychophysical Model (WPPM).

Parameters:

input_dim : int
    Dimensionality of the input stimulus space (e.g., 2 for an isoluminant plane, 3 for RGB). Both reference and probe live in R^{input_dim}. (required)
prior : Prior
    Prior distribution over model parameters. MVP uses a simple Gaussian prior over diagonal log-variances (see Prior.sample_params()). (required)
task : TaskLikelihood
    Psychophysical task mapping that defines how discriminability translates to p(correct) and how the log-likelihood of responses is computed (e.g., OddityTask, TwoAFC). (required)
noise : Any, default=None
    Noise model describing internal representation noise (e.g., GaussianNoise). Not used in the MVP mapping but passed to the task interface for future MC simulations.

Forward-compatible hyperparameters (MVP stubs):

extra_dims : int, default=0
    Additional embedding dimensions for basis expansions (unused in MVP).
variance_scale : float, default=1.0
    Global scaling factor for covariance magnitude (unused in MVP).
lengthscale : float, default=1.0
    Smoothness/length-scale for spatial covariance variation (unused in MVP). (Formerly "decay_rate".)
diag_term : float, default=1e-6
    Small positive value added to the covariance diagonal for numerical stability. MVP uses this in matrix solves; the research model will also use it.

Methods:

discriminability
    Compute scalar discriminability d >= 0 for a (reference, probe) pair.
init_params
    Sample initial parameters from the prior.
local_covariance
    Return local covariance Σ(x) at stimulus location x.
log_likelihood
    Compute the log-likelihood for arrays of trials.
log_likelihood_from_data
    Compute log-likelihood directly from a ResponseData object.
log_posterior_from_data
    Convenience helper if you want the log posterior in one call (MVP).
predict_prob
    Predict probability of a correct response for a single stimulus.

Attributes:

diag_term
extra_dims
input_dim
lengthscale
noise
prior
task
variance_scale
Source code in src/psyphy/model/wppm.py
def __init__(
    self,
    input_dim: int,
    prior: Prior,
    task: TaskLikelihood,
    noise: Any | None = None,
    *,
    extra_dims: int = 0,
    variance_scale: float = 1.0,
    lengthscale: float = 1.0,
    diag_term: float = 1e-6,
) -> None:
    # --- core components ---
    self.input_dim = int(input_dim)   # stimulus-space dimensionality
    self.prior = prior                # prior over parameter PyTree
    self.task = task                  # task mapping and likelihood
    self.noise = noise                # noise model 

    # --- forward-compatible hyperparameters (stubs in MVP) ---
    self.extra_dims = int(extra_dims)
    self.variance_scale = float(variance_scale)
    self.lengthscale = float(lengthscale)
    self.diag_term = float(diag_term)

diag_term

diag_term = float(diag_term)

extra_dims

extra_dims = int(extra_dims)

input_dim

input_dim = int(input_dim)

lengthscale

lengthscale = float(lengthscale)

noise

noise = noise

prior

prior = prior

task

task = task

variance_scale

variance_scale = float(variance_scale)

discriminability

discriminability(
    params: Params, stimulus: Stimulus
) -> ndarray

Compute scalar discriminability d >= 0 for a (reference, probe) pair.

MVP:
    d = sqrt( (probe - ref)^T Σ(ref)^{-1} (probe - ref) )
    with Σ(ref) the local covariance at the reference. We add diag_term * I for numerical stability before inversion.
Future (full WPPM mode):
    d is implicit via Monte Carlo simulation of internal noisy responses under the task's decision rule (no closed form). In that case, tasks will directly implement predict/loglik with MC, and this method may be used only for diagnostics.

Parameters:

params : dict
    Model parameters. (required)
stimulus : tuple
    (reference, probe) arrays of shape (input_dim,). (required)

Returns:

d : ndarray
    Nonnegative scalar discriminability.

Source code in src/psyphy/model/wppm.py
def discriminability(self, params: Params, stimulus: Stimulus) -> jnp.ndarray:
    """
    Compute scalar discriminability d >= 0 for a (reference, probe) pair

    MVP:
        d = sqrt( (probe - ref)^T Σ(ref)^{-1} (probe - ref) )
        with Σ(ref) the local covariance at the reference,
        - We add `diag_term * I` for numerical stability before inversion
    Future (full WPPM mode):
        d is implicit via Monte Carlo simulation of internal noisy responses
        under the task's decision rule (no closed form). In that case, tasks
        will directly implement predict/loglik with MC, and this method may be
        used only for diagnostics.

    Parameters
    ----------
    params : dict
        Model parameters.
    stimulus : tuple
        (reference, probe) arrays of shape (input_dim,).

    Returns
    -------
    d : jnp.ndarray
        Nonnegative scalar discriminability.
    """
    ref, probe = stimulus
    delta = probe - ref                                # difference vector in input space
    Sigma = self.local_covariance(params, ref)         # local covariance at reference
    # Add jitter for stable solve; diag_term is configurable
    jitter = self.diag_term * jnp.eye(self.input_dim)
    # Solve (Σ + jitter)^{-1} delta using a PD-aware solver
    x = jax.scipy.linalg.solve(Sigma + jitter, delta, assume_a="pos")
    d2 = jnp.dot(delta, x)                             # quadratic form
    # Guard against tiny negative values from numerical error
    return jnp.sqrt(jnp.maximum(d2, 0.0))
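With log_diag = 0 (so Σ(ref) is the identity), d reduces to the Euclidean distance ||probe - ref||, up to the diag_term jitter. A quick check using the 2-dimensional model constructed in the earlier sketches:

    import jax.numpy as jnp

    params = {"log_diag": jnp.zeros(2)}                 # Σ(ref) = identity
    ref, probe = jnp.zeros(2), jnp.array([0.3, 0.4])
    d = model.discriminability(params, (ref, probe))    # ≈ 0.5 = sqrt(0.3**2 + 0.4**2)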

init_params

init_params(key: KeyArray) -> Params

Sample initial parameters from the prior.

MVP parameters: {"log_diag": shape (input_dim,)}, which defines a constant diagonal covariance across the space.

Returns:

params : dict[str, ndarray]
Source code in src/psyphy/model/wppm.py
def init_params(self, key: jr.KeyArray) -> Params:
    """
    Sample initial parameters from the prior.

    MVP parameters:
        {"log_diag": shape (input_dim,)}
    which defines a constant diagonal covariance across the space.

    Returns
    -------
    params : dict[str, jnp.ndarray]
    """
    return self.prior.sample_params(key)

local_covariance

local_covariance(params: Params, x: ndarray) -> ndarray

Return local covariance Σ(x) at stimulus location x.

MVP:
    Σ(x) = diag(exp(log_diag)), constant across x. Positive-definite because exp(log_diag) > 0.
Future (full WPPM mode):
    Σ(x) varies smoothly with x via basis expansions and a Wishart-process prior controlled by (extra_dims, variance_scale, lengthscale). Those hyperparameters are exposed here but not used in MVP.

Parameters:

params : dict
    Model parameters (MVP expects "log_diag": (input_dim,)). (required)
x : ndarray
    Stimulus location (unused in MVP because Σ is constant). (required)

Returns:

Σ : jnp.ndarray, shape (input_dim, input_dim)
Source code in src/psyphy/model/wppm.py
def local_covariance(self, params: Params, x: jnp.ndarray) -> jnp.ndarray:
    """
    Return local covariance Σ(x) at stimulus location x.

    MVP:
        Σ(x) = diag(exp(log_diag)), constant across x.
        - Positive-definite because exp(log_diag) > 0.
    Future (full WPPM mode):
        Σ(x) varies smoothly with x via basis expansions and a Wishart-process
        prior controlled by (extra_dims, variance_scale, lengthscale). Those
        hyperparameters are exposed here but not used in MVP.

    Parameters
    ----------
    params : dict
        model parameters (MVP expects "log_diag": (input_dim,)).
    x : jnp.ndarray
        Stimulus location (unused in MVP because Σ is constant).

    Returns
    -------
    Σ : jnp.ndarray, shape (input_dim, input_dim)
    """
    log_diag = params["log_diag"]               # unconstrained diagonal log-variances
    diag = jnp.exp(log_diag)                    # enforce positivity
    return jnp.diag(diag)                       # constant diagonal covariance

log_likelihood

log_likelihood(
    params: Params,
    refs: ndarray,
    probes: ndarray,
    responses: ndarray,
) -> ndarray

Compute the log-likelihood for arrays of trials.

IMPORTANT: We delegate to the TaskLikelihood to avoid duplicating Bernoulli (MVP) or MC likelihood logic in multiple places. This keeps responsibilities clean and makes adding new tasks straightforward.

Parameters:

params : dict
    Model parameters. (required)
refs : ndarray, shape (N, input_dim) (required)
probes : ndarray, shape (N, input_dim) (required)
responses : ndarray, shape (N,)
    Typically 0/1; the task may support richer encodings. (required)

Returns:

loglik : ndarray
    Scalar log-likelihood (task-only; add the prior outside if needed).

Source code in src/psyphy/model/wppm.py
def log_likelihood(self, params: Params, refs: jnp.ndarray, probes: jnp.ndarray, responses: jnp.ndarray) -> jnp.ndarray:
    """
    Compute the log-likelihood for arrays of trials.

    IMPORTANT:
        We delegate to the TaskLikelihood to avoid duplicating Bernoulli (MVP)
        or MC likelihood logic in multiple places. This keeps responsibilities
        clean and makes adding new tasks straightforward.

    Parameters
    ----------
    params : dict
        Model parameters.
    refs : jnp.ndarray, shape (N, input_dim)
    probes : jnp.ndarray, shape (N, input_dim)
    responses : jnp.ndarray, shape (N,)
        Typically 0/1; task may support richer encodings.

    Returns
    -------
    loglik : jnp.ndarray
        Scalar log-likelihood (task-only; add prior outside if needed)
    """
    # We need a ResponseData-like object. To keep this method usable from
    # array inputs, we construct one on the fly. If you already have a
    # ResponseData instance, prefer `log_likelihood_from_data`.
    from psyphy.data.dataset import ResponseData  # local import to avoid cycles
    data = ResponseData()
    # ResponseData.add_trial(ref, probe, resp)
    for r, p, y in zip(refs, probes, responses):
        data.add_trial(r, p, int(y))
    return self.task.loglik(params, data, self, self.noise)

log_likelihood_from_data

log_likelihood_from_data(
    params: Params, data: Any
) -> ndarray

Compute log-likelihood directly from a ResponseData object.

Why delegate to the task?
  • The task knows the decision rule (oddity, 2AFC, ...).
  • The task can use the model (this WPPM) to fetch discriminabilities.
  • The task can use the noise model if it needs MC simulation.

Parameters:

params : dict
    Model parameters. (required)
data : ResponseData
    Collected trial data. (required)

Returns:

loglik : ndarray
    Scalar log-likelihood (task-only; add the prior outside if needed).

Source code in src/psyphy/model/wppm.py
def log_likelihood_from_data(self, params: Params, data: Any) -> jnp.ndarray:
    """
    Compute log-likelihood directly from a ResponseData object.

    Why delegate to the task?
        - The task knows the decision rule (oddity, 2AFC, ...).
        - The task can use the model (this WPPM) to fetch discriminabilities
        - and the task can use the noise model if it needs MC simulation

    Parameters
    ----------
    params : dict
        Model parameters.
    data : ResponseData
        Collected trial data.

    Returns
    -------
    loglik : jnp.ndarray
        scalar log-likelihood (task-only; add prior outside if needed)
    """
    return self.task.loglik(params, data, self, self.noise)

log_posterior_from_data

log_posterior_from_data(
    params: Params, data: Any
) -> ndarray

Convenience helper if you want the log posterior in one call (MVP).

This simply adds the prior log-probability to the task log-likelihood. Inference engines (e.g., the MAP optimizer) typically optimize this quantity.

Returns:

jnp.ndarray
    Scalar log posterior = loglik(params | data) + log_prior(params).
Source code in src/psyphy/model/wppm.py
def log_posterior_from_data(self, params: Params, data: Any) -> jnp.ndarray:
    """
    Convenience helper if you want log posterior in one call (MVP).

    This simply adds the prior log-probability to the task log-likelihood.
    Inference engines (e.g., MAP optimizer) typically optimize this quantity.

    Returns
    -------
    jnp.ndarray : scalar log posterior = loglik(params | data) + log_prior(params)
    """
    return self.log_likelihood_from_data(params, data) + self.prior.log_prob(params)
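MAP inference minimizes the negative of this quantity, with gradients from jax.grad. A minimal sketch (model, data, and params as in the earlier sketches):

    import jax

    loss = lambda p: -model.log_posterior_from_data(p, data)
    grads = jax.grad(loss)(params)   # PyTree of gradients, same structure as params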

predict_prob

predict_prob(params: Params, stimulus: Stimulus) -> ndarray

Predict probability of a correct response for a single stimulus.

Design choice: WPPM computes discriminability and covariance; the TASK defines how that translates to performance. We therefore delegate to task.predict(params, stimulus, model=self, noise=self.noise).

Parameters:

params : dict (required)
stimulus : (reference, probe) (required)

Returns:

p_correct : ndarray
Source code in src/psyphy/model/wppm.py
def predict_prob(self, params: Params, stimulus: Stimulus) -> jnp.ndarray:
    """
    Predict probability of a correct response for a single stimulus.

    Design choice:
        WPPM computes discriminability & covariance; the TASK defines how
        that translates to performance. We therefore delegate to:
            task.predict(params, stimulus, model=self, noise=self.noise)

    Parameters
    ----------
    params : dict
    stimulus : (reference, probe)

    Returns
    -------
    p_correct : jnp.ndarray
    """
    return self.task.predict(params, stimulus, self, self.noise)