model

psyphy.model

Model-layer API: everything model-related in one place.

Includes
  • WPPM (core model)
  • Priors (Prior)
  • Tasks (TaskLikelihood base, OddityTask, TwoAFC)
  • Noise models (GaussianNoise, StudentTNoise)

All functions/classes use JAX arrays (jax.numpy as jnp) for autodiff and optimization with Optax.

Typical usage
from psyphy.model import WPPM, Prior, OddityTask, GaussianNoise
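
A slightly fuller end-to-end sketch (illustrative values; the 2-D space, PRNGKey(0), and the probe offset are assumptions, not package defaults):

import jax.numpy as jnp
import jax.random as jr

from psyphy.model import WPPM, Prior, OddityTask, GaussianNoise

prior = Prior.default(input_dim=2)             # MVP Gaussian prior over log-variances
model = WPPM(
    input_dim=2,
    prior=prior,
    task=OddityTask(),                         # 3AFC oddity mapping
    noise=GaussianNoise(),                     # passed through to the task interface
)
params = model.init_params(jr.PRNGKey(0))      # sample initial parameters from the prior
p = model.predict_prob(params, (jnp.zeros(2), jnp.full(2, 0.1)))  # p(correct) for one pair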

Classes:

Name Description
GaussianNoise
OddityTask

Three-alternative forced-choice oddity task (MVP placeholder): "pick the odd one out."

Prior

Prior distribution over WPPM parameters

StudentTNoise
TaskLikelihood

Abstract base class for task likelihoods

TwoAFC

2-alternative forced-choice task (MVP placeholder).

WPPM

Wishart Process Psychophysical Model (WPPM).

GaussianNoise

GaussianNoise(sigma: float = 1.0)

Methods:

Name Description
log_prob

Attributes:

Name Type Description
sigma float

sigma

sigma: float = 1.0

log_prob

log_prob(residual: float) -> float
Source code in src/psyphy/model/noise.py
def log_prob(self, residual: float) -> float:
    _ = residual
    return -0.5
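
Note: the MVP noise models are placeholders; log_prob ignores the residual and returns a constant, and they exist only so a noise object can be threaded through the task interface.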

OddityTask

OddityTask(slope: float = 1.5)

Bases: TaskLikelihood

Three-alternative forced-choice oddity task (MVP placeholder): "pick the odd one out."

Methods:

Name Description
loglik
predict

Attributes:

Name Type Description
chance_level float
performance_range float
slope
Source code in src/psyphy/model/task.py
def __init__(self, slope: float = 1.5) -> None:
    self.slope = float(slope)
    self.chance_level: float = 1.0 / 3.0
    self.performance_range: float = 1.0 - self.chance_level

chance_level

chance_level: float = 1.0 / 3.0

performance_range

performance_range: float = 1.0 - chance_level

slope

slope = float(slope)

loglik

loglik(
    params: Any, data: Any, model: Any, noise: Any
) -> ndarray
Source code in src/psyphy/model/task.py
def loglik(self, params: Any, data: Any, model: Any, noise: Any) -> jnp.ndarray:
    refs, probes, responses = data.to_numpy()
    ps = jnp.array([self.predict(params, (r, p), model, noise) for r, p in zip(refs, probes)])
    eps = 1e-9
    return jnp.sum(jnp.where(responses == 1, jnp.log(ps + eps), jnp.log(1.0 - ps + eps)))

predict

predict(
    params: Any, stimuli: Stimulus, model: Any, noise: Any
) -> ndarray
Source code in src/psyphy/model/task.py
def predict(self, params: Any, stimuli: Stimulus, model: Any, noise: Any) -> jnp.ndarray:
    d = model.discriminability(params, stimuli)
    g = 0.5 * (jnp.tanh(self.slope * d) + 1.0)
    return self.chance_level + self.performance_range * g
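
A quick numeric check of this placeholder mapping (standalone; mirrors the formula above with the default slope):

import jax.numpy as jnp

slope, chance, prange = 1.5, 1.0 / 3.0, 2.0 / 3.0
for d in [0.0, 0.5, 2.0]:
    g = 0.5 * (jnp.tanh(slope * d) + 1.0)
    print(d, float(chance + prange * g))
# d=0.0 -> 0.667 (note: this placeholder sits above the 1/3 chance level at d=0)
# d=0.5 -> 0.878
# d=2.0 -> 0.998 (saturates toward 1 as d grows)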

Prior

Prior(
    input_dim: int,
    scale: float = 0.5,
    variance_scale: float = 1.0,
    lengthscale: float = 1.0,
    extra_embedding_dims: int = 0,
)

Prior distribution over WPPM parameters

Parameters:

Name Type Description Default
input_dim int

Dimensionality of the model space (same as WPPM.input_dim)

required
scale float

Stddev of Gaussian prior for log_diag entries (MVP only).

0.5
variance_scale float

Forward-compatible stub for Full WPPM mode. Will scale covariance magnitudes

1.0
lengthscale float

Forward-compatible stub for Full WPPM mode; controls smoothness of covariance field: - small lengthscale --> rapid variation across space - large lengthscale --> smoother field, long-range correlations.

1.0
extra_embedding_dims int

Forward-compatible stub for Full WPPM mode. Will expand embedding space.

0

Methods:

Name Description
default

Convenience constructor with MVP defaults.

log_prob

Compute log prior density (up to a constant)

sample_params

Sample initial parameters from the prior.

Attributes:

Name Type Description
extra_embedding_dims int
input_dim int
lengthscale float
scale float
variance_scale float

extra_embedding_dims

extra_embedding_dims: int = 0

input_dim

input_dim: int

lengthscale

lengthscale: float = 1.0

scale

scale: float = 0.5

variance_scale

variance_scale: float = 1.0

default

default(input_dim: int, scale: float = 0.5) -> 'Prior'

Convenience constructor with MVP defaults.

Source code in src/psyphy/model/prior.py
@classmethod
def default(cls, input_dim: int, scale: float = 0.5) -> "Prior":
    """Convenience constructor with MVP defaults."""
    return cls(input_dim=input_dim, scale=scale)

log_prob

log_prob(params: Params) -> ndarray

Compute log prior density (up to a constant)

MVP: isotropic Gaussian on log_diag.
Full WPPM mode: will implement a structured prior over basis weights and lengthscale-regularized covariance fields.

Source code in src/psyphy/model/prior.py
def log_prob(self, params: Params) -> jnp.ndarray:
    """
    Compute log prior density (up to a constant)

    MVP:
        Isotropic Gaussian on log_diag
    Full WPPM mode:
        Will implement structured prior over basis weights and
        lengthscale-regularized covariance fields
    """
    log_diag = params["log_diag"]
    var = self.scale**2
    return -0.5 * jnp.sum((log_diag**2) / var)

sample_params

sample_params(key: KeyArray) -> Params

Sample initial parameters from the prior.

MVP: returns {"log_diag": shape (input_dim,)}.
Full WPPM mode: will also include basis weights, structured covariance params, and hyperparameters for the GP (variance_scale, lengthscale).

Source code in src/psyphy/model/prior.py
def sample_params(self, key: jr.KeyArray) -> Params:
    """
    Sample initial parameters from the prior.

    MVP:
        Returns {"log_diag": shape (input_dim,)}.
    Full WPPM mode:
        Will also include basis weights, structured covariance params,
        and hyperparameters for GP (variance_scale, lengthscale).
    """
    log_diag = jr.normal(key, shape=(self.input_dim,)) * self.scale
    return {"log_diag": log_diag}

StudentTNoise

StudentTNoise(df: float = 3.0, scale: float = 1.0)

Methods:

Name Description
log_prob

Attributes:

Name Type Description
df float
scale float

df

df: float = 3.0

scale

scale: float = 1.0

log_prob

log_prob(residual: float) -> float
Source code in src/psyphy/model/noise.py
def log_prob(self, residual: float) -> float:
    _ = residual
    return -0.5

TaskLikelihood

Bases: ABC

Abstract base class for task likelihoods

Methods:

Name Description
loglik

Compute log-likelihood of observed responses under this task

predict

Predict probability of correct response for a stimulus.

loglik

loglik(
    params: Any, data: Any, model: Any, noise: Any
) -> ndarray

Compute log-likelihood of observed responses under this task

Source code in src/psyphy/model/task.py
@abstractmethod
def loglik(self, params: Any, data: Any, model: Any, noise: Any) -> jnp.ndarray:
    """Compute log-likelihood of observed responses under this task"""
    ...

predict

predict(
    params: Any, stimuli: Stimulus, model: Any, noise: Any
) -> ndarray

Predict probability of correct response for a stimulus.

Source code in src/psyphy/model/task.py
@abstractmethod
def predict(self, params: Any, stimuli: Stimulus, model: Any, noise: Any) -> jnp.ndarray:
    """Predict probability of correct response for a stimulus."""
    ...

TwoAFC

TwoAFC(slope: float = 2.0)

Bases: TaskLikelihood

2-alternative forced-choice task (MVP placeholder).

Methods:

Name Description
loglik
predict

Attributes:

Name Type Description
chance_level float
performance_range float
slope
Source code in src/psyphy/model/task.py
def __init__(self, slope: float = 2.0) -> None:
    self.slope = float(slope)
    self.chance_level: float = 0.5
    self.performance_range: float = 1.0 - self.chance_level

chance_level

chance_level: float = 0.5

performance_range

performance_range: float = 1.0 - chance_level

slope

slope = float(slope)

loglik

loglik(
    params: Any, data: Any, model: Any, noise: Any
) -> ndarray
Source code in src/psyphy/model/task.py
def loglik(self, params: Any, data: Any, model: Any, noise: Any) -> jnp.ndarray:
    refs, probes, responses = data.to_numpy()
    ps = jnp.array([self.predict(params, (r, p), model, noise) for r, p in zip(refs, probes)])
    eps = 1e-9
    return jnp.sum(jnp.where(responses == 1, jnp.log(ps + eps), jnp.log(1.0 - ps + eps)))

predict

predict(
    params: Any, stimuli: Stimulus, model: Any, noise: Any
) -> ndarray
Source code in src/psyphy/model/task.py
def predict(self, params: Any, stimuli: Stimulus, model: Any, noise: Any) -> jnp.ndarray:
    d = model.discriminability(params, stimuli)
    return self.chance_level + self.performance_range * jnp.tanh(self.slope * d)

WPPM

WPPM(
    input_dim: int,
    prior: Prior,
    task: TaskLikelihood,
    noise: Any | None = None,
    *,
    extra_dims: int = 0,
    variance_scale: float = 1.0,
    lengthscale: float = 1.0,
    diag_term: float = 1e-06
)

Wishart Process Psychophysical Model (WPPM).

Parameters:

Name Type Description Default
input_dim int

Dimensionality of the input stimulus space (e.g., 2 for isoluminant plane, 3 for RGB). Both reference and probe live in R^{input_dim}.

required
prior Prior

Prior distribution over model parameters. MVP uses a simple Gaussian prior over diagonal log-variances (see Prior.sample_params()).

required
task TaskLikelihood

Psychophysical task mapping that defines how discriminability translates to p(correct) and how log-likelihood of responses is computed. (e.g., OddityTask, TwoAFC)

required
noise Any

Noise model describing internal representation noise (e.g., GaussianNoise). Not used in MVP mapping but passed to the task interface for future MC sims.

None
Forward-compatible hyperparameters (MVP stubs)

extra_dims : int, default=0
    Additional embedding dimensions for basis expansions (unused in MVP).
variance_scale : float, default=1.0
    Global scaling factor for covariance magnitude (unused in MVP).
lengthscale : float, default=1.0
    Smoothness/length-scale for spatial covariance variation (unused in MVP; formerly "decay_rate").
diag_term : float, default=1e-6
    Small positive value added to the covariance diagonal for numerical stability. MVP uses it in matrix solves; the research model will also use it.

Methods:

Name Description
discriminability

Compute scalar discriminability d >= 0 for a (reference, probe) pair

init_params

Sample initial parameters from the prior.

local_covariance

Return local covariance Σ(x) at stimulus location x.

log_likelihood

Compute the log-likelihood for arrays of trials.

log_likelihood_from_data

Compute log-likelihood directly from a ResponseData object.

log_posterior_from_data

Convenience helper if you want log posterior in one call (MVP).

predict_prob

Predict probability of a correct response for a single stimulus.

Attributes:

Name Type Description
diag_term
extra_dims
input_dim
lengthscale
noise
prior
task
variance_scale
Source code in src/psyphy/model/wppm.py
def __init__(
    self,
    input_dim: int,
    prior: Prior,
    task: TaskLikelihood,
    noise: Any | None = None,
    *,
    extra_dims: int = 0,
    variance_scale: float = 1.0,
    lengthscale: float = 1.0,
    diag_term: float = 1e-6,
) -> None:
    # --- core components ---
    self.input_dim = int(input_dim)   # stimulus-space dimensionality
    self.prior = prior                # prior over parameter PyTree
    self.task = task                  # task mapping and likelihood
    self.noise = noise                # noise model 

    # --- forward-compatible hyperparameters (stubs in MVP) ---
    self.extra_dims = int(extra_dims)
    self.variance_scale = float(variance_scale)
    self.lengthscale = float(lengthscale)
    self.diag_term = float(diag_term)

diag_term

diag_term = float(diag_term)

extra_dims

extra_dims = int(extra_dims)

input_dim

input_dim = int(input_dim)

lengthscale

lengthscale = float(lengthscale)

noise

noise = noise

prior

prior = prior

task

task = task

variance_scale

variance_scale = float(variance_scale)

discriminability

discriminability(
    params: Params, stimulus: Stimulus
) -> ndarray

Compute scalar discriminability d >= 0 for a (reference, probe) pair

MVP: d = sqrt( (probe - ref)^T Σ(ref)^{-1} (probe - ref) ), with Σ(ref) the local covariance at the reference; diag_term * I is added for numerical stability before inversion.
Future (full WPPM mode): d is implicit via Monte Carlo simulation of internal noisy responses under the task's decision rule (no closed form). In that case, tasks will implement predict/loglik with MC directly, and this method may be used only for diagnostics.

Parameters:

Name Type Description Default
params dict

Model parameters.

required
stimulus tuple

(reference, probe) arrays of shape (input_dim,).

required

Returns:

Name Type Description
d ndarray

Nonnegative scalar discriminability.

Source code in src/psyphy/model/wppm.py
def discriminability(self, params: Params, stimulus: Stimulus) -> jnp.ndarray:
    """
    Compute scalar discriminability d >= 0 for a (reference, probe) pair

    MVP:
        d = sqrt( (probe - ref)^T Σ(ref)^{-1} (probe - ref) )
        with Σ(ref) the local covariance at the reference,
        - We add `diag_term * I` for numerical stability before inversion
    Future (full WPPM mode):
        d is implicit via Monte Carlo simulation of internal noisy responses
        under the task's decision rule (no closed form). In that case, tasks
        will directly implement predict/loglik with MC, and this method may be
        used only for diagnostics.

    Parameters
    ----------
    params : dict
        Model parameters.
    stimulus : tuple
        (reference, probe) arrays of shape (input_dim,).

    Returns
    -------
    d : jnp.ndarray
        Nonnegative scalar discriminability.
    """
    ref, probe = stimulus
    delta = probe - ref                                # difference vector in input space
    Sigma = self.local_covariance(params, ref)         # local covariance at reference
    # Add jitter for stable solve; diag_term is configurable
    jitter = self.diag_term * jnp.eye(self.input_dim)
    # Solve (Σ + jitter)^{-1} delta using a PD-aware solver
    x = jax.scipy.linalg.solve(Sigma + jitter, delta, assume_a="pos")
    d2 = jnp.dot(delta, x)                             # quadratic form
    # Guard against tiny negative values from numerical error
    return jnp.sqrt(jnp.maximum(d2, 0.0))
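
A sanity check under the MVP covariance (assumes a `model` built as in the usage sketch at the top of this page, with input_dim=2): with log_diag = 0, Σ = I, so d reduces to the Euclidean distance up to the tiny diag_term jitter.

import jax.numpy as jnp

params = {"log_diag": jnp.zeros(2)}                 # Sigma = identity
ref, probe = jnp.zeros(2), jnp.array([0.3, 0.4])
d = model.discriminability(params, (ref, probe))
# d ~= ||probe - ref|| = 0.5 (exactly 0.5 / sqrt(1 + diag_term))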

init_params

init_params(key: KeyArray) -> Params

Sample initial parameters from the prior.

MVP parameters: {"log_diag": shape (input_dim,)} which defines a constant diagonal covariance across the space.

Returns:

Name Type Description
params dict[str, ndarray]
Source code in src/psyphy/model/wppm.py
def init_params(self, key: jr.KeyArray) -> Params:
    """
    Sample initial parameters from the prior.

    MVP parameters:
        {"log_diag": shape (input_dim,)}
    which defines a constant diagonal covariance across the space.

    Returns
    -------
    params : dict[str, jnp.ndarray]
    """
    return self.prior.sample_params(key)

local_covariance

local_covariance(params: Params, x: ndarray) -> ndarray

Return local covariance Σ(x) at stimulus location x.

MVP: Σ(x) = diag(exp(log_diag)), constant across x; positive-definite because exp(log_diag) > 0.
Future (full WPPM mode): Σ(x) varies smoothly with x via basis expansions and a Wishart-process prior controlled by (extra_dims, variance_scale, lengthscale). Those hyperparameters are exposed here but not used in MVP.

Parameters:

Name Type Description Default
params dict

model parameters (MVP expects "log_diag": (input_dim,)).

required
x ndarray

Stimulus location (unused in MVP because Σ is constant).

required

Returns:

Type Description
Σ : jnp.ndarray, shape (input_dim, input_dim)
Source code in src/psyphy/model/wppm.py
def local_covariance(self, params: Params, x: jnp.ndarray) -> jnp.ndarray:
    """
    Return local covariance Σ(x) at stimulus location x.

    MVP:
        Σ(x) = diag(exp(log_diag)), constant across x.
        - Positive-definite because exp(log_diag) > 0.
    Future (full WPPM mode):
        Σ(x) varies smoothly with x via basis expansions and a Wishart-process
        prior controlled by (extra_dims, variance_scale, lengthscale). Those
        hyperparameters are exposed here but not used in MVP.

    Parameters
    ----------
    params : dict
        model parameters (MVP expects "log_diag": (input_dim,)).
    x : jnp.ndarray
        Stimulus location (unused in MVP because Σ is constant).

    Returns
    -------
    Σ : jnp.ndarray, shape (input_dim, input_dim)
    """
    log_diag = params["log_diag"]               # unconstrained diagonal log-variances
    diag = jnp.exp(log_diag)                    # enforce positivity
    return jnp.diag(diag)                       # constant diagonal covariance
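
A quick check of the MVP behavior (same assumed `model` as above; x is ignored because Σ is constant):

import jax.numpy as jnp

params = {"log_diag": jnp.array([0.0, jnp.log(4.0)])}
Sigma = model.local_covariance(params, x=jnp.zeros(2))
# -> [[1., 0.],
#     [0., 4.]]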

log_likelihood

log_likelihood(
    params: Params,
    refs: ndarray,
    probes: ndarray,
    responses: ndarray,
) -> ndarray

Compute the log-likelihood for arrays of trials.

IMPORTANT: we delegate to the TaskLikelihood to avoid duplicating Bernoulli (MVP) or MC likelihood logic in multiple places. This keeps responsibilities clean and makes adding new tasks straightforward.

Parameters:

Name Type Description Default
params dict

Model parameters.

required
refs (ndarray, shape(N, input_dim))
required
probes (ndarray, shape(N, input_dim))
required
responses (ndarray, shape(N))

Typically 0/1; task may support richer encodings.

required

Returns:

Name Type Description
loglik ndarray

Scalar log-likelihood (task-only; add prior outside if needed)

Source code in src/psyphy/model/wppm.py
def log_likelihood(self, params: Params, refs: jnp.ndarray, probes: jnp.ndarray, responses: jnp.ndarray) -> jnp.ndarray:
    """
    Compute the log-likelihood for arrays of trials.

    IMPORTANT:
        We delegate to the TaskLikelihood to avoid duplicating Bernoulli (MVP)
        or MC likelihood logic in multiple places. This keeps responsibilities
        clean and makes adding new tasks straightforward.

    Parameters
    ----------
    params : dict
        Model parameters.
    refs : jnp.ndarray, shape (N, input_dim)
    probes : jnp.ndarray, shape (N, input_dim)
    responses : jnp.ndarray, shape (N,)
        Typically 0/1; task may support richer encodings.

    Returns
    -------
    loglik : jnp.ndarray
        Scalar log-likelihood (task-only; add prior outside if needed)
    """
    # We need a ResponseData-like object. To keep this method usable from
    # array inputs, we construct one on the fly. If you already have a
    # ResponseData instance, prefer `log_likelihood_from_data`.
    from psyphy.data.dataset import ResponseData  # local import to avoid cycles
    data = ResponseData()
    # ResponseData.add_trial(ref, probe, resp)
    for r, p, y in zip(refs, probes, responses):
        data.add_trial(r, p, int(y))
    return self.task.loglik(params, data, self, self.noise)
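
Calling with raw arrays (a ResponseData is built internally, as noted in the source; `model` and `params` are assumed from the earlier sketch):

import jax.numpy as jnp

refs = jnp.zeros((3, 2))                                   # three trials at the origin
probes = jnp.array([[0.1, 0.0], [0.3, 0.0], [0.6, 0.0]])
responses = jnp.array([0, 1, 1])                           # 0/1 correctness
ll = model.log_likelihood(params, refs, probes, responses) # scalar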

log_likelihood_from_data

log_likelihood_from_data(
    params: Params, data: Any
) -> ndarray

Compute log-likelihood directly from a ResponseData object.

Why delegate to the task?
  • The task knows the decision rule (oddity, 2AFC, ...).
  • The task can use the model (this WPPM) to fetch discriminabilities.
  • The task can use the noise model if it needs MC simulation.

Parameters:

Name Type Description Default
params dict

Model parameters.

required
data ResponseData

Collected trial data.

required

Returns:

Name Type Description
loglik ndarray

scalar log-likelihood (task-only; add prior outside if needed)

Source code in src/psyphy/model/wppm.py
def log_likelihood_from_data(self, params: Params, data: Any) -> jnp.ndarray:
    """
    Compute log-likelihood directly from a ResponseData object.

    Why delegate to the task?
        - The task knows the decision rule (oddity, 2AFC, ...).
        - The task can use the model (this WPPM) to fetch discriminabilities
        - and the task can use the noise model if it needs MC simulation

    Parameters
    ----------
    params : dict
        Model parameters.
    data : ResponseData
        Collected trial data.

    Returns
    -------
    loglik : jnp.ndarray
        scalar log-likelihood (task-only; add prior outside if needed)
    """
    return self.task.loglik(params, data, self, self.noise)

log_posterior_from_data

log_posterior_from_data(
    params: Params, data: Any
) -> ndarray

Convenience helper if you want log posterior in one call (MVP).

This simply adds the prior log-probability to the task log-likelihood. Inference engines (e.g., MAP optimizer) typically optimize this quantity.

Returns:

Type Description
jnp.ndarray : scalar log posterior = loglik(params | data) + log_prior(params)
Source code in src/psyphy/model/wppm.py
def log_posterior_from_data(self, params: Params, data: Any) -> jnp.ndarray:
    """
    Convenience helper if you want log posterior in one call (MVP).

    This simply adds the prior log-probability to the task log-likelihood.
    Inference engines (e.g., MAP optimizer) typically optimize this quantity.

    Returns
    -------
    jnp.ndarray : scalar log posterior = loglik(params | data) + log_prior(params)
    """
    return self.log_likelihood_from_data(params, data) + self.prior.log_prob(params)
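
One possible MAP loop with Optax (a sketch, not the package's inference engine; `data` is a populated ResponseData, and the step size and iteration count are arbitrary):

import jax
import optax

def neg_log_post(p):
    return -model.log_posterior_from_data(p, data)

opt = optax.adam(1e-2)
opt_state = opt.init(params)
for _ in range(200):
    grads = jax.grad(neg_log_post)(params)      # gradient w.r.t. the parameter PyTree
    updates, opt_state = opt.update(grads, opt_state)
    params = optax.apply_updates(params, updates)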

predict_prob

predict_prob(params: Params, stimulus: Stimulus) -> ndarray

Predict probability of a correct response for a single stimulus.

Design choice: WPPM computes discriminability & covariance; the TASK defines how that translates to performance. We therefore delegate to: task.predict(params, stimulus, model=self, noise=self.noise)

Parameters:

Name Type Description Default
params dict
required
stimulus (reference, probe)
required

Returns:

Name Type Description
p_correct ndarray
Source code in src/psyphy/model/wppm.py
def predict_prob(self, params: Params, stimulus: Stimulus) -> jnp.ndarray:
    """
    Predict probability of a correct response for a single stimulus.

    Design choice:
        WPPM computes discriminability & covariance; the TASK defines how
        that translates to performance. We therefore delegate to:
            task.predict(params, stimulus, model=self, noise=self.noise)

    Parameters
    ----------
    params : dict
    stimulus : (reference, probe)

    Returns
    -------
    p_correct : jnp.ndarray
    """
    return self.task.predict(params, stimulus, self, self.noise)

Wishart Process Psychophysical Model (WPPM)


wppm

wppm.py

Wishart Process Psychophysical Model (WPPM) — MVP-style implementation with forward-compatible hooks for the full WPPM model.

Goals

1) MVP that runs today:
  • Local covariance Σ(x) is diagonal and constant across the space.
  • Discriminability is Mahalanobis distance under Σ(reference).
  • Task mapping (e.g., Oddity, 2AFC) converts discriminability -> p(correct).
  • Likelihood is delegated to the TaskLikelihood (no Bernoulli code here).

2) Forward compatibility with the full WPPM model:
  • Expose the hyperparameters needed to reproduce, for example, the model configuration used in Hong et al.:
      * extra_dims: embedding size for basis expansions (unused in MVP)
      * variance_scale: global covariance scale (unused in MVP)
      * lengthscale: smoothness/length-scale for the covariance field (unused in MVP)
      * diag_term: numerical stabilizer added to covariance diagonals (used in MVP)
  • Later, replace local_covariance with a basis-expansion Wishart process and swap discriminability/likelihood with MC observer simulation.

All numerics use JAX (jax.numpy as jnp) to support autodiff and Optax optimizers.

Classes:

Name Description
WPPM

Wishart Process Psychophysical Model (WPPM).

Attributes:

Name Type Description
Params
Stimulus

Params

Params = Dict[str, ndarray]

Stimulus

Stimulus = Tuple[ndarray, ndarray]


Priors


prior

prior.py

Prior distributions for WPPM parameters

MVP implementation:
  • Gaussian prior over diagonal log-variances.

Forward compatibility (Full WPPM mode): exposes hyperparameters that will be used once the full Wishart Process covariance field is implemented:
  • variance_scale : global scaling factor for covariance magnitude
  • lengthscale : smoothness/length-scale controlling spatial variation
  • extra_embedding_dims : embedding dimension for basis expansions

Connections
  • WPPM calls Prior.sample_params() to initialize model parameters
  • WPPM adds Prior.log_prob(params) to task log-likelihoods to form the log posterior
  • In Full WPPM mode, Prior will generate structured parameters for basis expansions and lengthscale-controlled smooth covariance fields

Classes:

Name Description
Prior

Prior distribution over WPPM parameters

Attributes:

Name Type Description
Params

Params

Params = Dict[str, ndarray]


Noise


noise

Classes:

Name Description
GaussianNoise
StudentTNoise


Tasks


task

task.py

Task likelihoods for different psychophysical experiments.

Each TaskLikelihood defines:
  • predict(params, stimuli, model, noise): map discriminability (computed by the model) to a probability of correct response.
  • loglik(params, data, model, noise): compute the log-likelihood of observed responses under this task.

MVP implementation:
  • OddityTask (3AFC) and TwoAFC.
  • Both use simple sigmoid-like mappings of discriminability -> performance.
  • loglik is implemented as a Bernoulli log-probability with these predictions.

Forward compatibility (Full WPPM mode):
  • Tasks will call into WPPM for discriminability computed via Monte Carlo observer simulations, not closed forms.
  • Noise models will be used explicitly to generate internal noisy representations.
  • This ensures the same API supports both MVP and Full WPPM mode.

Connections
  • WPPM delegates to task.predict and task.loglik (never re-implements likelihood)
  • The noise model is passed through from WPPM so tasks can simulate responses.
  • New tasks can be defined by subclassing TaskLikelihood and implementing predict() and loglik(); see the sketch below.
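
As an illustration of that last point, here is a minimal sketch of a hypothetical four-alternative task, following the TwoAFC pattern documented above (FourAFC is not part of the package):

import jax.numpy as jnp

from psyphy.model import TaskLikelihood

class FourAFC(TaskLikelihood):
    """Hypothetical 4AFC task: chance level 1/4 (illustrative only)."""

    def __init__(self, slope: float = 2.0) -> None:
        self.slope = float(slope)
        self.chance_level: float = 0.25
        self.performance_range: float = 1.0 - self.chance_level

    def predict(self, params, stimuli, model, noise):
        # Map discriminability to p(correct), as TwoAFC does.
        d = model.discriminability(params, stimuli)
        return self.chance_level + self.performance_range * jnp.tanh(self.slope * d)

    def loglik(self, params, data, model, noise):
        # Bernoulli log-likelihood over per-trial predictions.
        refs, probes, responses = data.to_numpy()
        ps = jnp.array([self.predict(params, (r, p), model, noise)
                        for r, p in zip(refs, probes)])
        eps = 1e-9
        return jnp.sum(jnp.where(responses == 1,
                                 jnp.log(ps + eps),
                                 jnp.log(1.0 - ps + eps)))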

Classes:

Name Description
OddityTask

Three-alternative forced-choice oddity task (MVP placeholder): "pick the odd one out."

TaskLikelihood

Abstract base class for task likelihoods

TwoAFC

2-alternative forced-choice task (MVP placeholder).

Attributes:

Name Type Description
Stimulus

Stimulus

Stimulus = Tuple[ndarray, ndarray]
