Utils


utils

Shared utility functions and helpers for psyphy.

This subpackage provides:
  • candidates : functions for generating candidate stimulus pools.
  • math : mathematical utilities (basis functions, distances, kernels).
  • rng : random number handling for reproducibility.

MVP implementation
  • candidates: grid, Sobol, custom pools.
  • math: Chebyshev basis, Mahalanobis distance, RBF kernel.
  • rng: seed() and split() for JAX PRNG keys.
Full WPPM mode
  • candidates: adaptive refinement around posterior uncertainty.
  • math: richer kernels and basis expansions for Wishart processes.
  • rng: experiment-wide RNG registry.
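
A minimal end-to-end sketch (assumed usage, combining the three submodules via the module paths shown in the source listings below):

>>> import jax.numpy as jnp
>>> from psyphy.utils.rng import seed, split
>>> from psyphy.utils.candidates import grid_candidates
>>> from psyphy.utils.math import mahalanobis_distance
>>> key = seed(0)
>>> k1, k2 = split(key)
>>> ref = jnp.array([0.0, 0.0])
>>> pool = grid_candidates(ref, radii=[0.05, 0.1], directions=8)  # 16 (reference, probe) pairs
>>> d2 = mahalanobis_distance(pool[0][1], ref, jnp.eye(2))  # squared distance of the first probe from the reference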

Functions:

Name Description
chebyshev_basis

Construct the Chebyshev polynomial basis matrix T_0..T_degree evaluated at x.

custom_candidates

Wrap a user-defined list of probes into candidate pairs.

grid_candidates

Generate grid-based candidate probes around a reference.

mahalanobis_distance

Compute squared Mahalanobis distance between x and mean.

rbf_kernel

Radial Basis Function (RBF) kernel between two sets of points.

seed

Create a new PRNG key from an integer seed.

sobol_candidates

Generate Sobol quasi-random candidates within bounds.

split

Split a PRNG key into multiple independent keys.

chebyshev_basis

chebyshev_basis(x: ndarray, degree: int) -> ndarray

Construct the Chebyshev polynomial basis matrix T_0..T_degree evaluated at x.

Parameters:

Name Type Description Default
x ndarray

Input points of shape (N,). For best numerical properties, values should lie in [-1, 1].

required
degree int

Maximum polynomial degree (>= 0). The output includes columns for T_0 through T_degree.

required

Returns:

Type Description
ndarray

Array of shape (N, degree + 1) where column j contains T_j(x).

Raises:

Type Description
ValueError

If degree is negative or x is not 1-D.

Notes

Uses the three-term recurrence:
    T_0(x) = 1
    T_1(x) = x
    T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)
The Chebyshev polynomials are orthogonal on [-1, 1] with weight 1 / sqrt(1 - x^2).

Examples:

>>> import jax.numpy as jnp
>>> x = jnp.linspace(-1, 1, 5)
>>> B = chebyshev_basis(x, degree=3)  # columns: T0, T1, T2, T3
Source code in src/psyphy/utils/math.py
def chebyshev_basis(x: jnp.ndarray, degree: int) -> jnp.ndarray:
    """
    Construct the Chebyshev polynomial basis matrix T_0..T_degree evaluated at x.

    Parameters
    ----------
    x : jnp.ndarray
        Input points of shape (N,). For best numerical properties, values should lie in [-1, 1].
    degree : int
        Maximum polynomial degree (>= 0). The output includes columns for T_0 through T_degree.

    Returns
    -------
    jnp.ndarray
        Array of shape (N, degree + 1) where column j contains T_j(x).

    Raises
    ------
    ValueError
        If `degree` is negative or `x` is not 1-D.

    Notes
    -----
    Uses the three-term recurrence:
        T_0(x) = 1
        T_1(x) = x
        T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)
    The Chebyshev polynomials are orthogonal on [-1, 1] with weight (1 / sqrt(1 - x^2)).

    Examples
    --------
    >>> import jax.numpy as jnp
    >>> x = jnp.linspace(-1, 1, 5)
    >>> B = chebyshev_basis(x, degree=3)  # columns: T0, T1, T2, T3
    """
    if degree < 0:
        raise ValueError("degree must be >= 0")
    if x.ndim != 1:
        raise ValueError("x must be 1-D (shape (N,))")

    # Ensure a floating dtype (Chebyshev recurrences are polynomial in x)
    x = x.astype(jnp.result_type(x, 0.0))

    N = x.shape[0]

    # Handle small degrees explicitly.
    if degree == 0:
        return jnp.ones((N, 1), dtype=x.dtype)
    if degree == 1:
        return jnp.stack([jnp.ones_like(x), x], axis=1)

    # Initialize T0 and T1 columns.
    T0 = jnp.ones_like(x)
    T1 = x

    # Scan to generate T2..T_degree in a JIT-friendly way (avoids Python-side loops).
    def step(carry, _):
        # compute next Chebyshev polynomial
        Tm1, Tm = carry
        Tnext = 2.0 * x * Tm - Tm1
        return (Tm, Tnext), Tnext # new carry, plus an output to collect

    # Jax friendly loop
    (final_Tm1_ignored, final_Tm_ignored), Ts = lax.scan(step, (T0, T1), xs=None, length=degree - 1)
    # Ts has shape (degree-1, N) and holds [T2, T3, ..., T_degree]
    B = jnp.concatenate(
        [T0[:, None], T1[:, None], jnp.swapaxes(Ts, 0, 1)],
        axis=1
    )
    return B

custom_candidates

custom_candidates(
    reference: ndarray, probe_list: List[ndarray]
) -> List[Stimulus]

Wrap a user-defined list of probes into candidate pairs.

Parameters:

Name Type Description Default
reference ndarray, shape (D,)

Reference stimulus.

required
probe_list list of jnp.ndarray

Explicitly chosen probe vectors.

required

Returns:

Type Description
list of Stimulus

Candidate (reference, probe) pairs.

Notes
  • Useful when hardware constraints (monitor gamut, auditory frequencies) restrict the set of valid stimuli.
  • Full WPPM mode: this pool could be pruned or expanded dynamically depending on posterior fit quality.
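
A minimal usage sketch (assumed import path, matching the source file listed below):

>>> import jax.numpy as jnp
>>> from psyphy.utils.candidates import custom_candidates
>>> ref = jnp.array([0.5, 0.5])
>>> probes = [jnp.array([0.55, 0.5]), jnp.array([0.5, 0.55])]
>>> pool = custom_candidates(ref, probes)
>>> len(pool)
2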
Source code in src/psyphy/utils/candidates.py
def custom_candidates(reference: jnp.ndarray, probe_list: List[jnp.ndarray]) -> List[Stimulus]:
    """
    Wrap a user-defined list of probes into candidate pairs.

    Parameters
    ----------
    reference : jnp.ndarray, shape (D,)
        Reference stimulus.
    probe_list : list of jnp.ndarray
        Explicitly chosen probe vectors.

    Returns
    -------
    list of Stimulus
        Candidate (reference, probe) pairs.

    Notes
    -----
    - Useful when hardware constraints (monitor gamut, auditory frequencies)
      restrict the set of valid stimuli.
    - Full WPPM mode: this pool could be pruned or expanded dynamically
      depending on posterior fit quality.
    """
    return [(reference, probe) for probe in probe_list]

grid_candidates

grid_candidates(
    reference: ndarray,
    radii: List[float],
    directions: int = 16,
) -> List[Stimulus]

Generate grid-based candidate probes around a reference.

Parameters:

Name Type Description Default
reference ndarray, shape (D,)

Reference stimulus in model space.

required
radii list of float

Distances from reference to probe.

required
directions int

Number of angular directions.

16

Returns:

Type Description
list of Stimulus

Candidate (reference, probe) pairs.

Notes
  • MVP: probes lie on concentric circles around reference.
  • Full WPPM mode: could adaptively refine grid around regions of high posterior uncertainty.
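
A minimal usage sketch (assumed import path; the MVP places probes on circles in a 2-D plane):

>>> import jax.numpy as jnp
>>> from psyphy.utils.candidates import grid_candidates
>>> ref = jnp.array([0.0, 0.0])
>>> pool = grid_candidates(ref, radii=[0.1, 0.2], directions=8)
>>> len(pool)  # 2 radii x 8 directions
16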
Source code in src/psyphy/utils/candidates.py
def grid_candidates(reference: jnp.ndarray, radii: List[float], directions: int = 16) -> List[Stimulus]:
    """
    Generate grid-based candidate probes around a reference.

    Parameters
    ----------
    reference : jnp.ndarray, shape (D,)
        Reference stimulus in model space.
    radii : list of float
        Distances from reference to probe.
    directions : int, default=16
        Number of angular directions.

    Returns
    -------
    list of Stimulus
        Candidate (reference, probe) pairs.

    Notes
    -----
    - MVP: probes lie on concentric circles around reference.
    - Full WPPM mode: could adaptively refine grid around regions of
      high posterior uncertainty.
    """
    candidates = []
    angles = jnp.linspace(0, 2 * jnp.pi, directions, endpoint=False)
    for r in radii:
        probes = [reference + r * jnp.array([jnp.cos(a), jnp.sin(a)]) for a in angles]
        candidates.extend([(reference, p) for p in probes])
    return candidates

mahalanobis_distance

mahalanobis_distance(
    x: ndarray, mean: ndarray, cov_inv: ndarray
) -> ndarray

Compute squared Mahalanobis distance between x and mean.

Parameters:

Name Type Description Default
x ndarray

Data vector, shape (D,).

required
mean ndarray

Mean vector, shape (D,).

required
cov_inv ndarray

Inverse covariance matrix, shape (D, D).

required

Returns:

Type Description
ndarray

Scalar squared Mahalanobis distance.

Notes
  • Formula: d^2 = (x - mean)^T Σ^{-1} (x - mean)
  • Used in WPPM discriminability calculations.
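
A minimal usage sketch (assumed import path); with an identity inverse covariance the result reduces to the squared Euclidean distance:

>>> import jax.numpy as jnp
>>> from psyphy.utils.math import mahalanobis_distance
>>> x = jnp.array([1.0, 2.0])
>>> mean = jnp.array([0.0, 0.0])
>>> float(mahalanobis_distance(x, mean, jnp.eye(2)))
5.0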
Source code in src/psyphy/utils/math.py
def mahalanobis_distance(x: jnp.ndarray, mean: jnp.ndarray, cov_inv: jnp.ndarray) -> jnp.ndarray:
    """
    Compute squared Mahalanobis distance between x and mean.

    Parameters
    ----------
    x : jnp.ndarray
        Data vector, shape (D,).
    mean : jnp.ndarray
        Mean vector, shape (D,).
    cov_inv : jnp.ndarray
        Inverse covariance matrix, shape (D, D).

    Returns
    -------
    jnp.ndarray
        Scalar squared Mahalanobis distance.

    Notes
    -----
    - Formula: d^2 = (x - mean)^T Σ^{-1} (x - mean)
    - Used in WPPM discriminability calculations.
    """
    delta = x - mean
    return jnp.dot(delta, cov_inv @ delta)

rbf_kernel

rbf_kernel(
    x1: ndarray, x2: ndarray, lengthscale: float = 1.0
) -> ndarray

Radial Basis Function (RBF) kernel between two sets of points.

Parameters:

Name Type Description Default
x1 ndarray

First set of points, shape (N, D).

required
x2 ndarray

Second set of points, shape (M, D).

required
lengthscale float

Length-scale parameter controlling smoothness.

1.0

Returns:

Type Description
ndarray

Kernel matrix of shape (N, M).

Notes
  • RBF kernel: k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))
  • Default kernel for Gaussian-process-style smooth covariance priors in Full WPPM mode.
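
A minimal usage sketch (assumed import path), showing the shape of the kernel matrix:

>>> import jax.numpy as jnp
>>> from psyphy.utils.math import rbf_kernel
>>> x1 = jnp.zeros((3, 2))
>>> x2 = jnp.ones((4, 2))
>>> rbf_kernel(x1, x2, lengthscale=1.0).shape
(3, 4)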
Source code in src/psyphy/utils/math.py
def rbf_kernel(x1: jnp.ndarray, x2: jnp.ndarray, lengthscale: float = 1.0) -> jnp.ndarray:
    """
    Radial Basis Function (RBF) kernel between two sets of points.


    Parameters
    ----------
    x1 : jnp.ndarray
        First set of points, shape (N, D).
    x2 : jnp.ndarray
        Second set of points, shape (M, D).
    lengthscale : float, default=1.0
        Length-scale parameter controlling smoothness.

    Returns
    -------
    jnp.ndarray
        Kernel matrix of shape (N, M).

    Notes
    -----
    - RBF kernel: k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))
    - Default used for Gaussian processes for smooth covariance priors in Full WPPM mode.
    """
    sqdist = jnp.sum((x1[:, None, :] - x2[None, :, :])**2, axis=-1)
    return jnp.exp(-0.5 * sqdist / (lengthscale**2))

seed

seed(seed_value: int) -> KeyArray

Create a new PRNG key from an integer seed.

Parameters:

Name Type Description Default
seed_value int

Seed for random number generation.

required

Returns:

Type Description
KeyArray

New PRNG key.

Source code in src/psyphy/utils/rng.py
def seed(seed_value: int) -> jax.random.KeyArray:
    """
    Create a new PRNG key from an integer seed.

    Parameters
    ----------
    seed_value : int
        Seed for random number generation.

    Returns
    -------
    jax.random.KeyArray
        New PRNG key.
    """
    return jr.PRNGKey(seed_value)

sobol_candidates

sobol_candidates(
    reference: ndarray,
    n: int,
    bounds: List[Tuple[float, float]],
    seed: int = 0,
) -> List[Stimulus]

Generate Sobol quasi-random candidates within bounds.

Parameters:

Name Type Description Default
reference ndarray, shape (D,)

Reference stimulus.

required
n int

Number of candidates to generate.

required
bounds list of (low, high)

Bounds per dimension.

required
seed int

Random seed.

0

Returns:

Type Description
list of Stimulus

Candidate (reference, probe) pairs.

Notes
  • MVP: uniform coverage of space using low-discrepancy Sobol sequence.
  • Full WPPM mode: Sobol could be used for initialization before handing off to posterior-aware strategies.
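
A minimal usage sketch (assumed import path; requires scipy for the Sobol engine):

>>> import jax.numpy as jnp
>>> from psyphy.utils.candidates import sobol_candidates
>>> ref = jnp.array([0.0, 0.0])
>>> pool = sobol_candidates(ref, n=8, bounds=[(-1.0, 1.0), (-1.0, 1.0)], seed=0)
>>> len(pool)
8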
Source code in src/psyphy/utils/candidates.py
def sobol_candidates(reference: jnp.ndarray, n: int, bounds: List[Tuple[float, float]], seed: int = 0) -> List[Stimulus]:
    """
    Generate Sobol quasi-random candidates within bounds.

    Parameters
    ----------
    reference : jnp.ndarray, shape (D,)
        Reference stimulus.
    n : int
        Number of candidates to generate.
    bounds : list of (low, high)
        Bounds per dimension.
    seed : int, default=0
        Random seed.

    Returns
    -------
    list of Stimulus
        Candidate (reference, probe) pairs.

    Notes
    -----
    - MVP: uniform coverage of space using low-discrepancy Sobol sequence.
    - Full WPPM mode: Sobol could be used for initialization,
      then hand off to posterior-aware strategies.
    """
    from scipy.stats.qmc import Sobol
    dim = len(bounds)
    engine = Sobol(d=dim, scramble=True, seed=seed)
    raw = engine.random(n)
    scaled = [low + (high - low) * raw[:, i] for i, (low, high) in enumerate(bounds)]
    probes = np.stack(scaled, axis=-1)
    return [(reference, jnp.array(p)) for p in probes]

split

split(key: KeyArray, num: int = 2) -> Tuple[KeyArray, ...]

Split a PRNG key into multiple independent keys.

Parameters:

Name Type Description Default
key KeyArray

RNG key to split.

required
num int

Number of new keys to return.

2

Returns:

Type Description
tuple of jax.random.KeyArray

Independent new PRNG keys.

Source code in src/psyphy/utils/rng.py
def split(key: jax.random.KeyArray, num: int = 2) -> Tuple[jax.random.KeyArray, ...]:
    """
    Split a PRNG key into multiple independent keys.

    Parameters
    ----------
    key : jax.random.KeyArray
        RNG key to split.
    num : int, default=2
        Number of new keys to return.

    Returns
    -------
    tuple of jax.random.KeyArray
        Independent new PRNG keys.
    """
    return jr.split(key, num=num)

RNG


rng

rng.py

Random number utilities for psyphy.

This module standardizes RNG handling across the package, which is especially important when mixing NumPy and JAX.

MVP implementation
  • Wrappers around JAX PRNG keys.
  • Helpers for reproducibility.

Future extensions
  • Experiment-wide RNG registry.
  • Splitting strategies for parallel adaptive placement.

Examples:

>>> import jax
>>> from psyphy.utils.rng import seed, split
>>> key = seed(0)
>>> k1, k2 = split(key)

Functions:

Name Description
seed

Create a new PRNG key from an integer seed.

split

Split a PRNG key into multiple independent keys.

seed

seed(seed_value: int) -> KeyArray

Create a new PRNG key from an integer seed.

Parameters:

Name Type Description Default
seed_value int

Seed for random number generation.

required

Returns:

Type Description
KeyArray

New PRNG key.

Source code in src/psyphy/utils/rng.py
def seed(seed_value: int) -> jax.random.KeyArray:
    """
    Create a new PRNG key from an integer seed.

    Parameters
    ----------
    seed_value : int
        Seed for random number generation.

    Returns
    -------
    jax.random.KeyArray
        New PRNG key.
    """
    return jr.PRNGKey(seed_value)

split

split(key: KeyArray, num: int = 2) -> Tuple[KeyArray, ...]

Split a PRNG key into multiple independent keys.

Parameters:

Name Type Description Default
key KeyArray

RNG key to split.

required
num int

Number of new keys to return.

2

Returns:

Type Description
tuple of jax.random.KeyArray

Independent new PRNG keys.

Source code in src/psyphy/utils/rng.py
def split(key: jax.random.KeyArray, num: int = 2) -> Tuple[jax.random.KeyArray, ...]:
    """
    Split a PRNG key into multiple independent keys.

    Parameters
    ----------
    key : jax.random.KeyArray
        RNG key to split.
    num : int, default=2
        Number of new keys to return.

    Returns
    -------
    tuple of jax.random.KeyArray
        Independent new PRNG keys.
    """
    return jr.split(key, num=num)

Math


math

math.py

Math utilities for psyphy.

Includes:
  • chebyshev_basis : compute Chebyshev polynomial basis.
  • mahalanobis_distance : discriminability metric used in WPPM MVP.
  • rbf_kernel : kernel function, useful in Full WPPM mode covariance priors.

All functions use JAX (jax.numpy) for compatibility with autodiff.

Notes
  • math.chebyshev_basis is relevant when implementing Full WPPM mode, where covariance fields are expressed in a basis expansion.
  • math.mahalanobis_distance is directly used in WPPM MVP discriminability.
  • math.rbf_kernel is a placeholder for Gaussian-process-style covariance priors.

Examples:

>>> import jax.numpy as jnp
>>> from psyphy.utils import math
>>> x = jnp.linspace(-1, 1, 5)
>>> math.chebyshev_basis(x, degree=3).shape
(5, 4)

Functions:

Name Description
chebyshev_basis

Construct the Chebyshev polynomial basis matrix T_0..T_degree evaluated at x.

mahalanobis_distance

Compute squared Mahalanobis distance between x and mean.

rbf_kernel

Radial Basis Function (RBF) kernel between two sets of points.

chebyshev_basis

chebyshev_basis(x: ndarray, degree: int) -> ndarray

Construct the Chebyshev polynomial basis matrix T_0..T_degree evaluated at x.

Parameters:

Name Type Description Default
x ndarray

Input points of shape (N,). For best numerical properties, values should lie in [-1, 1].

required
degree int

Maximum polynomial degree (>= 0). The output includes columns for T_0 through T_degree.

required

Returns:

Type Description
ndarray

Array of shape (N, degree + 1) where column j contains T_j(x).

Raises:

Type Description
ValueError

If degree is negative or x is not 1-D.

Notes

Uses the three-term recurrence:
    T_0(x) = 1
    T_1(x) = x
    T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)
The Chebyshev polynomials are orthogonal on [-1, 1] with weight 1 / sqrt(1 - x^2).

Examples:

>>> import jax.numpy as jnp
>>> x = jnp.linspace(-1, 1, 5)
>>> B = chebyshev_basis(x, degree=3)  # columns: T0, T1, T2, T3
Source code in src/psyphy/utils/math.py
def chebyshev_basis(x: jnp.ndarray, degree: int) -> jnp.ndarray:
    """
    Construct the Chebyshev polynomial basis matrix T_0..T_degree evaluated at x.

    Parameters
    ----------
    x : jnp.ndarray
        Input points of shape (N,). For best numerical properties, values should lie in [-1, 1].
    degree : int
        Maximum polynomial degree (>= 0). The output includes columns for T_0 through T_degree.

    Returns
    -------
    jnp.ndarray
        Array of shape (N, degree + 1) where column j contains T_j(x).

    Raises
    ------
    ValueError
        If `degree` is negative or `x` is not 1-D.

    Notes
    -----
    Uses the three-term recurrence:
        T_0(x) = 1
        T_1(x) = x
        T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)
    The Chebyshev polynomials are orthogonal on [-1, 1] with weight (1 / sqrt(1 - x^2)).

    Examples
    --------
    >>> import jax.numpy as jnp
    >>> x = jnp.linspace(-1, 1, 5)
    >>> B = chebyshev_basis(x, degree=3)  # columns: T0, T1, T2, T3
    """
    if degree < 0:
        raise ValueError("degree must be >= 0")
    if x.ndim != 1:
        raise ValueError("x must be 1-D (shape (N,))")

    # Ensure a floating dtype (Chebyshev recurrences are polynomial in x)
    x = x.astype(jnp.result_type(x, 0.0))

    N = x.shape[0]

    # Handle small degrees explicitly.
    if degree == 0:
        return jnp.ones((N, 1), dtype=x.dtype)
    if degree == 1:
        return jnp.stack([jnp.ones_like(x), x], axis=1)

    # Initialize T0 and T1 columns.
    T0 = jnp.ones_like(x)
    T1 = x

    # Scan to generate T2..T_degree in a JIT-friendly way (avoids Python-side loops).
    def step(carry, _):
        # compute next Chebyshev polynomial
        Tm1, Tm = carry
        Tnext = 2.0 * x * Tm - Tm1
        return (Tm, Tnext), Tnext # new carry, plus an output to collect

    # Jax friendly loop
    (final_Tm1_ignored, final_Tm_ignored), Ts = lax.scan(step, (T0, T1), xs=None, length=degree - 1)
    # Ts has shape (degree-1, N) and holds [T2, T3, ..., T_degree]
    B = jnp.concatenate(
        [T0[:, None], T1[:, None], jnp.swapaxes(Ts, 0, 1)],
        axis=1
    )
    return B

mahalanobis_distance

mahalanobis_distance(
    x: ndarray, mean: ndarray, cov_inv: ndarray
) -> ndarray

Compute squared Mahalanobis distance between x and mean.

Parameters:

Name Type Description Default
x ndarray

Data vector, shape (D,).

required
mean ndarray

Mean vector, shape (D,).

required
cov_inv ndarray

Inverse covariance matrix, shape (D, D).

required

Returns:

Type Description
ndarray

Scalar squared Mahalanobis distance.

Notes
  • Formula: d^2 = (x - mean)^T Σ^{-1} (x - mean)
  • Used in WPPM discriminability calculations.
Source code in src/psyphy/utils/math.py
def mahalanobis_distance(x: jnp.ndarray, mean: jnp.ndarray, cov_inv: jnp.ndarray) -> jnp.ndarray:
    """
    Compute squared Mahalanobis distance between x and mean.

    Parameters
    ----------
    x : jnp.ndarray
        Data vector, shape (D,).
    mean : jnp.ndarray
        Mean vector, shape (D,).
    cov_inv : jnp.ndarray
        Inverse covariance matrix, shape (D, D).

    Returns
    -------
    jnp.ndarray
        Scalar squared Mahalanobis distance.

    Notes
    -----
    - Formula: d^2 = (x - mean)^T Σ^{-1} (x - mean)
    - Used in WPPM discriminability calculations.
    """
    delta = x - mean
    return jnp.dot(delta, cov_inv @ delta)

rbf_kernel

rbf_kernel(
    x1: ndarray, x2: ndarray, lengthscale: float = 1.0
) -> ndarray

Radial Basis Function (RBF) kernel between two sets of points.

Parameters:

Name Type Description Default
x1 ndarray

First set of points, shape (N, D).

required
x2 ndarray

Second set of points, shape (M, D).

required
lengthscale float

Length-scale parameter controlling smoothness.

1.0

Returns:

Type Description
ndarray

Kernel matrix of shape (N, M).

Notes
  • RBF kernel: k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))
  • Default kernel for Gaussian-process-style smooth covariance priors in Full WPPM mode.
Source code in src/psyphy/utils/math.py
def rbf_kernel(x1: jnp.ndarray, x2: jnp.ndarray, lengthscale: float = 1.0) -> jnp.ndarray:
    """
    Radial Basis Function (RBF) kernel between two sets of points.


    Parameters
    ----------
    x1 : jnp.ndarray
        First set of points, shape (N, D).
    x2 : jnp.ndarray
        Second set of points, shape (M, D).
    lengthscale : float, default=1.0
        Length-scale parameter controlling smoothness.

    Returns
    -------
    jnp.ndarray
        Kernel matrix of shape (N, M).

    Notes
    -----
    - RBF kernel: k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))
    - Default used for Gaussian processes for smooth covariance priors in Full WPPM mode.
    """
    sqdist = jnp.sum((x1[:, None, :] - x2[None, :, :])**2, axis=-1)
    return jnp.exp(-0.5 * sqdist / (lengthscale**2))


Stimulus candidates


candidates

candidates.py

Utilities for generating candidate stimulus pools.

Definition

A candidate pool is the set of all possible (reference, probe) pairs that an adaptive placement strategy may select from.

Separation of concerns
  • Candidate generation (this module) defines what stimuli are possible.
  • Trial placement strategies (e.g., GreedyMAPPlacement, InfoGainPlacement) define which of those candidates to present next.
Why this matters
  • Researchers: think of the candidate pool as the "menu" of allowable trials.
  • Developers: placement strategies should not generate candidates but only select from a given pool.
MVP implementation
  • Grid-based candidates (probes on circles around a reference).
  • Sobol sequence candidates (low-discrepancy exploration).
  • Custom user-defined candidate pools.
Full WPPM mode
  • Candidate generation could adaptively refine itself based on posterior uncertainty (e.g., dynamic grids).
  • Candidate pools could be constrained by device gamut or subject-specific calibration.
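
A minimal sketch of this separation of concerns (the placement call is illustrative only; the placement strategies' APIs are documented elsewhere):

>>> import jax.numpy as jnp
>>> from psyphy.utils.candidates import grid_candidates
>>> ref = jnp.array([0.0, 0.0])
>>> pool = grid_candidates(ref, radii=[0.1], directions=4)  # the "menu" of allowable (reference, probe) trials
>>> reference, probe = pool[0]  # in practice, a strategy such as GreedyMAPPlacement selects from this pool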

Functions:

Name Description
custom_candidates

Wrap a user-defined list of probes into candidate pairs.

grid_candidates

Generate grid-based candidate probes around a reference.

sobol_candidates

Generate Sobol quasi-random candidates within bounds.

Attributes:

Name Type Description
Stimulus

Stimulus

Stimulus = Tuple[ndarray, ndarray]

custom_candidates

custom_candidates(
    reference: ndarray, probe_list: List[ndarray]
) -> List[Stimulus]

Wrap a user-defined list of probes into candidate pairs.

Parameters:

Name Type Description Default
reference ndarray, shape (D,)

Reference stimulus.

required
probe_list list of jnp.ndarray

Explicitly chosen probe vectors.

required

Returns:

Type Description
list of Stimulus

Candidate (reference, probe) pairs.

Notes
  • Useful when hardware constraints (monitor gamut, auditory frequencies) restrict the set of valid stimuli.
  • Full WPPM mode: this pool could be pruned or expanded dynamically depending on posterior fit quality.
Source code in src/psyphy/utils/candidates.py
def custom_candidates(reference: jnp.ndarray, probe_list: List[jnp.ndarray]) -> List[Stimulus]:
    """
    Wrap a user-defined list of probes into candidate pairs.

    Parameters
    ----------
    reference : jnp.ndarray, shape (D,)
        Reference stimulus.
    probe_list : list of jnp.ndarray
        Explicitly chosen probe vectors.

    Returns
    -------
    list of Stimulus
        Candidate (reference, probe) pairs.

    Notes
    -----
    - Useful when hardware constraints (monitor gamut, auditory frequencies)
      restrict the set of valid stimuli.
    - Full WPPM mode: this pool could be pruned or expanded dynamically
      depending on posterior fit quality.
    """
    return [(reference, probe) for probe in probe_list]

grid_candidates

grid_candidates(
    reference: ndarray,
    radii: List[float],
    directions: int = 16,
) -> List[Stimulus]

Generate grid-based candidate probes around a reference.

Parameters:

Name Type Description Default
reference ndarray, shape (D,)

Reference stimulus in model space.

required
radii list of float

Distances from reference to probe.

required
directions int

Number of angular directions.

16

Returns:

Type Description
list of Stimulus

Candidate (reference, probe) pairs.

Notes
  • MVP: probes lie on concentric circles around reference.
  • Full WPPM mode: could adaptively refine grid around regions of high posterior uncertainty.
Source code in src/psyphy/utils/candidates.py
def grid_candidates(reference: jnp.ndarray, radii: List[float], directions: int = 16) -> List[Stimulus]:
    """
    Generate grid-based candidate probes around a reference.

    Parameters
    ----------
    reference : jnp.ndarray, shape (D,)
        Reference stimulus in model space.
    radii : list of float
        Distances from reference to probe.
    directions : int, default=16
        Number of angular directions.

    Returns
    -------
    list of Stimulus
        Candidate (reference, probe) pairs.

    Notes
    -----
    - MVP: probes lie on concentric circles around reference.
    - Full WPPM mode: could adaptively refine grid around regions of
      high posterior uncertainty.
    """
    candidates = []
    angles = jnp.linspace(0, 2 * jnp.pi, directions, endpoint=False)
    for r in radii:
        probes = [reference + r * jnp.array([jnp.cos(a), jnp.sin(a)]) for a in angles]
        candidates.extend([(reference, p) for p in probes])
    return candidates

sobol_candidates

sobol_candidates(
    reference: ndarray,
    n: int,
    bounds: List[Tuple[float, float]],
    seed: int = 0,
) -> List[Stimulus]

Generate Sobol quasi-random candidates within bounds.

Parameters:

Name Type Description Default
reference ndarray, shape (D,)

Reference stimulus.

required
n int

Number of candidates to generate.

required
bounds list of (low, high)

Bounds per dimension.

required
seed int

Random seed.

0

Returns:

Type Description
list of Stimulus

Candidate (reference, probe) pairs.

Notes
  • MVP: uniform coverage of space using low-discrepancy Sobol sequence.
  • Full WPPM mode: Sobol could be used for initialization before handing off to posterior-aware strategies.
Source code in src/psyphy/utils/candidates.py
def sobol_candidates(reference: jnp.ndarray, n: int, bounds: List[Tuple[float, float]], seed: int = 0) -> List[Stimulus]:
    """
    Generate Sobol quasi-random candidates within bounds.

    Parameters
    ----------
    reference : jnp.ndarray, shape (D,)
        Reference stimulus.
    n : int
        Number of candidates to generate.
    bounds : list of (low, high)
        Bounds per dimension.
    seed : int, default=0
        Random seed.

    Returns
    -------
    list of Stimulus
        Candidate (reference, probe) pairs.

    Notes
    -----
    - MVP: uniform coverage of space using low-discrepancy Sobol sequence.
    - Full WPPM mode: Sobol could be used for initialization,
      then hand off to posterior-aware strategies.
    """
    from scipy.stats.qmc import Sobol
    dim = len(bounds)
    engine = Sobol(d=dim, scramble=True, seed=seed)
    raw = engine.random(n)
    scaled = [low + (high - low) * raw[:, i] for i, (low, high) in enumerate(bounds)]
    probes = np.stack(scaled, axis=-1)
    return [(reference, jnp.array(p)) for p in probes]