velovi.VELOVAE#
- class velovi.VELOVAE(n_input, true_time_switch=None, n_hidden=128, n_latent=10, n_layers=1, dropout_rate=0.1, log_variational=False, latent_distribution='normal', use_batch_norm='both', use_layer_norm='both', use_observed_lib_size=True, var_activation=Softplus(beta=1, threshold=20), model_steady_states=True, gamma_unconstr_init=None, alpha_unconstr_init=None, alpha_1_unconstr_init=None, lambda_alpha_unconstr_init=None, switch_spliced=None, switch_unspliced=None, t_max=20, penalty_scale=0.2, dirichlet_concentration=0.25, linear_decoder=False, time_dep_transcription_rate=False)[source]#
Bases:
BaseModuleClass
Variational auto-encoder model.
This is an implementation of the veloVI model described in [Gayoso et al., 2022].
- Parameters:
n_input (int) – Number of input genes
n_hidden (int) – Number of nodes per hidden layer
n_latent (int) – Dimensionality of the latent space
n_layers (int) – Number of hidden layers used for encoder and decoder NNs
dropout_rate (float) – Dropout rate for neural networks
log_variational (bool) – Log(data+1) prior to encoding for numerical stability. Not normalization.
latent_distribution (str) –
One of:
'normal' – Isotropic normal
'ln' – Logistic normal with normal params N(0, 1)
use_layer_norm (Literal['encoder', 'decoder', 'none', 'both']) – Whether to use layer norm in layers
use_observed_lib_size (bool) – Use observed library size for RNA as scaling factor in mean of conditional distribution
var_activation (Callable | None) – Callable used to ensure positivity of the variational distributions’ variance. When None, defaults to torch.exp.
true_time_switch (ndarray | None) –
use_batch_norm (Literal['encoder', 'decoder', 'none', 'both']) –
model_steady_states (bool) –
gamma_unconstr_init (ndarray | None) –
alpha_unconstr_init (ndarray | None) –
alpha_1_unconstr_init (ndarray | None) –
lambda_alpha_unconstr_init (ndarray | None) –
switch_spliced (ndarray | None) –
switch_unspliced (ndarray | None) –
t_max (float) –
penalty_scale (float) –
dirichlet_concentration (float) –
linear_decoder (bool) –
time_dep_transcription_rate (bool) –
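The var_activation parameter maps the encoder's unconstrained output to a strictly positive variance. A minimal numpy sketch of the two common choices, the default Softplus (with beta and threshold as in torch.nn.Softplus) and the exponential used when var_activation is None; this is an illustration, not velovi's code:

```python
import numpy as np

def softplus(x, beta=1.0, threshold=20.0):
    """Numerically stable softplus, mirroring torch.nn.Softplus:
    returns x directly when beta * x > threshold to avoid overflow."""
    x = np.asarray(x, dtype=float)
    return np.where(x * beta > threshold, x, np.log1p(np.exp(beta * x)) / beta)

# Both activations map any real-valued encoder output to a strictly
# positive value, as a Normal posterior's variance requires.
raw = np.array([-5.0, 0.0, 5.0])
print(softplus(raw))  # all entries > 0
print(np.exp(raw))    # all entries > 0
```

Softplus grows linearly for large inputs instead of exponentially, which tends to keep variance estimates better behaved during training.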
Attributes table#
Methods table#
- generative – Runs the generative model.
- get_loadings – Extract per-gene weights (for each Z, shape is genes by dim(Z)) in the linear decoder.
- get_px –
- inference – High level inference method.
- loss – Compute the loss for a minibatch of data.
- sample – Not implemented.
Attributes#
Methods#
- VELOVAE.generative(z, gamma, beta, alpha, alpha_1, lambda_alpha, latent_dim=None)[source]#
Runs the generative model.
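The generative model's kinetic parameters (alpha, beta, gamma) are the transcription, splicing, and degradation rates of the standard RNA-velocity ODEs for unspliced (u) and spliced (s) abundance. A hedged sketch of the classic closed-form induction solution underlying such models (the textbook kinetics, not velovi's exact internals):

```python
import numpy as np

def induction(t, alpha, beta, gamma, u0=0.0, s0=0.0):
    """Closed-form solution of du/dt = alpha - beta*u and
    ds/dt = beta*u - gamma*s, starting at (u0, s0) when t = 0.
    Assumes beta != gamma."""
    expu, exps = np.exp(-beta * t), np.exp(-gamma * t)
    u = u0 * expu + alpha / beta * (1.0 - expu)
    s = (s0 * exps
         + alpha / gamma * (1.0 - exps)
         + (alpha - beta * u0) / (gamma - beta) * (exps - expu))
    return u, s

# For large t the system approaches steady state:
# u -> alpha / beta, s -> alpha / gamma.
u, s = induction(t=50.0, alpha=2.0, beta=0.5, gamma=0.25)
print(u, s)  # close to 4.0 and 8.0
```

This steady-state limit is what model_steady_states refers to: cells far along the induction (or repression) branch sit near these fixed points.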
- VELOVAE.get_loadings()[source]#
Extract per-gene weights (for each Z, shape is genes by dim(Z)) in the linear decoder.
- Return type:
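With linear_decoder=True, each latent dimension maps to genes through a single weight matrix, so per-gene loadings can be read directly off the decoder. A toy numpy sketch of the idea; the matrix W is hypothetical, standing in for the decoder's linear layer:

```python
import numpy as np

n_genes, n_latent = 5, 3
rng = np.random.default_rng(0)

# Hypothetical linear-decoder weight matrix: W[g, k] is the
# contribution of latent dimension k to gene g.
W = rng.normal(size=(n_genes, n_latent))

loadings = W  # shape (genes, dim(Z)), as the docstring describes
# e.g. the gene most strongly loaded on each latent dimension:
top_gene_per_dim = np.argmax(np.abs(loadings), axis=0)
print(loadings.shape, top_gene_per_dim.shape)
```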
- VELOVAE.get_px(px_pi, px_rho, px_tau, scale, gamma, beta, alpha, alpha_1, lambda_alpha)[source]#
- Return type:
- VELOVAE.inference(spliced, unspliced, n_samples=1)[source]#
High level inference method.
Runs the inference (encoder) model.
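Conceptually, inference encodes the spliced and unspliced counts into the parameters of a Normal posterior over z and draws a reparameterized sample. A toy numpy sketch of that flow (a single hypothetical linear layer stands in for the real encoder network):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes, n_latent = 4, 6, 2

spliced = rng.poisson(5.0, size=(n_cells, n_genes)).astype(float)
unspliced = rng.poisson(2.0, size=(n_cells, n_genes)).astype(float)

# log1p the inputs (cf. log_variational) and concatenate both layers.
x = np.log1p(np.concatenate([spliced, unspliced], axis=1))

# Hypothetical one-layer "encoder" producing mean and raw variance.
W_mu = rng.normal(scale=0.1, size=(2 * n_genes, n_latent))
W_var = rng.normal(scale=0.1, size=(2 * n_genes, n_latent))
qz_m = x @ W_mu
qz_v = np.log1p(np.exp(x @ W_var))  # softplus keeps the variance positive

# Reparameterization trick: z = mu + sqrt(var) * eps, eps ~ N(0, I).
eps = rng.normal(size=qz_m.shape)
z = qz_m + np.sqrt(qz_v) * eps
print(z.shape)  # (4, 2)
```

The reparameterization keeps sampling differentiable with respect to the encoder parameters, which is what lets the VAE train end to end.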
- VELOVAE.loss(tensors, inference_outputs, generative_outputs, kl_weight=1.0, n_obs=1.0)[source]#
Compute the loss for a minibatch of data.
This function uses the outputs of the inference and generative functions to compute a loss. This may optionally include other penalty terms, which should be computed here.
This function should return an object of type LossOutput.
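The loss combines a reconstruction term with a KL divergence to the prior, where kl_weight scales the KL term (useful for annealing) and n_obs rescales minibatch quantities toward the full dataset. A schematic numpy version of this pattern; a Gaussian squared-error term stands in for velovi's actual likelihood:

```python
import numpy as np

def kl_normal_std(mu, var):
    """KL( N(mu, var) || N(0, 1) ) per element."""
    return 0.5 * (var + mu**2 - 1.0 - np.log(var))

def minibatch_loss(x, x_rec, qz_m, qz_v, kl_weight=1.0):
    # Reconstruction: squared error as a stand-in log-likelihood term.
    rec = np.sum((x - x_rec) ** 2, axis=1)
    # KL between the variational posterior and a standard-normal prior.
    kl = np.sum(kl_normal_std(qz_m, qz_v), axis=1)
    # kl_weight anneals the KL term over the course of training.
    return np.mean(rec + kl_weight * kl)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 6))
x_rec = x + rng.normal(scale=0.1, size=x.shape)
qz_m = rng.normal(scale=0.5, size=(4, 2))
qz_v = np.full((4, 2), 0.8)

print(minibatch_loss(x, x_rec, qz_m, qz_v, kl_weight=1.0))
```

Since the KL term is non-negative, raising kl_weight can only increase the total, which is why annealing typically starts kl_weight near zero and ramps it up.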