API

    Summary statistics

    PosteriorStats.SummaryStats (Type)
    struct SummaryStats{D, V<:(AbstractVector)}

    A container for a column table of values computed by summarize.

    This object implements the Tables and TableTraits column table interfaces. It has a custom show method.

    SummaryStats behaves like an OrderedDict of columns, where the columns can be accessed using either Symbols or a 1-based integer index.

    • name::String: The name of the collection of summary statistics, used as the table title in display.

    • data::Any: The summary statistics for each parameter. It must implement the Tables interface.

    • parameter_names::AbstractVector: Names of the parameters

    SummaryStats([name::String,] data[, parameter_names])
    SummaryStats(data[, parameter_names]; name::String="SummaryStats")

    Construct a SummaryStats from tabular data, with an optional name for the statistics and optional parameter names.

    data must not contain a column :parameter, as this is reserved for the parameter names, which are always in the first column.
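    For illustration, a minimal sketch (the column names and values here are arbitrary) of constructing a SummaryStats from a NamedTuple of columns and accessing a column:

    julia> data = (mean=[0.0, 10.0, 20.0], std=[0.99, 0.98, 0.99]);  # any Tables.jl column table

    julia> stats = SummaryStats(data, [:a, :b, :c]; name="Demo");

    julia> stats[:mean];  # access a column by Symbol (a 1-based integer index also works)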

    PosteriorStats.default_stats (Function)
    default_stats(focus=Statistics.mean; prob_interval=0.94, kwargs...)

    Default statistics to be computed with summarize.

    The value of focus determines the statistics to be returned:

    • Statistics.mean: mean, std, and hdi_94%
    • Statistics.median: median, mad, and eti_94%

    If prob_interval is set to a different value than the default, then different HDI and ETI statistics are computed accordingly. hdi refers to the highest-density interval, while eti refers to the equal-tailed interval.

    See also: hdi, eti
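    As a quick sketch, the returned statistics can be splatted into summarize (just as default_summary_stats is in the summarize examples); with a median focus, the resulting columns would be median, mad, and eti_94%:

    julia> using Statistics

    julia> x = randn(1000, 4, 3);  # (draws, chains, params)

    julia> summarize(x, default_stats(median)...);  # columns: median, mad, eti_94%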

    PosteriorStats.summarize (Function)
    summarize(data, stats_funs...; name="SummaryStats", [var_names]) -> SummaryStats

    Compute the summary statistics in stats_funs on each parameter in data.

    stats_funs is a collection of functions, each of which reduces a matrix with shape (draws, chains) to a scalar or a collection of scalars. Alternatively, an item in stats_funs may be a Pair of the form name => fun, specifying the name to be used for the statistic, or of the form (name1, ...) => fun when the function returns a collection. When the function returns a collection, the names in this latter format must be provided.

    If no stats functions are provided, then those specified in default_summary_stats are computed.

    var_names specifies the names of the parameters in data. If not provided, the names are inferred from data.

    To support computing summary statistics from a custom object, overload this method specifying the type of data.

    See also SummaryStats, default_summary_stats, default_stats, default_diagnostics.

    Examples

    Compute Statistics.mean, Statistics.std and the Monte Carlo standard error (MCSE) of the mean estimate:

    julia> using Statistics, StatsBase
    
    julia> x = randn(1000, 4, 3) .+ reshape(0:10:20, 1, 1, :);
    
    julia> summarize(x, mean, std, :mcse_mean => sem; name="Mean/Std")
    Mean/Std
           mean    std  mcse_mean
     1   0.0003  0.989      0.016
     2  10.02    0.988      0.016
     3  19.98    0.988      0.016

    Avoid recomputing the mean by using StatsBase.mean_and_std, and provide parameter names:

    julia> summarize(x, (:mean, :std) => mean_and_std, mad; var_names=[:a, :b, :c])
    SummaryStats
             mean    std    mad
     a   0.000275  0.989  0.978
     b  10.0       0.988  0.995
     c  20.0       0.988  0.979

    Note that when an estimator and its MCSE are both computed, the MCSE is used to determine the number of significant digits that will be displayed.

    julia> summarize(x; var_names=[:a, :b, :c])
    SummaryStats
           mean   std  hdi_94%        mcse_mean  mcse_std  ess_tail  ess_bulk  rha ⋯
     a   0.0003  0.99  -1.92 .. 1.78      0.016     0.012      3567      3663  1.0 ⋯
     b  10.02    0.99   8.17 .. 11.9      0.016     0.011      3841      3906  1.0 ⋯
     c  19.98    0.99   18.1 .. 21.9      0.016     0.012      3892      3749  1.0 ⋯
                                                                    1 column omitted

    Compute just the statistics with an 89% HDI on all parameters, and provide the parameter names:

    julia> summarize(x, default_stats(; prob_interval=0.89)...; var_names=[:a, :b, :c])
    SummaryStats
             mean    std  hdi_89%
     a   0.000275  0.989  -1.63 .. 1.52
     b  10.0       0.988   8.53 .. 11.6
     c  20.0       0.988   18.5 .. 21.6

    Compute the summary stats focusing on Statistics.median:

    julia> summarize(x, default_summary_stats(median)...; var_names=[:a, :b, :c])
    SummaryStats
        median    mad  eti_94%        mcse_median  ess_tail  ess_median  rhat
     a   0.004  0.978  -1.83 .. 1.89        0.020      3567        3336  1.00
     b  10.02   0.995   8.17 .. 11.9        0.023      3841        3787  1.00
     c  19.99   0.979   18.1 .. 21.9        0.020      3892        3829  1.00

    Compute multiple quantiles simultaneously:

    julia> qs = (0.05, 0.25, 0.5, 0.75, 0.95);
    
    julia> summarize(x, (:q5, :q25, :q50, :q75, :q95) => Base.Fix2(Statistics.quantile, qs))
    SummaryStats
           q5     q25       q50     q75    q95
     1  -1.61  -0.668   0.00447   0.653   1.64
     2   8.41   9.34   10.0      10.7    11.6
     3  18.4   19.3    20.0      20.6    21.6

    Credible intervals

    PosteriorStats.hdi (Function)
    hdi(samples::AbstractVecOrMat{<:Real}; [prob, sorted, method]) -> IntervalSets.ClosedInterval
    hdi(samples::AbstractArray{<:Real}; [prob, sorted, method]) -> Array{<:IntervalSets.ClosedInterval}

    Estimate the highest density interval (HDI) of samples for the probability prob.

    The HDI is the minimum width Bayesian credible interval (BCI). That is, it is the smallest possible interval containing (100*prob)% of the probability mass.[1] This implementation uses the algorithm of Chen and Shao [2].

    See also: hdi!, eti, eti!.

    Arguments

    • samples: an array of shape (draws[, chains[, params...]]). If multiple parameters are present, a marginal HDI is computed for each.

    Keywords

    • prob: the probability mass to be contained in the HDI. Default is 0.94.
    • sorted=false: if true, the input samples are assumed to be sorted.
    • method::Symbol: the method used to estimate the HDI. Available options are:
      • :unimodal: Assumes a unimodal distribution (default). Bounds are entries in samples.
      • :multimodal: Fits a density estimator to samples and finds the HDI of the estimated density. For continuous data, the density estimator is a kernel density estimate (KDE) computed using kde_reflected. For discrete data, a histogram with bin width 1 is used.
      • :multimodal_sample: Like :multimodal, but uses the density estimator to find the entries in samples with the highest density and computes the HDI from those points. This can correct for inaccuracies in the density estimator.
    • is_discrete::Union{Bool,Nothing}=nothing: Specify if the data is discrete (integer-valued). If nothing, it's automatically determined.
    • kwargs: For continuous data and multimodal methods, remaining keywords are forwarded to kde_reflected.

    Returns

    • intervals: If samples is a vector or matrix, then a single IntervalSets.ClosedInterval is returned for :unimodal method, or a vector of ClosedInterval for multimodal methods. For higher dimensional inputs, an array with the shape (params...,) is returned, containing marginal HDIs for each parameter.
    Note

    Any default value of prob is arbitrary. The default value of prob=0.94 instead of a more common default like prob=0.95 is chosen to remind the user of this arbitrariness.

    Examples

    Here we calculate the 83% HDI for a normal random variable:

    julia> x = randn(2_000);
    
    julia> hdi(x; prob=0.83)
    -1.3826605224220527 .. 1.259817149822839

    We can also calculate the HDI for a 3-dimensional array of samples:

    julia> x = randn(1_000, 1, 1) .+ reshape(0:5:10, 1, 1, :);
    
    julia> hdi(x)
    3-element Vector{IntervalSets.ClosedInterval{Float64}}:
     -1.6402043796029502 .. 2.041852066407182
     3.35979562039705 .. 7.041852066407182
     8.35979562039705 .. 12.041852066407182

    For multimodal distributions, you can use the :multimodal method:

    julia> x = vcat(randn(1000), randn(1000) .+ 5);
    
    julia> hdi(x; method=:multimodal)
    2-element Vector{IntervalSets.ClosedInterval{Float64}}:
     -1.8882401079608677 .. 2.0017686164555037
     2.9839268475847436 .. 6.9235952578447275

    References

    • [1] Hyndman, J. Comput. Graph. Stat., 50:2 (1996)
    • [2] Chen & Shao, J Comput. Graph. Stat., 8:1 (1999)
    PosteriorStats.hdi! (Function)
    hdi!(samples::AbstractArray{<:Real}; [prob, sorted])

    A version of hdi that partially sorts samples in-place while computing the HDI.

    See also: hdi, eti, eti!.
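    A minimal sketch (output omitted); the estimate matches hdi, but samples may be partially sorted afterwards:

    julia> x = randn(2_000);

    julia> interval = hdi!(x; prob=0.83);  # same estimate as hdi(x; prob=0.83)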

    PosteriorStats.eti (Function)
    eti(samples::AbstractVecOrMat{<:Real}; [prob, kwargs...]) -> IntervalSets.ClosedInterval
    eti(samples::AbstractArray{<:Real}; [prob, kwargs...]) -> Array{<:IntervalSets.ClosedInterval}

    Estimate the equal-tailed interval (ETI) of samples for the probability prob.

    The ETI of a given probability is the credible interval with the property that the probability of being below the interval is equal to the probability of being above it. That is, it is defined by the (1-prob)/2 and 1 - (1-prob)/2 quantiles of the samples.

    See also: eti!, hdi, hdi!.

    Arguments

    • samples: an array of shape (draws[, chains[, params...]]). If multiple parameters are present, a marginal ETI is computed for each.

    Keywords

    • prob: the probability mass to be contained in the ETI. Default is 0.94.
    • kwargs: remaining keywords are passed to Statistics.quantile.

    Returns

    • intervals: If samples is a vector or matrix, then a single IntervalSets.ClosedInterval is returned. Otherwise, an array with the shape (params...,) is returned, containing a marginal ETI for each parameter.
    Note

    Any default value of prob is arbitrary. The default value of prob=0.94 instead of a more common default like prob=0.95 is chosen to remind the user of this arbitrariness.

    Examples

    Here we calculate the 83% ETI for a normal random variable:

    julia> x = randn(2_000);
    
    julia> eti(x; prob=0.83)
    -1.3740585250299766 .. 1.2860771129421198

    We can also calculate the ETI for a 3-dimensional array of samples:

    julia> x = randn(1_000, 1, 1) .+ reshape(0:5:10, 1, 1, :);
    
    julia> eti(x)
    3-element Vector{IntervalSets.ClosedInterval{Float64}}:
     -1.951006825019686 .. 1.9011666217153793
     3.048993174980314 .. 6.9011666217153795
     8.048993174980314 .. 11.90116662171538
    PosteriorStats.eti! (Function)
    eti!(samples::AbstractArray{<:Real}; [prob, kwargs...])

    A version of eti that partially sorts samples in-place while computing the ETI.

    See also: eti, hdi, hdi!.
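    A minimal sketch (output omitted); the estimate matches eti, but samples may be partially sorted afterwards:

    julia> x = randn(2_000);

    julia> interval = eti!(x; prob=0.83);  # same estimate as eti(x; prob=0.83)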


    LOO and WAIC

    PosteriorStats.AbstractELPDResult (Type)
    abstract type AbstractELPDResult

    An abstract type representing the result of an ELPD computation.

    Every subtype stores estimates of both the expected log predictive density (elpd) and the effective number of parameters p, as well as standard errors and pointwise estimates of each, from which other relevant estimates can be computed.

    Subtypes implement the following functions:

    • elpd_estimates

    PosteriorStats.PSISLOOResult (Type)

    Results of Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO).

    See also: loo, AbstractELPDResult

    • estimates: Estimates of the expected log pointwise predictive density (ELPD) and effective number of parameters (p)

    • pointwise: Pointwise estimates

    • psis_result: A PSIS.PSISResult with Pareto-smoothed importance sampling (PSIS) results

    PosteriorStats.WAICResult (Type)

    Results of computing the widely applicable information criterion (WAIC).

    See also: waic, AbstractELPDResult

    • estimates: Estimates of the expected log pointwise predictive density (ELPD) and effective number of parameters (p)

    • pointwise: Pointwise estimates

    PosteriorStats.elpd_estimates (Function)
    elpd_estimates(result::AbstractELPDResult; pointwise=false) -> (; elpd, se_elpd, lpd)

    Return the (E)LPD estimates from the result.
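    A minimal sketch (output omitted), reusing the centered-eight setup from the loo examples below:

    julia> using ArviZExampleData

    julia> idata = load_example_data("centered_eight");

    julia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));

    julia> result = loo(log_like);

    julia> elpd_estimates(result);                  # overall estimates

    julia> elpd_estimates(result; pointwise=true);  # pointwise estimates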

    PosteriorStats.information_criterion (Function)
    information_criterion(elpd, scale::Symbol)

    Compute the information criterion for the given scale from the elpd estimate.

    scale must be one of (:deviance, :log, :negative_log).

    See also: loo, waic

    information_criterion(result::AbstractELPDResult, scale::Symbol; pointwise=false)

    Compute information criterion for the given scale from the existing ELPD result.

    scale must be one of (:deviance, :log, :negative_log).

    If pointwise=true, then pointwise estimates are returned.
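    A minimal sketch, assuming result is an AbstractELPDResult (e.g. from loo) and the usual scale conventions (:log is the ELPD itself, :negative_log its negation, and :deviance -2 times the ELPD):

    julia> ic = information_criterion(result, :deviance);  # -2 * elpd

    julia> ic_pw = information_criterion(result, :deviance; pointwise=true);  # per-observation values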

    PosteriorStats.loo (Function)
    loo(log_likelihood; reff=nothing, kwargs...) -> PSISLOOResult{<:NamedTuple,<:NamedTuple}

    Compute the Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO). [3, 4]

    log_likelihood must be an array of log-likelihood values with shape (draws, chains[, params...]).

    Keywords

    • reff::Union{Real,AbstractArray{<:Real}}: The relative effective sample size(s) of the likelihood values. If an array, it must have the same data dimensions as the corresponding log-likelihood variable. If not provided, then this is estimated using MCMCDiagnosticTools.ess.
    • kwargs: Remaining keywords are forwarded to PSIS.psis.

    See also: PSISLOOResult, waic

    Examples

    Manually compute $R_\mathrm{eff}$ and calculate PSIS-LOO of a model:

    julia> using ArviZExampleData, MCMCDiagnosticTools
    
    julia> idata = load_example_data("centered_eight");
    
    julia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));
    
    julia> reff = ess(log_like; kind=:basic, split_chains=1, relative=true);
    
    julia> loo(log_like; reff)
    PSISLOOResult with estimates
     elpd  se_elpd    p  se_p
      -31      1.4  0.9  0.33
    
    and PSISResult with 500 draws, 4 chains, and 8 parameters
    Pareto shape (k) diagnostic values:
                        Count      Min. ESS
     (-Inf, 0.5]  good  5 (62.5%)  290
      (0.5, 0.7]  okay  3 (37.5%)  399

    References

    • [3] Vehtari et al. Stat. Comput. 27 (2017).
    • [4] Vehtari. Cross-validation FAQ.
    PosteriorStats.waic (Function)
    waic(log_likelihood::AbstractArray) -> WAICResult{<:NamedTuple,<:NamedTuple}

    Compute the widely applicable information criterion (WAIC). [5]

    log_likelihood must be an array of log-likelihood values with shape (draws, chains[, params...]).

    See also: WAICResult, loo

    Examples

    Calculate WAIC of a model:

    julia> using ArviZExampleData
    
    julia> idata = load_example_data("centered_eight");
    
    julia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));
    
    julia> waic(log_like)
    WAICResult with estimates
     elpd  se_elpd    p  se_p
      -31      1.4  0.9  0.32

    References

    • [5] Watanabe, JMLR 11(116) (2010)

    Model comparison

    PosteriorStats.ModelComparisonResult (Type)
    ModelComparisonResult

    Result of model comparison using ELPD.

    This struct implements the Tables and TableTraits interfaces.

    Each field returns a collection of the corresponding entry for each model:

    • name: Names of the models, if provided.

    • rank: Ranks of the models (ordered by decreasing ELPD)

    • elpd_diff: ELPD of a model subtracted from the largest ELPD of any model

    • se_elpd_diff: Standard error of the ELPD difference

    • weight: Model weights computed with weights_method

    • elpd_result: AbstractELPDResult for each model, which can be used to access useful statistics like ELPD estimates, pointwise estimates, and Pareto shape values for PSIS-LOO

    • weights_method: Method used to compute model weights with model_weights
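    For illustration, given a result such as mc = compare(models) from the compare examples below, each field holds one entry per model:

    julia> mc.rank;         # e.g. (non_centered=1, centered=2)

    julia> mc.elpd_diff;    # difference from the highest ELPD

    julia> mc.elpd_result;  # the underlying ELPD results for each model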

    PosteriorStats.compare (Function)
    compare(models; kwargs...) -> ModelComparisonResult

    Compare models based on their expected log pointwise predictive density (ELPD).

    The ELPD is estimated either by Pareto smoothed importance sampling leave-one-out cross-validation (LOO) or using the widely applicable information criterion (WAIC). loo is the default and recommended method for computing the ELPD. For more theory, see Spiegelhalter et al. [6].

    Arguments

    • models: a Tuple, NamedTuple, or AbstractVector whose values are either AbstractELPDResult entries or any argument to elpd_method.

    Keywords

    • weights_method::AbstractModelWeightsMethod=Stacking(): the method to be used to weight the models. See model_weights for details
    • elpd_method=loo: a method that computes an AbstractELPDResult from an argument in models.
    • sort::Bool=true: Whether to sort models by decreasing ELPD.

    Returns

    • ModelComparisonResult: A container for the model comparison results. The fields contain a similar collection to models.

    Examples

    Compare the centered and non-centered models of the eight schools problem using the defaults: loo and Stacking weights. A custom myloo function formats the inputs as expected by loo.

    julia> using ArviZExampleData
    
    julia> models = (
               centered=load_example_data("centered_eight"),
               non_centered=load_example_data("non_centered_eight"),
           );
    
    julia> function myloo(idata)
               log_like = PermutedDimsArray(idata.log_likelihood.obs, (2, 3, 1))
               return loo(log_like)
           end;
    
    julia> mc = compare(models; elpd_method=myloo)
    ┌ Warning: 1 parameters had Pareto shape values 0.7 < k ≤ 1. Resulting importance sampling estimates are likely to be unstable.
    └ @ PSIS ~/.julia/packages/PSIS/...
    ModelComparisonResult with Stacking weights
                   rank  elpd  se_elpd  elpd_diff  se_elpd_diff  weight    p  se_p ⋯
     non_centered     1   -31      1.5       0            0.0       1.0  0.9  0.32 ⋯
     centered         2   -31      1.4       0.03         0.061     0.0  0.9  0.33 ⋯

    julia> mc.weight |> pairs
    pairs(::NamedTuple) with 2 entries:
      :non_centered => 1.0
      :centered     => 3.50546e-31

    Compare the same models from pre-computed PSIS-LOO results, this time computing BootstrappedPseudoBMA weights:

    julia> elpd_results = mc.elpd_result;
    
    julia> compare(elpd_results; weights_method=BootstrappedPseudoBMA())
    ModelComparisonResult with BootstrappedPseudoBMA weights
                   rank  elpd  se_elpd  elpd_diff  se_elpd_diff  weight    p  se_p ⋯
     non_centered     1   -31      1.5       0            0.0      0.51  0.9  0.32 ⋯
     centered         2   -31      1.4       0.03         0.061    0.49  0.9  0.33 ⋯

    References

    • [6] Spiegelhalter et al. J. R. Stat. Soc. B 64 (2002)
    PosteriorStats.model_weights (Function)
    model_weights(elpd_results; method=Stacking())
    model_weights(method::AbstractModelWeightsMethod, elpd_results)

    Compute weights for each model in elpd_results using method.

    elpd_results is a Tuple, NamedTuple, or AbstractVector with AbstractELPDResult entries. The weights are returned in the same type of collection.

    Stacking is the recommended approach, as it performs well even when the true data generating process is not included among the candidate models. See Yao et al. [7] for details.

    See also: AbstractModelWeightsMethod, compare

    Examples

    Compute Stacking weights for two models:

    julia> using ArviZExampleData
    
    julia> models = (
               centered=load_example_data("centered_eight"),
               non_centered=load_example_data("non_centered_eight"),
           );
    
    julia> elpd_results = map(models) do idata
               log_like = PermutedDimsArray(idata.log_likelihood.obs, (2, 3, 1))
               return loo(log_like)
           end;
    ┌ Warning: 1 parameters had Pareto shape values 0.7 < k ≤ 1. Resulting importance sampling estimates are likely to be unstable.
    └ @ PSIS ~/.julia/packages/PSIS/...
    
    julia> model_weights(elpd_results; method=Stacking()) |> pairs
    pairs(::NamedTuple) with 2 entries:
      :centered     => 3.50546e-31
      :non_centered => 1.0

    Now we compute BootstrappedPseudoBMA weights for the same models:

    julia> model_weights(elpd_results; method=BootstrappedPseudoBMA()) |> pairs
    pairs(::NamedTuple) with 2 entries:
      :centered     => 0.492513
      :non_centered => 0.507487

    References

    • [7] Yao et al. Bayesian Analysis 13, 3 (2018)

    The following model weighting methods are available:

    PosteriorStats.BootstrappedPseudoBMA (Type)
    struct BootstrappedPseudoBMA{R<:Random.AbstractRNG, T<:Real} <: AbstractModelWeightsMethod

    Model weighting method using pseudo Bayesian Model Averaging with Akaike-type weighting and the Bayesian bootstrap (pseudo-BMA+) [7].

    The Bayesian bootstrap stabilizes the model weights.

    BootstrappedPseudoBMA(; rng=Random.default_rng(), samples=1_000, alpha=1)
    BootstrappedPseudoBMA(rng, samples, alpha)

    Construct the method.

    • rng::Random.AbstractRNG: The random number generator to use for the Bayesian bootstrap

    • samples::Int64: The number of samples to draw for bootstrapping

    • alpha::Real: The shape parameter in the Dirichlet distribution used for the Bayesian bootstrap. The default (1) corresponds to a uniform distribution on the simplex.

    See also: Stacking
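    A minimal sketch of constructing the method with a fixed random seed (the values are arbitrary) and passing it to model_weights:

    julia> using Random

    julia> method = BootstrappedPseudoBMA(; rng=MersenneTwister(42), samples=2_000, alpha=1);

    julia> # model_weights(elpd_results; method)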

    References

    • [7] Yao et al. Bayesian Analysis 13, 3 (2018)
    PosteriorStats.PseudoBMA (Type)
    struct PseudoBMA <: AbstractModelWeightsMethod

    Model weighting method using pseudo Bayesian Model Averaging (pseudo-BMA) and Akaike-type weighting.

    PseudoBMA(; regularize=false)
    PseudoBMA(regularize)

    Construct the method with optional regularization of the weights using the standard error of the ELPD estimate.

    Note

    This approach is not recommended, as it produces unstable weight estimates. It is recommended to instead use BootstrappedPseudoBMA, which stabilizes the weights, or Stacking. For details, see Yao et al. [7].

    See also: Stacking
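    A minimal sketch of both constructors:

    julia> method = PseudoBMA();      # no regularization (the default)

    julia> method = PseudoBMA(true);  # regularize the weights using the ELPD standard error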

    References

    • [7] Yao et al. Bayesian Analysis 13, 3 (2018)
    PosteriorStats.Stacking (Type)
    struct Stacking{O<:Optim.AbstractOptimizer} <: AbstractModelWeightsMethod

    Model weighting using stacking of predictive distributions[7].

    Stacking(; optimizer=Optim.LBFGS(), options=Optim.Options())
    Stacking(optimizer[, options])

    Construct the method, optionally customizing the optimization.

    • optimizer::Optim.AbstractOptimizer: The optimizer to use for the optimization of the weights. The optimizer must support projected gradient optimization via a manifold field.

    • options::Optim.Options: The Optim options to use for the optimization of the weights.

    See also: BootstrappedPseudoBMA
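    A minimal sketch of customizing the optimization (the option values are arbitrary):

    julia> using Optim

    julia> method = Stacking(; optimizer=Optim.LBFGS(), options=Optim.Options(iterations=2_000));

    julia> # model_weights(elpd_results; method)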

    References

    • [7] Yao et al. Bayesian Analysis 13, 3 (2018)

    Predictive checks

    PosteriorStats.loo_pit (Function)
    loo_pit(y, y_pred, log_weights; kwargs...) -> Union{Real,AbstractArray}

    Compute leave-one-out probability integral transform (LOO-PIT) checks.

    Arguments

    • y: array of observations with shape (params...,)
    • y_pred: array of posterior predictive samples with shape (draws, chains, params...).
    • log_weights: array of normalized log LOO importance weights with shape (draws, chains, params...).

    Keywords

    • is_discrete: If not provided, then it is set to true iff elements of y and y_pred are all integer-valued. If true, then data are smoothed using smooth_data to make them non-discrete before estimating LOO-PIT values.
    • kwargs: Remaining keywords are forwarded to smooth_data if data is discrete.

    Returns

    • pitvals: LOO-PIT values with same size as y. If y is a scalar, then pitvals is a scalar.

    LOO-PIT is a marginal posterior predictive check. If $y_{-i}$ is the array $y$ of observations with the $i$th observation left out, and $y_i^*$ is a posterior prediction of the $i$th observation, then the LOO-PIT value for the $i$th observation is defined as

    \[P(y_i^* \le y_i \mid y_{-i}) = \int_{-\infty}^{y_i} p(y_i^* \mid y_{-i}) \mathrm{d} y_i^*\]

    The LOO posterior predictions and the corresponding observations should have similar distributions, so if conditional predictive distributions are well-calibrated, then all LOO-PIT values should be approximately uniformly distributed on $[0, 1]$. [8]

    Examples

    Calculate LOO-PIT values using as test quantity the observed values themselves.

    julia> using ArviZExampleData
    
    julia> idata = load_example_data("centered_eight");
    
    julia> y = idata.observed_data.obs;
    
    julia> y_pred = PermutedDimsArray(idata.posterior_predictive.obs, (:draw, :chain, :school));
    
    julia> log_like = PermutedDimsArray(idata.log_likelihood.obs, (:draw, :chain, :school));
    
    julia> log_weights = loo(log_like).psis_result.log_weights;
    
    julia> loo_pit(y, y_pred, log_weights)
    8-element DimArray{Float64,1} with dimensions:
      Dim{:school} Categorical{String} String[Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered
     "Choate"            0.942759
     "Deerfield"         0.641057
     "Phillips Andover"  0.32729
     "Phillips Exeter"   0.581451
     "Hotchkiss"         0.288523
     "Lawrenceville"     0.393741
     "St. Paul's"        0.886175
     "Mt. Hermon"        0.638821

    Calculate LOO-PIT values using as test quantity the square of the difference between each observation and mu.

    julia> using Statistics
    
    julia> mu = idata.posterior.mu;
    
    julia> T = y .- median(mu);
    
    julia> T_pred = y_pred .- mu;
    
    julia> loo_pit(T .^ 2, T_pred .^ 2, log_weights)
    8-element DimArray{Float64,1} with dimensions:
      Dim{:school} Categorical{String} String[Choate, Deerfield, …, St. Paul's, Mt. Hermon] Unordered
     "Choate"            0.868148
     "Deerfield"         0.27421
     "Phillips Andover"  0.321719
     "Phillips Exeter"   0.193169
     "Hotchkiss"         0.370422
     "Lawrenceville"     0.195601
     "St. Paul's"        0.817408
     "Mt. Hermon"        0.326795

    References

    • [8] Gabry et al. J. R. Stat. Soc. Ser. A Stat. Soc. 182 (2019).
    PosteriorStats.r2_score (Function)
    r2_score(y_true::AbstractVector, y_pred::AbstractArray) -> (; r2, r2_std)

    $R^2$ for linear Bayesian regression models. [9]

    Arguments

    • y_true: Observed data of length noutputs
    • y_pred: Predicted data with size (ndraws[, nchains], noutputs)

    Examples

    julia> using ArviZExampleData
    
    julia> idata = load_example_data("regression1d");
    
    julia> y_true = idata.observed_data.y;
    
    julia> y_pred = PermutedDimsArray(idata.posterior_predictive.y, (:draw, :chain, :y_dim_0));
    
    julia> r2_score(y_true, y_pred) |> pairs
    pairs(::NamedTuple) with 2 entries:
      :r2     => 0.683197
      :r2_std => 0.0368838

    References

    • [9] Gelman et al, The Am. Stat., 73(3) (2019)

    Utilities

    PosteriorStats.kde_reflected (Function)
    kde_reflected(data::AbstractVector{<:Real}; bounds=extrema(data), kwargs...)

    Compute the boundary-corrected kernel density estimate (KDE) of data using reflection.

    For $x \in (l, u)$, the reflected KDE has the density

    \[\hat{f}_R(x) = \hat{f}(x) + \hat{f}(2l - x) + \hat{f}(2u - x),\]

    where $\hat{f}$ is the usual KDE of data. This is equivalent to augmenting the original data with 2 additional copies of the data reflected around each bound, computing the usual KDE, trimming the KDE to the bounds, and renormalizing.

    Any non-finite bounds are ignored. Remaining kwargs are passed to KernelDensity.kde. The default bandwidth is estimated using the Improved Sheather-Jones (ISJ) method [10].
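    A minimal sketch for data supported on [0, ∞), where only the finite lower bound is used for reflection:

    julia> x = abs.(randn(1_000));

    julia> k = kde_reflected(x; bounds=(0, Inf));  # the non-finite upper bound is ignored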

    References

    • [10] Botev et al. Ann. Stat., 38: 5 (2010)
    PosteriorStats.smooth_data (Function)
    smooth_data(y; dims=:, interp_method=CubicSpline, offset_frac=0.01)

    Smooth y along dims using interp_method.

    interp_method is a 2-argument callable that takes the arguments y and x and returns a DataInterpolations.jl interpolation method, defaulting to a cubic spline interpolator.

    offset_frac is the fraction of the length of y to use as an offset when interpolating.
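    A minimal sketch on discrete data (as loo_pit does internally):

    julia> y = rand(0:1, 100);

    julia> y_smooth = smooth_data(y);  # smoothed values, no longer integer-valued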
