Reference

Exported

ArviZ.rcParams (Constant)
rcParams

Dictionary to contain ArviZ default parameters, with validation when setting items.

Examples

julia> rcParams["plot.backend"]
"matplotlib"

julia> rcParams["plot.backend"] = "bokeh"
"bokeh"

julia> rcParams["plot.backend"]
"bokeh"
source
ArviZ.InferenceData (Type)
InferenceData(::PyObject)
InferenceData(; kwargs...)

Loose wrapper around arviz.InferenceData, which is a container for inference data storage using xarray.

InferenceData can be constructed either from an arviz.InferenceData or from multiple Datasets assigned to groups specified as kwargs.

Instead of directly creating an InferenceData, use the exported from_xyz functions or convert_to_inference_data.
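
Examples

A minimal sketch following the recommendation above, building an InferenceData from a NamedTuple of posterior draws via convert_to_inference_data; the variable names and sizes are illustrative:

using ArviZ
nchains, ndraws = 4, 100
posterior = (mu = randn(nchains, ndraws), tau = rand(nchains, ndraws))
idata = convert_to_inference_data(posterior)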

source
ArviZ.concat! (Function)
concat!(data1::InferenceData, data::InferenceData...; kwargs...) -> InferenceData

In-place version of concat, where data1 is modified to contain the concatenation of data1 and data. See concat for a description of kwargs.
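
Examples

A minimal sketch, assuming two InferenceData objects with disjoint groups; here they are built with the keyword constructor and ArviZ.convert_to_dataset, and the variable name mu is made up:

using ArviZ
nchains, ndraws = 4, 100
idata1 = InferenceData(; posterior = ArviZ.convert_to_dataset((mu = randn(nchains, ndraws),)))
idata2 = InferenceData(; prior = ArviZ.convert_to_dataset((mu = randn(nchains, ndraws),)))
concat!(idata1, idata2)  # idata1 now holds both the posterior and prior groups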

source
ArviZ.convert_to_inference_data (Method)
convert_to_inference_data(obj::NamedTuple; kwargs...) -> InferenceData
convert_to_inference_data(obj::Vector{<:NamedTuple}; kwargs...) -> InferenceData
convert_to_inference_data(obj::Matrix{<:NamedTuple}; kwargs...) -> InferenceData
convert_to_inference_data(obj::Vector{Vector{<:NamedTuple}}; kwargs...) -> InferenceData

Convert obj to an InferenceData. See from_namedtuple for a description of obj possibilities and kwargs.
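
Examples

A brief sketch using the Vector{<:NamedTuple} form, with one NamedTuple of draws per chain; variable names are illustrative:

using ArviZ
nchains, ndraws = 2, 10
chains = [(x = rand(ndraws), y = randn(ndraws, 2)) for _ in 1:nchains]
idata = convert_to_inference_data(chains)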

source
ArviZ.convert_to_inference_data (Method)
convert_to_inference_data(
    obj::SampleChains.AbstractChain;
    group=:posterior,
    kwargs...,
) -> InferenceData
convert_to_inference_data(
    obj::SampleChains.MultiChain;
    group=:posterior,
    kwargs...,
) -> InferenceData

Convert the chains obj to an InferenceData with the specified group.

Remaining kwargs are forwarded to from_samplechains.

source
ArviZ.from_mcmcchains (Function)
from_mcmcchains(posterior::MCMCChains.Chains; kwargs...) -> InferenceData
from_mcmcchains(; kwargs...) -> InferenceData
from_mcmcchains(
    posterior::MCMCChains.Chains,
    posterior_predictive::Any,
    predictions::Any,
    log_likelihood::Any;
    kwargs...
) -> InferenceData

Convert data in an MCMCChains.Chains format into an InferenceData.

Any keyword argument below without an explicitly annotated type above is allowed, so long as it can be passed to convert_to_inference_data.

Arguments

  • posterior::MCMCChains.Chains: Draws from the posterior

Keywords

  • posterior_predictive::Any=nothing: Draws from the posterior predictive distribution or name(s) of predictive variables in posterior
  • predictions::Any=nothing: Out-of-sample predictions for the posterior.
  • prior::Any=nothing: Draws from the prior
  • prior_predictive::Any=nothing: Draws from the prior predictive distribution or name(s) of predictive variables in prior
  • observed_data::Dict{String,Array}=nothing: Observed data on which the posterior is conditional. It should only contain data which is modeled as a random variable. Keys are parameter names, and values are the corresponding data arrays.
  • constant_data::Dict{String,Array}=nothing: Model constants, data included in the model which is not modeled as a random variable. Keys are parameter names, and values are the corresponding data arrays.
  • predictions_constant_data::Dict{String,Array}=nothing: Constants relevant to the model predictions (i.e. new x values in a linear regression).
  • log_likelihood::Any=nothing: Pointwise log-likelihood for the data. It is recommended to use this argument as a dictionary whose keys are observed variable names and whose values are log likelihood arrays.
  • log_likelihood::String=nothing: Alternatively, the name of a variable in posterior containing the log likelihoods
  • library=MCMCChains: Name of library that generated the chains
  • coords::Dict{String,Vector}=Dict(): Map from named dimension to named indices
  • dims::Dict{String,Vector{String}}=Dict(): Map from variable name to names of its dimensions
  • eltypes::Dict{String,DataType}=Dict(): Apply eltypes to specific variables. This is used to assign discrete eltypes to discrete variables.

Returns

  • InferenceData: The data with groups corresponding to the provided data
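
Examples

A minimal sketch, assuming MCMCChains is loaded; the Chains object is built here from random draws with made-up parameter names:

using ArviZ, MCMCChains
nchains, ndraws = 4, 100
vals = randn(ndraws, 2, nchains)   # (ndraws, nparams, nchains)
chn = Chains(vals, [:mu, :tau])
idata = from_mcmcchains(chn)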
source
ArviZ.from_namedtuple (Function)
from_namedtuple(posterior::NamedTuple; kwargs...) -> InferenceData
from_namedtuple(posterior::Vector{<:NamedTuple}; kwargs...) -> InferenceData
from_namedtuple(posterior::Matrix{<:NamedTuple}; kwargs...) -> InferenceData
from_namedtuple(posterior::Vector{Vector{<:NamedTuple}}; kwargs...) -> InferenceData
from_namedtuple(
    posterior::NamedTuple,
    sample_stats::Any,
    posterior_predictive::Any,
    predictions::Any,
    log_likelihood::Any;
    kwargs...
) -> InferenceData

Convert a NamedTuple or container of NamedTuples to an InferenceData.

If containers are passed, they are flattened into a single NamedTuple with array elements whose first dimensions correspond to the dimensions of the containers.

Arguments

  • posterior: The data to be converted. It may be of the following types:
    • ::NamedTuple: The keys are the variable names and the values are arrays with dimensions (nchains, ndraws, sizes...).
    • ::Vector{<:NamedTuple}: Each element is a NamedTuple from a chain with Array/MonteCarloMeasurements.Particle values with dimensions (ndraws, sizes...).
    • ::Matrix{<:NamedTuple}: Each element is a single draw from a single chain, with array/scalar values with dimensions sizes. The dimensions of the matrix container are (nchains, ndraws)
    • ::Vector{Vector{<:NamedTuple}}: The same as the above case.

Keywords

  • posterior_predictive::Any=nothing: Draws from the posterior predictive distribution
  • sample_stats::Any=nothing: Statistics of the posterior sampling process
  • predictions::Any=nothing: Out-of-sample predictions for the posterior.
  • prior::Any=nothing: Draws from the prior
  • prior_predictive::Any=nothing: Draws from the prior predictive distribution
  • sample_stats_prior::Any=nothing: Statistics of the prior sampling process
  • observed_data::Dict{String,Array}=nothing: Observed data on which the posterior is conditional. It should only contain data which is modeled as a random variable. Keys are parameter names, and values are the corresponding data arrays.
  • constant_data::Dict{String,Array}=nothing: Model constants, data included in the model which is not modeled as a random variable. Keys are parameter names, and values are the corresponding data arrays.
  • predictions_constant_data::Dict{String,Array}=nothing: Constants relevant to the model predictions (i.e. new x values in a linear regression).
  • log_likelihood::Any=nothing: Pointwise log-likelihood for the data. It is recommended to use this argument as a dictionary whose keys are observed variable names and whose values are log likelihood arrays.
  • library=nothing: Name of library that generated the draws
  • coords::Dict{String,Vector}=Dict(): Map from named dimension to named indices
  • dims::Dict{String,Vector{String}}=Dict(): Map from variable name to names of its dimensions

Returns

  • InferenceData: The data with groups corresponding to the provided data

Examples

using ArviZ
nchains, ndraws = 2, 10

data1 = (
    x = rand(nchains, ndraws),
    y = randn(nchains, ndraws, 2),
    z = randn(nchains, ndraws, 3, 2),
)
idata1 = from_namedtuple(data1)

data2 = [(x = rand(ndraws), y = randn(ndraws, 2), z = randn(ndraws, 3, 2)) for _ = 1:nchains];
idata2 = from_namedtuple(data2)

data3 = [(x = rand(), y = randn(2), z = randn(3, 2)) for _ = 1:nchains, _ = 1:ndraws];
idata3 = from_namedtuple(data3)

data4 = [[(x = rand(), y = randn(2), z = randn(3, 2)) for _ = 1:ndraws] for _ = 1:nchains];
idata4 = from_namedtuple(data4)
source
ArviZ.from_samplechains (Function)
from_samplechains(
    posterior=nothing;
    prior=nothing,
    library=SampleChains,
    kwargs...,
) -> InferenceData

Convert SampleChains samples to an InferenceData.

Either posterior or prior may be a SampleChains.AbstractChain or SampleChains.MultiChain object.

For descriptions of remaining kwargs, see from_namedtuple.

source
ArviZ.psislw (Function)
psislw(log_weights, reff=1.0) -> (lw_out, kss)

Pareto smoothed importance sampling (PSIS).

Note

This function is deprecated and is just a thin wrapper around psis.

Arguments

  • log_weights: Array of size (nobs, ndraws)
  • reff: relative MCMC efficiency, ess / n

Returns

  • lw_out: Smoothed log weights
  • kss: Pareto tail indices
source
ArviZ.with_interactive_backend (Function)
with_interactive_backend(f; backend::Symbol = nothing)

Execute the thunk f in a temporary interactive context with the chosen backend. If no backend is specified, the default interactive backend is used.

Examples

idata = load_arviz_data("centered_eight")
plot_posterior(idata) # inline
with_interactive_backend() do
    plot_density(idata) # interactive
end
plot_trace(idata) # inline
source
ArviZ.with_rc_context (Function)
with_rc_context(f; rc = nothing, fname = nothing)

Execute the thunk f within a context controlled by temporary rc params.

See rcParams for supported params or to modify params long-term.

Examples

with_rc_context(fname = "pystan.rc") do
    idata = load_arviz_data("radon")
    plot_posterior(idata; var_names=["gamma"])
end

The plot would have settings from pystan.rc.

A dictionary can also be passed to the context manager:

with_rc_context(rc = Dict("plot.max_subplots" => 1), fname = "pystan.rc") do
    idata = load_arviz_data("radon")
    plot_posterior(idata, var_names=["gamma"])
end

The rc dictionary takes precedence over the settings loaded from fname. Passing a dictionary only is also valid.

source
StatsBase.summarystats (Method)
summarystats(
    data::InferenceData;
    group = :posterior,
    kwargs...,
) -> Union{Dataset,DataFrames.DataFrame}
summarystats(data::Dataset; kwargs...) -> Union{Dataset,DataFrames.DataFrame}

Compute summary statistics on data.

Arguments

  • data::Union{Dataset,InferenceData}: The data on which to compute summary statistics. If data is an InferenceData, only the dataset corresponding to group is used.

Keywords

  • var_names::Vector{String}=nothing: Names of variables to include in summary
  • include_circ::Bool=false: Whether to include circular statistics
  • fmt::String="wide": Return format is either DataFrames.DataFrame ("wide", "long") or Dataset ("xarray").
  • round_to::Int=nothing: Number of decimals used to round results. Use nothing to return raw numbers.
  • stat_funcs::Union{Dict{String,Function},Vector{Function}}=nothing: A vector of functions or a dict of functions with function names as keys used to calculate statistics. By default, the mean, standard deviation, simulation standard error, and highest posterior density intervals are included. Each function is given one argument, the samples for a variable as an array, and should return a single number. For example, Statistics.mean or Statistics.var would both work.
  • extend::Bool=true: If true, use the statistics returned by stat_funcs in addition to, rather than in place of, the default statistics. This is only meaningful when stat_funcs is not nothing.
  • hdi_prob::Real=0.94: HDI interval to compute. This is only meaningful when stat_funcs is nothing.
  • order::String="C": If fmt is "wide", use either "C" or "F" unpacking order.
  • skipna::Bool=false: If true, ignores NaN values when computing the summary statistics. It does not affect the behaviour of the functions passed to stat_funcs.
  • coords::Dict{String,Vector}=Dict(): Coordinates specification to be used if the fmt is "xarray".
  • dims::Dict{String,Vector}=Dict(): Dimensions specification for the variables to be used if the fmt is "xarray".

Returns

  • Union{Dataset,DataFrames.DataFrame}: Return type dictated by fmt argument. Return value will contain summary statistics for each variable. Default statistics are:
    • mean
    • sd
    • hdi_3%
    • hdi_97%
    • mcse_mean
    • mcse_sd
    • ess_bulk
    • ess_tail
    • r_hat (only computed for traces with 2 or more chains)

Examples

using ArviZ
idata = load_arviz_data("centered_eight")
summarystats(idata; var_names=["mu", "tau"])

Other statistics can be calculated by passing a list of functions or a dictionary with key, function pairs:

using StatsBase, Statistics
function median_sd(x)
    med = median(x)
    sd = sqrt(mean((x .- med).^2))
    return sd
end

func_dict = Dict(
    "std" => x -> std(x; corrected = false),
    "median_std" => median_sd,
    "5%" => x -> percentile(x, 5),
    "median" => median,
    "95%" => x -> percentile(x, 95),
)

summarystats(idata; var_names = ["mu", "tau"], stat_funcs = func_dict, extend = false)
source
PSIS.PSISResult (Type)
PSISResult

Result of Pareto-smoothed importance sampling (PSIS) using psis.

Properties

  • log_weights: un-normalized Pareto-smoothed log weights
  • weights: normalized Pareto-smoothed weights (allocates a copy)
  • pareto_shape: Pareto $k=ξ$ shape parameter
  • nparams: number of parameters in log_weights
  • ndraws: number of draws in log_weights
  • nchains: number of chains in log_weights
  • reff: the ratio of the effective sample size of the unsmoothed importance ratios and the actual sample size.
  • ess: estimated effective sample size of estimate of mean using smoothed importance samples (see ess_is)
  • log_weights_norm: the logarithm of the normalization constant of log_weights
  • tail_length: length of the upper tail of log_weights that was smoothed
  • tail_dist: the generalized Pareto distribution that was fit to the tail of log_weights. Note that the tail weights are scaled to have a maximum of 1, so tail_dist * exp(maximum(log_ratios)) is the corresponding fit directly to the tail of log_ratios.

Diagnostic

The pareto_shape parameter $k=ξ$ of the generalized Pareto distribution tail_dist can be used to diagnose reliability and convergence of estimates using the importance weights [VehtariSimpson2021].

  • if $k < \frac{1}{3}$, importance sampling is stable, and importance sampling (IS) and PSIS both are reliable.
  • if $k ≤ \frac{1}{2}$, then the importance ratio distribution has finite variance, and the central limit theorem holds. As $k$ approaches the upper bound, IS becomes less reliable, while PSIS still works well but with a higher RMSE.
  • if $\frac{1}{2} < k ≤ 0.7$, then the variance is infinite, and IS can behave quite poorly. However, PSIS works well in this regime.
  • if $0.7 < k ≤ 1$, then it quickly becomes impractical to collect enough importance weights to reliably compute estimates, and importance sampling is not recommended.
  • if $k > 1$, then neither the variance nor the mean of the raw importance ratios exists. The convergence rate is close to zero, and bias can be large with practical sample sizes.

See paretoshapeplot for a diagnostic plot.
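
Examples

A brief sketch of inspecting a PSISResult returned by psis; the log importance ratios here are random placeholder values:

using PSIS
log_ratios = randn(5, 1_000, 4)   # (nparams, ndraws, nchains)
result = psis(log_ratios)
result.pareto_shape               # one Pareto shape estimate per parameter
result.ess                        # corresponding effective sample size estimates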

PSIS.ess_is (Function)
ess_is(weights; reff=1)

Estimate effective sample size (ESS) for importance sampling over the sample dimensions.

Given normalized weights $w_{1:n}$, the ESS is estimated using the L2-norm of the weights:

\[\mathrm{ESS}(w_{1:n}) = \frac{r_{\mathrm{eff}}}{\sum_{i=1}^n w_i^2}\]

where $r_{\mathrm{eff}}$ is the relative efficiency of the log_weights.

ess_is(result::PSISResult; bad_shape_missing=true)

Estimate ESS for Pareto-smoothed importance sampling.

Note

ESS estimates for Pareto shape values $k > 0.7$, which are unreliable and misleadingly high, are set to missing. To avoid this, set bad_shape_missing=false.
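
Examples

A minimal sketch, using random placeholder log importance ratios for a single parameter:

using PSIS
log_ratios = randn(1_000)
result = psis(log_ratios)
ess_is(result)                             # missing if the Pareto shape exceeds 0.7
ess_is(result; bad_shape_missing = false)  # return the estimate regardless of the shape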

PSIS.paretoshapeplot (Function)
paretoshapeplot(values; kwargs...)
paretoshapeplot!(values; kwargs...)

Plot shape parameters of fitted Pareto tail distributions for diagnosing convergence.

Arguments

  • values: may be either a vector of Pareto shape parameters or a PSISResult.

Keywords

  • showlines=false: if true, horizontal lines indicating relevant Pareto shape thresholds are drawn. See PSISResult for explanation of thresholds.
  • backend::Symbol: backend to use for plotting, defaulting to :Plots, unless :Makie is available.

All remaining keywords are passed to the plotting backend.

See psis, PSISResult.

Note

Plots.jl or a Makie.jl backend must be loaded to use these functions.

Examples

using PSIS, Distributions, Plots
proposal = Normal()
target = TDist(7)
x = rand(proposal, 100, 1_000)
log_ratios = logpdf.(target, x) .- logpdf.(proposal, x)
result = psis(log_ratios)

Plot with Plots.jl.

using Plots
plot(result; showlines=true)

Plot with GLMakie.jl.

using GLMakie
plot(result; showlines=true)
PSIS.paretoshapeplot! (Function)
paretoshapeplot(values; kwargs...)
paretoshapeplot!(values; kwargs...)

Plot shape parameters of fitted Pareto tail distributions for diagnosing convergence.

Arguments

  • values: may be either a vector of Pareto shape parameters or a PSISResult.

Keywords

  • showlines=false: if true, horizontal lines indicating relevant Pareto shape thresholds are drawn. See PSISResult for explanation of thresholds.
  • backend::Symbol: backend to use for plotting, defaulting to :Plots, unless :Makie is available.

All remaining keywords are passed to the plotting backend.

See psis, PSISResult.

Note

Plots.jl or a Makie.jl backend must be loaded to use these functions.

Examples

using PSIS, Distributions, Plots
proposal = Normal()
target = TDist(7)
x = rand(proposal, 100, 1_000)
log_ratios = logpdf.(target, x) .- logpdf.(proposal, x)
result = psis(log_ratios)

Plot with Plots.jl.

using Plots
plot(result; showlines=true)

Plot with GLMakie.jl.

using GLMakie
plot(result; showlines=true)
PSIS.psis (Function)
psis(log_ratios, reff = 1.0; kwargs...) -> PSISResult
psis!(log_ratios, reff = 1.0; kwargs...) -> PSISResult

Compute Pareto smoothed importance sampling (PSIS) log weights [VehtariSimpson2021].

While psis computes smoothed log weights out-of-place, psis! smooths them in-place.

Arguments

  • log_ratios: an array of logarithms of importance ratios, with one of the following sizes:

    • (ndraws,): a vector of draws for a single parameter from a single chain
    • (nparams, ndraws): a matrix of draws for multiple parameters from a single chain
    • (nparams, ndraws, nchains): an array of draws for multiple parameters from multiple chains, e.g. as might be generated with Markov chain Monte Carlo.
  • reff::Union{Real,AbstractVector}: the ratio(s) of effective sample size of log_ratios and the actual sample size reff = ess/(ndraws * nchains), used to account for autocorrelation, e.g. due to Markov chain Monte Carlo.

Keywords

Returns

  • result: a PSISResult object containing the results of the Pareto-smoothing.

A warning is raised if the Pareto shape parameter $k ≥ 0.7$. See PSISResult for details and paretoshapeplot for a diagnostic plot.
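
Examples

A brief sketch with multiple parameters and chains; the log ratios are random placeholders, and reff is an assumed vector of per-parameter relative efficiencies:

using PSIS
log_ratios = randn(10, 1_000, 4)  # (nparams, ndraws, nchains)
reff = fill(0.7, 10)              # e.g. computed from an ESS estimate of the draws
result = psis(log_ratios, reff)
result.pareto_shape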

PSIS.psis! (Function)
psis(log_ratios, reff = 1.0; kwargs...) -> PSISResult
psis!(log_ratios, reff = 1.0; kwargs...) -> PSISResult

Compute Pareto smoothed importance sampling (PSIS) log weights [VehtariSimpson2021].

While psis computes smoothed log weights out-of-place, psis! smooths them in-place.

Arguments

  • log_ratios: an array of logarithms of importance ratios, with one of the following sizes:

    • (ndraws,): a vector of draws for a single parameter from a single chain
    • (nparams, ndraws): a matrix of draws for multiple parameters from a single chain
    • (nparams, ndraws, nchains): an array of draws for multiple parameters from multiple chains, e.g. as might be generated with Markov chain Monte Carlo.
  • reff::Union{Real,AbstractVector}: the ratio(s) of effective sample size of log_ratios and the actual sample size reff = ess/(ndraws * nchains), used to account for autocorrelation, e.g. due to Markov chain Monte Carlo.

Keywords

Returns

  • result: a PSISResult object containing the results of the Pareto-smoothing.

A warning is raised if the Pareto shape parameter $k ≥ 0.7$. See PSISResult for details and paretoshapeplot for a diagnostic plot.

Internal

ArviZ.BokehPlot (Type)
BokehPlot(::PyObject)

Loose wrapper around a Bokeh figure, mostly used for dispatch.

In most cases, use one of the plotting functions with backend=:bokeh to create a BokehPlot instead of using a constructor.

source
ArviZ.Dataset (Type)
Dataset(::PyObject)
Dataset(; data_vars = nothing, coords = Dict(), attrs = Dict())

Loose wrapper around xarray.Dataset, mostly used for dispatch.

Keywords

  • data_vars::Dict{String,Any}: Dict mapping variable names to

    • Vector: Data vector. Single dimension is named after variable.
    • Tuple{String,Vector}: Dimension name and data vector.
    • Tuple{NTuple{N,String},Array{T,N}} where {N,T}: Dimension names and data array.
  • coords::Dict{String,Any}: Dict mapping dimension names to index names. Possible arguments have the same form as data_vars.

  • attrs::Dict{String,Any}: Global attributes to save on this dataset.

In most cases, use convert_to_dataset or convert_to_constant_dataset to create a Dataset instead of directly using a constructor.
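
Examples

A minimal sketch of the keyword constructor using the argument forms listed above; the variable names, dimension names, and attribute are made up:

using ArviZ
ds = ArviZ.Dataset(;
    data_vars = Dict(
        "x" => (("chain", "draw"), randn(4, 100)),  # dimension names plus data array
        "y" => rand(3),                             # single dimension named after the variable
    ),
    coords = Dict("chain" => [1, 2, 3, 4]),
    attrs = Dict("inference_library" => "demo"),
)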

source
ArviZ.convert_arguments (Method)
convert_arguments(f, args...; kwargs...) -> NTuple{2}

Convert arguments to the function f before calling.

This function is used primarily for pre-processing arguments within macros before sending to arviz.

source
ArviZ.convert_result (Method)
convert_result(f, result, args...)

Convert result of the function f before returning.

This function is used primarily for post-processing outputs of arviz before returning. The args are primarily used for dispatch.

source
ArviZ.convert_to_constant_dataset (Function)
convert_to_constant_dataset(obj::Dict; kwargs...) -> Dataset
convert_to_constant_dataset(obj::NamedTuple; kwargs...) -> Dataset

Convert obj into a Dataset.

Unlike convert_to_dataset, this is intended for containing constant parameters such as observed data and constant data, and the first two dimensions are not required to be the number of chains and draws.

Keywords

  • coords::Dict{String,Vector}: Map from named dimension to index names
  • dims::Dict{String,Vector{String}}: Map from variable name to names of its dimensions
  • library::Any: A library associated with the data to add to attrs.
  • attrs::Dict{String,Any}: Global attributes to save on this dataset.
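
Examples

A brief sketch converting a NamedTuple of hypothetical observed data; the variable names and library label are made up:

using ArviZ
observed = (y = randn(10), sigma = fill(1.0, 10))
ds = ArviZ.convert_to_constant_dataset(observed; library = "demo")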
source
ArviZ.dataset_to_dict (Function)
dataset_to_dict(ds::Dataset) -> Tuple{Dict{String,Array},NamedTuple}

Convert a Dataset to a dictionary of Arrays. The function also returns keyword arguments to dict_to_dataset.

source
ArviZ.dict_to_dataset (Function)
dict_to_dataset(data::Dict{String,Array}; kwargs...) -> Dataset

Convert a dictionary with data and keys as variable names to a Dataset.

Keywords

  • attrs::Dict{String,Any}: JSON-serializable metadata to attach to the dataset, in addition to defaults.
  • library::String: Name of library used for performing inference. Will be attached to the attrs metadata.
  • coords::Dict{String,Array}: Coordinates for the dataset
  • dims::Dict{String,Vector{String}}: Dimensions of each variable. The keys are variable names, and the values are vectors of dimension names.

Examples

using ArviZ
ArviZ.dict_to_dataset(Dict("x" => randn(4, 100), "y" => randn(4, 100)))
source
ArviZ.flatten (Method)
flatten(x)

If x is an array of arrays, flatten into a single array whose dimensions are ordered with dimensions of the outermost container first and innermost container last.
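
Examples

A brief sketch; per the ordering described above, a 4×5 container of length-3 vectors should flatten to an array with the outer dimensions first:

using ArviZ
x = [randn(3) for _ in 1:4, _ in 1:5]
y = ArviZ.flatten(x)
size(y)  # expected (4, 5, 3)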

source
ArviZ.groupnames (Method)
groupnames(data::InferenceData) -> Vector{Symbol}

Get the names of the groups (datasets) in data.
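
Examples

A brief sketch using one of the example datasets loaded elsewhere in these docs:

using ArviZ
idata = load_arviz_data("centered_eight")
ArviZ.groupnames(idata)  # e.g. a vector containing :posterior, :observed_data, ...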

source
ArviZ.groups (Method)
groups(data::InferenceData) -> Dict{Symbol,Dataset}

Get the groups in data as a dictionary mapping names to datasets.

source
ArviZ.namedtuple_of_arrays (Method)
namedtuple_of_arrays(x::NamedTuple) -> NamedTuple
namedtuple_of_arrays(x::AbstractArray{NamedTuple}) -> NamedTuple
namedtuple_of_arrays(x::AbstractArray{AbstractArray{<:NamedTuple}}) -> NamedTuple

Given a container of NamedTuples, concatenate them, using the container dimensions as the dimensions of the resulting arrays.

Examples

using ArviZ
nchains, ndraws = 4, 100
data = [(x=rand(), y=randn(2), z=randn(2, 3)) for _ in 1:nchains, _ in 1:ndraws];
ntarray = ArviZ.namedtuple_of_arrays(data);
source
ArviZ.summary (Method)
summary(
    data;
    group = :posterior,
    coords = nothing,
    dims = nothing,
    kwargs...,
) -> Union{Dataset,DataFrames.DataFrame}

Compute summary statistics on any object that can be passed to convert_to_dataset.

Keywords

  • coords::Dict{String,Vector}=nothing: Map from named dimension to named indices.
  • dims::Dict{String,Vector{String}}=nothing: Map from variable name to names of its dimensions.
  • kwargs: Keyword arguments passed to summarystats.
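
Examples

A minimal sketch, passing a NamedTuple of random draws (anything accepted by convert_to_dataset works); the variable names are illustrative:

using ArviZ
nchains, ndraws = 4, 100
draws = (mu = randn(nchains, ndraws), tau = rand(nchains, ndraws))
ArviZ.summary(draws)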
source
ArviZ.todataframes (Method)
todataframes(df; index_name = nothing) -> DataFrames.DataFrame

Convert a Python pandas.DataFrame or pandas.Series into a DataFrames.DataFrame.

If index_name is not nothing, the index is converted into a column with index_name. Otherwise, it is discarded.

source
ArviZ.topandas (Method)
topandas(::Type{:DataFrame}, df; index_name = nothing) -> PyObject
topandas(::Type{:Series}, df) -> PyObject
topandas(::Val{:ELPDData}, df) -> PyObject

Convert a DataFrames.DataFrame to the specified pandas type.

If index_name is not nothing, the corresponding column is made the index of the returned dataframe.

source
ArviZ.use_style (Method)
use_style(style::String)
use_style(style::Vector{String})

Use matplotlib style settings from a style specification style.

The style name of "default" is reserved for reverting back to the default style settings.

ArviZ-specific styles are ["arviz-whitegrid", "arviz-darkgrid", "arviz-colors", "arviz-white"]. To see all available style specifications, use styles().

If a Vector of styles is provided, they are applied from first to last.
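
Examples

A brief sketch; styles() lists the available specifications, and "default" reverts to the default settings:

using ArviZ
ArviZ.styles()
ArviZ.use_style("arviz-darkgrid")
ArviZ.use_style("default")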

source
Base.write (Method)
write(io::IO, plot::BokehPlot)
write(filename::AbstractString, plot::BokehPlot)

Write the HTML representation of the Bokeh plot to the I/O stream or file.
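
Examples

A minimal sketch, assuming a Bokeh plot created with one of the plotting functions and backend=:bokeh (see BokehPlot); the dataset and output filename are illustrative:

using ArviZ
idata = load_arviz_data("centered_eight")
plot = plot_posterior(idata; backend = :bokeh)
write("posterior.html", plot)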

source
ArviZ.@forwardplotfun (Macro)
@forwardplotfun f
@forwardplotfun(f)

Wrap a plotting function arviz.f in f, forwarding its docstrings.

This macro also ensures that outputs for the different backends are correctly handled. Use convert_arguments and convert_result to customize what is passed to and returned from f.

source