# probabilistic-programming

Here are 378 public repositories matching this topic...

pyro
eb8680 commented Dec 14, 2021

NumPyro now has several excellent introductory examples with no direct counterparts in Pyro. Porting one of these to Pyro would be a great way for someone to simultaneously learn more about Bayesian data analysis and make a valuable open source contribution.

If you are reading this and want to give one of them a try, please leave a comment here so that other people know it is being worked on.

help wanted · Examples · good first issue
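Porting mostly amounts to swapping NumPyro's JAX-based primitives for their Pyro/PyTorch counterparts; a minimal sketch with a toy model (illustrative only, not one of the actual examples):

# NumPyro original, schematically:
#     def model(y):
#         mu = numpyro.sample("mu", numpyro.distributions.Normal(0., 1.))
#         numpyro.sample("y", numpyro.distributions.Normal(mu, 1.), obs=y)
# Pyro port: identical structure, with torch tensors in place of jax arrays.
import torch
import pyro
import pyro.distributions as dist

def model(y):
    mu = pyro.sample("mu", dist.Normal(0., 1.))
    pyro.sample("y", dist.Normal(mu, 1.), obs=y)

model(torch.tensor(0.5))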
bryorsnef commented Nov 15, 2021

The currently implemented version of the horseshoe distribution is not the parameterization that most ML papers use. This limits its ease of use as, for example, a prior in a tfp.layers.KLDivergenceAddLoss or in tfp.layers.DenseReparameterization. The regularized horseshoe would also be useful as an implemented distribution.

The alternative parameterization is shown here:
https://www.
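For reference, the parameterization most ML papers use is the local-global half-Cauchy scale mixture; a minimal sketch in TFP (the helper name and shapes are illustrative, not an existing tfp API):

import tensorflow_probability as tfp

tfd = tfp.distributions

# Common ML parameterization of the horseshoe:
#   tau      ~ HalfCauchy(0, tau0)   (global shrinkage)
#   lambda_i ~ HalfCauchy(0, 1)      (local shrinkage)
#   w_i      ~ Normal(0, tau * lambda_i)
def horseshoe_prior(num_weights, global_scale=1.0):
    return tfd.JointDistributionNamed(dict(
        tau=tfd.HalfCauchy(loc=0., scale=global_scale),
        local_scales=tfd.Sample(tfd.HalfCauchy(loc=0., scale=1.), num_weights),
        weights=lambda tau, local_scales: tfd.Independent(
            tfd.Normal(loc=0., scale=tau * local_scales),
            reinterpreted_batch_ndims=1),
    ))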

hessammehr commented May 11, 2022

A quick search for mixture distributions in numpyro only turns up examples using Categorical in conjunction with an array of distributions. Since sampling from discrete distributions is not always desirable, I have implemented a quick general-purpose mixture distribution with a continuous log probability.

class Mixture(Distribution):
    arg_constraints = {}

    def __init__(self
enhancement · good first issue
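The snippet above is cut off; a minimal sketch of the idea, with a hypothetical constructor signature and the standard logsumexp mixture density (a sample method, omitted here, would also be needed for forward sampling):

import jax.numpy as jnp
from jax.scipy.special import logsumexp
from numpyro.distributions import Distribution

class Mixture(Distribution):
    arg_constraints = {}

    def __init__(self, mixing_probs, component_dists):
        # hypothetical parameters: mixture weights and a list of component Distributions
        self.mixing_probs = mixing_probs
        self.component_dists = component_dists
        super().__init__(batch_shape=(), event_shape=())

    def log_prob(self, value):
        # log p(x) = logsumexp_k(log w_k + log p_k(x)): fully continuous,
        # no Categorical index is ever sampled
        log_ps = jnp.stack([d.log_prob(value) for d in self.component_dists], axis=-1)
        return logsumexp(jnp.log(self.mixing_probs) + log_ps, axis=-1)

# usage, e.g. with `import numpyro.distributions as dist`:
#     Mixture(jnp.array([0.3, 0.7]), [dist.Normal(-2., 1.), dist.Normal(2., 1.)])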
cscherrer commented Mar 26, 2021

Rather than trying to rebuild all functionality from Distributions.jl, we're first focusing on reimplementing logdensity (logpdf in Distributions), and delegating most other functions to the current Distributions implementations.

So for example, we have

distproxy(d::Normal{(:μ, :σ)}) = Dists.Normal(d.μ, d.σ)

This makes some functions in Distributions.jl available through this proxy.

good first issue
willtebbutt commented Oct 19, 2019

There are a variety of interesting optimisations that can be performed on kernels of the form

k(x, z) = w_1 * k_1(x, z) + w_2 * k_2(x, z) + ... + w_L * k_L(x, z)

A naive recursive implementation in terms of the current Sum and Scaled kernels hides opportunities for parallelism, both in the computation of each term and in the summation over the terms.

Notable examples of kernels with this structure…
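A sketch of the flattened evaluation such optimisations enable, here in Python/JAX with an RBF base kernel (everything below is illustrative, not the package's API): all L terms are evaluated in one batch and contracted against the weights, instead of recursing through nested Sum and Scaled nodes.

import jax.numpy as jnp

def rbf(x, z, lengthscale):
    # squared-exponential kernel matrix between point sets x: (n, d) and z: (m, d)
    sq_dists = jnp.sum((x[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-0.5 * sq_dists / lengthscale ** 2)

def weighted_sum_kernel(x, z, weights, lengthscales):
    # evaluate every term at once, then contract: sum_l w_l * K_l
    Ks = jnp.stack([rbf(x, z, ell) for ell in lengthscales])  # (L, n, m)
    return jnp.einsum('l,lnm->nm', weights, Ks)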

junpenglao commented Jun 4, 2022

After #210, it should be straightforward to add multi-pathfinder (ref: https://arxiv.org/pdf/2108.03782.pdf). The code snippet below mostly works (it still needs an implementation of Pareto-smoothed importance sampling).

multi_pathfinder = jax.vmap(lambda rng_key, x: blackjax.vi.pathfinder.init(rng_key, logprob_fn, x))
n_batch = 100
rng_keys = jax.random.split(rng_key, n_batch)
xs = w0 * j…
enhancement · good first issue
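The missing step is importance sampling over the single-path results; a rough sketch of plain importance resampling (Pareto-smoothed importance sampling would additionally fit a generalized Pareto to the weight tail before resampling; all names below are illustrative, not blackjax API):

import jax

def importance_resample(rng_key, samples, log_q, logprob_fn, num_draws):
    # log importance weights log p(x) - log q(x); PSIS would smooth these first
    log_w = jax.vmap(logprob_fn)(samples) - log_q
    # resample indices in proportion to the (unnormalized) weights
    idx = jax.random.categorical(rng_key, log_w, shape=(num_draws,))
    return samples[idx]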
