probabilistic-programming
Here are 378 public repositories matching this topic...
The currently implemented version of the horseshoe distribution is not the parameterization that most ML papers use, which limits its ease of use as, for example, a prior in a tfp.layers.KLDivergenceAddLoss or in tfp.layers.DenseReparameterization. The regularized horseshoe would also be useful as an implemented distribution.
The alternative parameterization is shown here:
https://www.
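For reference, the parameterization most papers use (following Carvalho, Polson, and Scott) builds the horseshoe hierarchically from half-Cauchy scale variables. A minimal sampling sketch, assuming only TFP's stock HalfCauchy and Normal distributions (the function name is illustrative, not part of TFP's API):

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

def sample_horseshoe(global_scale, shape):
    # Hierarchical form: beta_i = z_i * lambda_i * tau, with
    #   lambda_i ~ HalfCauchy(0, 1)            (local shrinkage)
    #   tau      ~ HalfCauchy(0, global_scale) (global shrinkage)
    #   z_i      ~ Normal(0, 1)
    lam = tfd.HalfCauchy(loc=0., scale=1.).sample(shape)
    tau = tfd.HalfCauchy(loc=0., scale=global_scale).sample()
    z = tfd.Normal(loc=0., scale=1.).sample(shape)
    return z * lam * tau
```

The factored form is also what makes non-centered variants straightforward to write down for use as a prior.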
Ankit Shah and I are trying to use Gen to support a project and would love the addition of a Dirichlet distribution.
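For reference, the Dirichlet log-density such an addition would need is short. A minimal sketch of just the math, written in Python purely for illustration (Gen itself is Julia, and nothing here is Gen's API):

```python
import jax.numpy as jnp
from jax.scipy.special import gammaln

def dirichlet_logpdf(x, alpha):
    # log Dir(x | alpha) = sum_i (alpha_i - 1) * log(x_i) - log B(alpha),
    # where log B(alpha) = sum_i lgamma(alpha_i) - lgamma(sum_i alpha_i).
    log_beta = jnp.sum(gammaln(alpha)) - gammaln(jnp.sum(alpha))
    return jnp.sum((alpha - 1.0) * jnp.log(x)) - log_beta
```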
A quick search for mixture distributions in numpyro only turns up examples using Categorical in conjunction with an array of distributions. Since sampling from discrete distributions is not always desirable, I have implemented a quick general-purpose mixture distribution with a continuous log probability:

    class Mixture(Distribution):
        arg_constraints = {}
        def __init__(self
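The snippet is cut off above. A self-contained sketch of such a mixture, assuming numpyro's Distribution base class (an illustration of the idea, not necessarily the issue author's exact implementation):

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp
import numpyro.distributions as dist
from numpyro.distributions import Distribution

class Mixture(Distribution):
    """Mixture whose log_prob is a smooth logsumexp over component densities."""
    arg_constraints = {}

    def __init__(self, mixing_probs, components):
        self.mixing_probs = mixing_probs  # shape (K,), assumed to sum to 1
        self.components = components      # list of K scalar distributions
        super().__init__(batch_shape=(), event_shape=())

    def sample(self, key, sample_shape=()):
        # Sampling still draws a discrete component index; only log_prob
        # avoids the discreteness.
        key_idx, key_comp = jax.random.split(key)
        idx = jax.random.categorical(key_idx, jnp.log(self.mixing_probs),
                                     shape=sample_shape)
        draws = jnp.stack([c.sample(key_comp, sample_shape)
                           for c in self.components], axis=-1)
        return jnp.take_along_axis(draws, idx[..., None], axis=-1).squeeze(-1)

    def log_prob(self, value):
        # log p(x) = logsumexp_k(log w_k + log p_k(x)), differentiable in value.
        comp_lps = jnp.stack([c.log_prob(value) for c in self.components], axis=-1)
        return logsumexp(jnp.log(self.mixing_probs) + comp_lps, axis=-1)
```

Usage would look like Mixture(jnp.array([0.3, 0.7]), [dist.Normal(-2., 1.), dist.Normal(3., 0.5)]).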
The current mixture density network (MDN) example from the Edward tutorials needs small modifications to run on Edward2. Documentation covering these modifications would be appreciated.
Hi,
It looks like there is support for lots of common distributions. There are a handful of other distributions which are not presently supported but could (fingers crossed) be easily implemented. Looking at [Stan's Function Reference] I see, for example (a sketch of the first follows this list):
- Beta Binomial
- [Chi-Square](https://mc-stan.org/docs/2
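Of these, the Beta-Binomial has a convenient closed-form log-pmf in terms of log-gamma functions. A minimal sketch (the helper names are illustrative, not an existing API):

```python
from jax.scipy.special import gammaln

def log_beta(x, y):
    # log B(x, y) = lgamma(x) + lgamma(y) - lgamma(x + y)
    return gammaln(x) + gammaln(y) - gammaln(x + y)

def beta_binomial_logpmf(k, n, a, b):
    # BB(k | n, a, b) = C(n, k) * B(k + a, n - k + b) / B(a, b)
    log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return log_choose + log_beta(k + a, n - k + b) - log_beta(a, b)
```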
Improve tests
Rather than trying to rebuild all functionality from Distributions.jl, we're first focusing on reimplementing logdensity (logpdf in Distributions), and delegating most other functions to the current Distributions implementations. So for example, we have

    distproxy(d::Normal{(:μ, :σ)}) = Dists.Normal(d.μ, d.σ)

This makes some functions in Distributions.jl available through
There are a variety of interesting optimisations that can be performed on kernels of the form

    k(x, z) = w_1 * k_1(x, z) + w_2 * k_2(x, z) + ... + w_L * k_L(x, z)

A naive recursive implementation in terms of the current Sum and Scaled kernels hides opportunities for parallelism, both in the computation of each term and in the summation over terms; a vectorised sketch follows this excerpt.
Notable examples of kernels with th
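A vectorised evaluation of that weighted sum might look like the following sketch; the kernels-as-callables interface here is a stand-in for illustration, not KernelFunctions.jl's actual API:

```python
import numpy as np

def weighted_sum_gram(kernels, weights, X, Z):
    # Each base kernel's Gram matrix is independent of the others, so the
    # stack below can be built in parallel; the weighted reduction is then a
    # single tensordot instead of L - 1 pairwise additions of intermediates.
    grams = np.stack([k(X, Z) for k in kernels])  # shape (L, n, m)
    return np.tensordot(weights, grams, axes=1)   # sum_i w_i * K_i
```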
GPU Support
After #210, it should be straightforward to add multi-pathfinder (ref: https://arxiv.org/pdf/2108.03782.pdf). The code snippet below mostly works (it still needs an implementation of Pareto-smoothed importance sampling).

    multi_pathfinder = jax.vmap(lambda rng_key, x: blackjax.vi.pathfinder.init(rng_key, logprob_fn, x))
    n_batch = 100
    rng_keys = jax.random.split(rng_key, n_batch)
    xs = w0 * j
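For the missing piece, plain (unsmoothed) importance resampling is a rough placeholder for PSIS; a hedged sketch (PSIS would additionally fit a generalized Pareto distribution to the largest weights to stabilise them):

```python
import jax
import jax.numpy as jnp

def importance_resample(key, samples, log_p, log_q, n_draws):
    # Unnormalised log importance weights of the approximation q against
    # the target p; categorical resampling self-normalises them.
    log_w = log_p - log_q
    idx = jax.random.categorical(key, log_w, shape=(n_draws,))
    return samples[idx]
```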
Correspondingly, import should load the predictor together with its code.
NumPyro now has several excellent introductory examples with no direct counterparts in Pyro. Porting one of these to Pyro would be a great way for someone to simultaneously learn more about Bayesian data analysis and make a valuable open-source contribution.
If you are reading this and want to give one of them a try, please leave a comment here so that other people know it is being worked on.