Automatic Differentiation
 
beta.hpp
#ifndef STAN_MATH_PRIM_FUN_BETA_HPP
#define STAN_MATH_PRIM_FUN_BETA_HPP

#include <stan/math/prim/meta.hpp>
#include <stan/math/prim/fun/lgamma.hpp>
#include <stan/math/prim/functor/apply_scalar_binary.hpp>
#include <cmath>

namespace stan {
namespace math {

// Return the beta function of two arithmetic arguments, computed on the
// log scale via lgamma for numerical stability.
template <typename T1, typename T2, require_all_arithmetic_t<T1, T2>* = nullptr>
inline return_type_t<T1, T2> beta(const T1 a, const T2 b) {
  using std::exp;
  return exp(lgamma(a) + lgamma(b) - lgamma(a + b));
}

// Vectorized overload: apply beta elementwise when either argument is a
// container (excluding var matrices, which have their own overload).
template <typename T1, typename T2, require_any_container_t<T1, T2>* = nullptr,
          require_all_not_var_matrix_t<T1, T2>* = nullptr>
inline auto beta(const T1& a, const T2& b) {
  return apply_scalar_binary(
      a, b, [](const auto& c, const auto& d) { return beta(c, d); });
}

}  // namespace math
}  // namespace stan

#endif
Referenced symbols:

return_type_t = typename return_type<Ts...>::type
    Convenience type for the return type of the specified template parameters.

fvar<T> lgamma(const fvar<T>& x)
    Return the natural logarithm of the gamma function applied to the specified argument. The lgamma implementation in stan-math is based on either the reentrant safe lgamma_r implementation ...
    Definition: lgamma.hpp:21

auto apply_scalar_binary(const T1& x, const T2& y, const F& f)
    Base template function for vectorization of binary scalar functions defined by applying a functor to ...

fvar<T> beta(const fvar<T>& x1, const fvar<T>& x2)
    Return fvar with the beta function applied to the specified arguments and its gradient.
    Definition: beta.hpp:51

fvar<T> exp(const fvar<T>& x)
    Definition: exp.hpp:13

fvar<T>
    Definition: fvar.hpp:9