Automatic Differentiation
 
log_softmax.hpp
#ifndef STAN_MATH_PRIM_FUN_LOG_SOFTMAX_HPP
#define STAN_MATH_PRIM_FUN_LOG_SOFTMAX_HPP

// (include directives elided in this listing)

namespace stan {
namespace math {

// (documentation comment elided in this listing)

template <typename Container, require_st_arithmetic<Container>* = nullptr,
          require_container_t<Container>* = nullptr>
inline auto log_softmax(const Container& x) {
  check_nonzero_size("log_softmax", "v", x);
  return make_holder(
      [](const auto& a) {
        return apply_vector_unary<ref_type_t<Container>>::apply(
            a, [](const auto& v) { return v.array() - log_sum_exp(v); });
      },
      to_ref(x));
}

}  // namespace math
}  // namespace stan
#endif
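The body above subtracts `log_sum_exp(v)` from every element of the input, which is the standard numerically stable way to compute a log softmax. A minimal sketch of the same computation in plain standard C++, without Eigen or the Stan holder machinery (helper names here are illustrative, not Stan's API):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Numerically stable log-sum-exp: shift by the max before exponentiating
// so that no intermediate exp() overflows.
inline double log_sum_exp(const std::vector<double>& v) {
  double m = *std::max_element(v.begin(), v.end());
  double sum = 0.0;
  for (double x : v) sum += std::exp(x - m);
  return m + std::log(sum);
}

// log_softmax(v)[i] = v[i] - log_sum_exp(v), mirroring the inner lambda in
// the listing above, which computes v.array() - log_sum_exp(v).
inline std::vector<double> log_softmax(const std::vector<double>& v) {
  double lse = log_sum_exp(v);
  std::vector<double> out(v.size());
  for (std::size_t i = 0; i < v.size(); ++i) out[i] = v[i] - lse;
  return out;
}
```

By construction, exponentiating the result recovers the softmax, so the exponentiated entries sum to one.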
auto make_holder(const F &func, Args &&... args)
Constructs an expression from the given arguments using the given functor.
Definition holder.hpp:352
ref_type_t< T && > to_ref(T &&a)
This evaluates expensive Eigen expressions.
Definition to_ref.hpp:17
void check_nonzero_size(const char *function, const char *name, const T_y &y)
Check if the specified matrix/vector is of non-zero size.
auto log_softmax(const T &x)
Return the log softmax of the specified vector or container of vectors.
constexpr decltype(auto) apply(F &&f, Tuple &&t, PreArgs &&... pre_args)
Definition apply.hpp:52
fvar< T > log_sum_exp(const fvar< T > &x1, const fvar< T > &x2)
Return the log of the sum of the exponentiated values of the specified arguments.
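The `fvar<T>` overload listed last propagates forward-mode tangents through `log_sum_exp`: the derivative of `log(e^a + e^b)` is the softmax weight of each argument. A rough sketch with a toy dual-number type (a stand-in for `stan::math::fvar`, not the real implementation):

```cpp
#include <cassert>
#include <cmath>

// Toy forward-mode dual number: value plus a tangent (directional
// derivative). Hypothetical, for illustration only.
struct Dual {
  double val;  // function value
  double d;    // tangent
};

// Two-argument log_sum_exp with its forward-mode derivative:
// d log(e^a + e^b) = wa * da + wb * db, where (wa, wb) are softmax weights.
inline Dual log_sum_exp(Dual a, Dual b) {
  double m = std::max(a.val, b.val);          // shift for stability
  double ea = std::exp(a.val - m);
  double eb = std::exp(b.val - m);
  double val = m + std::log(ea + eb);
  double wa = ea / (ea + eb);
  double wb = eb / (ea + eb);
  return Dual{val, wa * a.d + wb * b.d};
}
```

For example, with inputs `log 2` and `log 3`, the value is `log 5` and the weight on the first argument is 2/5.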