Automatic Differentiation
 
log_softmax.hpp
#ifndef STAN_MATH_OPENCL_PRIM_LOG_SOFTMAX_HPP
#define STAN_MATH_OPENCL_PRIM_LOG_SOFTMAX_HPP
#ifdef STAN_OPENCL

// (include directives elided in the generated listing)

namespace stan {
namespace math {

/**
 * Return the log softmax of the given kernel generator expression,
 * computed on the OpenCL device as x - log_sum_exp(x).
 *
 * @tparam T a non-scalar kernel generator expression type
 * @param a input expression; must have non-zero size
 * @return log softmax of the input as a matrix_cl<double>
 */
template <typename T,
          require_all_kernel_expressions_and_none_scalar_t<T>* = nullptr>
inline matrix_cl<double> log_softmax(const T& a) {
  check_nonzero_size("log_softmax (OpenCL)", "x", a);
  return make_holder_cl([](const auto& x) { return x - log_sum_exp(x); },
                        to_ref(a));
}

}  // namespace math
}  // namespace stan

#endif
#endif
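The holder expression constructed above applies the standard log-softmax identity on the device:

\[
\operatorname{log\_softmax}(x)_i = x_i - \log \sum_{j} \exp(x_j)
\]

Subtracting log_sum_exp(x) from the input directly is the numerically stable form of this computation; evaluating softmax(x) first and then taking its logarithm would underflow for strongly negative entries.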
Referenced symbols:

matrix_cl<double>: Represents an arithmetic matrix on the OpenCL device. (Definition: matrix_cl.hpp:47)
auto make_holder_cl(const T &func, Args &&... args): Constructs an expression from the given arguments using the given functor.
ref_type_t< T && > to_ref(T &&a): Evaluates expensive Eigen expressions. (Definition: to_ref.hpp:17)
void check_nonzero_size(const char *function, const char *name, const T_y &y): Check that the specified matrix/vector is of non-zero size.
auto log_softmax(const T &x): Return the log softmax of the specified vector or container of vectors.
fvar< T > log_sum_exp(const fvar< T > &x1, const fvar< T > &x2): Return the log of the sum of the exponentials of the arguments (forward-mode overload).
namespace stan: The lgamma implementation in stan-math is based on either the reentrant safe lgamma_r implementation ... (Definition: fvar.hpp:9)
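Below is a minimal usage sketch for this overload. It assumes Stan Math is compiled with STAN_OPENCL and linked against an OpenCL runtime; the <stan/math.hpp> include and the variable names are illustrative assumptions, while to_matrix_cl and from_matrix_cl are the library's host/device copy helpers.

// Build with -DSTAN_OPENCL (plus the usual OpenCL platform/device defines)
// and link against the OpenCL runtime.
#include <stan/math.hpp>
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Host-side input vector.
  Eigen::VectorXd x(3);
  x << 1.0, 2.0, 3.0;

  // Copy to the OpenCL device, apply the kernel-generator overload,
  // and copy the result back to the host.
  stan::math::matrix_cl<double> x_cl = stan::math::to_matrix_cl(x);
  stan::math::matrix_cl<double> y_cl = stan::math::log_softmax(x_cl);
  Eigen::MatrixXd y = stan::math::from_matrix_cl(y_cl);

  std::cout << y << std::endl;  // each entry equals x[i] - log(sum_j exp(x[j]))
  return 0;
}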