
Expression templates and ceil in Eigen dev branch

danielbrake (Registered Member):
Hi Eigen,
I use Eigen in combination with Boost.Multiprecision to solve polynomial systems over the complex numbers, for the Bertini 2 project located at https://github.com/bertiniteam/b2 . Ok, enough background.

Here's my current issue. With Eigen's current dev branch, checked out as of 2016.06.13 (as well as one I checked out 2016.06.01), I cannot print certain matrices to output streams when using Boost.Multiprecision wrappers around MPFR with expression templates turned ON. Importantly, this issue is NOT present in the 3.3-beta1 currently available from the Eigen website.
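
For context, the failure can be triggered by code of roughly this shape (a simplified sketch, not my actual code, which uses a custom bertini::complex scalar type and its own NumTraits glue):
Code: Select all
#include <iostream>
#include <boost/multiprecision/mpfr.hpp>
#include <Eigen/Dense>

// boost::multiprecision::mpfr_float is number<mpfr_float_backend<0>, et_on>,
// i.e. expression templates are ON: e.g. -log(x)/log(y) is an expression object.
using boost::multiprecision::mpfr_float;

int main()
{
  mpfr_float::default_precision(50);

  Eigen::Matrix<mpfr_float, Eigen::Dynamic, 1> v(3);
  v << 1, 2, 3;

  // operator<< determines the output precision via significant_decimals_impl,
  // which calls numext::ceil on -log(epsilon)/log(10); with et_on that
  // argument is an expression type, and the dev branch fails to compile here.
  std::cout << v << std::endl;
  return 0;
}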

I believe the clang error message is the most useful piece of evidence to post, so here it is:
Code: Select all
eigen/Eigen/Core:330:
/Users/ofloveandhate/repo/eigen/Eigen/src/Core/MathFunctions.h:970:10: error: no viable conversion from
      'expression<boost::multiprecision::detail::function,
      boost::multiprecision::detail::ceil_funct<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic> >,
      boost::multiprecision::detail::expression<boost::multiprecision::detail::negate,
      boost::multiprecision::detail::expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>, void, void, void>, [2 * ...]>' to
      'expression<boost::multiprecision::detail::negate, boost::multiprecision::detail::expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>, void, [2 * ...]>'
  return ceil(x);
         ^~~~~~~
/Users/ofloveandhate/repo/eigen/Eigen/src/Core/IO.h:134:41: note: in instantiation of function template specialization
      'Eigen::numext::ceil<boost::multiprecision::detail::expression<boost::multiprecision::detail::negate,
      boost::multiprecision::detail::expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>, void, void, void> >' requested here
    return cast<RealScalar,int>(numext::ceil(-numext::log(NumTraits<RealScalar>::epsilon())/numext::log(RealScalar(10))));
                                        ^
/Users/ofloveandhate/repo/eigen/Eigen/src/Core/IO.h:181:63: note: in instantiation of member function
      'Eigen::internal::significant_decimals_default_impl<bertini::complex, false>::run' requested here
      explicit_precision = significant_decimals_impl<Scalar>::run();
                                                              ^
/Users/ofloveandhate/repo/eigen/Eigen/src/Core/IO.h:246:20: note: in instantiation of function template specialization
      'Eigen::internal::print_matrix<Eigen::Matrix<bertini::complex, -1, 1, 0, -1, 1> >' requested here
  return internal::print_matrix(s, m.eval(), EIGEN_DEFAULT_IO_FORMAT);
                   ^
./include/bertini2/patch.hpp:437:9: note: in instantiation of function template specialization 'Eigen::operator<<<Eigen::Matrix<bertini::complex, -1, 1,
      0, -1, 1> >' requested here
                                out << c << "\n";
                                    ^
/usr/local/include/boost/multiprecision/detail/number_base.hpp:347:8: note: candidate constructor (the implicit copy constructor) not viable: no known
      conversion from 'typename enable_if_c<number_category<detail::expression<negate, expression<divide_immediates, number<mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, number<mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, void, void>, void, void, void> >::value == number_kind_floating_point,
      detail::expression<detail::function, detail::ceil_funct<typename detail::backend_type<detail::expression<negate, expression<divide_immediates,
      number<mpfr_float_backend<0, allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, number<mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>, void, void, void> >::type>, detail::expression<negate,
      expression<divide_immediates, number<mpfr_float_backend<0, allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>,
      number<mpfr_float_backend<0, allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>, void, void, void> > >::type'
      (aka 'boost::multiprecision::detail::expression<boost::multiprecision::detail::function,
      boost::multiprecision::detail::ceil_funct<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic> >,
      boost::multiprecision::detail::expression<boost::multiprecision::detail::negate,
      boost::multiprecision::detail::expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>, void, void, void>, void, void>') to 'const
      boost::multiprecision::detail::expression<boost::multiprecision::detail::negate,
      boost::multiprecision::detail::expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>, void, void, void> &' for 1st argument
struct expression<tag, Arg1, void, void, void>


So it appears some commit between 3.3-beta1 and June 1 2016 broke the ceil function. In particular, since Boost.Multiprecision with expression templates on returns another expression from its ceil, Eigen fails to compile against certain code. I'm not sure whether my particular code that triggers this error is useful, but I'll happily post it or link to the files in my repo if desired.

--
Brief aside.

While I was looking at Eigen/src/Core/MathFunctions.h:970 in an effort to ensure that there wasn't some obvious-to-me fix (there wasn't), I noticed a troubling (to me) #define at the top of that file, too: #define EIGEN_PI 3.141592653589793238462643383279502884197169399375105820974944592307816406L

This scares me because if I compute with more digits than that #define carries, Eigen may be using a truncated representation of pi, right? How is this #define used? I wanted to mention it before it slipped my mind.
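
To make the concern concrete, here is a stand-alone illustration of the literal itself being truncated (this says nothing about how Eigen actually uses EIGEN_PI internally, which is exactly my question; it uses Boost.Math's constants for the reference value):
Code: Select all
#include <iostream>
#include <iomanip>
#include <boost/multiprecision/mpfr.hpp>
#include <boost/math/constants/constants.hpp>

// The literal from MathFunctions.h; as a long double it carries at most
// ~18-21 significant decimal digits, regardless of how many are written.
#define EIGEN_PI 3.141592653589793238462643383279502884197169399375105820974944592307816406L

int main()
{
  using boost::multiprecision::mpfr_float;
  mpfr_float::default_precision(100);

  mpfr_float pi_from_literal = EIGEN_PI;                          // truncated to long double
  mpfr_float pi_full = boost::math::constants::pi<mpfr_float>();  // full working precision

  std::cout << std::setprecision(100)
            << pi_from_literal << "\n"
            << pi_full << "\n"
            // wrap in mpfr_float(...) to evaluate the et_on expression before streaming;
            // the difference is nonzero past roughly the 20th digit
            << mpfr_float(pi_full - pi_from_literal) << "\n";
  return 0;
}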

--

For completeness' sake, here are some related previous posts, all ultimately resolved. The Boost ticket is the most relevant, since it dealt with the interaction between expression templates and Eigen.

https://svn.boost.org/trac/boost/ticket/11149
https://forum.kde.org/viewtopic.php?f=74&t=133221
https://forum.kde.org/viewtopic.php?f=74&t=128222
danielbrake (Registered Member):
I will briefly follow up and mention that ceil() has given me problems in the past, too, separately from Eigen, so this is possibly a joint Eigen and Boost.Multiprecision problem. In particular, I located these two code samples within my own codebase:

Code: Select all
unsigned MinDigitsForLogOfStepsize(mpfr_float const& log_of_stepsize, mpfr const& current_time)
      {
         return mpfr_float(ceil(log_of_stepsize) + ceil(log10(abs(current_time))) + 2).convert_to<int>();
      }


and

Code: Select all
static unsigned TolToDigits(mpfr_float tol)
      {
         mpfr_float b = ceil(-log10(tol));
         return b.convert_to<unsigned int>();
      }


So my takeaway is that I am already passing the result of ceil(mpfr_float) through a constructor for another mpfr_float before doing anything else with it. I hope this is useful in finding a solution to this problem!
ggael (Moderator):
The regression was probably introduced by changeset 0cf5d7f1bff (for CUDA compatibility purposes):
Code: Select all
-    using std::ceil;
-    using std::log;
-    return cast<RealScalar,int>(ceil(-log(NumTraits<RealScalar>::epsilon())/log(RealScalar(10))));
+    return cast<RealScalar,int>(numext::ceil(-numext::log(NumTraits<RealScalar>::epsilon())/numext::log(RealScalar(10))));


where numext::ceil is implemented as:

Code: Select all
template<typename T>
EIGEN_DEVICE_FUNC
T (ceil)(const T& x)
{
  using std::ceil;
  return ceil(x);
}


I guess the problem is that, because of expression templates, T here is not an mpfr_float but some expression type, and thus the return value cannot be converted back to T. A workaround would be to call numext::ceil<RealScalar>(....), thus effectively disabling expression templates here.
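
To make the mismatch concrete, here is a simplified stand-alone sketch (my_ceil plays the role of numext::ceil; this is not the actual Eigen code):
Code: Select all
#include <boost/multiprecision/mpfr.hpp>

using mpfr_float = boost::multiprecision::mpfr_float;  // expression templates on

// Stand-in for numext::ceil: the return type is pinned to the deduced T.
template<typename T>
T my_ceil(const T& x)
{
  using std::ceil;
  return ceil(x);  // with et_on, ceil(...) yields yet another expression type,
                   // which does not convert back to T when T is itself an expression
}

int main()
{
  mpfr_float a(10), b(3);

  // Here T would be deduced as the expression type of a/b, and the return
  // statement above fails to compile, just like in the Eigen error:
  //   auto bad = my_ceil(a / b);

  // Forcing T to the concrete scalar evaluates the expression at the call
  // site, and everything converts fine -- the suggested workaround:
  mpfr_float good = my_ceil<mpfr_float>(a / b);
  (void)good;
  return 0;
}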

Of course, in C++14 we could properly implement numext::ceil, but in C++98 I'm not sure we can deduce the result type of the ADL call.
danielbrake (Registered Member):
ggael wrote:
A workaround would be to call numext::ceil<RealScalar>(....), thus effectively disabling expression templates here.


I tried that, and it worked. However, there is another problem I hit after compilation succeeded on my core, also related to expression templates, I believe. Here's the error:

Code: Select all
In file included from test/classes/eigen_test.cpp:34:
In file included from ./include/bertini2/eigen_extensions.hpp:42:
In file included from /Users/ofloveandhate/repo/eigen/Eigen/Core:330:
/Users/ofloveandhate/repo/eigen/Eigen/src/Core/MathFunctions.h:253:12: error: no viable conversion from 'expression<detail::multiplies,
      detail::expression<divide_immediates, number<mpfr_float_backend<0, allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>,
      number<mpfr_float_backend<0, allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>,
      detail::expression<divide_immediates, number<mpfr_float_backend<0, allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>,
      number<mpfr_float_backend<0, allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>, [2 * ...]>' to
      'expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, [2 * ...]>'
    return x*x;
           ^~~
/Users/ofloveandhate/repo/eigen/Eigen/src/Core/MathFunctions.h:275:68: note: in instantiation of member function
      'Eigen::internal::abs2_impl_default<boost::multiprecision::detail::expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void>, false>::run' requested here
    return abs2_impl_default<Scalar,NumTraits<Scalar>::IsComplex>::run(x);
                                                                   ^
/Users/ofloveandhate/repo/eigen/Eigen/src/Core/MathFunctions.h:907:45: note: in instantiation of member function
      'Eigen::internal::abs2_impl<boost::multiprecision::detail::expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void> >::run' requested here
  return EIGEN_MATHFUNC_IMPL(abs2, Scalar)::run(x);
                                            ^
/Users/ofloveandhate/repo/eigen/Eigen/src/QR/ColPivHouseholderQR.h:544:43: note: in instantiation of function template specialization
      'Eigen::numext::abs2<boost::multiprecision::detail::expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void> >' requested here
        RealScalar temp2 = temp * numext::abs2(m_colNormsUpdated.coeffRef(j) /
                                          ^
/Users/ofloveandhate/repo/eigen/Eigen/src/QR/ColPivHouseholderQR.h:463:3: note: in instantiation of member function
      'Eigen::ColPivHouseholderQR<Eigen::Matrix<bertini::complex, -1, -1, 0, -1, -1> >::computeInPlace' requested here
  computeInPlace();
  ^
/Users/ofloveandhate/repo/eigen/Eigen/src/SVD/JacobiSVD.h:226:12: note: in instantiation of function template specialization
      'Eigen::ColPivHouseholderQR<Eigen::Matrix<bertini::complex, -1, -1, 0, -1, -1> >::compute<Eigen::Matrix<bertini::complex, -1, -1, 0, -1, -1> >'
      requested here
      m_qr.compute(m_adjoint);
           ^
/Users/ofloveandhate/repo/eigen/Eigen/src/SVD/JacobiSVD.h:682:27: note: in instantiation of member function
      'Eigen::internal::qr_preconditioner_impl<Eigen::Matrix<bertini::complex, -1, -1, 0, -1, -1>, 2, 0, true>::run' requested here
    m_qr_precond_morecols.run(*this, m_scaledMatrix);
                          ^
/Users/ofloveandhate/repo/eigen/Eigen/src/SVD/JacobiSVD.h:544:7: note: in instantiation of member function
      'Eigen::JacobiSVD<Eigen::Matrix<bertini::complex, -1, -1, 0, -1, -1>, 2>::compute' requested here
      compute(matrix, computationOptions);
      ^
test/classes/eigen_test.cpp:339:78: note: in instantiation of member function 'Eigen::JacobiSVD<Eigen::Matrix<bertini::complex, -1, -1, 0, -1, -1>,
      2>::JacobiSVD' requested here
                Eigen::JacobiSVD<Eigen::Matrix<data_type, Eigen::Dynamic, Eigen::Dynamic>> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
                                                                                           ^
/usr/local/include/boost/multiprecision/detail/number_base.hpp:412:8: note: candidate constructor (the implicit copy constructor) not viable: no known
      conversion from 'detail::expression<detail::multiplies, detail::expression<divide_immediates, number<mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, number<mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, void, void>, detail::expression<divide_immediates, number<mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, number<mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, void, void> >' to 'const
      boost::multiprecision::detail::expression<boost::multiprecision::detail::divide_immediates,
      boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0, allocate_dynamic>,
      boost::multiprecision::expression_template_option::et_on>, boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<0,
      allocate_dynamic>, boost::multiprecision::expression_template_option::et_on>, void, void> &' for 1st argument
struct expression<tag, Arg1, Arg2, void, void>


I took your previous fix and applied it at the two sites needed to resolve the two similar errors, one of which is given above. The two sites were
Eigen/src/QR/ColPivHouseholderQR.h:493,
Eigen/src/QR/ColPivHouseholderQR.h:544.

The fix was numext::abs2 --> numext::abs2<RealScalar>.

With this, my entire computational core and test suites once again compile against 3.3, as of yesterday's pull. Will you please confirm whether these changes (to Core/IO and QR/ColPivHouseholderQR) will be incorporated into Eigen? Thanks very much for your help!
ggael (Moderator):
Actually, I'm not 100% sure yet how to properly handle this issue, as there are many more occurrences that would need to be fixed, and bypassing ET is not completely satisfactory.
danielbrake (Registered Member):
Ok, thanks for letting me know. I'll await your further reply! Thanks!
danielbrake (Registered Member):
Any further information on a possible fix?
danielbrake (Registered Member):
Rev 9265 still exhibits problems with ceil for me.
ggael (Moderator):
You might try the following patch with -std=c++14:

Code: Select all
diff --git a/Eigen/src/Core/MathFunctions.h b/Eigen/src/Core/MathFunctions.h
--- a/Eigen/src/Core/MathFunctions.h
+++ b/Eigen/src/Core/MathFunctions.h
@@ -243,39 +243,39 @@ struct conj_retval
 * Implementation of abs2                                                 *
 ****************************************************************************/
 
 template<typename Scalar,bool IsComplex>
 struct abs2_impl_default
 {
   typedef typename NumTraits<Scalar>::Real RealScalar;
   EIGEN_DEVICE_FUNC
-  static inline RealScalar run(const Scalar& x)
+  static inline EIGEN_CXX14_AUTO(RealScalar) run(const Scalar& x)
   {
     return x*x;
   }
 };
 
 template<typename Scalar>
 struct abs2_impl_default<Scalar, true> // IsComplex
 {
   typedef typename NumTraits<Scalar>::Real RealScalar;
   EIGEN_DEVICE_FUNC
-  static inline RealScalar run(const Scalar& x)
+  static inline EIGEN_CXX14_AUTO(RealScalar) run(const Scalar& x)
   {
     return real(x)*real(x) + imag(x)*imag(x);
   }
 };
 
 template<typename Scalar>
 struct abs2_impl
 {
   typedef typename NumTraits<Scalar>::Real RealScalar;
   EIGEN_DEVICE_FUNC
-  static inline RealScalar run(const Scalar& x)
+  static inline EIGEN_CXX14_AUTO(RealScalar) run(const Scalar& x)
   {
     return abs2_impl_default<Scalar,NumTraits<Scalar>::IsComplex>::run(x);
   }
 };
 
 template<typename Scalar>
 struct abs2_retval
 {
@@ -494,17 +494,17 @@ struct log1p_retval
 * Implementation of pow                                                  *
 ****************************************************************************/
 
 template<typename ScalarX,typename ScalarY, bool IsInteger = NumTraits<ScalarX>::IsInteger&&NumTraits<ScalarY>::IsInteger>
 struct pow_impl
 {
   //typedef Scalar retval;
   typedef typename ScalarBinaryOpTraits<ScalarX,ScalarY,internal::scalar_pow_op<ScalarX,ScalarY> >::ReturnType result_type;
-  static EIGEN_DEVICE_FUNC inline result_type run(const ScalarX& x, const ScalarY& y)
+  static EIGEN_DEVICE_FUNC inline EIGEN_CXX14_AUTO(result_type) run(const ScalarX& x, const ScalarY& y)
   {
     EIGEN_USING_STD_MATH(pow);
     return pow(x, y);
   }
 };
 
 template<typename ScalarX,typename ScalarY>
 struct pow_impl<ScalarX,ScalarY, true>
@@ -890,17 +890,17 @@ template<typename Scalar>
 EIGEN_DEVICE_FUNC
 inline EIGEN_MATHFUNC_RETVAL(conj, Scalar) conj(const Scalar& x)
 {
   return EIGEN_MATHFUNC_IMPL(conj, Scalar)::run(x);
 }
 
 template<typename Scalar>
 EIGEN_DEVICE_FUNC
-inline EIGEN_MATHFUNC_RETVAL(abs2, Scalar) abs2(const Scalar& x)
+inline EIGEN_CXX14_AUTO(EIGEN_MATHFUNC_RETVAL(abs2, Scalar)) abs2(const Scalar& x)
 {
   return EIGEN_MATHFUNC_IMPL(abs2, Scalar)::run(x);
 }
 
 template<typename Scalar>
 EIGEN_DEVICE_FUNC
 inline EIGEN_MATHFUNC_RETVAL(norm1, Scalar) norm1(const Scalar& x)
 {
@@ -918,17 +918,17 @@ template<typename Scalar>
 EIGEN_DEVICE_FUNC
 inline EIGEN_MATHFUNC_RETVAL(log1p, Scalar) log1p(const Scalar& x)
 {
   return EIGEN_MATHFUNC_IMPL(log1p, Scalar)::run(x);
 }
 
 template<typename ScalarX,typename ScalarY>
 EIGEN_DEVICE_FUNC
-inline typename internal::pow_impl<ScalarX,ScalarY>::result_type pow(const ScalarX& x, const ScalarY& y)
+inline EIGEN_CXX14_AUTO(typename internal::pow_impl<ScalarX EIGEN_COMMA ScalarY>::result_type) pow(const ScalarX& x, const ScalarY& y)
 {
   return internal::pow_impl<ScalarX,ScalarY>::run(x, y);
 }
 
 template<typename T> EIGEN_DEVICE_FUNC bool (isnan)   (const T &x) { return internal::isnan_impl(x); }
 template<typename T> EIGEN_DEVICE_FUNC bool (isinf)   (const T &x) { return internal::isinf_impl(x); }
 template<typename T> EIGEN_DEVICE_FUNC bool (isfinite)(const T &x) { return internal::isfinite_impl(x); }
 
@@ -936,33 +936,33 @@ template<typename Scalar>
 EIGEN_DEVICE_FUNC
 inline EIGEN_MATHFUNC_RETVAL(round, Scalar) round(const Scalar& x)
 {
   return EIGEN_MATHFUNC_IMPL(round, Scalar)::run(x);
 }
 
 template<typename T>
 EIGEN_DEVICE_FUNC
-T (floor)(const T& x)
+EIGEN_CXX14_AUTO(T) (floor)(const T& x)
 {
   EIGEN_USING_STD_MATH(floor);
   return floor(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float floor(const float &x) { return ::floorf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double floor(const double &x) { return ::floor(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC
-T (ceil)(const T& x)
+EIGEN_CXX14_AUTO(T) (ceil)(const T& x)
 {
   EIGEN_USING_STD_MATH(ceil);
   return ceil(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float ceil(const float &x) { return ::ceilf(x); }
@@ -992,191 +992,192 @@ inline int log2(int x)
   * It is essentially equivalent to \code using std::sqrt; return sqrt(x); \endcode,
   * but slightly faster for float/double and some compilers (e.g., gcc), thanks to
   * specializations when SSE is enabled.
   *
   * It's usage is justified in performance critical functions, like norm/normalize.
   */
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T sqrt(const T &x)
+EIGEN_CXX14_AUTO(T) sqrt(const T &x)
 {
   EIGEN_USING_STD_MATH(sqrt);
   return sqrt(x);
 }
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T log(const T &x) {
+EIGEN_CXX14_AUTO(T) log(const T &x) {
   EIGEN_USING_STD_MATH(log);
   return log(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float log(const float &x) { return ::logf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double log(const double &x) { return ::log(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-typename NumTraits<T>::Real abs(const T &x) {
+EIGEN_CXX14_AUTO(typename NumTraits<T>::Real)
+abs(const T &x) {
   EIGEN_USING_STD_MATH(abs);
   return abs(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float abs(const float &x) { return ::fabsf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double abs(const double &x) { return ::fabs(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T exp(const T &x) {
+EIGEN_CXX14_AUTO(T) exp(const T &x) {
   EIGEN_USING_STD_MATH(exp);
   return exp(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float exp(const float &x) { return ::expf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double exp(const double &x) { return ::exp(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T cos(const T &x) {
+EIGEN_CXX14_AUTO(T) cos(const T &x) {
   EIGEN_USING_STD_MATH(cos);
   return cos(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float cos(const float &x) { return ::cosf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double cos(const double &x) { return ::cos(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T sin(const T &x) {
+EIGEN_CXX14_AUTO(T) sin(const T &x) {
   EIGEN_USING_STD_MATH(sin);
   return sin(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float sin(const float &x) { return ::sinf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double sin(const double &x) { return ::sin(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T tan(const T &x) {
+EIGEN_CXX14_AUTO(T) tan(const T &x) {
   EIGEN_USING_STD_MATH(tan);
   return tan(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float tan(const float &x) { return ::tanf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double tan(const double &x) { return ::tan(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T acos(const T &x) {
+EIGEN_CXX14_AUTO(T) acos(const T &x) {
   EIGEN_USING_STD_MATH(acos);
   return acos(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float acos(const float &x) { return ::acosf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double acos(const double &x) { return ::acos(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T asin(const T &x) {
+EIGEN_CXX14_AUTO(T) asin(const T &x) {
   EIGEN_USING_STD_MATH(asin);
   return asin(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float asin(const float &x) { return ::asinf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double asin(const double &x) { return ::asin(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T atan(const T &x) {
+EIGEN_CXX14_AUTO(T) atan(const T &x) {
   EIGEN_USING_STD_MATH(atan);
   return atan(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float atan(const float &x) { return ::atanf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double atan(const double &x) { return ::atan(x); }
 #endif
 
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T cosh(const T &x) {
+EIGEN_CXX14_AUTO(T) cosh(const T &x) {
   EIGEN_USING_STD_MATH(cosh);
   return cosh(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float cosh(const float &x) { return ::coshf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double cosh(const double &x) { return ::cosh(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T sinh(const T &x) {
+EIGEN_CXX14_AUTO(T) sinh(const T &x) {
   EIGEN_USING_STD_MATH(sinh);
   return sinh(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float sinh(const float &x) { return ::sinhf(x); }
 
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 double sinh(const double &x) { return ::sinh(x); }
 #endif
 
 template<typename T>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-T tanh(const T &x) {
+EIGEN_CXX14_AUTO(T) tanh(const T &x) {
   EIGEN_USING_STD_MATH(tanh);
   return tanh(x);
 }
 
 #ifdef __CUDACC__
 template<> EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
 float tanh(const float &x) { return ::tanhf(x); }
 
diff --git a/Eigen/src/Core/arch/CUDA/Half.h b/Eigen/src/Core/arch/CUDA/Half.h
--- a/Eigen/src/Core/arch/CUDA/Half.h
+++ b/Eigen/src/Core/arch/CUDA/Half.h
@@ -395,62 +395,62 @@ EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC bo
 #else
   return (a.x & 0x7fff) > 0x7c00;
 #endif
 }
 EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC bool (isfinite)(const Eigen::half& a) {
   return !(Eigen::numext::isinf)(a) && !(Eigen::numext::isnan)(a);
 }
 
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half abs(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) abs(const Eigen::half& a) {
   Eigen::half result;
   result.x = a.x & 0x7FFF;
   return result;
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half exp(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) exp(const Eigen::half& a) {
   return Eigen::half(::expf(float(a)));
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half log(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) log(const Eigen::half& a) {
   return Eigen::half(::logf(float(a)));
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half sqrt(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) sqrt(const Eigen::half& a) {
   return Eigen::half(::sqrtf(float(a)));
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half pow(const Eigen::half& a, const Eigen::half& b) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) pow(const Eigen::half& a, const Eigen::half& b) {
   return Eigen::half(::powf(float(a), float(b)));
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half sin(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) sin(const Eigen::half& a) {
   return Eigen::half(::sinf(float(a)));
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half cos(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) cos(const Eigen::half& a) {
   return Eigen::half(::cosf(float(a)));
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half tan(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) tan(const Eigen::half& a) {
   return Eigen::half(::tanf(float(a)));
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half tanh(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) tanh(const Eigen::half& a) {
   return Eigen::half(::tanhf(float(a)));
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half floor(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) floor(const Eigen::half& a) {
   return Eigen::half(::floorf(float(a)));
 }
-template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half ceil(const Eigen::half& a) {
+template<> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC EIGEN_CXX14_AUTO(Eigen::half) ceil(const Eigen::half& a) {
   return Eigen::half(::ceilf(float(a)));
 }
 
-template <> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half mini(const Eigen::half& a, const Eigen::half& b) {
+/*template <>*/ EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half mini(const Eigen::half& a, const Eigen::half& b) {
 #if defined(EIGEN_HAS_CUDA_FP16) && defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 530
   return __hlt(b, a) ? b : a;
 #else
   const float f1 = static_cast<float>(a);
   const float f2 = static_cast<float>(b);
   return f2 < f1 ? b : a;
 #endif
 }
-template <> EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half maxi(const Eigen::half& a, const Eigen::half& b) {
+/*template <>*/ EIGEN_STRONG_INLINE EIGEN_DEVICE_FUNC Eigen::half maxi(const Eigen::half& a, const Eigen::half& b) {
 #if defined(EIGEN_HAS_CUDA_FP16) && defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 530
   return __hlt(a, b) ? b : a;
 #else
   const float f1 = static_cast<float>(a);
   const float f2 = static_cast<float>(b);
   return f1 < f2 ? b : a;
 #endif
 }
diff --git a/Eigen/src/Core/arch/SSE/MathFunctions.h b/Eigen/src/Core/arch/SSE/MathFunctions.h
--- a/Eigen/src/Core/arch/SSE/MathFunctions.h
+++ b/Eigen/src/Core/arch/SSE/MathFunctions.h
@@ -566,24 +566,24 @@ ptanh<Packet4f>(const Packet4f& _x) {
 }
 
 } // end namespace internal
 
 namespace numext {
 
 template<>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-float sqrt(const float &x)
+EIGEN_CXX14_AUTO(float) sqrt(const float &x)
 {
   return internal::pfirst(internal::Packet4f(_mm_sqrt_ss(_mm_set_ss(x))));
 }
 
 template<>
 EIGEN_DEVICE_FUNC EIGEN_ALWAYS_INLINE
-double sqrt(const double &x)
+EIGEN_CXX14_AUTO(double) sqrt(const double &x)
 {
 #if EIGEN_COMP_GNUC_STRICT
   // This works around a GCC bug generating poor code for _mm_sqrt_pd
   // See https://bitbucket.org/eigen/eigen/commits/14f468dba4d350d7c19c9b93072e19f7b3df563b
   return internal::pfirst(internal::Packet2d(__builtin_ia32_sqrtsd(_mm_set_sd(x))));
 #else
   return internal::pfirst(internal::Packet2d(_mm_sqrt_pd(_mm_set_sd(x))));
 #endif
diff --git a/Eigen/src/Core/util/Macros.h b/Eigen/src/Core/util/Macros.h
--- a/Eigen/src/Core/util/Macros.h
+++ b/Eigen/src/Core/util/Macros.h
@@ -451,16 +451,32 @@
       || ((__cplusplus >= 201103L) && (EIGEN_COMP_GNUC_STRICT || EIGEN_COMP_CLANG || EIGEN_COMP_ICC>=1400)) \
       || EIGEN_COMP_MSVC >= 1900)
     #define EIGEN_HAS_CXX11_NOEXCEPT 1
   #else
     #define EIGEN_HAS_CXX11_NOEXCEPT 0
   #endif
 #endif
 
+// Does the compiler support variadic templates?
+#ifndef EIGEN_HAS_CXX14
+#if EIGEN_MAX_CPP_VER>=14 && __cplusplus >= 201402L
+#define EIGEN_HAS_CXX14 1
+#else
+#define EIGEN_HAS_CXX14 0
+#endif
+#endif
+
+// This macro outputs auto if c++14 is available, and DEFAULT_TYPE otherwise
+#if EIGEN_HAS_CXX14
+#define EIGEN_CXX14_AUTO(DEFAULT_TYPE) auto
+#else
+#define EIGEN_CXX14_AUTO(DEFAULT_TYPE) DEFAULT_TYPE
+#endif
+
 /** Allows to disable some optimizations which might affect the accuracy of the result.
   * Such optimization are enabled by default, and set EIGEN_FAST_MATH to 0 to disable them.
   * They currently include:
   *   - single precision ArrayBase::sin() and ArrayBase::cos() for SSE and AVX vectorization.
   */
 #ifndef EIGEN_FAST_MATH
 #define EIGEN_FAST_MATH 1
 #endif
diff --git a/Eigen/src/SVD/BDCSVD.h b/Eigen/src/SVD/BDCSVD.h
--- a/Eigen/src/SVD/BDCSVD.h
+++ b/Eigen/src/SVD/BDCSVD.h
@@ -1047,17 +1047,17 @@ void BDCSVD<MatrixType>::deflation(Index
   const Index length = lastCol + 1 - firstCol;
   
   Block<MatrixXr,Dynamic,1> col0(m_computed, firstCol+shift, firstCol+shift, length, 1);
   Diagonal<MatrixXr> fulldiag(m_computed);
   VectorBlock<Diagonal<MatrixXr>,Dynamic> diag(fulldiag, firstCol+shift, length);
   
   const RealScalar considerZero = (std::numeric_limits<RealScalar>::min)();
   RealScalar maxDiag = diag.tail((std::max)(Index(1),length-1)).cwiseAbs().maxCoeff();
-  RealScalar epsilon_strict = numext::maxi(considerZero,NumTraits<RealScalar>::epsilon() * maxDiag);
+  RealScalar epsilon_strict = numext::maxi<RealScalar>(considerZero,NumTraits<RealScalar>::epsilon() * maxDiag);
   RealScalar epsilon_coarse = 8 * NumTraits<RealScalar>::epsilon() * numext::maxi<RealScalar>(col0.cwiseAbs().maxCoeff(), maxDiag);
   
 #ifdef EIGEN_BDCSVD_SANITY_CHECKS
   assert(m_naiveU.allFinite());
   assert(m_naiveV.allFinite());
   assert(m_computed.allFinite());
 #endif
 
diff --git a/Eigen/src/SVD/JacobiSVD.h b/Eigen/src/SVD/JacobiSVD.h
--- a/Eigen/src/SVD/JacobiSVD.h
+++ b/Eigen/src/SVD/JacobiSVD.h
@@ -722,17 +722,17 @@ JacobiSVD<MatrixType, QRPreconditioner>:
             // accumulate resulting Jacobi rotations
             m_workMatrix.applyOnTheLeft(p,q,j_left);
             if(computeU()) m_matrixU.applyOnTheRight(p,q,j_left.transpose());
 
             m_workMatrix.applyOnTheRight(p,q,j_right);
             if(computeV()) m_matrixV.applyOnTheRight(p,q,j_right);
 
             // keep track of the largest diagonal coefficient
-            maxDiagEntry = numext::maxi(maxDiagEntry,numext::maxi(abs(m_workMatrix.coeff(p,p)), abs(m_workMatrix.coeff(q,q))));
+            maxDiagEntry = numext::maxi<RealScalar>(maxDiagEntry,numext::maxi(abs(m_workMatrix.coeff(p,p)), abs(m_workMatrix.coeff(q,q))));
           }
         }
       }
     }
   }
 
   /*** step 3. The work matrix is now diagonal, so ensure it's positive so its diagonal entries are the singular values ***/
 
diff --git a/Eigen/src/SVD/SVDBase.h b/Eigen/src/SVD/SVDBase.h
--- a/Eigen/src/SVD/SVDBase.h
+++ b/Eigen/src/SVD/SVDBase.h
@@ -125,20 +125,19 @@ public:
     *
     * \note This method has to determine which singular values should be considered nonzero.
     *       For that, it uses the threshold value that you can control by calling
     *       setThreshold(const RealScalar&).
     */
   inline Index rank() const
   {
     using std::abs;
-    using std::max;
     eigen_assert(m_isInitialized && "JacobiSVD is not initialized.");
     if(m_singularValues.size()==0) return 0;
-    RealScalar premultiplied_threshold = (max)(m_singularValues.coeff(0) * threshold(), (std::numeric_limits<RealScalar>::min)());
+    RealScalar premultiplied_threshold = numext::maxi<RealScalar>(m_singularValues.coeff(0) * threshold(), (std::numeric_limits<RealScalar>::min)());
     Index i = m_nonzeroSingularValues-1;
     while(i>=0 && m_singularValues.coeff(i) < premultiplied_threshold) --i;
     return i+1;
   }
   
   /** Allows to prescribe a threshold to be used by certain methods, such as rank() and solve(),
     * which need to determine when singular values are to be considered nonzero.
     * This is not used for the SVD decomposition itself.
diff --git a/test/boostmultiprec.cpp b/test/boostmultiprec.cpp
--- a/test/boostmultiprec.cpp
+++ b/test/boostmultiprec.cpp
@@ -54,17 +54,17 @@
 #undef isnan
 #undef isinf
 #undef isfinite
 
 #include <boost/multiprecision/cpp_dec_float.hpp>
 #include <boost/multiprecision/number.hpp>
 
 namespace mp = boost::multiprecision;
-typedef mp::number<mp::cpp_dec_float<100>, mp::et_off> Real; // swith to et_on for testing with expression templates
+typedef mp::number<mp::cpp_dec_float<100>, mp::et_on> Real; // swith to et_on for testing with expression templates
 
 namespace Eigen {
   template<> struct NumTraits<Real> : GenericNumTraits<Real> {
     static inline Real dummy_precision() { return 1e-50; }
   };
 
   template<typename T1,typename T2,typename T3,typename T4,typename T5>
   struct NumTraits<boost::multiprecision::detail::expression<T1,T2,T3,T4,T5> > : NumTraits<Real> {};
diff --git a/test/eigensolver_selfadjoint.cpp b/test/eigensolver_selfadjoint.cpp
--- a/test/eigensolver_selfadjoint.cpp
+++ b/test/eigensolver_selfadjoint.cpp
@@ -14,17 +14,17 @@
 #include <Eigen/Eigenvalues>
 #include <Eigen/SparseCore>
 
 
 template<typename MatrixType> void selfadjointeigensolver_essential_check(const MatrixType& m)
 {
   typedef typename MatrixType::Scalar Scalar;
   typedef typename NumTraits<Scalar>::Real RealScalar;
-  RealScalar eival_eps = (std::min)(test_precision<RealScalar>(),  NumTraits<Scalar>::dummy_precision()*20000);
+  RealScalar eival_eps = numext::mini<RealScalar>(test_precision<RealScalar>(),  NumTraits<Scalar>::dummy_precision()*20000);
   
   SelfAdjointEigenSolver<MatrixType> eiSymm(m);
   VERIFY_IS_EQUAL(eiSymm.info(), Success);
   VERIFY_IS_APPROX(m.template selfadjointView<Lower>() * eiSymm.eigenvectors(),
                    eiSymm.eigenvectors() * eiSymm.eigenvalues().asDiagonal());
   VERIFY_IS_APPROX(m.template selfadjointView<Lower>().eigenvalues(), eiSymm.eigenvalues());
   VERIFY_IS_UNITARY(eiSymm.eigenvectors());
 
diff --git a/test/qr.cpp b/test/qr.cpp
--- a/test/qr.cpp
+++ b/test/qr.cpp
@@ -81,17 +81,17 @@ template<typename MatrixType> void qr_in
   for(int i = 0; i < size; i++) m1(i,i) = internal::random<Scalar>();
   RealScalar absdet = abs(m1.diagonal().prod());
   m3 = qr.householderQ(); // get a unitary
   m1 = m3 * m1 * m3;
   qr.compute(m1);
   VERIFY_IS_APPROX(log(absdet), qr.logAbsDeterminant());
   // This test is tricky if the determinant becomes too small.
   // Since we generate random numbers with magnitude rrange [0,1], the average determinant is 0.5^size
-  VERIFY_IS_MUCH_SMALLER_THAN( abs(absdet-qr.absDeterminant()), (max)(RealScalar(pow(0.5,size)),(max)(abs(absdet),abs(qr.absDeterminant()))) );
+  VERIFY_IS_MUCH_SMALLER_THAN( abs(absdet-qr.absDeterminant()), numext::maxi(RealScalar(pow(0.5,size)),numext::maxi<RealScalar>(abs(absdet),abs(qr.absDeterminant()))) );
   
 }
 
 template<typename MatrixType> void qr_verify_assert()
 {
   MatrixType tmp;
 
   HouseholderQR<MatrixType> qr;
danielbrake (Registered Member):
What commit should I be at before trying this? Thanks!
danielbrake (Registered Member):
My project uses C++14 anyway, so trying this out was no trouble at all. The patch didn't apply perfectly to the commit I worked from, number 9348, pulled July 25 2016. Nonetheless, the patch solved my compilation problem: I can compile my entire library and run my entire test suite again.

I am working on introducing expression templates for complexes in addition to the reals, so I will let you know if I encounter any more problems in this vein. Thanks for the help!
ggael (Moderator):
Finally, I went with explicit casts from expressions to the actual scalar type. Only a few changes were required regarding calls to min/max to make it work with all our matrix decompositions. See the new unit test:

https://bitbucket.org/eigen/eigen/src/a ... ew-default

for some required specializations.
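
The kind of glue involved is what the test/boostmultiprec.cpp hunk in the patch above already shows, roughly (a sketch; the final form in the repository may differ):
Code: Select all
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <boost/multiprecision/number.hpp>
#include <Eigen/Core>

namespace mp = boost::multiprecision;
typedef mp::number<mp::cpp_dec_float<100>, mp::et_on> Real;

namespace Eigen {
  template<> struct NumTraits<Real> : GenericNumTraits<Real> {
    static inline Real dummy_precision() { return 1e-50; }
  };

  // Give expression-template intermediates the traits of the underlying
  // scalar, so they can be explicitly cast back to Real where needed.
  template<typename T1, typename T2, typename T3, typename T4, typename T5>
  struct NumTraits<mp::detail::expression<T1, T2, T3, T4, T5> > : NumTraits<Real> {};
}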

