Registered Member
Hello,
I have a set of custom datatypes (type_i64, type_i32, type_i16r) for which I perform a function over an array and then accumulate the result in a smaller datatype. I perform the function by overloading the multiplication operator. Accumulating over the array is similar to matrix multiplication, but with an overloaded multiplication operator (see bottom). I'm not really sure what's going on, but for some reason the compiler is summing in the type_i64 type instead of type_i16r and generating the following error:

```
external/eigen_archive/eigen-eigen-334b1d428283/unsupported/Eigen/CXX11/../../../Eigen/src/Core/functors/BinaryFunctors.h:42:130: error: could not convert 'Eigen::operator+((* & a), (* & b))' from 'Eigen::type_i16r' to 'const result_type {aka const Eigen::type_i64}'
```

I think I need to disable the scalar_sum_op, but I'm not sure how. I tried using ScalarBinaryOpTraits, but I don't think my setup is correct; I get the following error:

```
/third_party/eigen3/unsupported/Eigen/CXX11/src/CustomOps/Types.h:39:58: error: type/value mismatch at argument 3 in template parameter list for 'template<class ScalarA, class ScalarB, class BinaryOp> struct Eigen::ScalarBinaryOpTraits'
 struct ScalarBinaryOpTraits<BUInt64,BUInt64,scalar_sum_op>
```

Any help would be greatly appreciated.

Cheers. This code is in the header file:

```cpp
//---- HEADER CODE START
namespace Eigen {

struct type_i64;
struct type_i32;
struct type_i16r;

template <> struct NumTraits<type_i64>  : GenericNumTraits<uint64_t> {};
template <> struct NumTraits<type_i32>  : GenericNumTraits<uint32_t> {};
template <> struct NumTraits<type_i16r> : GenericNumTraits<int16_t>  {};

namespace internal {

template <> struct scalar_product_traits<type_i64, type_i64> {
  enum { Defined = 1 };
  typedef type_i16r ReturnType;
};

template <> struct ScalarBinaryOpTraits<type_i64, type_i64, internal::scalar_sum_op> {
  enum { Defined = 0 };
};

}  // namespace internal
}  // namespace Eigen

struct type_i64 {
  type_i64() {}
  type_i64(const int64_t v) : value(v) {}
  operator int() const { return static_cast<int>(value); }
  int64_t value;
};

struct type_i32 {
  type_i32() {}
  type_i32(const int32_t v) : value(v) {}
  operator int() const { return static_cast<int>(value); }
  int32_t value;
};

struct type_i16r {
  type_i16r() {}
  type_i16r(const int8_t v) : value(v) {}
  type_i16r(const type_i32 v) : value(2 * __builtin_popcountl(v.value) - 32) {}
  type_i16r(const type_i64 v) : value(2 * __builtin_popcountll(v.value) - 64) {}
  // type_i16r(const float v) : value(static_cast<int16_t>(lrint(v))) {}
  // operator float() const { return static_cast<float>(value); }
  int16_t value;
};

EIGEN_STRONG_INLINE type_i16r operator+(const type_i16r a, const type_i16r b) {
  return a.value + b.value;
}

EIGEN_STRONG_INLINE type_i16r &operator+=(type_i16r &a, const type_i16r b) {
  a.value += b.value;
  return a;
}

EIGEN_STRONG_INLINE type_i16r operator*(const type_i64 a, const type_i64 b) {
  return foo(a.value ^ b.value);
}
//---- HEADER CODE END
```

And this is the loop code:

```cpp
// LOOP CODE
template <typename Index, typename LhsMapper, bool ConjugateLhs,
          typename RhsMapper, bool ConjugateRhs, int Version>
struct general_matrix_vector_product<Index, type_i64, LhsMapper, ColMajor,
                                     ConjugateLhs, type_i64, RhsMapper,
                                     ConjugateRhs, Version> {
  EIGEN_DONT_INLINE static void run(Index rows, Index cols, const LhsMapper &lhs,
                                    const RhsMapper &rhs, type_i16r *res,
                                    Index resIncr, type_i16r alpha);
};

template <typename Index, typename LhsMapper, bool ConjugateLhs,
          typename RhsMapper, bool ConjugateRhs, int Version>
EIGEN_DONT_INLINE void general_matrix_vector_product<
    Index, type_i64, LhsMapper, ColMajor, ConjugateLhs, type_i64, RhsMapper,
    ConjugateRhs, Version>::run(Index rows, Index cols, const LhsMapper &lhs,
                                const RhsMapper &rhs, type_i16r *res,
                                Index resIncr, type_i16r alpha) {
  eigen_assert(alpha.value == 1);
  eigen_assert(resIncr == 1);
  eigen_assert(rows > 0);
  eigen_assert(cols > 0);
  for (Index i = 0; i < rows; ++i) {
    for (Index j = 0; j < cols; ++j) {
      res[i] += lhs(i, j) * rhs(j, 0);
    }
  }
}
// END CODE
```
Moderator
This ScalarBinaryOpTraits is very new, and I'm glad that you came up with a fancy example to stress it (so far I've only tested it with complex vs real and auto-diff scalars). To make it work you need to update your clone, specialize ScalarBinaryOpTraits for scalar_product_op (internal::scalar_product_traits is obsolete), and add a few overloads of operator* to deal with alpha. See the working example below. Actually, you can also save yourself a lot of trouble by using A.lazyProduct(x) instead of A*x (no alpha stuff, no general_matrix_vector_product specializations, etc.).