Notes on Eigen Tensors

This post collects notes on the Tensor module in Eigen.

Eigen unsupported module: Tensor

Class Tensor<data_type, rank>

  • data_type: the element type, e.g. int or float
  • rank: the number of dimensions; a matrix, for example, has rank = 2

Assignment between tensors of different sizes is supported; the destination tensor is resized to match the source.

// tensor_1 has shape {2, 3}
// tensor_2 has shape {3, 5}
Tensor<float, 2> tensor_1(2, 3);
Tensor<float, 2> tensor_2(3, 5);
tensor_1 = tensor_2;  // tensor_1 now has shape {3, 5}

Tensor construction

// Create a tensor of rank 3 of sizes 2, 3, 4.  This tensor owns
// memory to hold 24 floating point values (24 = 2 x 3 x 4).
Tensor<float, 3> t_3d(2, 3, 4);

// Constructor where the sizes are specified as an array of values
// instead of an explicit list of parameters.
Tensor<string, 2> t_2d({5, 7});

TensorFixedSize<data_type, Sizes<size0, size1, ...>>

A tensor whose size is fixed at compile time.

//tensors of fixed size, where the size is known at compile time
// Fixed sized tensors can provide very fast computations
// If the total number of elements in a fixed size tensor is small enough
// the tensor data is held onto the stack and does not cause heap allocation and free.
TensorFixedSize<float, Sizes<4, 3>> t_4x3;

TensorMap<Tensor<data_type, rank>>

TensorMap constructs a tensor view on top of memory the user has already allocated. A TensorMap cannot be resized, because it does not own its memory. Its constructor is TensorMap<Tensor<data_type, rank>>(data, size0, size1, ...).

// Map a tensor of ints on top of stack-allocated storage.
int storage[128];  // 2 x 4 x 2 x 8 = 128
TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);

// The same storage can be viewed as a different tensor.
// You can also pass the sizes as an array.
TensorMap<Tensor<int, 2>> t_2d(storage, 16, 8);

// You can also map fixed-size tensors.  Here we get a 1d view of
// the 2d fixed-size tensor.
TensorFixedSize<float, Sizes<4, 3>> t_4x3;
TensorMap<Tensor<float, 1>> t_12(t_4x3.data(), 12);

Basic tensor operations

Element access and assignment

// Set the value of the element at position (0, 1, 0);
Tensor<float, 3> t_3d(2, 3, 4);
t_3d(0, 1, 0) = 12.0f;

// Initialize all elements to random values.
for (int i = 0; i < 2; ++i) {
  for (int j = 0; j < 3; ++j) {
    for (int k = 0; k < 4; ++k) {
      t_3d(i, j, k) = ...some random value...;
    }
  }
}

// Print elements of a tensor.
for (int i = 0; i < 2; ++i) {
  LOG(INFO) << t_3d(i, 0, 0);
}

Operations

The Tensor library provides a large set of operations, available as methods on the tensor classes, that can be used directly.

Tensor<float, 3> t1(2, 3, 4);
...set some values in t1...
Tensor<float, 3> t2(2, 3, 4);
...set some values in t2...
// Set t3 to the element wise sum of t1 and t2
Tensor<float, 3> t3 = t1 + t2;

Expression values can be obtained in three ways:

  1. Assign the expression to a Tensor, TensorFixedSize, or TensorMap
  2. Call the eval() method
  3. Assign the expression to a TensorRef

In the example below, the auto variables are lazy Operations, not Tensors; auto does not trigger evaluation. The computation only happens when the result is assigned to a Tensor.

auto t3 = t1 + t2;             // t3 is an Operation.
auto t4 = t3 * 0.2f;           // t4 is an Operation.
auto t5 = t4.exp();            // t5 is an Operation.
Tensor<float, 3> result = t5;  // The operations are evaluated.

// Assigning to a TensorFixedSize instead of a Tensor can be more efficient.
TensorFixedSize<float, Sizes<4, 4, 2>> result = t5;

When using the eval() method, note that eval() itself also returns an Operation. eval() forces a sub-expression to be evaluated into an intermediate result before the rest of the expression, much like parentheses raise precedence.

// The previous example could have been written:
Tensor<float, 3> result = ((t1 + t2) * 0.2f).exp();

// If you want to compute (t1 + t2) once ahead of time you can write:
Tensor<float, 3> result = ((t1 + t2).eval() * 0.2f).exp();

// Here t3 is an evaluation Operation.  t3 has not been evaluated yet.
auto t3 = (t1 + t2).eval();

// You can use t3 in another expression.  Still no evaluation.
auto t4 = (t3 * 0.2f).exp();

// The value is evaluated when you assign the Operation to a Tensor, using
// an intermediate tensor to represent t3.
Tensor<float, 3> result = t4;

If you do not need all the values of an expression, but only some of them (for example, only the first row of a computed matrix), you can assign the expression to a TensorRef.

// Create a TensorRef for the expression.  The expression is not
// evaluated yet.
TensorRef<Tensor<float, 3> > ref = ((t1 + t2) * 0.2f).exp();

// Use "ref" to access individual elements.  The expression is evaluated
// on the fly.
float at_0 = ref(0, 0, 0);
cout << ref(0, 1, 0);

Controlling how expressions are evaluated

The Tensor library is optimized for several environments: single-threaded CPU, multi-threaded CPU, and a single GPU using CUDA.

The default implementation is currently optimized for Intel CPUs; optimizations for ARM CPUs are planned.

The plain single-threaded CPU implementation:

Tensor<float, 2> a(30, 40);
Tensor<float, 2> b(30, 40);
Tensor<float, 2> c = a + b;

To choose an implementation, use the device() method. Three device types are currently supported: DefaultDevice, ThreadPoolDevice, and GPUDevice.

// This is exactly the same as not inserting a device() call.
DefaultDevice my_device;
c.device(my_device) = a + b;

// Evaluating with a Thread Pool
// Create the Eigen ThreadPoolDevice.
Eigen::ThreadPoolDevice my_device(4 /* number of threads to use */);

// Now just use the device when evaluating expressions. (This assumes a and b
// have contraction-compatible shapes and that dot_product_dims is defined.)
Eigen::Tensor<float, 2> c(30, 50);
c.device(my_device) = a.contract(b, dot_product_dims);

// Evaluating on a GPU is presently a bit more complicated than using a
// thread pool device: you need to create a GPUDevice and also explicitly
// allocate memory for the tensors with CUDA.
// To be continued

API

Data types

  • <Tensor-Type>::Dimensions represents the dimensions of a tensor
  • <Tensor-Type>::Index is the integer type used for indexing
  • <Tensor-Type>::Scalar is the element type, e.g. float

Built-in methods

Unlike Operations, these built-in methods are not lazy; they are evaluated immediately. They are available on all the tensor classes: Tensor, TensorFixedSize, and TensorMap.

Shape-related methods
  • int NumDimensions

    
      Eigen::Tensor<float, 2> a(3, 4);
      cout << "Dims " << a.NumDimensions;
      => Dims 2
    
  • Dimensions dimensions()

    
    // Returns an array-like object representing the dimensions of the tensor.
    Eigen::Tensor<float, 2> a(3, 4);
    const Eigen::Tensor<float, 2>::Dimensions& d = a.dimensions();
// or use auto to simplify the code in C++11
    // const auto& d = a.dimensions();
    cout << "Dim size: " << d.size << ", dim 0: " << d[0]
        << ", dim 1: " << d[1];
    => Dim size: 2, dim 0: 3, dim 1: 4
    
  • Index dimension(Index n)

    
    Eigen::Tensor<float, 2> a(3, 4);
    int dim1 = a.dimension(1);
    cout << "Dim 1: " << dim1;
    => Dim 1: 4
    
  • Index size()

    
    Eigen::Tensor<float, 2> a(3, 4);
    cout << "Size: " << a.size();
    => Size: 12
    
Value-setting methods
  • <Tensor-Type> setConstant(const Scalar& val)

    
    a.setConstant(12.3f);
    cout << "Constant: " << endl << a << endl << endl;
    =>
    Constant:
    12.3 12.3 12.3 12.3
    12.3 12.3 12.3 12.3
    12.3 12.3 12.3 12.3
    
  • <Tensor-Type> setZero()

    
    a.setZero();
    cout << "Zeros: " << endl << a << endl << endl;
    =>
    Zeros:
    0 0 0 0
    0 0 0 0
    0 0 0 0
    
  • <Tensor-Type> setValues({..initializer_list})

    
    Eigen::Tensor<float, 2> a(2, 3);
    a.setValues({{0.0f, 1.0f, 2.0f}, {3.0f, 4.0f, 5.0f}});
    cout << "a" << endl << a << endl << endl;
    =>
    a
    0 1 2
    3 4 5
    
    Eigen::Tensor<int, 2> a(2, 3);
    a.setConstant(1000);
    a.setValues({{10, 20, 30}});
    cout << "a" << endl << a << endl << endl;
    =>
    a
    10   20   30
    1000 1000 1000
    
  • <Tensor-Type> setRandom()

    
    a.setRandom();
    cout << "Random: " << endl << a << endl << endl;
    =>
    Random:
    0.680375    0.59688  -0.329554    0.10794
    -0.211234   0.823295   0.536459 -0.0452059
    0.566198  -0.604897  -0.444451   0.257742
    

    You can supply a custom number generator to setRandom(), as in the following example:

    
    // Custom number generator for use with setRandom().
    struct MyRandomGenerator {
      // Default and copy constructors. Both are needed.
      MyRandomGenerator() { }
      MyRandomGenerator(const MyRandomGenerator&) { }

      // Return a random value to be used.  "element_location" is the
      // location of the entry to set in the tensor; it can typically
      // be ignored.
      Scalar operator()(Eigen::DenseIndex element_location,
                        Eigen::DenseIndex /*unused*/ = 0) const {
        return <randomly generated value of type T>;
      }

      // Same as above but generates several numbers at a time.
      typename internal::packet_traits<Scalar>::type packetOp(
          Eigen::DenseIndex packet_location, Eigen::DenseIndex /*unused*/ = 0) const {
        return <a packet of randomly generated values>;
      }
    };
    

    Then simply call a.setRandom<MyRandomGenerator>(). Eigen ships two built-in generators: UniformRandomGenerator and NormalRandomGenerator.

Data access
  • Scalar* data()

    Returns a pointer to the tensor's underlying storage. The memory layout
    depends on whether the tensor uses `RowMajor` or `ColMajor` ordering.

    Eigen::Tensor<float, 2> a(3, 4);
    float* a_data = a.data();
    a_data[0] = 123.45f;
    cout << "a(0, 0): " << a(0, 0);
    => a(0, 0): 123.45
    
Tensor Operations
The operations in this section return unevaluated `tensor Operations` that can be composed together. Note that these operations are evaluated lazily.
Unary element-wise operations
  • <Operation> constant(const Scalar& val)

    Returns a tensor of the same shape as the original, with every element set to the given value.

    
    Eigen::Tensor<float, 2> a(2, 3);
    a.setConstant(1.0f);
    Eigen::Tensor<float, 2> b = a + a.constant(2.0f);
    Eigen::Tensor<float, 2> c = b * b.constant(0.2f);
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    cout << "c" << endl << c << endl << endl;
    =>
      a
      1 1 1
      1 1 1
    
      b
      3 3 3
      3 3 3
    
      c
      0.6 0.6 0.6
      0.6 0.6 0.6
    
  • <Operation> random()

    Returns a tensor with the same shape as the original, filled with random values.

    
    Eigen::Tensor<float, 2> a(2, 3);
    a.setConstant(1.0f);
    Eigen::Tensor<float, 2> b = a + a.random();
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    =>
    a
    1 1 1
    1 1 1
    
    b
    1.68038   1.5662  1.82329
    0.788766  1.59688 0.395103
    
  • <Operation> operator-()

    Negates every element.

    
    Eigen::Tensor<float, 2> a(2, 3);
    a.setConstant(1.0f);
    Eigen::Tensor<float, 2> b = -a;
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    =>
    a
    1 1 1
    1 1 1
    
    b
    -1 -1 -1
    -1 -1 -1
    
  • <Operation> sqrt()

  • <Operation> rsqrt()

  • <Operation> square()

  • <Operation> inverse()

  • <Operation> exp()

  • <Operation> log()

  • <Operation> abs()

  • <Operation> pow(Scalar exponent)

    
    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{0, 1, 8}, {27, 64, 125}});
    Eigen::Tensor<double, 2> b = a.cast<double>().pow(1.0 / 3.0);
    cout << "a" << endl << a << endl << endl;
    cout << "b" << endl << b << endl << endl;
    =>
    a
    0   1   8
    27  64 125
    
    b
    0 1 2
    3 4 5
    
  • <Operation> operator * (Scalar scale)

Binary element-wise operations

These take two tensors and operate on corresponding pairs of elements.

  • <Operation> operator+(const OtherDerived& other)

  • <Operation> operator-(const OtherDerived& other)

  • <Operation> operator*(const OtherDerived& other)

  • <Operation> operator/(const OtherDerived& other)

  • <Operation> cwiseMax(const OtherDerived& other)

    Returns a tensor of the element-wise maxima of the two tensors: result_i = max{t1_i, t2_i}.

  • <Operation> cwiseMin(const OtherDerived& other)

  • <Operation> Logical operators

    • operator&&(const OtherDerived& other)
    • operator||(const OtherDerived& other)
    • operator<(const OtherDerived& other)
    • operator<=(const OtherDerived& other)
    • operator>(const OtherDerived& other)
    • operator>=(const OtherDerived& other)
    • operator==(const OtherDerived& other)
    • operator!=(const OtherDerived& other)

    The comparison operators return a tensor of bool values.

Contraction

Tensor contractions generalize matrix multiplication to multiple dimensions.

// Create 2 matrices using tensors of rank 2
Eigen::Tensor<int, 2> a(2, 3);
a.setValues({{1, 2, 3}, {6, 5, 4}});
Eigen::Tensor<int, 2> b(3, 2);
b.setValues({{1, 2}, {4, 5}, {5, 6}});

// Compute the traditional matrix product
Eigen::array<Eigen::IndexPair<int>, 1> product_dims = { Eigen::IndexPair<int>(1, 0) };
Eigen::Tensor<int, 2> AB = a.contract(b, product_dims);

// Compute the product of the transpose of the matrices
Eigen::array<Eigen::IndexPair<int>, 1> transposed_product_dims = { Eigen::IndexPair<int>(0, 1) };
Eigen::Tensor<int, 2> AtBt = a.contract(b, transposed_product_dims);

// Contraction to scalar value using a double contraction.
// First coordinates of both tensors are contracted, as are both second
// coordinates, i.e., this computes the sum of the squares of the elements.
Eigen::array<Eigen::IndexPair<int>, 2> double_contraction_product_dims = { Eigen::IndexPair<int>(0, 0), Eigen::IndexPair<int>(1, 1) };
Eigen::Tensor<int, 0> AdoubleContractedA = a.contract(a, double_contraction_product_dims);

// Extracting the scalar value of the tensor contraction for further usage
int value = AdoubleContractedA(0);

Reduction operations

Reducing to one dimension:

// Create a tensor of 2 dimensions
Eigen::Tensor<int, 2> a(2, 3);
a.setValues({{1, 2, 3}, {6, 5, 4}});
// Reduce it along the second dimension (1)...
Eigen::array<int, 1> dims({1 /* dimension to reduce */});
// ...using the "maximum" operator.
// The result is a tensor with one dimension.  The size of
// that dimension is the same as the first (non-reduced) dimension of a.
Eigen::Tensor<int, 1> b = a.maximum(dims);
cout << "a" << endl << a << endl << endl;
cout << "b" << endl << b << endl << endl;
=>
a
1 2 3
6 5 4

b
3
6

Reducing along two dimensions at once:

Eigen::Tensor<float, 3, Eigen::ColMajor> a(2, 3, 4);
a.setValues({{{0.0f, 1.0f, 2.0f, 3.0f},
              {7.0f, 6.0f, 5.0f, 4.0f},
              {8.0f, 9.0f, 10.0f, 11.0f}},
             {{12.0f, 13.0f, 14.0f, 15.0f},
              {19.0f, 18.0f, 17.0f, 16.0f},
              {20.0f, 21.0f, 22.0f, 23.0f}}});
// The tensor a has 3 dimensions.  We reduce along the
// first 2, resulting in a tensor with a single dimension
// of size 4 (the last dimension of a.)
// Note that we pass the array of reduction dimensions
// directly to the maximum() call.
Eigen::Tensor<float, 1, Eigen::ColMajor> b =
    a.maximum(Eigen::array<int, 2>({0, 1}));
cout << "b" << endl << b << endl << endl;
=>
b
20
21
22
23

Reducing to a single value:

Eigen::Tensor<float, 3> a(2, 3, 4);
a.setValues({{{0.0f, 1.0f, 2.0f, 3.0f},
              {7.0f, 6.0f, 5.0f, 4.0f},
              {8.0f, 9.0f, 10.0f, 11.0f}},
             {{12.0f, 13.0f, 14.0f, 15.0f},
              {19.0f, 18.0f, 17.0f, 16.0f},
              {20.0f, 21.0f, 22.0f, 23.0f}}});
// Reduce along all dimensions using the sum() operator.
Eigen::Tensor<float, 0> b = a.sum();
cout << "b" << endl << b << endl << endl;
=>
b
276

  • <Operation> sum(const Dimensions& new_dims)
  • <Operation> sum()
  • <Operation> mean(const Dimensions& new_dims)
  • <Operation> mean()
  • <Operation> maximum(const Dimensions& new_dims)
  • <Operation> maximum()
  • <Operation> minimum(const Dimensions& new_dims)
  • <Operation> minimum()
  • <Operation> prod(const Dimensions& new_dims)
  • <Operation> prod()
  • <Operation> all(const Dimensions& new_dims)
  • <Operation> all() Returns true only if all elements are true
  • <Operation> any(const Dimensions& new_dims)
  • <Operation> any() Returns true if any element is true
  • <Operation> reduce(const Dimensions& new_dims, const Reducer& reducer) Supports user-defined reducers
Scan operations

Scan operations return a tensor with the same shape as the input tensor.

// Create a tensor of 2 dimensions
Eigen::Tensor<int, 2> a(2, 3);
a.setValues({{1, 2, 3}, {4, 5, 6}});
// Scan it along the second dimension (1) using summation
Eigen::Tensor<int, 2> b = a.cumsum(1);
// The result is a tensor with the same size as the input
cout << "a" << endl << a << endl << endl;
cout << "b" << endl << b << endl << endl;
=>
a
1 2 3
4 5 6

b
1  3  6
4  9 15

  • <Operation> cumsum(const Index& axis)
  • <Operation> cumprod(const Index& axis)

Convolutions
  • <Operation> convolve(const Kernel& kernel, const Dimensions& dims)

    Convolves the tensor with the given kernel along the specified dimensions. No padding is applied, so output_dim_size = input_dim_size - kernel_dim_size + 1.

    
    Tensor<float, 4, DataLayout> input(3, 3, 7, 11);
    Tensor<float, 2, DataLayout> kernel(2, 2);
    Tensor<float, 4, DataLayout> output(3, 2, 6, 11);
    input.setRandom();
    kernel.setRandom();

    Eigen::array<ptrdiff_t, 2> dims({1, 2});  // Convolve over the second and third dimensions.
    output = input.convolve(kernel, dims);

    for (int i = 0; i < 3; ++i) {
      for (int j = 0; j < 2; ++j) {
        for (int k = 0; k < 6; ++k) {
          for (int l = 0; l < 11; ++l) {
            const float result = output(i, j, k, l);
            const float expected = input(i, j + 0, k + 0, l) * kernel(0, 0) +
                                   input(i, j + 1, k + 0, l) * kernel(1, 0) +
                                   input(i, j + 0, k + 1, l) * kernel(0, 1) +
                                   input(i, j + 1, k + 1, l) * kernel(1, 1);
            VERIFY_IS_APPROX(result, expected);
          }
        }
      }
    }
    
Geometrical operations

These operations slice, rearrange, and pad the data.

  • <Operation> reshape(const Dimensions& new_dims)

    
    // Increase the rank of the input tensor by introducing a new dimension
    // of size 1.
    Tensor<float, 2> input(7, 11);
    array<int, 3> three_dims{{7, 11, 1}};
    Tensor<float, 3> result = input.reshape(three_dims);
    
    // Decrease the rank of the input tensor by merging 2 dimensions;
    array<int, 1> one_dim{{7 * 11}};
    Tensor<float, 1> result = input.reshape(one_dim);
    

    When the tensor's storage layout is ColMajor:

    
    Eigen::Tensor<float, 2, Eigen::ColMajor> a(2, 3);
    a.setValues({{0.0f, 100.0f, 200.0f}, {300.0f, 400.0f, 500.0f}});
    Eigen::array<Eigen::DenseIndex, 1> one_dim({3 * 2});
    Eigen::Tensor<float, 1, Eigen::ColMajor> b = a.reshape(one_dim);
    cout << "b" << endl << b << endl;
    =>
    b
    0
    300
    100
    400
    200
    500
    

    When the tensor's storage layout is RowMajor:

    
    Eigen::Tensor<float, 2, Eigen::RowMajor> a(2, 3);
    a.setValues({{0.0f, 100.0f, 200.0f}, {300.0f, 400.0f, 500.0f}});
    Eigen::array<Eigen::DenseIndex, 1> one_dim({3 * 2});
    Eigen::Tensor<float, 1, Eigen::RowMajor> b = a.reshape(one_dim);
    cout << "b" << endl << b << endl;
    =>
    b
    0
    100
    200
    300
    400
    500
    
  • <Operation> shuffle(const Shuffle& shuffle) Returns a copy of the input tensor with its dimensions permuted, e.g. shape {2, 3} can become {3, 2}

    
    // Shuffle all dimensions to the left by 1.
    Tensor<float, 3> input(20, 30, 50);
    // ... set some values in input.
    Tensor<float, 3> output = input.shuffle({1, 2, 0});
    
    eigen_assert(output.dimension(0) == 30);
    eigen_assert(output.dimension(1) == 50);
    eigen_assert(output.dimension(2) == 20);
    
  • <Operation> stride(const Strides& strides)

    
    Eigen::Tensor<int, 2> a(4, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500}, {600, 700, 800}, {900, 1000, 1100}});
    Eigen::array<Eigen::DenseIndex, 2> strides({3, 2});
    Eigen::Tensor<int, 2> b = a.stride(strides);
    cout << "b" << endl << b << endl;
    =>
    b
    0   200
    900  1100
    
  • <Operation> slice(const StartIndices& offsets, const Sizes& extents)

  Eigen::Tensor<int, 2> a(4, 3);
  a.setValues({{0, 100, 200}, {300, 400, 500},
  {600, 700, 800}, {900, 1000, 1100}});
  Eigen::array<int, 2> offsets = {1, 0};
  Eigen::array<int, 2> extents = {2, 2};
  Eigen::Tensor<int, 2> slice = a.slice(offsets, extents);
  cout << "a" << endl << a << endl;
  =>
  a
  0   100   200
  300   400   500
  600   700   800
  900  1000  1100
  cout << "slice" << endl << slice << endl;
  =>
  slice
  300   400
  600   700

  // Tensor4D is assumed to be an alias for Eigen::Tensor<double, 4>.
  Tensor4D a(2, 2, 3, 3);
  double* arr = a.data();
  for (int i = 0; i < 36; i++) {
    arr[i] = i;
  }
  Eigen::array<int, 4> offsets = {0, 0, 0, 0};  // start position of the slice
  Eigen::array<int, 4> extents = {1, 2, 2, 2};  // slice size along each dimension
  auto slice = a.slice(offsets, extents);       // slice is a rank-4 tensor
  cout << "a" << endl << a << endl;
  cout << "slice" << endl << slice << endl;
  =>
  a
  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17
  18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
  slice
  0  1  3  4  9 10 12 13

  • <Operation> chip(const Index offset, const Index dim) Returns a tensor with one fewer dimension than the input
    Eigen::Tensor<int, 2> a(4, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500},
             {600, 700, 800}, {900, 1000, 1100}});
    Eigen::Tensor<int, 1> row_3 = a.chip(2, 0);
    Eigen::Tensor<int, 1> col_2 = a.chip(1, 1);
    cout << "a" << endl << a << endl;
    =>
    a
    0   100   200
    300   400   500
    600   700   800
    900  1000  1100
    cout << "row_3" << endl << row_3 << endl;
    =>
    row_3
    600   700   800
    cout << "col_2" << endl << col_2 << endl;
    =>
    col_2
    100   400   700    1000

  • <Operation> reverse(const ReverseDimensions& reverse)
    Eigen::Tensor<int, 2> a(4, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500},
    {600, 700, 800}, {900, 1000, 1100}});
    Eigen::array<bool, 2> reverse({true, false});
    Eigen::Tensor<int, 2> b = a.reverse(reverse);
    cout << "a" << endl << a << endl << "b" << endl << b << endl;
    =>
    a
    0   100   200
    300   400   500
    600   700   800
    900  1000  1100
    b
    900  1000  1100
    600   700   800
    300   400   500
    0   100   200

  • <Operation> broadcast(const Broadcast& broadcast)
    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500}});
    Eigen::array<int, 2> bcast({3, 2});
    Eigen::Tensor<int, 2> b = a.broadcast(bcast);
    cout << "a" << endl << a << endl << "b" << endl << b << endl;
    =>
    a
    0   100   200
    300   400   500
    b
    0   100   200    0   100   200
    300   400   500  300   400   500
    0   100   200    0   100   200
    300   400   500  300   400   500
    0   100   200    0   100   200
    300   400   500  300   400   500

  • <Operation> pad(const PaddingDimensions& padding)

    Pads the tensor with zeros.

    Eigen::Tensor<int, 2> a(2, 3);
    a.setValues({{0, 100, 200}, {300, 400, 500}});
    Eigen::array<pair<int, int>, 2> paddings;
    paddings[0] = make_pair(0, 1);  // pad 0 before and 1 after along the first dimension
    paddings[1] = make_pair(2, 3);
    Eigen::Tensor<int, 2> b = a.pad(paddings);
    cout << "a" << endl << a << endl << "b" << endl << b << endl;
    =>
    a
    0   100   200
    300   400   500
    b
    0     0     0    0
    0     0     0    0
    0   100   200    0
    300   400   500    0
    0     0     0    0
    0     0     0    0
    0     0     0    0

Special operations
  • <Operation> cast<T>() Converts the element type
    
    Eigen::Tensor<float, 2> a(2, 3);
    Eigen::Tensor<int, 2> b = a.cast<int>();
    

Notes

  1. The default storage order of Tensor is column-major.
updated 2021-11-06