The official docs describe PyTorch tensors as analogous to NumPy's ndarrays, with the addition of GPU-accelerated operations. An ndarray is simply an N-dimensional array object, and many of the methods PyTorch provides closely mirror NumPy's. Below are several ways to create matrices, including empty, rand, zeros, and tensor; the object torch creates is a tensor.
The matrix created by empty is not an empty matrix but an uninitialized one, so its values are not necessarily 0.
import torch

x = torch.empty(5, 3)
print(x)
tensor([[ 1.1018e-08,  4.5818e-41, -1.2501e+11],
        [ 4.5916e-41,  4.9724e-17,  4.5818e-41],
        [ 4.9696e-17,  4.5818e-41,  1.3498e-08],
        [ 4.5818e-41,  1.3416e-08,  4.5818e-41],
        [ 4.9691e-17,  4.5818e-41,  4.9691e-17]])
The rand method generates a matrix initialized with random numbers; when no range is specified, the values are drawn uniformly from [0, 1).
x = torch.rand(3, 5)
print(x)
tensor([[ 0.9101,  0.5218,  0.6148,  0.2900,  0.3983],
        [ 0.8021,  0.3228,  0.2530,  0.3052,  0.6225],
        [ 0.0999,  0.2756,  0.8993,  0.1512,  0.0917]])
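As a side note, if values outside [0, 1) are needed, one common trick is to scale and shift rand's output, or to use torch.randint for integers. A rough sketch (the bounds below are chosen arbitrarily for illustration):

# uniform values in [a, b); a and b are arbitrary example bounds
a, b = -2.0, 5.0
u = (b - a) * torch.rand(3, 5) + a
print(u)

# random integers in [0, 10)
i = torch.randint(0, 10, (3, 5))
print(i)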
The zeros method, as the name suggests, creates a matrix of zeros; the dtype argument specifies the data type of the generated values.
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
tensor([[ 0,  0,  0],
        [ 0,  0,  0],
        [ 0,  0,  0],
        [ 0,  0,  0],
        [ 0,  0,  0]])
The tensor method builds a tensor directly from existing data (this was a bit confusing at first); the values appear rounded to four decimal places, but that is only the default print precision, not the stored precision.
x = torch.tensor([5.5, 3, 9.99999, 8.16])
print(x)
tensor([ 5.5000, 3.0000, 10.0000, 8.1600])
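To confirm that only the display is rounded, the print precision can be raised with torch.set_printoptions. A small sketch, not part of the original walkthrough:

torch.set_printoptions(precision=8)   # show more decimal places
print(x)                              # 9.99999 is no longer rounded away in the display
torch.set_printoptions(precision=4)   # restore the default display precision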
A tensor can also be created from an existing tensor; the new_* methods reuse its properties (such as dtype) unless they are overridden explicitly.

x = x.new_ones(5, 3, dtype=torch.double)    # new_* methods take in sizes
print(x)
x = torch.randn_like(x, dtype=torch.float)  # override dtype!
print(x)                                    # result has the same size
tensor([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.]], dtype=torch.float64)
tensor([[ 0.2710,  1.6084, -0.8171],
        [ 0.8977,  1.4150,  0.2287],
        [ 0.2518, -0.0245, -0.5036],
        [-0.5529, -0.1147, -1.3930],
        [-0.7114,  0.2698,  2.3081]])
Addition requires the matrices to have the same size, and their dtypes must also match for the operation to work; the add method behaves like the overloaded + operator. add also accepts extra arguments, such as out= for writing the result into an existing tensor (shown below).
y = torch.rand(5, 3, dtype=torch.float)
print(y)
z = x + y
print(z)
tensor([[ 0.1392,  0.6817,  0.1232],
        [ 0.6517,  0.5228,  0.1931],
        [ 0.8584,  0.7995,  0.4617],
        [ 0.1270,  0.0686,  0.8771],
        [ 0.1968,  0.9849,  0.8087]])
tensor([[ 0.4103,  2.2900, -0.6938],
        [ 1.5495,  1.9379,  0.4218],
        [ 1.1102,  0.7750, -0.0419],
        [-0.4260, -0.0461, -0.5159],
        [-0.5146,  1.2546,  3.1168]])
z = torch.add(x, y)
print(z)
tensor([[ 0.4103,  2.2900, -0.6938],
        [ 1.5495,  1.9379,  0.4218],
        [ 1.1102,  0.7750, -0.0419],
        [-0.4260, -0.0461, -0.5159],
        [-0.5146,  1.2546,  3.1168]])
output = torch.empty(5, 3)
print(output)
torch.add(x, y, out=output)
print(output)
tensor(1.00000e-08 *
       [[ 1.1018,  0.0000,  1.1018],
        [ 0.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000],
        [ 0.0000,  0.0000,  0.0000]])
tensor([[ 0.4103,  2.2900, -0.6938],
        [ 1.5495,  1.9379,  0.4218],
        [ 1.1102,  0.7750, -0.0419],
        [-0.4260, -0.0461, -0.5159],
        [-0.5146,  1.2546,  3.1168]])
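Besides out=, torch.add also takes an alpha argument that scales the second operand before adding. A quick sketch, added here for illustration:

z = torch.add(x, y, alpha=2)   # computes x + 2 * y
print(z)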
Among the provided methods, any method of the form operation_(arg), i.e. with a trailing underscore, works in place: after the operation it overwrites the tensor on which it is called.
y.add_(x)
print(y)
tensor([[ 0.4103,  2.2900, -0.6938],
        [ 1.5495,  1.9379,  0.4218],
        [ 1.1102,  0.7750, -0.0419],
        [-0.4260, -0.0461, -0.5159],
        [-0.5146,  1.2546,  3.1168]])
x.copy_(y)
print(x)
tensor([[ 0.4103,  2.2900, -0.6938],
        [ 1.5495,  1.9379,  0.4218],
        [ 1.1102,  0.7750, -0.0419],
        [-0.4260, -0.0461, -0.5159],
        [-0.5146,  1.2546,  3.1168]])
NumPy-style indexing also works:

print(x[:, 1])
tensor([ 2.2900, 1.9379, 0.7750, -0.0461, 1.2546])
To change the size of a matrix, use view, which is similar to resize/reshape.
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
print(x, y, z)
tensor([[-0.3858, -0.6874,  1.0538, -1.2053],
        [-0.2992,  0.7963,  0.5221, -1.0758],
        [-1.1194,  0.1516,  2.0523,  0.4788],
        [ 0.3233, -0.0533,  0.3937, -2.0091]])
tensor([-0.3858, -0.6874,  1.0538, -1.2053, -0.2992,  0.7963,  0.5221, -1.0758,
        -1.1194,  0.1516,  2.0523,  0.4788,  0.3233, -0.0533,  0.3937, -2.0091])
tensor([[-0.3858, -0.6874,  1.0538, -1.2053, -0.2992,  0.7963,  0.5221, -1.0758],
        [-1.1194,  0.1516,  2.0523,  0.4788,  0.3233, -0.0533,  0.3937, -2.0091]])
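One caveat worth noting here: view requires the tensor's memory layout to be contiguous, while reshape falls back to copying when it is not. A minimal sketch, added for illustration:

t = x.t()                     # transpose: same data, non-contiguous layout
# t.view(16)                  # would raise a RuntimeError because t is not contiguous
r = t.reshape(16)             # reshape copies when necessary, so this works
c = t.contiguous().view(16)   # or make the memory contiguous first, then view
print(r.size(), c.size())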
For a tensor holding a single value, the .item() method extracts it as a plain Python number.
x = torch.rand(1)
print(x)
print(x.item())
tensor([ 0.5580])
0.5579903721809387
A tensor can be converted to a NumPy array with the .numpy() method.

x = torch.ones(5)
print(x)
y = x.numpy()
print(y)
tensor([ 1.,  1.,  1.,  1.,  1.])
[1. 1. 1. 1. 1.]
x.add_(1)
print(x)
print(y)
tensor([ 2.,  2.,  2.,  2.,  2.])
[2. 2. 2. 2. 2.]
Notice that operating on x also changes y: the tensor and the NumPy array share the same underlying memory.
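If an independent copy is needed rather than a shared view, one option (a sketch, not part of the original walkthrough) is to clone the tensor before converting it:

y_copy = x.clone().numpy()   # clone() allocates new memory, so y_copy is independent
x.add_(1)
print(x)        # changes
print(y_copy)   # stays the same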
Conversion in the other direction uses torch.from_numpy.

import numpy as np

a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
[2. 2. 2. 2. 2.]
tensor([ 2.,  2.,  2.,  2.,  2.], dtype=torch.float64)
As you can see, a tensor created from a NumPy array is affected in the same way; the two share memory as well.
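The GPU acceleration mentioned at the beginning has not been demonstrated yet; a minimal sketch (assuming a CUDA-enabled PyTorch build and an available GPU) of moving tensors between devices:

if torch.cuda.is_available():
    device = torch.device("cuda")       # a CUDA device object
    x = torch.ones(5, device=device)    # create a tensor directly on the GPU
    y = torch.ones(5).to(device)        # or move an existing tensor to the GPU
    z = x + y                           # the addition runs on the GPU
    print(z)
    print(z.to("cpu", torch.double))    # move back to the CPU and change dtype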