Third-Rate Mayavi Operations - Mayavi 2.2.1 - Data Representation

In the spirit of learning, writing and tinkering all at once, the rough work continues. Real mastery only comes from explaining things to someone else. First, the online course:
https://www.icourse163.org/course/BIT-1001871001
Official Mayavi documentation:
http://docs.enthought.com/mayavi/mayavi/genindex.html
(The site sometimes plays dead; it usually comes back to life after a few hours.)
I have realized that third-rate operations alone are not enough; fourth-rate translation has to be thrown in as well.

.

2.0 Overview (1)

2.1 mlab: figure visualization and image operations

2.1.0 Plotting functions

2.1.0.0 Plotting functions: common parameters
2.1.0.1 .barchart
2.1.0.2 .mesh, .triangular_mesh
2.1.0.3 .imshow, .plot3d
2.1.0.4 .surf, .contour_surf, .contour3d
2.1.0.5 .points3d, .quiver3d
2.1.0.6 .flow, .volume_slice
2.1.0.7 Additions, comparisons and summary
2.1.0.8 Common plotting tricks

2.1.1 Figure control

2.1.1.0

2.1.2 Figure decoration
2.1.3 Camera control
2.1.4 Miscellaneous

2.2 API operations on pipeline objects and window objects

2.2.0 Overview
2.2.1 Data representation: *current position
2.2.2 Script recording

This section is mainly a translation of
http://docs.enthought.com/mayavi/mayavi/data.html
It is a preparatory chapter for the pipeline material; I try to stay close to the layout of the official Mayavi original.
Note: this is a document from the "Advanced use of Mayavi" part of the Mayavi documentation. It is placed here for now; digest as much of it as you can.
.
.

1. Data representation in Mayavi

Describing data in three dimensions in the general case is a complex problem. Mayavi helps you focus on your visualization work and not worry too much about the underlying data structures, for instance using mlab (see mlab: Python scripting for 3D plotting). We suggest you create sources for Mayavi using mlab or Mayavi sources when possible. However, it helps to understand the VTK data structures that Mayavi uses if you want to create data with a specific structure for a more efficient visualization, or if you want to extract the data from the Mayavi pipeline.

Outline

1. Introduction to TVTK datasets
2. The flow of data
3. Retrieving the data from Mayavi pipelines
4. Dissection of the different TVTK datasets
5. Inserting TVTK datasets in the Mayavi pipeline


Mayavi data sources and VTK datasets

1. When you load a file, or you expose data in Mayavi using one of the mlab.pipeline source functions (see Data sources), you create an object in the Mayavi pipeline that is attached to a scene. This object is a Mayavi source, and serves to describe the data and its properties to the Mayavi pipeline.
2. The internal structures used to represent the data in 3D all across Mayavi are VTK datasets, as described below.

One should not confuse VTK (or TVTK) datasets and Mayavi data sources. There is only a small, finite number of dataset types. However, many different pipeline objects can be constructed to sit in the pipeline below a scene and provide datasets to the pipeline.

-------------------marker-------------------------------------------------------
This needs some explanation; what exactly the distinction is has not been spelled out:
"you create an object in the Mayavi pipeline that is attached to a scene."
Marker 1
-------------------marker-------------------------------------------------------

Introduction to TVTK datasets

Mayavi uses the VTK library for all its visualization needs, via TVTK (Traited VTK). The data is exposed internally, by the sources, or at the output of the filters, as VTK datasets, described below. Understanding these structures is useful not only to manipulate them, but also to understand what happens when using filters to transform the data in the pipeline.

[Aside]

On VTK versus TVTK: put simply, TVTK is Traited VTK, i.e. VTK wrapped with Traits.

Marker 2

[End of aside]

A dataset is defined by many different characteristics:

[Figure: diagram of the characteristics that define a dataset (from the official documentation)]

Connectivity:

Connectivity is not only necessary to draw lines between the different points, it is also needed to define a volume.

Implicit connectivity: connectivity or positioning is implicit. In this case the data is considered as arranged on a lattice-like structure, with an equal number of layers in each direction, x increasing first along the array, then y and finally z.

Data:

Datasets are made of points positioned in 3D, with the corresponding data. Each dataset can carry several data components.

Scalar or vector data: The data can be scalar, in which case VTK can perform operations such as taking the gradient and display the data with a colormap, or vector, in which case VTK can perform an integration to display streamlines, display the vectors, or extract the norm of the vectors, to create a scalar dataset.

Cell data and point data: Each VTK dataset is defined by vertices and cells, explicitly or implicitly. The data, scalar or vector, can be positioned either on the vertices, in which case it is called point data, or associated with a cell, in which case it is called cell data. Point data is stored in the .point_data attribute of the dataset, and the cell data is stored in the .cell_data attribute.

In addition the data arrays have an associated name, which is used in Mayavi to specify the data component a module or filter applies to (e.g. using the SetActiveAttribute filter).
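To make this concrete, here is a minimal sketch of my own (not from the original document) that pokes at these attributes on a source built from a random numpy array; the hasattr check is only there because, as noted further down in this post, newer Mayavi versions expose an algorithm in outputs rather than the dataset itself:

import numpy as np
from mayavi import mlab

data = np.random.random((10, 10, 10))
src = mlab.pipeline.scalar_field(data)               # Mayavi source wrapping an ImageData dataset

out = src.outputs[0]
# Unwrap the dataset if outputs[0] is an algorithm rather than the dataset (version dependent).
dataset = out if hasattr(out, 'point_data') else out.get_output(0)
print(dataset.point_data.scalars.name)               # name of the point-data array
print(dataset.point_data.scalars.to_array().shape)   # flat array of values, here (1000,)
print(dataset.cell_data.scalars)                     # no cell data was attached -> None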

-------------------marker-------------------------------------------------------
This needs some explanation: the dataset vs. data source distinction, and the statement that each dataset can carry several data components, are not made clear.
Marker 3
-------------------marker-------------------------------------------------------

Note: VTK array ordering

All VTK arrays, whether for data or positions, are exposed as (n, 3) numpy arrays for 3D components, and flat (n,) arrays for 1D components. The indices vary in the opposite order to numpy: z first, then y, and finally x. Thus to go from a 3D numpy array to the corresponding flattened VTK array, the operation is:

vtk_array = numpy_array.T.ravel()
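A small sanity check of this ordering (my own addition, with an arbitrary shape chosen just for illustration); going back from the flat VTK ordering to the original numpy array simply reverses the operation:

import numpy as np

numpy_array = np.arange(24).reshape((2, 3, 4))        # axes ordered (x, y, z)
vtk_array = numpy_array.T.ravel()                     # flat VTK ordering: x varies fastest

# Going back: reshape with the axes reversed, then transpose again.
restored = vtk_array.reshape(numpy_array.shape[::-1]).T
assert (restored == numpy_array).all()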

A complete list of the VTK datasets used by Mayavi is given below, after a tour of the Mayavi pipeline.

The flow of data

As described earlier, Mayavi builds visualizations by assembling pipelines, where the data is loaded into Mayavi by a data source, and can then be transformed by filters and visualized by modules.

To retrieve the data displayed by Mayavi, to modify it via Python code, or to benefit from the data processing steps performed by the Mayavi filters, it can be useful to "open up" the Mayavi pipeline and understand how the data flows in it.

Inside the Mayavi pipeline, the 3D data flowing between sources, filters and modules is stored in VTK datasets. Each source or filter has an outputs attribute, which is a list of VTK datasets describing the data output by the object.

For example:

import numpy as np
from mayavi import mlab
data = np.random.random((10, 10, 10))
iso = mlab.contour3d(data)

The parent of iso is its 'Colors and legend' node, and the parent of that node is the source feeding into iso:

iso.parent.parent.outputs

>>>[<tvtk_classes.image_data.ImageData object at 0xf08220c>]

[Aside]

A few extra words: the call above draws an iso-surface and assigns the resulting object to iso.
mlab.contour3d(data) sets up everything from the data source, through the filter, to the final visualization Module - a one-stop service.

[Figure: the rendered iso-surfaces of the random volume]

The content of iso is <mayavi.modules.iso_surface.IsoSurface object at 0x000001A48EEC7258>, which is its level in the hierarchy; looking at the pipeline view you can see that the final visualization Module is indeed an IsoSurface.
With that, the whole one-stop pipeline is built, and because it is assigned to iso you can walk back up from iso to reach the other nodes.

That is exactly what the expression iso.parent.parent does.

[Figure: the pipeline tree in the Mayavi engine view]
Note: the sources are not the topmost level.

For comparison, going down from the top-level node the hierarchy is (not expanded here; see the reference documentation for details):
<mayavi.sources.array_source.ArraySource object at 0x000002377115D048>
This level is the data source, corresponding to the 'ScalarField' node in the figure above.

<mayavi.core.module_manager.ModuleManager object at 0x0000023771169B48>
Next comes the ModuleManager, corresponding to 'Colors and legend'.

<mayavi.modules.iso_surface.IsoSurface object at 0x0000023771169410>
Only then comes the visualization module, IsoSurface.

Now a word about iso.parent.parent.outputs:

The parent of iso's parent is the source level, and outputs is one of its attributes; the documentation describes it concisely:
[<tvtk.tvtk_classes.image_data.ImageData object at 0x000002377115DF10>]
"List of outputs produced by the source. These are TVTK datasets, that are explained in the section Data representation in Mayavi."
So the outputs attribute returns a list; the sentence above is taken from the documentation. Note that not only Sources have this attribute, Filters have it as well. The returned value shows that it is an ImageData object, and making sense of such things is the whole point of working through this document.

If you leave out one level, you get an error - don't ask how I know:
AttributeError: 'ModuleManager' object has no attribute 'outputs'

[End of aside]

Thus we can see that the Mayavi source created by mlab.contour3d exposes an ImageData VTK dataset.

Note: To retrieve the VTK datasets feeding into an arbitrary object, the mlab function pipeline.get_vtk_src() may be useful. In the above example:
mlab.pipeline.get_vtk_src(iso)
>>>[<tvtk_classes.image_data.ImageData object at 0xf08220c>]

[Aside]

mlab.pipeline.get_vtk_src(iso) and iso.parent.parent.outputs return the same thing in the end. The former gets the ImageData source object feeding iso directly, while the latter climbs the hierarchy level by level until it reaches the source. The catch with the latter is that you need to know how many levels there are: for example, when surf builds the whole one-stop pipeline, it applies two filter transformations in the middle, which adds an extra level of nodes.

[End of aside]

Retrieving the data from Mayavi pipelines

[Aside]: the previous part was about walking the hierarchy; from here on it is about extracting the data itself.

Probing data at given positions

If you simply want to retrieve the data values described by a Mayavi object at a given position in space, you can use the pipeline.probe_data() function (warning: the probe_data function is new in Mayavi 3.4.0).

For example, if you have a set of irregularly spaced data points with no connectivity information:

x, y, z = np.random.random((3, 100))   # three random arrays of length 100
data = x**2 + y**2 + z**2              # still an array of length 100

You can expose them as a Mayavi source of unconnected points:

src = mlab.pipeline.scalar_scatter(x, y, z, data)
.

[Aside]

First, the scalar_scatter function (see the mlab.pipeline reference for details):

Creates scattered scalar data.
The call signatures are:
scalar_scatter(s, ...)
scalar_scatter(x, y, z, s, ...)
scalar_scatter(x, y, z, f, ...)

Then the probe_data function (used below); quoting briefly from its documentation:

Retrieve the data described by a Mayavi visualization object at points x, y, z.
mayavi.tools.pipeline.probe_data(mayavi_object, x, y, z, type='scalars', location='points')

Of the two, one builds data and the other extracts it.
The content of src looks like this: <mayavi.sources.vtk_data_source.VTKDataSource object at 0x00000237681D22B0>

[End of aside]

And visualize these points for debugging:

pts = mlab.pipeline.glyph(src, scale_mode='none', scale_factor=.1)

[Aside]
This uses the glyph function (see its documentation for details):

Applies the Glyph mayavi module to the given VTK data source.

This part is quite clear: a Module is a visualization module, and the final operations such as coloring happen on it. It has many parameters - color, colormap, extent - the same ones as for surf and mesh earlier. In fact, the optional parameters of the one-stop mlab functions all belong to the Module; the intermediate steps are hidden and the filters are configured automatically.
Looking back: the first step used scalar_scatter to build the VTK data source; the second step, this one, sets the rendering details, and at this point we already have a picture.

[Figure: the glyph rendering of the scattered points]
[End of aside]

The resulting data is not defined in the volume, but only at the given positions: as there is no connectivity information, Mayavi cannot interpolate between the points:

mlab.pipeline.probe_data(pts, .5, .5, .5)

>>>array([ 0. ])

[Aside] probe_data finds no value here; values only exist where points were defined above, i.e. the little dots in the figure. On my machine VTK spits out a pile of errors at this point, but it still outputs a 0 anyway.

To define volumetric data, you can use a delaunay3d filter:

field = mlab.pipeline.delaunay3d(src)

[Aside]

If you type in this code as-is, congratulations: the creation fails. That may be because I am on version 4.5.0. The normal idea is that src enters the filter level as the data source, delaunay3d transforms the data, and you then visualize under the newly created filter.

[Figure: the failure as seen in the pipeline view]

For the code to run, the pipeline has to be configured at least like this (ignore the two Surface entries; they are there for the next figure):

[Figure: pipeline with Delaunay3D inserted below ScalarScatter]

Delaunay3D is inserted below ScalarScatter; the problem is that there is no Module, but adding one by hand works fine.

[Figure: pipeline with Surface modules added manually]
I added Surface visualization modules here; for good measure I added two - one with opacity set so the generated surface is easy to see, and one with representation set to wireframe so the lines connecting the points are easy to observe.

[End of aside]

Now you can probe the value of the volumetric data anywhere. It will be non-zero inside the convex hull of the points:

[Aside]

The code below probes the data inside and outside the point cloud; note the numpy.allclose function.
It compares data with data_probed, where data_probed is obtained after the delaunay3d filter transformation. The point is simply that the transformation does not change the original values; on top of that, it fills in values in between. The subsequent probe_data() calls then return those in-between values, which are no longer empty.

[End of aside]
.
# Probe in the center of the cloud of points
mlab.pipeline.probe_data(field, .5, .5, .5)

array([ 0.78386768])  # the value I got was [0.7767401]

# Probe on the initial points
data_probed = mlab.pipeline.probe_data(field, x, y, z)
np.allclose(data, data_probed)

True

# Probe outside the cloud
mlab.pipeline.probe_data(field, -.5, -.5, -.5)

array([ 0.])

Inspecting the internals of the data structures

You may be interested in the data carried by the TVTK datasets themselves, rather than the values they represent, for instance to replicate them. For this, you can retrieve the TVTK datasets, and inspect them.

Extracting data points and values

The positions of all the points of a TVTK dataset can be accessed via its points attribute. Retrieving the dataset from the field object of the previous example, we can view the data points:

dataset = field.outputs[0]
dataset.points

[(0.72227946564137335, 0.23729151639368518, 0.24443798107195291), ...,
(0.13398528550831601, 0.80368395047618579, 0.31098842991116804)], length = 100
.
[Aside]

The command above no longer works; it has to be modified:
dataset = field.outputs[0]
dataset.get_output().points

[(0.07492927473776878, 0.9790978297714724, 0.20491368359173223), ..., (0.904629265877752, 0.43634926499914983, 0.2290535325644406)], length = 100

The get_output() method cannot be found in the Mayavi documentation; you have to go digging in VTK for it, which is rather painful.
A few more words here about the types that are returned; let me spell it out:

1. field itself is a filter object: <mayavi.filters.delaunay3d.Delaunay3D object at 0x0000029C61E63BF8>
2. After dataset = field.outputs[0], dataset is a TVTK algorithm object: <tvtk.tvtk_classes.delaunay3d.Delaunay3D object at 0x0000029C61E63E08>
3. dataset.get_output(), whose argument can only be 0 here, returns <tvtk.tvtk_classes.unstructured_grid.UnstructuredGrid object at 0x0000029C719AFD00>
4. Note the unstructured_grid; checking the type of its points attribute with type() finally gives <class 'tvtk.tvtk_classes.points.Points'>
5. This could be explored further and further.

[End of aside]

This is a TVTK array. For us, it is more useful to convert it to a numpy array:

points = dataset.points.to_array()
points.shape
(100, 3)

[Aside]
Since the command above no longer works, this one needs the corresponding change as well.

point = dataset.get_output(0).points.to_array()
point.shape

By now I am starting to doubt the documentation's claims, so let's verify the return type with type():
<class 'numpy.ndarray'> - ok, satisfied.

[End of aside]

To retrieve the original x, y, z positions of the data points specified, we can transpose the array:

x, y, z = points.T

Not much to say here; transposing is basic numpy.

The corresponding data values can be found in the point_data.scalars attribute of the dataset, as the data is located on the points, and not in the cells, and it is scalar data:

dataset.point_data.scalars.to_array().shape
(100,)

[Aside]

This has to be changed to dataset.get_output(0).point_data.scalars.to_array().shape
Note that point_data now lives under get_output(0); it is no longer an attribute of the object that dataset refers to.

dataset -> <tvtk.tvtk_classes.delaunay3d.Delaunay3D object at 0x0000029C61E63E08>
dataset.get_output() -> <tvtk.tvtk_classes.unstructured_grid.UnstructuredGrid object at 0x0000029C719AFD00>
dataset.get_output(0).point_data -> <tvtk.tvtk_classes.point_data.PointData object at 0x0000029C7177F2B0>
dataset.get_output(0).point_data.scalars -> <class 'tvtk.tvtk_classes.double_array.DoubleArray'>
dataset.get_output(0).point_data.scalars.to_array() -> <class 'numpy.ndarray'>

[End of aside]

Extracting lines

If we want to extract the edges of the Delaunay tessellation, we can apply the ExtractEdges filter to the field object from the previous example and inspect its output:

edges = mlab.pipeline.extract_edges(field)
edges.outputs

The two lines above give, respectively:
<mayavi.filters.extract_edges.ExtractEdges object at 0x0000029C7177FDB0>
[<tvtk.tvtk_classes.extract_edges.ExtractEdges object at 0x0000029C7177FEB8>]

On the surface we just get objects, but something also changes at the pipeline level.
The intuitive reading of edges = mlab.pipeline.extract_edges(field) is that an ExtractEdges transformation is created beneath the field filter.
[Figure: pipeline tree with ExtractEdges under Delaunay3D]
On top of this, manually adding a Surface gives a wireframe-like result, whereas setting representation to wireframe no longer has any effect. And if you get trigger-happy and run edges = mlab.pipeline.extract_edges(field) several times, you will find several ExtractEdges nodes.
[Figure: pipeline tree with multiple ExtractEdges nodes]
What interests me is the case where no Surface is added manually:
[Figure: pipeline tree with an empty ExtractEdges node]
Note that the level below ExtractEdges is empty. If you really understand what is going on, you should realize that, image or no image, all of the data can already be read at this point; the rendering is only there to help you see it. As far as computation is concerned, the image is not necessary, because all the data is already configured in the pipeline.
Even without adding a Module, without generating any image, you can still read the line data, point data, spatial coordinates, and so on.

We can see that the output is a PolyData dataset. Looking at how these are built (see PolyData), we see that the connectivity information is held in the lines attribute (which we convert to a numpy array using its .to_array() method):

pd = edges.outputs[0]
pd.lines.to_array()

array([ 2, 0, 1, ..., 2, 97, 18])

[Aside]:
This code has the same problem: lines and points both live under get_output now.

pd = edges.outputs[0]
pd.get_output(0).lines.to_array()

[End of aside]

The way this array is built is as a sequence: a length descriptor, followed by the indices of the data points connected together in the points array retrieved earlier. Here we only have pairs of points connected together: the array is an alternation of the value 2 followed by a pair of indices.
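A small sketch (my own addition, reusing the get_output(0) workaround from the aside above): the flat cell array can be folded into an (n_edges, 2) array of point indices and combined with the points array to recover the actual segments:

import numpy as np

lines = pd.get_output(0).lines.to_array()    # e.g. array([2, 0, 1, ..., 2, 97, 18])
# Every cell is stored as "2, i, j", so fold the flat array into rows of three
# and drop the leading point count of each cell.
edge_index = lines.reshape(-1, 3)[:, 1:]
print(edge_index[:5])                        # first few pairs of connected point indices

points = pd.get_output(0).points.to_array()
first_edge = points[edge_index[0]]           # 3D coordinates of the first edge's two end points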
A full example illustrating how to use the VTK Delaunay filter to extract a graph is given in the Delaunay graph example.

Headless use of Mayavi for the algorithms, without visualization

As you can see from the above example, it can be interesting to use Mayavi just for its numerical algorithms operating on 3D data, such as the Delaunay tessellation and interpolation demonstrated here.

To run such examples headless, simply create the source with the keyword argument figure=False. As a result the source will not be attached to any engine, but you will still be able to use filters, and to probe the data:

src = mlab.pipeline.scalar_scatter(x, y, z, data, figure=False)

[Aside]

1. One question here: "As a result the sources will not be attached to any engine" - does that mean the data source is created but never bound to a Scene, or that it is bound but simply not displayed? The distinction is a bit subtle.
2. And what is the point of doing this? If you never call show() the image is not displayed anyway, yet the data has still gone through the filter computations, so in terms of computational cost there is not necessarily any saving.

[End of aside]
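To make the headless route concrete, here is a minimal sketch of my own, redefining the same x, y, z, data arrays as before; depending on the Mayavi version the delaunay3d step may hit the same issue noted in the earlier aside:

import numpy as np
from mayavi import mlab

x, y, z = np.random.random((3, 100))
data = x**2 + y**2 + z**2

# figure=False keeps the source detached from any engine/scene.
src = mlab.pipeline.scalar_scatter(x, y, z, data, figure=False)
field = mlab.pipeline.delaunay3d(src)                 # filters still work on a detached source
print(mlab.pipeline.probe_data(field, .5, .5, .5))    # interpolated value inside the convex hull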

Dissection of the different TVTK datasets

The 5 TVTK structures used are the following (ordered by the cost of visualizing them):

Cost in what sense? Memory consumption, or something else?

![TVTK dataset overview table](https://img-blog.csdn.net/20181013235845807?watermark/2/text/aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzQyNzMxNDY2/font/5a6L5L2T/fontsize/400/fill/I0JBQkFCMA==/dissolve/70)
-----------------------------
The translation of the part that follows needs to be fairly precise....

ImageData

This dataset is made of data points positioned on an orthogonal grid, with constant spacing along each axis. The positions of the data points are inferred from their position in the data array (implicit positioning), an origin, and a spacing between 2 slices along each axis. In 2D, this can be understood as a raster image. This is the data structure created by the ArraySource Mayavi source from a 3D numpy array, as well as by the mlab.pipeline.scalar_field and mlab.pipeline.vector_field factory functions, if the x, y and z arrays are not explicitly specified.

[Figure: ImageData schematic (from the official documentation)]
Creating a tvtk.ImageData object from numpy arrays:

from tvtk.api import tvtk
from numpy import random
data = random.random((3, 3, 3))
i = tvtk.ImageData(spacing=(1, 1, 1), origin=(0, 0, 0))		# set the origin and the spacing
i.point_data.scalars = data.ravel()
i.point_data.scalars.name = 'scalars'
i.dimensions = data.shape
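A quick check of my own (assuming the standard VTK point accessor wrapped by TVTK) that the point positions really are implicit, derived from origin, spacing and dimensions rather than stored explicitly:

print(i.origin, i.spacing, i.dimensions)   # the three pieces that define the grid
print(i.get_point(0))                      # first grid point, at the origin
print(i.get_point(4))                      # another point, its coordinates inferred from the spacing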

RectilinearGrid

This dataset is made of data points positioned on an orthogonal grid, with arbitrary spacing along the various axes. The positions of the data points are inferred from their position in the data array, an origin, and the list of spacings along each axis.

[Figure: RectilinearGrid schematic (from the official documentation)]

from tvtk.api import tvtk
from numpy import random, array
data = random.random((3, 3, 3))
r = tvtk.RectilinearGrid()
r.point_data.scalars = data.ravel()
r.point_data.scalars.name = 'scalars'
r.dimensions = data.shape
r.x_coordinates = array((0, 0.7, 1.4))
r.y_coordinates = array((0, 1, 3))
r.z_coordinates = array((0, .5, 2))
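Similarly, a small check of my own that the grid structure stays implicit while the coordinates along each axis are now explicit (assuming the coordinate arrays read back as TVTK arrays):

print(r.x_coordinates.to_array())   # the axis coordinates assigned above
print(r.get_point(4))               # a grid point whose coordinates are looked up in those arrays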

StructuredGrid

This dataset is made of data points positioned on an arbitrary grid: each point is connected to its nearest neighbors on the data array. The positions of the data points are fully described by one array of coordinates, specifying x, y and z for each point.

Not very clearly put. What about the connectivity?

[Figure: StructuredGrid schematic (from the official documentation)]
Creating a tvtk.StructuredGrid object from numpy arrays:

from numpy import pi, cos, sin, empty, linspace, random, ravel
from tvtk.api import tvtk

def generate_annulus(r, theta, z):
    """ Generate points for structured grid for a cylindrical annular
        volume.  This method is useful for generating a unstructured
        cylindrical mesh for VTK.
    """
    # Find the x values and y values for each plane.
    x_plane = (cos(theta)*r[:,None]).ravel()
    y_plane = (sin(theta)*r[:,None]).ravel()

    # Allocate an array for all the points.  We'll have len(x_plane)
    # points on each plane, and we have a plane for each z value, so
    # we need len(x_plane)*len(z) points.
    points = empty([len(x_plane)*len(z), 3])

    # Loop through the points for each plane and fill them with the
    # correct x,y,z values.
    start = 0
    for z_plane in z:
        end = start+len(x_plane)
        # slice out a plane of the output points and fill it
        # with the x,y, and z values for this plane.  The x,y
        # values are the same for every plane.  The z value
        # is set to the current z
        plane_points = points[start:end]
        plane_points[:,0] = x_plane
        plane_points[:,1] = y_plane
        plane_points[:,2] = z_plane
        start = end

    return points

dims = (3, 4, 3)
r = linspace(5, 15, dims[0])
theta = linspace(0, 0.5*pi, dims[1])
z = linspace(0, 10, dims[2])
pts = generate_annulus(r, theta, z)
sgrid = tvtk.StructuredGrid(dimensions=(dims[1], dims[0], dims[2]))
sgrid.points = pts
s = random.random((dims[0]*dims[1]*dims[2]))
sgrid.point_data.scalars = ravel(s.copy())
sgrid.point_data.scalars.name = 'scalars'
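To actually look at this annular grid, one option (a sketch of my own, not from the original) is to push the dataset onto the pipeline with mlab.pipeline.add_dataset and attach modules to it:

from mayavi import mlab

src = mlab.pipeline.add_dataset(sgrid)    # wraps the TVTK dataset in a VTKDataSource
mlab.pipeline.grid_plane(src)             # show one plane of the structured grid
mlab.pipeline.surface(src, opacity=0.3)   # and a translucent surface for context
mlab.show()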

PolyData

This dataset is made of arbitrarily positioned data points that can be connected to form lines, or grouped in polygons to form surfaces (the polygons are broken up into triangles). Unlike the other datasets, this one cannot be used to describe volumetric data. This is the dataset created by the mlab.pipeline.scalar_scatter and mlab.pipeline.vector_scatter functions.

[Figure: PolyData schematic (from the official documentation)]

> What does "cannot describe volumetric data" even mean? This needs an example and a pipeline laid out to check.
.

from numpy import array, random
from tvtk.api import tvtk

# The numpy array data.
points = array([[0,-0.5,0], [1.5,0,0], [0,1,0], [0,0,0.5],
                [-1,-1.5,0.1], [0,-1, 0.5], [-1, -0.5, 0],
                [1,0.8,0]], 'f')
triangles = array([[0,1,3], [1,2,3], [1,0,5],
                   [2,3,4], [3,0,4], [0,5,4], [2, 4, 6],
                    [2, 1, 7]])
scalars = random.random(points.shape[0])

# The TVTK dataset.
mesh = tvtk.PolyData(points=points, polys=triangles)
mesh.point_data.scalars = scalars
mesh.point_data.scalars.name = 'scalars'
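Since (as the last section of this document explains) the mlab.pipeline functions accept TVTK datasets directly, the mesh above can be shown with a one-liner; a minimal sketch:

from mayavi import mlab

mlab.pipeline.surface(mesh)   # mesh is wrapped in a VTKDataSource automatically
mlab.show()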

UnstructuredGrid

This dataset is the most general dataset of all. It is made of data points positioned arbitrarily. The connectivity between data points can be arbitrary (any number of neighbors). It is described by specifying the connectivity explicitly, defining volumetric cells made of adjacent data points.

[Figure: UnstructuredGrid schematic (from the official documentation)]

from numpy import array, random
from tvtk.api import tvtk

points = array([[0,1.2,0.6], [1,0,0], [0,1,0], [1,1,1], # tetra
                [1,0,-0.5], [2,0,0], [2,1.5,0], [0,1,0],
                [1,0,0], [1.5,-0.2,1], [1.6,1,1.5], [1,1,1], # Hex
                ], 'f')
# The cells
cells = array([4, 0, 1, 2, 3, # tetra
               8, 4, 5, 6, 7, 8, 9, 10, 11 # hex
               ])
# The offsets for the cells, i.e. the indices where the cells
# start.
offset = array([0, 5])
tetra_type = tvtk.Tetra().cell_type # VTK_TETRA == 10
hex_type = tvtk.Hexahedron().cell_type # VTK_HEXAHEDRON == 12
cell_types = array([tetra_type, hex_type])
# Create the array of cells unambiguously.
cell_array = tvtk.CellArray()
cell_array.set_cells(2, cells)
# Now create the UG.
ug = tvtk.UnstructuredGrid(points=points)
# Now just set the cell types and reuse the ug locations and cells.
ug.set_cells(cell_types, offset, cell_array)
scalars = random.random(points.shape[0])
ug.point_data.scalars = scalars
ug.point_data.scalars.name = 'scalars'
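A short sketch of my own for reading the cell structure back out of the unstructured grid, to check that the two cells defined above really are a tetrahedron and a hexahedron:

print(ug.number_of_points, ug.number_of_cells)    # 12 points, 2 cells
for cell_id in range(ug.number_of_cells):
    cell = ug.get_cell(cell_id)
    print(cell.__class__.__name__, cell.number_of_points)   # Tetra 4, Hexahedron 8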

Modifying the data

If you want to modify the data of any of these low-level data structures, you need to reassign data to the corresponding arrays, and also reassign their name. Once this is done, you should call the modified() method of the object, to tell the pipeline that the data has been modified:

ug.point_data.scalars = new_scalars
ug.point_data.scalars.name = 'scalars'
ug.modified()

.
.
[Aside] The part below is reference material and is quoted as-is.

External references

This section of the user guide will be improved later. For now, the following two presentations best describe how one can create data objects or data files for Mayavi and TVTK.

Presentation on TVTK and Mayavi2 for course at IIT Bombay
https://github.com/enthought/mayavi/raw/master/docs/pdf/tvtk_mayavi2.pdf
This presentation provides information on graphics in general, 3D data representation, creating VTK data files, creating datasets from numpy in Python, and also about mayavi.

Presentation on making TVTK datasets using numpy arrays made for SciPy07.
Prabhu Ramachandran. “TVTK and MayaVi2”, SciPy‘07: Python for Scientific Computing, CalTech, Pasadena, CA, 16–17 August, 2007.
This presentation focuses on creating TVTK datasets using numpy arrays.

Datasets creation examples

There are several examples in the mayavi sources that highlight the creation of the most important datasets from numpy arrays. Specifically they are:

Datasets example: Generate a simple example for each type of VTK dataset.
Polydata example: Demonstrates how to create Polydata datasets from numpy arrays and visualize them in mayavi.
Structured points2d example: Demonstrates how to create a 2D structured points (an ImageData) dataset from numpy arrays and visualize them in mayavi. This is basically a square of equispaced points.
Structured points3d example: Demonstrates how to create a 3D structured points (an ImageData) dataset from numpy arrays and visualize them in Mayavi. This is a cube of points that are regularly spaced.
Structured grid example: Demonstrates the creation and visualization of a 3D structured grid.
Unstructured grid example: Demonstrates the creation and visualization of an unstructured grid.

.
.
.

Inserting TVTK datasets in the Mayavi pipeline

TVTK datasets can be created directly using TVTK, as illustrated in the examples above. A VTK data source can be inserted into the Mayavi pipeline using the VTKDataSource. For instance, we can create an ImageData dataset:

from tvtk.api import tvtk
import numpy as np
a = np.random.random((10, 10, 10))
i = tvtk.ImageData(spacing=(1, 1, 1), origin=(0, 0, 0))
i.point_data.scalars = a.ravel()
i.point_data.scalars.name = 'scalars'
i.dimensions = a.shape

If you are scripting using mlab, the simplest way to visualize your data is to use the mlab.pipeline functions to apply filters and modules to it. Indeed these functions, which create filters and modules, accept VTK datasets and automatically insert them into the pipeline. A Surface module can be used to visualize the ImageData dataset created above as follows:

from mayavi import mlab
mlab.pipeline.surface(i)

In addition, inserting this dataset into the Mayavi pipeline, with direct control of the Engine, is done with a VTKDataSource as follows:

from mayavi.sources.api import VTKDataSource
src = VTKDataSource(data=i)
from mayavi.api import Engine
e = Engine()
e.start()
s = e.new_scene()
e.add_source(src)

Of course, unless you want specific control over the attributes of the VTK dataset, or you are using Mayavi in the context of existing code manipulating TVTK objects, creating an ImageData TVTK object yourself is not advised. The ArraySource object of Mayavi will actually create an ImageData for you, but make sure you don't get the shape wrong, which can lead to a segmentation fault. An even easier way to create a data source for an ImageData is to use the mlab.pipeline.scalar_field function, as explained in the section on creating data sources with mlab.
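For completeness, here is the easier route mentioned above, sketched with the same kind of random array (my own minimal example): mlab.pipeline.scalar_field builds the ImageData-backed source for you, and a Surface module can be attached directly:

import numpy as np
from mayavi import mlab

a = np.random.random((10, 10, 10))
src = mlab.pipeline.scalar_field(a)   # ImageData-backed source, shape handled for you
mlab.pipeline.surface(src)
mlab.show()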


So far the full text has been translated. A lot still needs to be filled in.

4. Closing

Additions:

Content

Open questions:

1. Did I forget to resolve the error reported above?
2. The wording of "Here we have only sets of pairs of points connected together", near pd.
3. The Delaunay graph example may need to be analyzed.
4. The reasoning behind the Headless section needs further discussion.
5. "ordered by the cost of visualizing them" - what exactly is this visualization cost?

Update log

2018-10-12: updated up to the section The flow of data.
2018-10-14: finished the TVTK datasets material, trimmed the official code and added examples.