Apache Flink Tutorial for Beginners (Part 10): Flink DataSet Programming

DataSet programs in Flink are regular programs that implement transformations on data sets (e.g., filtering, mapping, joining, grouping). The data sets are initially created from certain sources (e.g., by reading files, or from local collections). Results are returned via sinks, which may for example write the data to (distributed) files, or to standard output (for example the command line terminal). Flink programs run in a variety of contexts, standalone, or embedded in other programs. The execution can happen in a local JVM, or on clusters of many machines.

In other words, DataSet programming in Flink is quite ordinary programming: you only need to implement transformations over data sets (for example filtering, mapping, joining, grouping). The data sets are initially created from sources (for example by reading files or loading local collections), and the results of the transformations are returned through sinks to a local or distributed file system, or to the terminal. Flink programs can run in a variety of environments, standalone or embedded in other programs, and execution can happen in a local JVM or on a cluster.

Source ===> Flink (transformation) ===> Sink

File-based

  • readTextFile(path) / TextInputFormat - Reads files line wise and returns them as Strings.
  • readTextFileWithValue(path) / TextValueInputFormat - Reads files line wise and returns them as StringValues. StringValues are mutable strings.
  • readCsvFile(path) / CsvInputFormat - Parses files of comma (or another char) delimited fields. Returns a DataSet of tuples or POJOs. Supports the basic java types and their Value counterparts as field types.
  • readFileOfPrimitives(path, Class) / PrimitiveInputFormat - Parses files of new-line (or another char sequence) delimited primitive data types such as String or Integer.
  • readFileOfPrimitives(path, delimiter, Class) / PrimitiveInputFormat - Parses files of new-line (or another char sequence) delimited primitive data types such as String or Integer using the given delimiter.
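
As a minimal sketch of two of these methods in the Scala API (the file numbers.txt and its one-integer-per-line contents are made up for illustration):

def fileSources(env: ExecutionEnvironment): Unit = {
  import org.apache.flink.api.scala._
  // readTextFile: every line of the file becomes one String element
  env.readTextFile("E:/test/input/hello.txt").print()
  // readFileOfPrimitives: every line is parsed as a single primitive value, here an Int
  env.readFileOfPrimitives[Int]("E:/test/input/numbers.txt").print()
}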

Collection-based

  • fromCollection(Collection)
  • fromCollection(Iterator, Class)
  • fromElements(T ...)
  • fromParallelCollection(SplittableIterator, Class)
  • generateSequence(from, to)
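
Before the full examples below, a minimal sketch of fromElements and generateSequence in Scala (the sample values are arbitrary):

def collectionSources(env: ExecutionEnvironment): Unit = {
  import org.apache.flink.api.scala._
  // fromElements: build a DataSet directly from a handful of values
  env.fromElements("hello", "world", "welcome").print()
  // generateSequence: the numbers 1 to 10 as a DataSet[Long]
  env.generateSequence(1, 10).print()
}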

Creating a DataSet from a simple collection

Collection-based sources are often used in development or while learning, because they make it easy to fabricate whatever data we need. Below we implement a collection source in both Java and Scala; the data is simply the integers 1 to 10.

Java

import org.apache.flink.api.java.ExecutionEnvironment;

import java.util.ArrayList;
import java.util.List;

public class JavaDataSetSourceApp {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment executionEnvironment = ExecutionEnvironment.getExecutionEnvironment();
        fromCollection(executionEnvironment);
    }

    public static void fromCollection(ExecutionEnvironment env) throws Exception {
        List<Integer> list = new ArrayList<Integer>();
        for (int i = 1; i <= 10; i++) {
            list.add(i);
        }
        env.fromCollection(list).print();
    }
}

Scala

import org.apache.flink.api.scala.ExecutionEnvironment

object DataSetSourceApp {

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    fromCollection(env)
  }

  def fromCollection(env: ExecutionEnvironment): Unit = {
    import org.apache.flink.api.scala._
    val data = 1 to 10
    env.fromCollection(data).print()
  }

}

Creating a DataSet by reading a file or directory

In the local directory E:\test\input there is a file hello.txt with the following contents:

hello	world	welcome
hello	welcome

Scala

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    //fromCollection(env)
    textFile(env)
  }

  def textFile(env: ExecutionEnvironment): Unit = {
    val filePathFilter = "E:/test/input/hello.txt"
    env.readTextFile(filePathFilter).print()
  }

The readTextFile method takes two parameters: parameter 1, the file path (either a local path or hdfs://host:port/file/path); parameter 2, the character set (UTF-8 by default if omitted).
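
For example, the charset can be passed explicitly as the second argument (the HDFS host and port below are just placeholders):

// local file, default charset (UTF-8)
env.readTextFile("E:/test/input/hello.txt").print()
// HDFS path with an explicit charset
env.readTextFile("hdfs://host:port/file/path", "UTF-8").print()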

Can a directory be specified instead?

Let's pass a directory path directly:

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    //fromCollection(env)
    textFile(env)
  }

  def textFile(env: ExecutionEnvironment): Unit = {
    //val filePathFilter = "E:/test/input/hello.txt"
    val filePathFilter = "E:/test/input"
    env.readTextFile(filePathFilter).print()
  }

The program runs normally, which shows that readTextFile also accepts a directory: it traverses all files under that directory.

Java

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment executionEnvironment = ExecutionEnvironment.getExecutionEnvironment();
        // fromCollection(executionEnvironment);
        textFile(executionEnvironment);
    }

    public static void textFile(ExecutionEnvironment env) throws Exception {
        String filePath = "E:/test/input/hello.txt";
        // String filePath = "E:/test/input";
        env.readTextFile(filePath).print();
    }

Likewise, in Java you can pass either a file or a directory; if a directory is given, all files under it are traversed.

Creating a DataSet by reading a CSV file

Create a CSV file with the following contents:

name,age,job
Tom,26,cat
Jerry,24,mouse
sophia,30,developer

Scala

The readCsvFile method for reading CSV files has the following parameters:

filePath: String,
      lineDelimiter: String = "\n",
      fieldDelimiter: String = ",",       // field delimiter
      quoteCharacter: Character = null,
      ignoreFirstLine: Boolean = false,   // whether to skip the first line
      ignoreComments: String = null,
      lenient: Boolean = false,
      includedFields: Array[Int] = null,  // which columns of the file to read
      pojoFields: Array[String] = null)

The code to read the CSV file is as follows:

  def csvFile(env: ExecutionEnvironment): Unit = {
    import org.apache.flink.api.scala._
    val filePath = "E:/test/input/people.csv"
    env.readCsvFile[(String, Int, String)](filePath, ignoreFirstLine = true).print()
  }

To read only the first two columns, specify includedFields:

env.readCsvFile[(String, Int)](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()

Above, a tuple was used to specify the types. How do we use a custom case class instead?

  def csvFile(env: ExecutionEnvironment): Unit = {
    import org.apache.flink.api.scala._
    val filePath = "E:/test/input/people.csv"
    // env.readCsvFile[(String, Int, String)](filePath, ignoreFirstLine = true).print()
    // env.readCsvFile[(String, Int)](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()

    env.readCsvFile[MyCaseClass](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()
  }

  case class MyCaseClass(name: String, age: Int)

How do we specify a POJO?

Create a new POJO class, People:

public class People {
    private String name;
    private int age;
    private String job;

    @Override
    public String toString() {
        return "People{" +
                "name='" + name + '\'' +
                ", age=" + age +
                ", job='" + job + '\'' +
                '}';
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public String getJob() {
        return job;
    }

    public void setJob(String job) {
        this.job = job;
    }
}

Then, from Scala, read the CSV into the POJO, mapping columns to fields by name via pojoFields:

env.readCsvFile[People](filePath, ignoreFirstLine = true, pojoFields = Array("name", "age", "job")).print()

Java

    public static void csvFile(ExecutionEnvironment env) throws Exception {
        String filePath = "E:/test/input/people.csv";
        DataSource<Tuple2<String, Integer>> types = env.readCsvFile(filePath)
                .ignoreFirstLine()
                .includeFields("11")
                .types(String.class, Integer.class);
        types.print();
    }

This extracts only the first and second columns: in the includeFields mask, each character corresponds to one field, with 1 meaning include and 0 meaning skip.

Reading POJO data:

env.readCsvFile(filePath).ignoreFirstLine().pojoType(People.class, "name", "age", "job").print();

Creating a DataSet by reading a directory recursively

Scala

// requires: import org.apache.flink.configuration.Configuration

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    //fromCollection(env)
    //textFile(env)
    //csvFile(env)
    readRecursiveFiles(env)
  }

  def readRecursiveFiles(env: ExecutionEnvironment): Unit = {
    val filePath = "E:/test/nested"
    val parameter = new Configuration()
    // tell the input format to also enumerate files in nested subdirectories
    parameter.setBoolean("recursive.file.enumeration", true)
    env.readTextFile(filePath).withParameters(parameter).print()
  }

Creating a DataSet from a compressed file

Scala

  def readCompressionFiles(env: ExecutionEnvironment): Unit = {
    val filePath = "E:/test/my.tar.gz"
    env.readTextFile(filePath).print()
  }

Compressed files can be read directly. Compression improves space utilization, but it also raises CPU load, so there is a trade-off: tune and pick the approach that suits each situation, because not every optimization brings the result you want. If the cluster's CPU is already under heavy pressure, you should not read compressed files. Note also that compressed files are not splittable, so each compressed file is processed by a single task.
