Why Is It Faster to Connect Spark to MySQL and Run Queries?

Questions this post answers:
1. Why can Spark speed up MySQL queries?
2. How do you run SQL in Spark?
3. How does Spark SQL push queries down to MySQL?
4. How do you cache query data with Spark?
5. How do you use Spark with Percona XtraDB Cluster?
6. What should you watch out for when partitioning tables in Spark?
7. When does Spark perform poorly?


In this post we will discuss how to use Apache Spark to improve MySQL query performance.


Introduction

My previous post, Apache Spark with MySQL, showed how to use Apache Spark for data analysis and how to transform and analyze a large volume of data stored in text files. Vadim also ran a benchmark comparing the performance of MySQL against Spark with the Parquet columnar format (using the airline on-time performance data). That test was great, but what if we don't want to move the data from MySQL into another storage system and would rather keep running queries on the existing MySQL server? Apache Spark can help here too!


Getting Started

Using Apache Spark on top of an existing MySQL server (without exporting the data to Spark or Hadoop) can improve query performance by a factor of 10 or more. Using multiple MySQL servers (replication or Percona XtraDB Cluster) gives an additional performance boost for some queries. You can also use Spark's caching feature to cache an entire MySQL query result table.

The idea is simple: Spark can read MySQL data via JDBC and can also execute SQL queries, so we can connect directly to MySQL and run the queries. So why is this faster? For long-running queries (such as reporting or BI), it can be much faster because Spark is a massively parallel system. MySQL can allocate only one CPU core per query, while Spark can use all cores on all cluster nodes. In the examples below, the MySQL queries executed through Spark run 5 to 10 times faster than when run directly against MySQL.

In addition, Spark can add "cluster-level" parallelism. With MySQL replication or Percona XtraDB Cluster, Spark can split a query into a set of smaller queries (similar to running one query per partition of a partitioned table) and then execute those small queries in parallel across multiple slaves or multiple Percona XtraDB Cluster nodes. Finally, it aggregates the partial results from each node, map/reduce style, into the complete result.
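The merge step above can be sketched in a few lines of plain Python. This is an illustrative model, not Spark's actual implementation: each "node" returns a partial (flight count, delayed count) tally per carrier, and the reducer merges them and computes the final delay rate.

```python
from collections import defaultdict

def reduce_partials(partials):
    """Merge per-node partial aggregates into final per-carrier rates.

    Each partial is a dict: carrier -> (flight_count, delayed_count).
    """
    totals = defaultdict(lambda: [0, 0])
    for partial in partials:                 # one dict per node/partition
        for carrier, (cnt, delayed) in partial.items():
            totals[carrier][0] += cnt
            totals[carrier][1] += delayed
    return {c: (cnt, delayed, round(delayed / cnt, 2))
            for c, (cnt, delayed) in totals.items()}

# Hypothetical partial results: two "nodes" scanned different year ranges.
node1 = {"UA": (100, 12), "EV": (50, 8)}
node2 = {"UA": (200, 36), "B6": (40, 6)}
result = reduce_partials([node1, node2])
# UA merges to 300 flights, 48 delayed, rate 0.16
```

Because count and sum are associative, the merged totals are identical to what one big scan would produce; this is what makes splitting the query safe.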

This post uses the same database as my earlier "Airlines On-Time Performance" post. Vadim created scripts that make it easy to download the data and load it into MySQL; they can be found here. Note that we are using Apache Spark 2.0, released on July 26, 2016.


Installing Apache Spark

Starting Apache Spark in standalone mode is simple; just follow these steps:
  • Download Apache Spark 2.0 and extract it to a directory.
  • Start the master.
  • Start the slave (worker) and connect it to the master.
  • Start an application (spark-shell or spark-sql).

Example:

root@thor:~/spark# ./sbin/start-master.sh
less ../logs/spark-root-org.apache.spark.deploy.master.Master-1-thor.out
15/08/25 11:21:21 INFO Master: Starting Spark master at spark://thor:7077
15/08/25 11:21:21 INFO Utils: Successfully started service 'MasterUI' on port 8080.
15/08/25 11:21:21 INFO MasterWebUI: Started MasterWebUI at http://10.60.23.188:8080
root@thor:~/spark# ./sbin/start-slave.sh spark://thor:7077
To connect to Spark we can use spark-shell (Scala), pyspark (Python) or spark-sql. spark-sql is similar to the MySQL command line, so it is the easiest option (you can even run show tables). I also needed to work interactively in Scala, so I chose spark-shell. In all the examples below I use the same SQL query in both MySQL and Spark, so there is not much of a difference anyway.

To let Spark talk to the MySQL server we need the driver, Connector/J for MySQL. Download the archive, extract it, copy mysql-connector-java-5.1.39-bin.jar into the Spark directory, then add the class path to conf/spark-defaults.conf:
spark.driver.extraClassPath = /usr/local/spark/mysql-connector-java-5.1.39-bin.jar
spark.executor.extraClassPath = /usr/local/spark/mysql-connector-java-5.1.39-bin.jar


Running MySQL Queries with Apache Spark

For this test I used a physical server with 12 cores (older Intel(R) Xeon(R) CPU L5639 @ 2.13GHz), 48GB of RAM and SSD disks. I installed MySQL on this machine and started the Spark master and slave on it as well.

Now we can run MySQL queries in Spark. First, start the shell from the Spark directory (/usr/local/spark in my case):
$ ./bin/spark-shell --driver-memory 4G --master spark://server1:7077
Then we connect to the MySQL server and register a temporary view:
val jdbcDF = spark.read.format("jdbc").options(
  Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=mysql",
  "dbtable" -> "ontime.ontime_part",
  "fetchSize" -> "10000",
  "partitionColumn" -> "yeard", "lowerBound" -> "1988", "upperBound" -> "2016", "numPartitions" -> "28"
  )).load()
jdbcDF.createOrReplaceTempView("ontime")
This creates a "data source" for Spark (in other words, Spark has established a connection to MySQL). The Spark table name "ontime" corresponds to the ontime.ontime_part table in MySQL. We can now run SQL queries in Spark; they are parsed one by one and translated into MySQL queries.

The "partitionColumn" option is very important here: it tells Spark to run multiple queries in parallel, one query per partition.
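A rough model of how a JDBC data source turns partitionColumn/lowerBound/upperBound/numPartitions into per-partition WHERE clauses is sketched below. This is illustrative plain Python, not Spark's exact code, but like Spark's JDBCRelation it leaves the first and last ranges open-ended so rows outside the bounds are not lost.

```python
def jdbc_partitions(column, lower, upper, num_partitions):
    """Return one WHERE predicate per partition, covering all rows."""
    # A stride of at least 1 caps the partition count at (upper - lower),
    # which is why Spark warns about "reduced" partitions later in this post.
    stride = max(1, (upper - lower) // num_partitions)
    predicates, current = [], lower
    while current + stride < upper:
        lo = "" if current == lower else f"{column} >= {current} AND "
        predicates.append(f"{lo}{column} < {current + stride}")
        current += stride
    predicates.append(f"{column} >= {current}")   # open-ended last range
    return predicates

preds = jdbc_partitions("yearD", 1988, 2016, 28)
# preds[1] is "yearD >= 1989 AND yearD < 1990"
```

Each predicate is appended to the pushed-down query, so every worker scans only its own year range.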

Now we can run the query:
val sqlDF = sql("select min(year), max(year) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') and (origin = 'RDU' or dest = 'RDU') GROUP by carrier HAVING cnt > 100000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10")
sqlDF.show()

The MySQL Query Example

Let's go back to MySQL for a moment and look at this query example. I picked the following query (from my old post):
select min(year), max(year) as max_year, Carrier, count(*) as cnt,
sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed,
round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate
FROM ontime
WHERE
DayOfWeek not in (6,7)
and OriginState not in ('AK', 'HI', 'PR', 'VI')
and DestState not in ('AK', 'HI', 'PR', 'VI')
GROUP by carrier HAVING cnt > 100000 and max_year > '1990'
ORDER by rate DESC, cnt desc
LIMIT 10
This query finds the number of delayed flights per airline. In addition, it calculates the on-time rate in a reasonably smart way, taking the number of flights into account (we don't want to compare small airlines with the large ones, and old airlines that have since shut down should not be counted either).
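The per-carrier metric can be sketched in plain Python (illustrative only; the sample rows are hypothetical, and the 30-minute threshold mirrors the SQL's sum(if(ArrDelayMinutes>30, 1, 0))):

```python
from collections import defaultdict

# Hypothetical (carrier, arrival_delay_minutes) rows
rows = [("UA", 0), ("UA", 45), ("UA", 10), ("EV", 95), ("EV", 31)]

stats = defaultdict(lambda: [0, 0])      # carrier -> [cnt, flights_delayed]
for carrier, delay in rows:
    stats[carrier][0] += 1
    if delay > 30:                       # flight counts as delayed
        stats[carrier][1] += 1

# rate = round(flights_delayed / cnt, 2), as in the SQL above
rates = {c: round(delayed / cnt, 2) for c, (cnt, delayed) in stats.items()}
# UA: 1 of 3 flights delayed -> 0.33; EV: 2 of 2 delayed -> 1.0
```

The HAVING clause (cnt > 100000 and max_year > '1990') then drops carriers that are too small or defunct before ranking by rate.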

The main reason I chose this query is that it is very hard to optimize further in MySQL: all of these WHERE conditions combined only filter out about 30% of the rows (roughly 70% remain). I did a quick calculation:
mysql> select count(*) FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI');
+-----------+
| count(*)  |
+-----------+
| 108776741 |
+-----------+
mysql> select count(*) FROM ontime;
+-----------+
| count(*)  |
+-----------+
| 152657276 |
+-----------+
mysql> select round((108776741/152657276)*100, 2);
+-------------------------------------+
| round((108776741/152657276)*100, 2) |
+-------------------------------------+
|                               71.26 |
+-------------------------------------+
The table structure:

CREATE TABLE `ontime_part` (
  `YearD` int(11) NOT NULL,
  `Quarter` tinyint(4) DEFAULT NULL,
  `MonthD` tinyint(4) DEFAULT NULL,
  `DayofMonth` tinyint(4) DEFAULT NULL,
  `DayOfWeek` tinyint(4) DEFAULT NULL,
  `FlightDate` date DEFAULT NULL,
  `UniqueCarrier` char(7) DEFAULT NULL,
  `AirlineID` int(11) DEFAULT NULL,
  `Carrier` char(2) DEFAULT NULL,
  `TailNum` varchar(50) DEFAULT NULL,
...
  `id` int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`,`YearD`),
  KEY `covered` (`DayOfWeek`,`OriginState`,`DestState`,`Carrier`,`YearD`,`ArrDelayMinutes`)
) ENGINE=InnoDB AUTO_INCREMENT=162668935 DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (YearD)
(PARTITION p1987 VALUES LESS THAN (1988) ENGINE = InnoDB,
  PARTITION p1988 VALUES LESS THAN (1989) ENGINE = InnoDB,
  PARTITION p1989 VALUES LESS THAN (1990) ENGINE = InnoDB,
  PARTITION p1990 VALUES LESS THAN (1991) ENGINE = InnoDB,
  PARTITION p1991 VALUES LESS THAN (1992) ENGINE = InnoDB,
  PARTITION p1992 VALUES LESS THAN (1993) ENGINE = InnoDB,
  PARTITION p1993 VALUES LESS THAN (1994) ENGINE = InnoDB,
  PARTITION p1994 VALUES LESS THAN (1995) ENGINE = InnoDB,
  PARTITION p1995 VALUES LESS THAN (1996) ENGINE = InnoDB,
  PARTITION p1996 VALUES LESS THAN (1997) ENGINE = InnoDB,
  PARTITION p1997 VALUES LESS THAN (1998) ENGINE = InnoDB,
  PARTITION p1998 VALUES LESS THAN (1999) ENGINE = InnoDB,
  PARTITION p1999 VALUES LESS THAN (2000) ENGINE = InnoDB,
  PARTITION p2000 VALUES LESS THAN (2001) ENGINE = InnoDB,
  PARTITION p2001 VALUES LESS THAN (2002) ENGINE = InnoDB,
  PARTITION p2002 VALUES LESS THAN (2003) ENGINE = InnoDB,
  PARTITION p2003 VALUES LESS THAN (2004) ENGINE = InnoDB,
  PARTITION p2004 VALUES LESS THAN (2005) ENGINE = InnoDB,
  PARTITION p2005 VALUES LESS THAN (2006) ENGINE = InnoDB,
  PARTITION p2006 VALUES LESS THAN (2007) ENGINE = InnoDB,
  PARTITION p2007 VALUES LESS THAN (2008) ENGINE = InnoDB,
  PARTITION p2008 VALUES LESS THAN (2009) ENGINE = InnoDB,
  PARTITION p2009 VALUES LESS THAN (2010) ENGINE = InnoDB,
  PARTITION p2010 VALUES LESS THAN (2011) ENGINE = InnoDB,
  PARTITION p2011 VALUES LESS THAN (2012) ENGINE = InnoDB,
  PARTITION p2012 VALUES LESS THAN (2013) ENGINE = InnoDB,
  PARTITION p2013 VALUES LESS THAN (2014) ENGINE = InnoDB,
  PARTITION p2014 VALUES LESS THAN (2015) ENGINE = InnoDB,
  PARTITION p2015 VALUES LESS THAN (2016) ENGINE = InnoDB,
  PARTITION p_new VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
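As an aside, repetitive yearly RANGE-partition clauses like the ones above are easy to generate with a short script (an illustrative sketch; the p1987..p2015 names match the DDL shown):

```python
def year_partitions(first, last):
    """Emit MySQL RANGE-partition clauses, one per year, plus a catch-all."""
    lines = [f"PARTITION p{y} VALUES LESS THAN ({y + 1}) ENGINE = InnoDB,"
             for y in range(first, last + 1)]
    lines.append("PARTITION p_new VALUES LESS THAN MAXVALUE ENGINE = InnoDB")
    return lines

parts = year_partitions(1987, 2015)
# parts[0] is the p1987 clause; the last entry is the MAXVALUE catch-all
```

The per-year layout is what lets both MySQL (partition pruning) and Spark (one JDBC query per year range) skip data they don't need.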
Even with the "covering" index, MySQL will still scan ~70M-100M rows and create a temporary table:
mysql> explain select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: ontime_part
         type: range
possible_keys: covered
          key: covered
      key_len: 2
          ref: NULL
         rows: 70483364
        Extra: Using where; Using index; Using temporary; Using filesort
1 row in set (0.00 sec)
Here is the MySQL query response time:
mysql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10;
+------------+----------+---------+----------+-----------------+------+
| min(yearD) | max_year | Carrier | cnt      | flights_delayed | rate |
+------------+----------+---------+----------+-----------------+------+
|       2003 |     2013 | EV      |  2962008 |          464264 | 0.16 |
|       2003 |     2013 | B6      |  1237400 |          187863 | 0.15 |
|       2006 |     2011 | XE      |  1615266 |          230977 | 0.14 |
|       2003 |     2005 | DH      |   501056 |           69833 | 0.14 |
|       2001 |     2013 | MQ      |  4518106 |          605698 | 0.13 |
|       2003 |     2013 | FL      |  1692887 |          212069 | 0.13 |
|       2004 |     2010 | OH      |  1307404 |          175258 | 0.13 |
|       2006 |     2013 | YV      |  1121025 |          143597 | 0.13 |
|       2003 |     2006 | RU      |  1007248 |          126733 | 0.13 |
|       1988 |     2013 | UA      | 10717383 |         1327196 | 0.12 |
+------------+----------+---------+----------+-----------------+------+
10 rows in set (19 min 16.58 sec)
A full 19 minutes. That result is nothing to be happy about.


SQL in Spark

Now we want to run the same query in Spark, with Spark reading the data from MySQL. We create the "data source" and run the following:
scala> val jdbcDF = spark.read.format("jdbc").options(
     |   Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=mysql",
     |   "dbtable" -> "ontime.ontime_sm",
     |   "fetchSize" -> "10000",
     |   "partitionColumn" -> "yeard", "lowerBound" -> "1988", "upperBound" -> "2015", "numPartitions" -> "48"
     |   )).load()
16/08/02 23:24:12 WARN JDBCRelation: The number of partitions is reduced because the specified number of partitions is less than the difference between upper bound and lower bound. Updated number of partitions: 27; Input number of partitions: 48; Lower bound: 1988; Upper bound: 2015.
jdbcDF: org.apache.spark.sql.DataFrame = [id: int, YearD: date ... 19 more fields]
scala> jdbcDF.createOrReplaceTempView("ontime")
scala> val sqlDF = sql("select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10")
sqlDF: org.apache.spark.sql.DataFrame = [min(yearD): date, max_year: date ... 4 more fields]
scala> sqlDF.show()
+----------+--------+-------+--------+---------------+----+
|min(yearD)|max_year|Carrier|     cnt|flights_delayed|rate|
+----------+--------+-------+--------+---------------+----+
|      2003|    2013|     EV| 2962008|         464264|0.16|
|      2003|    2013|     B6| 1237400|         187863|0.15|
|      2006|    2011|     XE| 1615266|         230977|0.14|
|      2003|    2005|     DH|  501056|          69833|0.14|
|      2001|    2013|     MQ| 4518106|         605698|0.13|
|      2003|    2013|     FL| 1692887|         212069|0.13|
|      2004|    2010|     OH| 1307404|         175258|0.13|
|      2006|    2013|     YV| 1121025|         143597|0.13|
|      2003|    2006|     RU| 1007248|         126733|0.13|
|      1988|    2013|     UA|10717383|        1327196|0.12|
+----------+--------+-------+--------+---------------+----+
spark-shell does not display the query execution time, but it can be found in the Web UI that spark-sql provides. I re-ran the same query in spark-sql:
./bin/spark-sql --driver-memory 4G --master spark://thor:7077
spark-sql> CREATE TEMPORARY VIEW ontime
         > USING org.apache.spark.sql.jdbc
         > OPTIONS (
         >   url "jdbc:mysql://localhost:3306/ontime?user=root&password=",
         >   dbtable "ontime.ontime_part",
         >   fetchSize "1000",
         >   partitionColumn "yearD", lowerBound "1988", upperBound "2014", numPartitions "48"
         > );
16/08/04 01:44:27 WARN JDBCRelation: The number of partitions is reduced because the specified number of partitions is less than the difference between upper bound and lower bound. Updated number of partitions: 26; Input number of partitions: 48; Lower bound: 1988; Upper bound: 2014.
Time taken: 3.864 seconds
spark-sql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10;
16/08/04 01:45:13 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
2003    2013    EV      2962008 464264  0.16
2003    2013    B6      1237400 187863  0.15
2006    2011    XE      1615266 230977  0.14
2003    2005    DH      501056  69833   0.14
2001    2013    MQ      4518106 605698  0.13
2003    2013    FL      1692887 212069  0.13
2004    2010    OH      1307404 175258  0.13
2006    2013    YV      1121025 143597  0.13
2003    2006    RU      1007248 126733  0.13
1988    2013    UA      10717383        1327196 0.12
Time taken: 139.628 seconds, Fetched 10 row(s)
The query is roughly 8 times faster (139.6 seconds versus 19 min 16.58 sec), on the same machine, and a single machine at that. But what do these queries turn into on the MySQL side, and why are they so much faster? Let's look inside MySQL to find out.


Inside MySQL

Spark:
scala> sqlDF.show()
[Stage 4:>                                                        (0 + 26) / 26]

MySQL:
mysql> select id, info from information_schema.processlist where info is not NULL and info not like '%information_schema%';
+-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id    | info                                                                                                                                                                                                                |
+-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10948 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2001 AND yearD < 2002) |
| 10965 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2007 AND yearD < 2008) |
| 10966 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1991 AND yearD < 1992) |
| 10967 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1994 AND yearD < 1995) |
| 10968 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1998 AND yearD < 1999) |
| 10969 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2010 AND yearD < 2011) |
| 10970 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2002 AND yearD < 2003) |
| 10971 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2006 AND yearD < 2007) |
| 10972 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1990 AND yearD < 1991) |
| 10953 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2009 AND yearD < 2010) |
| 10947 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI',