(root@localhost) [test]> desc l;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| a     | int(11) | NO   | PRI | NULL    |       |
| b     | int(11) | YES  | MUL | NULL    |       |
| c     | int(11) | YES  | UNI | NULL    |       |
| d     | int(11) | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+
4 rows in set (0.00 sec)

(root@localhost) [test]> explain select b from l where c = 10;
+----+-------------+-------+------------+-------+---------------+------+---------+-------+------+----------+-------+
| id | select_type | table | partitions | type  | possible_keys | key  | key_len | ref   | rows | filtered | Extra |
+----+-------------+-------+------------+-------+---------------+------+---------+-------+------+----------+-------+
|  1 | SIMPLE      | l     | NULL       | const | c             | c    | 5       | const |    1 |   100.00 | NULL  |
+----+-------------+-------+------------+-------+---------------+------+---------+-------+------+----------+-------+
1 row in set, 1 warning (0.00 sec)

(root@localhost) [test]> explain select b from l where d = 10;
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | l     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    4 |    25.00 | Using where |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.01 sec)
Look at the key column: it tells you which index the execution plan uses. If no index is used, key is NULL, and MySQL scans all the rows.
Dropping an index online does not need an online-schema-change tool such as pt-osc: it only releases the space the index occupied, and it is fast.
alter table orders drop index xxx
Most of the time you work from the slow query log: find the slow SQL, copy it to the command line, run EXPLAIN on it to see what is going on, and add an index if one is missing. Adding an index to a table of 1.5 million rows takes roughly 4 s.
What if the online slow-log is several GB? How do you read it?
mysqldumpslow slow.log | less — this aggregates similar statements into a normalized form. -s [c|l|r|t|at] picks the sort order: c count, l lock time, r rows sent, t query time, at average query time (the default, sorted descending); -r reverses the order; -t 3 shows the top 3.
This tool parses the entire slow.log; when the file is huge, parsing is still slow, so you need to sample instead:
tail -n 10000 slow.log > analytics.log
mysqldumpslow analytics.log
Tip: how do you clear the slow log online?
Simply running > slow.log does not work: mysqld still holds the open file handle for that file, so the disk space is never released.
The right way: back it up first with mv slow.log slow.log.170302. The rename does not change the open handle, so slow queries keep being written into the renamed file.
Then run FLUSH SLOW LOGS in the database; only at that point does mysqld close the old slow-log handle and open a handle on a fresh slow log.
A table in the sys schema:
statement_analysis — this is more intuitive to read than slow.log, and it is a very important table. It never grows very large; a parameter caps its maximum number of rows (covered later).
Query x$statement_analysis instead and the times and so on are not formatted; everything is raw numbers. If you sample this table every second and take the difference between samples, you get the growth over any given window.
(root@localhost) [sys]> show create table statement_analysis\G
*************************** 1. row ***************************
                View: statement_analysis
         Create View: CREATE ALGORITHM=MERGE DEFINER=`mysql.sys`@`localhost` SQL SECURITY INVOKER VIEW `statement_analysis` AS select `sys`.`format_statement`(`performance_schema`.`events_statements_summary_by_digest`.`DIGEST_TEXT`) AS `query`,`performance_schema`.`events_statements_summary_by_digest`.`SCHEMA_NAME` AS `db`,if(((`performance_schema`.`events_statements_summary_by_digest`.`SUM_NO_GOOD_INDEX_USED` > 0) or (`performance_schema`.`events_statements_summary_by_digest`.`SUM_NO_INDEX_USED` > 0)),'*','') AS `full_scan`,`performance_schema`.`events_statements_summary_by_digest`.`COUNT_STAR` AS `exec_count`,`performance_schema`.`events_statements_summary_by_digest`.`SUM_ERRORS` AS `err_count`,`performance_schema`.`events_statements_summary_by_digest`.`SUM_WARNINGS` AS `warn_count`,`sys`.`format_time`(`performance_schema`.`events_statements_summary_by_digest`.`SUM_TIMER_WAIT`) AS `total_latency`,`sys`.`format_time`(`performance_schema`.`events_statements_summary_by_digest`.`MAX_TIMER_WAIT`) AS `max_latency`,`sys`.`format_time`(`performance_schema`.`events_statements_summary_by_digest`.`AVG_TIMER_WAIT`) AS `avg_latency`,`sys`.`format_time`(`performance_schema`.`events_statements_summary_by_digest`.`SUM_LOCK_TIME`) AS `lock_latency`,`performance_schema`.`events_statements_summary_by_digest`.`SUM_ROWS_SENT` AS `rows_sent`,round(ifnull((`performance_schema`.`events_statements_summary_by_digest`.`SUM_ROWS_SENT` / nullif(`performance_schema`.`events_statements_summary_by_digest`.`COUNT_STAR`,0)),0),0) AS `rows_sent_avg`,`performance_schema`.`events_statements_summary_by_digest`.`SUM_ROWS_EXAMINED` AS `rows_examined`,round(ifnull((`performance_schema`.`events_statements_summary_by_digest`.`SUM_ROWS_EXAMINED` / nullif(`performance_schema`.`events_statements_summary_by_digest`.`COUNT_STAR`,0)),0),0) AS `rows_examined_avg`,`performance_schema`.`events_statements_summary_by_digest`.`SUM_ROWS_AFFECTED` AS `rows_affected`,round(ifnull((`performance_schema`.`events_statements_summary_by_digest`.`SUM_ROWS_AFFECTED` / nullif(`performance_schema`.`events_statements_summary_by_digest`.`COUNT_STAR`,0)),0),0) AS `rows_affected_avg`,`performance_schema`.`events_statements_summary_by_digest`.`SUM_CREATED_TMP_TABLES` AS `tmp_tables`,`performance_schema`.`events_statements_summary_by_digest`.`SUM_CREATED_TMP_DISK_TABLES` AS `tmp_disk_tables`,`performance_schema`.`events_statements_summary_by_digest`.`SUM_SORT_ROWS` AS `rows_sorted`,`performance_schema`.`events_statements_summary_by_digest`.`SUM_SORT_MERGE_PASSES` AS `sort_merge_passes`,`performance_schema`.`events_statements_summary_by_digest`.`DIGEST` AS `digest`,`performance_schema`.`events_statements_summary_by_digest`.`FIRST_SEEN` AS `first_seen`,`performance_schema`.`events_statements_summary_by_digest`.`LAST_SEEN` AS `last_seen` from `performance_schema`.`events_statements_summary_by_digest` order by `performance_schema`.`events_statements_summary_by_digest`.`SUM_TIMER_WAIT` desc
character_set_client: utf8
collation_connection: utf8_general_ci
1 row in set (0.00 sec)
You can see it is a view whose data comes from events_statements_summary_by_digest in the performance_schema schema, and the view is itself ordered by total wait time (SUM_TIMER_WAIT) descending, which makes it convenient for building AWR-style tooling later.
Tip:
Every table in the sys schema is a view, provided for convenient statistics; previously you had to go to performance_schema and read events_statements_summary_by_digest directly.
statements_with_errors_or_warnings — statements that produced errors or warnings
statements_with_full_table_scans — statements that used no index, i.e. full table scans
statements_with_sorting — statements that involved sorting
statements_with_temp_tables — statements that used temporary tables
To find which SQL statements are slow on average online, look at the sys schema; to find which point in time was slow, look at slow.log.
What about 5.6, which has no sys schema?
Create the sys schema yourself:
cd /tmp
git clone https://github.com/mysql/mysql-sys
cd mysql-sys/
mysql -u root -p < ./sys_56.sql
It seems to have slightly fewer tables than 5.7.
Tip:
These views can be thought of as living in memory and do not add much overhead. From 5.6 on they actually require the performance_schema parameter to be enabled, but it is enabled by default.
The performance_schema schema itself is too specialized; much of it relates to server internals, so ordinary users are not advised to dig into it. Being able to read the sys schema is already quite good.
The sys schema also has a schema_index_statistics view for inspecting how each index is used: the counts and time spent on inserts, deletes, selects, and updates are all visible, so you can tell which index on which table is most active.
statement_analysis, schema_index_statistics, and the slow query log combined are enough for a first round of tuning.