Uninstalling MySQL on Linux
1. First remove MySQL and its components via yum: yum remove mysql*
2. Find the MySQL-related files on the system: rpm -qa | grep -i mysql
3. Then remove the remaining MySQL packages: sudo rpm -e --nodeps <package-name>
4. /etc/my.cnf is not removed by the uninstall and must be deleted by hand: rm -rf /etc/my.cnf. Both the configuration file /etc/my.cnf and the data directory /var/lib/mysql need to be deleted; the command is rm -rf <file-or-directory>.
5. Finally run rpm -qa | grep -i mysql again to confirm that no MySQL-related packages remain; an empty result means the uninstall is clean.
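The final check in the uninstall steps can be wrapped in a tiny helper. A minimal sketch: the function name is made up, and the package list is piped in so the filter can be exercised without `rpm`; on a real host you would feed it `rpm -qa`.

```shell
# Filter a package list for anything MySQL-related, mirroring
# `rpm -qa | grep -i mysql`. Reads the list from stdin so the
# filter itself can be tried without rpm installed.
find_mysql_leftovers() {
  grep -i 'mysql' || true   # `|| true`: an empty result is not an error here
}

# Simulated package list; on a real host: rpm -qa | find_mysql_leftovers
printf '%s\n' bash-5.1.8 mysql-community-server-8.0.33 coreutils-9.0 \
  | find_mysql_leftovers
# → mysql-community-server-8.0.33
```

If this prints nothing, the uninstall left no packages behind (files such as /etc/my.cnf still need the manual rm step above).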
Installing MySQL on Linux
1. Download the Linux build of MySQL. Note: the downloaded package must be uploaded to the Linux machine first, then unpacked with tar -zxvf <archive-name>.
2. Once on the Linux system, switch to root, which has the higher privileges needed to manage system services: su - root, then enter the password.
3. Check whether MySQL is already installed: rpm -qa | grep mysql or rpm -qa | grep -i mysql
4. Install with rpm -ivh <package>. Both the server and the client are needed:
   rpm -ivh MySQL-server-5.6.30-1.linux_glibc2.5.x86_64.rpm
   rpm -ivh MySQL-client-5.6.30-1.linux_glibc2.5.x86_64.rpm
   Note: the client must be installed; without it, running mysql on the Linux box only prints an error. Then start the service: service mysql start
5. After installation, use netstat -nat to check whether Linux is listening on port 3306.

Installing via yum:
1.1 Check whether it is already installed: yum list installed mysql*
1.2 Check the available packages: yum list mysql*
1.3 Install the MySQL client: yum install mysql
1.4 Install the MySQL server: yum install mysql-server
1.5 Set the database character set by adding default-character-set=utf8 to /etc/my.cnf
1.6 Start the service: service mysqld start or /etc/init.d/mysqld start
1.7 Enable start on boot: sudo chkconfig mysqld on; check with chkconfig --list | grep mysql* (mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off)
1.8 Stop the service: service mysqld stop
1.9 Set a root password and log in: mysqladmin -u root password 123456, then mysql -u root -p and enter the password
1.10 Forgot the password:
   service mysqld stop
   mysqld_safe --user=root --skip-grant-tables
   mysql -u root
   use mysql
   update user set password=password("new_pass") where user="root";
   flush privileges;
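The port check in step 5 can also be scripted. A sketch under the assumption that a running server shows up as a `LISTEN` line containing `:3306` in `netstat -nat` output; the helper name and the sample line are illustrative.

```shell
# Print "listening" if any LISTEN line mentions port 3306.
# Reads netstat-style output from stdin, so it can be tested anywhere;
# on a real host: netstat -nat | mysql_port_status
mysql_port_status() {
  if grep 'LISTEN' | grep -q ':3306'; then
    echo "listening"
  else
    echo "not listening"
  fi
}

# Simulated `netstat -nat` line:
printf 'tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN\n' | mysql_port_status
# → listening
```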
Changing the MySQL initial password
Note: stop your mysql service first: service mysql stop or /etc/init.d/mysqld stop
1. If you cannot log in as root, the initial password can be changed in a safe-mode-like way. First run mysqld_safe --skip-grant-tables & (the trailing & runs it in the background; if you leave it in the foreground, open a second terminal instead).
   <1> # mysql
       mysql> use mysql;
       mysql> UPDATE user SET password=password("123456") WHERE user='root';   (Query OK is reported on success)
       mysql> flush privileges;
       mysql> exit;
   <2> Outside the mysql shell, use mysqladmin:
       # mysqladmin -u root -p password "test123"
       Enter password: [enter the old password]
   <3> If you can log in to mysql, change it from inside the shell:
       # mysql -uroot -p
       Enter password: [enter the old password]
       mysql> use mysql;
       mysql> update user set password=password("123456") where user='root';
       mysql> flush privileges;
       mysql> exit;
2. Add MySQL to the system startup items: chkconfig mysql on. Check whether it was added: chkconfig --list | grep mysql
3. Log in to your MySQL system: mysql -uroot -p, then enter your password.
4. Add the mysql system group and user: groupadd mysql and useradd -r -g mysql mysql
5. Change the owner of the current data directory to the mysql user: chown -R mysql:mysql data
6. Put the mysql client on the default path: ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
   Note: use a symlink rather than copying the binary outright, so that multiple MySQL versions can coexist on the system.
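The skip-grant-tables recovery above reduces to one UPDATE plus FLUSH PRIVILEGES. As a hedged sketch, those statements can be generated for a given password (the helper name is invented; note that MySQL 5.7+ stores hashes in `authentication_string` and removes `PASSWORD()`, so this matches the 5.x-era versions this text uses):

```shell
# Emit the recovery SQL for a given new password.
# Valid for MySQL <= 5.6 as used in this document; newer versions
# use ALTER USER / authentication_string instead.
root_password_sql() {
  printf "UPDATE mysql.user SET password=PASSWORD('%s') WHERE user='root';\n" "$1"
  printf "FLUSH PRIVILEGES;\n"
}

# On a real host, pipe into a server started with --skip-grant-tables:
#   root_password_sql 123456 | mysql -u root
root_password_sql 123456
```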
MySQL service status, start, stop, and restart commands
service mysql start    or  /etc/init.d/mysql start
service mysql stop     or  /etc/init.d/mysql stop
service mysql restart  or  /etc/init.d/mysql restart
service mysql status   or  /etc/init.d/mysql status
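Since the four invocations differ only in the action word, they can be wrapped in one function. A sketch that only prints the command it would run, so it is safe to try anywhere; the wrapper name is illustrative.

```shell
# Print the service command for a given action; reject anything else.
# Swap `echo` for direct execution on a real host.
mysql_ctl() {
  case "$1" in
    start|stop|restart|status)
      echo "service mysql $1"    # equivalently: /etc/init.d/mysql $1
      ;;
    *)
      echo "usage: mysql_ctl {start|stop|restart|status}" >&2
      return 1
      ;;
  esac
}

mysql_ctl restart
# → service mysql restart
```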
Installing and configuring Hive
1. Start the mysql service: sudo service mysql start
2. Enable start on boot: sudo chkconfig mysql on
3. Set the root password: sudo /usr/bin/mysqladmin -u root password 'root123'
4. Log in as root: mysql -uroot -proot123
5. Create the hive user, database, and grants:
   insert into mysql.user(Host,User,Password) values("localhost","hive",password("hive"));
   create database hive;
   grant all on hive.* to hive@'%' identified by 'hive';
   grant all on hive.* to hive@'localhost' identified by 'hive';
   flush privileges;
6. Exit mysql: exit
7. Verify the hive user:
   mysql -uhive -phive
   show databases;
   +--------------------+
   | Database           |
   +--------------------+
   | information_schema |
   | hive               |
   | test               |
   +--------------------+
   3 rows in set (0.00 sec)
   Exit mysql: exit
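The statements from step 5 can be bundled into a reusable heredoc. A sketch with the user and password parameterized (function name invented); on the MySQL 5.x versions used here, `GRANT ... IDENTIFIED BY` usually creates the account if it does not yet exist, so the direct INSERT into `mysql.user` can often be skipped:

```shell
# Emit the setup SQL from step 5 for an arbitrary user/password.
# On a real host: hive_grants hive hive | mysql -u root -p
hive_grants() {
  local user="$1" pass="$2"
  cat <<SQL
CREATE DATABASE IF NOT EXISTS hive;
GRANT ALL ON hive.* TO '${user}'@'%' IDENTIFIED BY '${pass}';
GRANT ALL ON hive.* TO '${user}'@'localhost' IDENTIFIED BY '${pass}';
FLUSH PRIVILEGES;
SQL
}

hive_grants hive hive
```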
Installing Hive
1. Unpack the archive:
   cd ~
   tar -zxvf apache-hive-1.1.0-bin.tar.gz
2. Create a symlink:
   ln -s apache-hive-1.1.0-bin hive
3. Add the environment variables: vi .bash_profile and add
   export HIVE_HOME=/home/hdpsrc/hive
   export PATH=$PATH:$HIVE_HOME/bin
   then apply them: source .bash_profile
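The exports in step 3 can be sanity-checked in the same shell. A sketch using the exact paths from the text:

```shell
# Same values as in .bash_profile above.
export HIVE_HOME=/home/hdpsrc/hive
export PATH="$PATH:$HIVE_HOME/bin"

# Confirm that the bin directory actually made it onto PATH:
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "hive on PATH" ;;
  *)                    echo "hive missing from PATH" ;;
esac
# → hive on PATH
```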
Configuring Hive
4. Modify hive-site.xml
cp hive/conf/hive-default.xml.template hive/conf/hive-site.xml
Edit hive-site.xml
The main parameters to change are:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://Master:3306/hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>
<property>
  <name>hive.hwi.listen.port</name>
  <value>9999</value>
  <description>This is the port the Hive Web Interface will listen on</description>
</property>
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.fixedDatastore</name>
  <value>false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/hdpsrc/hive/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/hdpsrc/hive/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/home/hdpsrc/hive/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>

5. Copy the MySQL JDBC driver into Hive's lib directory:
   mv /home/hdpsrc/Desktop/mysql-connector-java-5.1.6-bin.jar /home/hdpsrc/hive/lib/
   (or, for an install under /usr/hive: cp mysql-connector-java-5.1.1.18-bin /usr/hive/lib)
6. Copy jline-2.12.jar into the corresponding Hadoop directory, replacing jline-0.9.94.jar; otherwise startup fails:
   cp /home/hdpsrc/hive/lib/jline-2.12.jar /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/
   mv /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar.bak
7. Create Hive's temporary directory:
   mkdir /home/hdpsrc/hive/iotmp

Start and test Hive. Initialize the metastore schema (run from $HIVE_HOME/bin):
   ./schematool -initSchema -dbType mysql -userName hive -passWord hive
With Hadoop running, start hive:
   # hive
Test with show databases:
   hive> show databases;
   OK
   default
   Time taken: 0.907 seconds, Fetched: 1 row(s)
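After the jline swap in step 6, the yarn lib directory should hold exactly one active jline jar (the old one only as a `.bak`). A sketch of that check; the helper is illustrative and reads a directory listing from stdin so it can be tried without the Hadoop tree present:

```shell
# Count active jline jars in a directory listing (the .bak does not
# end in .jar, so it is not counted).
count_jline() {
  grep -c '^jline-.*\.jar$' || true
}

# Simulated listing; on a real host: ls /home/hdpsrc/hadoop-2.6.0/share/hadoop/yarn/lib/ | count_jline
printf 'jline-2.12.jar\njline-0.9.94.jar.bak\n' | count_jline
# → 1
```

Any result other than 1 means the old jar is still in place and Hive startup will likely fail as the text warns.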
Path of the logs produced by Hive
The default location comes from hive-site.xml:

<property>
  <name>hive.querylog.location</name>
  <value>${system:java.io.tmpdir}/${system:user.name}</value>
  <description>Location of Hive run time structured log file</description>
</property>

To change it, edit the hive-log4j.properties configuration file:
cp hive-log4j.properties.template hive-log4j.properties

# list of properties
property.hive.log.level = INFO
property.hive.root.logger = DRFA
property.hive.log.dir = ${sys:java.io.tmpdir}/${sys:user.name}
property.hive.log.file = hive.log
property.hive.perflogger.log.level = INFO

1) Create the hive user in mysql and grant it sufficient privileges:
[root@node01 mysql]# mysql -u root -p
Enter password:
mysql> create user 'hive' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on *.* to 'hive' with grant option;
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

2) Check that the hive user can connect to mysql, and create the hive database:
[root@node01 mysql]# mysql -u hive -p
Enter password:
mysql> create database hive;
Query OK, 1 row affected (0.00 sec)
mysql> use hive;
Database changed
mysql> show tables;
Empty set (0.00 sec)

3) Unpack the Hive archive:
tar -xzvf hive-0.9.0.tar.gz
[hadoop@node01 ~]$ cd hive-0.9.0
[hadoop@node01 hive-0.9.0]$ ls
bin conf docs examples lib LICENSE NOTICE README.txt RELEASE_NOTES.txt scripts src

4) Download the MySQL JDBC driver and copy it into Hive's lib directory:
[hadoop@node01 ~]$ mv mysql-connector-java-5.1.24-bin.jar ./hive-0.9.0/lib

5) Update the environment variables in /etc/profile, adding Hive to PATH:
export HIVE_HOME=/home/hadoop/hive-0.9.0
export PATH=$PATH:$HIVE_HOME/bin

6) Create hive-env.sh:
[hadoop@node01 conf]$ cp hive-env.sh.template hive-env.sh
[hadoop@node01 conf]$ vi hive-env.sh

7) Copy hive-default.xml to hive-site.xml and change the four key settings to match the mysql configuration above:
[hadoop@node01 conf]$ cp hive-default.xml.template hive-site.xml
[hadoop@node01 conf]$ vi hive-site.xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>

8) Start Hadoop and test from the hive shell:
[hadoop@node01 conf]$ start-all.sh
hive> load data inpath 'hdfs://node01:9000/user/hadoop/access_log.txt'
    > overwrite into table records;
Loading data to table default.records
Moved to trash: hdfs://node01:9000/user/hive/warehouse/records
OK
Time taken: 0.526 seconds
hive> select ip, count(*) from records
    > group by ip;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201304242001_0001, Tracking URL = http://node01:50030/jobdetails.jsp?jobid=job_201304242001_0001
Kill Command = /home/hadoop/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=192.168.231.131:9001 -kill job_201304242001_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-04-24 20:11:03,127 Stage-1 map = 0%, reduce = 0%
2013-04-24 20:11:11,196 Stage-1 map = 100%, reduce = 0%
2013-04-24 20:11:23,331 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201304242001_0001
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 HDFS Read: 7118627 HDFS Write: 9 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
NULL 28134
Time taken: 33.273 seconds

records is simply a file in HDFS:
[hadoop@node01 home]$ hadoop fs -ls /user/hive/warehouse/records
Found 1 items
-rw-r--r-- 2 hadoop supergroup 7118627 2013-04-15 20:06 /user/hive/warehouse/records/access_log.txt
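For intuition, the `GROUP BY ip` query that Hive turns into a MapReduce job above is equivalent to a classic shell pipeline over the raw access log, assuming the client IP is the first whitespace-separated field (the usual access_log layout):

```shell
# Count occurrences per IP, like `select ip, count(*) ... group by ip`.
# Reads log lines from stdin; field 1 is assumed to be the IP.
count_by_ip() {
  awk '{ print $1 }' | sort | uniq -c | awk '{ print $2, $1 }'
}

# Tiny simulated access log:
printf '1.1.1.1 GET /a\n2.2.2.2 GET /b\n1.1.1.1 GET /c\n' | count_by_ip
# → 1.1.1.1 2
#   2.2.2.2 1
```

The `NULL 28134` result in the transcript suggests the `records` table's SerDe did not parse an `ip` column out of the raw file, so every row grouped under NULL.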