Installing and Using Solr 6.x on Linux (CentOS)

Installing Solr

1. Download the tgz package

Method 1:

Download the tgz package locally first, then upload it to the desired path on the Linux server with an SSH tool (scp, SFTP, etc.).

Method 2:

Run wget <download URL> to download the package directly onto the server.

 

Download URLs for the Solr tgz packages:

http://apache.fayea.com/lucene/solr/5.5.3/solr-5.5.3.tgz  (5.x release)

http://apache.fayea.com/lucene/solr/6.3.0/solr-6.3.0.tgz  (6.x release)

(Since this mirror can be slow to download from, here is a copy of the package on my cloud drive: http://pan.baidu.com/s/1pLexOmR)
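For example, to pull the 6.3.0 package directly onto the server into the current directory:

# wget http://apache.fayea.com/lucene/solr/6.3.0/solr-6.3.0.tgz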

2. Extract the package

Command: tar zxvf <path to the Solr tgz package>

After extraction you will have a Solr directory.
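For example, with the 6.3.0 package downloaded in step 1:

# tar zxvf solr-6.3.0.tgz

This produces a solr-6.3.0 directory; the install script used in step 5 is in its bin subdirectory.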

 

3. Create the application and data directories

# mkdir -p /data/solr /usr/local/solr    -- /data/solr is the data directory, /usr/local/solr is the program directory

4. Create the user that will run Solr and grant it ownership

# groupadd solr      -- create the solr group

# useradd -g solr solr    -- create the solr user and assign it to the group

# chown -R solr:solr /data/solr /usr/local/solr    -- change the owner of both directories (and their contents) to the solr user

5. Install the Solr service

# solr-5.3.0/bin/install_solr_service.sh solr-5.3.0.tgz -d /data/solr -i /usr/local/solr
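The script and tgz versions must match the package you actually downloaded; for the 6.3.0 package from step 1 the equivalent would be (a sketch, assuming the tgz and the extracted directory are both in the current working directory):

# solr-6.3.0/bin/install_solr_service.sh solr-6.3.0.tgz -d /data/solr -i /usr/local/solr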

6. Check the service status

# service solr status

 

Using Solr

View the Solr command options

# ./bin/solr

 

Solr command format:

# ./solr <command> [parameters]

Example:

# ./solr start -p 8984    -- start the Solr service on the specified port

# ./solr start -help

Usage: solr start [-f] [-c] [-h hostname] [-p port] [-d directory] [-z zkHost] [-m memory] [-e example] [-s solr.solr.home] [-a "additional-options"] [-V]

  -f : start Solr in the foreground; by default it starts in the background

  -c / -cloud : start Solr in SolrCloud mode; if the -z option is not provided, an embedded ZooKeeper instance is started on the Solr port plus 1000, e.g. 9983 or 10083

  -h <host> : specify the hostname for this Solr instance

  -p <port> : specify the port Solr listens on; the default is 8983. The port is also used to derive the stop port (STOP_PORT = SOLR_PORT - 1000) and the JMX RMI listen port (RMI_PORT = SOLR_PORT with a leading "1"). For example, with -p 8985 you get STOP_PORT = 7985 and RMI_PORT = 18985

  -d <dir> : specify the Solr server directory

  -z <zkHost> : ZooKeeper connection string; only used when running in SolrCloud mode using -c. To launch an embedded ZooKeeper instance, don't pass this parameter.

  -m <memory>  Sets the min (-Xms) and max (-Xmx) heap size for the JVM, such as: -m 4g results in: -Xms4g -Xmx4g; by default, this script sets the heap size to 512m

  -s <dir>      Sets the solr.solr.home system property; Solr will create core directories under this directory. This allows you to run multiple Solr instances on the same host while reusing the same server directory set using the -d parameter. If set, the specified directory should contain a solr.xml file, unless solr.xml exists in ZooKeeper. This parameter is ignored when running examples (-e), as the solr.solr.home depends on which example is run. The default value is server/solr.

  -e <example>  Name of the example to run; available examples:
      cloud:         SolrCloud example
      techproducts:  Comprehensive example illustrating many of Solr's core capabilities
      dih:           Data Import Handler
      schemaless:    Schema-less example

  -a            Additional parameters to pass to the JVM when starting Solr, such as to setup  Java debug options. For example, to enable a Java debugger to attach to the Solr JVM you could pass: -a "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983" In most cases, you should wrap the additional parameters in double quotes.

  -noprompt    Don't prompt for input; accept all defaults when running examples that accept user input

  -V            Verbose messages from this script
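As an illustration combining a few of these options (the values are hypothetical), the following starts Solr in SolrCloud mode on port 8983 with a 1 GB heap:

# ./bin/solr start -c -p 8983 -m 1g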

# ./bin/solr create -help

Usage: solr create [-c name] [-d confdir] [-n configName] [-shards #] [-replicationFactor #] [-p port]

  Create a core or collection depending on whether Solr is running in standalone (core) or SolrCloud mode (collection). In other words, this action detects which mode Solr is running in, and then takes the appropriate action (either create_core or create_collection). For detailed usage instructions, do:

    bin/solr create_core -help  or  bin/solr create_collection -help

# ./bin/solr create_core -help

Usage: solr create_core [-c core] [-d confdir] [-p port]

  -c <core> : name of the core (index) to create

  -d <confdir>:  Configuration directory to copy when creating the new core, built-in options are:

    basic_configs: Minimal Solr configuration
    data_driven_schema_configs: Managed schema with field-guessing support enabled
    sample_techproducts_configs: Example configuration with many optional features enabled to demonstrate the full power of Solr
    If not specified, default is: data_driven_schema_configs
    Alternatively, you can pass the path to your own configuration directory instead of using one of the built-in configurations, such as: bin/solr create_core -c mycore -d /tmp/myconfig

  -p <port> : Port of a local Solr instance where you want to create the new core. If not specified, the script will search the local system for a running Solr instance and will use the port of the first server it finds.
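For example, to create a core named mycore (a hypothetical name) with the minimal built-in configuration on a standalone instance:

# ./bin/solr create_core -c mycore -d basic_configs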

# ./bin/solr create_collection -help

Usage: solr create_collection [-c collection] [-d confdir] [-n configName] [-shards #] [-replicationFactor #] [-p port]

  -c <collection>         Name of collection to create

  -d <confdir>            Configuration directory to copy when creating the new collection, built-in options are:

      basic_configs: Minimal Solr configuration
      data_driven_schema_configs: Managed schema with field-guessing support enabled
      sample_techproducts_configs: Example configuration with many optional features enabled to demonstrate the full power of Solr
      If not specified, default is: data_driven_schema_configs.
      Alternatively, you can pass the path to your own configuration directory instead of using one of the built-in configurations, such as: bin/solr create_collection -c mycoll -d /tmp/myconfig
      By default the script will upload the specified confdir directory into ZooKeeper using the same name as the collection (-c) option. Alternatively, if you want to reuse an existing directory or create a confdir in ZooKeeper that can be shared by multiple collections, use the -n option.

  -n <configName>  Name the configuration directory in ZooKeeper; by default, the configuration will be uploaded to ZooKeeper using the collection name (-c), but if you want to use an existing directory or override the name of the configuration in ZooKeeper, then use the -n option.

  -shards <#>    Number of shards to split the collection into; default is 1

  -replicationFactor <#>  Number of copies of each document in the collection, default is 1 (no replication)

  -p <port>               Port of a local Solr instance where you want to create the new collection. If not specified, the script will search the local system for a running Solr instance and will use the port of the first server it finds.
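For example, to create a two-shard collection named mycoll (a hypothetical name) with two replicas per shard on a SolrCloud instance:

# ./bin/solr create_collection -c mycoll -shards 2 -replicationFactor 2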

# su - solr -c "/usr/local/solr/solr/bin/solr create -c gettingstarted -n data_driven_schema_configs"

Copying configuration to new core instance directory:

/data/solr/data/gettingstarted

Creating new core 'gettingstarted' using command:

http://localhost:8983/solr/admin/cores?action=CREATE&name=gettingstarted&instanceDir=gettingstarted

{

  "responseHeader":{

    "status":0,

    "QTime":3481},

  "core":"gettingstarted"}

 

Operating on a Solr core through the admin UI

Adding data:

 

 

Select Documents in the core's menu, paste data in the supported format into the Document(s) box, and click Submit Document to add it to the index.
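The same add can be done over HTTP against the update handler; a minimal sketch, assuming the gettingstarted core created above and hypothetical field values:

# curl -H 'Content-Type: application/json' \
    'http://localhost:8983/solr/gettingstarted/update?commit=true' \
    -d '[{"id":"1","title":"hello solr"}]'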

Success message:

Querying data:

Select Query and click Execute Query in the panel to search the index with the given conditions.
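Equivalently, over HTTP (assuming the same gettingstarted core):

# curl 'http://localhost:8983/solr/gettingstarted/select?q=*:*&wt=json'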

 

Deleting data:

In Documents, enter: <delete><query>*:*</query></delete><commit/>

-- deletes all documents matched by the query
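Or over HTTP, posting the same delete command (assuming the gettingstarted core; commit is requested via the URL parameter here):

# curl -H 'Content-Type: text/xml' \
    'http://localhost:8983/solr/gettingstarted/update?commit=true' \
    -d '<delete><query>*:*</query></delete>'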

 

Updating data

This is the same operation as adding data; when the id already exists, the existing document is overwritten, i.e. updated.
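Over HTTP this is just re-posting a document with an existing id; a minimal sketch, continuing the hypothetical document from the add example:

# curl -H 'Content-Type: application/json' \
    'http://localhost:8983/solr/gettingstarted/update?commit=true' \
    -d '[{"id":"1","title":"hello solr (updated)"}]'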

Before the update:

 

The update statement:

 

After the update:

 

For performing these operations from code, see: https://my.oschina.net/wxdl/blog/698922

Integrating an analyzer (word segmenter)

To download the analyzer, see https://my.oschina.net/wxdl/blog/698601

Upload the downloaded analyzer to the Linux server, under the Solr installation directory at:

 

Open the analyzer's jar (as a zip archive) and locate the following three files:

 

Copy these three files into the classes directory under WEB-INF:

 

Note: the classes directory may not exist; create it yourself if it does not.

Using the analyzer

After the integration, configure a FieldType in the configuration file under the core's conf directory:
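A minimal sketch of such a FieldType declaration, assuming the IK Analyzer is the one being integrated (the type and field names are hypothetical, and the analyzer class depends on the analyzer you actually downloaded):

<fieldType name="text_ik" class="solr.TextField">
  <analyzer class="org.wltea.analyzer.lucene.IKAnalyzer"/>
</fieldType>
<field name="content_ik" type="text_ik" indexed="true" stored="true"/>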

Then reload the core from the admin UI:
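Reloading can also be done through the CoreAdmin API (assuming the gettingstarted core created earlier):

# curl 'http://localhost:8983/solr/admin/cores?action=RELOAD&core=gettingstarted'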

At this point the analyzer should take effect:

 
