Search is a common requirement in the big-data space, and Splunk and ELK are the leaders on the commercial and open-source sides respectively. This article implements a basic data-search capability in a small amount of Python, to help you understand the underlying principles of big-data search.
The Bloom Filter Algorithm
Step one is to implement a Bloom filter.
A Bloom filter is a common algorithm in the big-data world. Its job is to filter out elements that cannot possibly be the target: if a search term does not exist in the data at all, the filter can report that absence very quickly.
Let's look at the Bloom filter code:
class Bloomfilter(object):
    """
    A Bloom filter is a probabilistic data-structure that trades space for accuracy
    when determining if a value is in a set. It can tell you if a value was possibly
    added, or if it was definitely not added, but it can't tell you for certain that
    it was added.
    """
    def __init__(self, size):
        """Setup the BF with the appropriate size"""
        self.values = [False] * size
        self.size = size

    def hash_value(self, value):
        """Hash the value provided and scale it to fit the BF size"""
        return hash(value) % self.size

    def add_value(self, value):
        """Add a value to the BF"""
        h = self.hash_value(value)
        self.values[h] = True

    def might_contain(self, value):
        """Check if the value might be in the BF"""
        h = self.hash_value(value)
        return self.values[h]

    def print_contents(self):
        """Dump the contents of the BF for debugging purposes"""
        print(self.values)
1. The basic data structure is an array (really a bitmap, using 1/0 to record whether a value is present). It starts out with no content, so every slot is initialized to False. In real use the array is made very large to keep the filter effective.
2. A hash function decides which bit a value occupies, i.e., its index in the array.
3. When a value is added to the Bloom filter, its hash is computed and the corresponding bit is set to True.
4. To check whether a value already exists (that is, has been indexed), we just compute its hash and read the True/False at the corresponding bit.
At this point you can see that if the Bloom filter returns False, the value has definitely never been indexed; but if it returns True, the value may or may not have been indexed. Using a Bloom filter in the search path lets many searches that would find nothing return early, which improves efficiency.
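One simplification worth flagging: this toy filter uses a single hash function, while production Bloom filters typically compute k independent hashes and set k bits per value, which sharply lowers the false-positive rate. A minimal sketch of that variant (my illustration, not part of the original code; the class name and the salted-MD5 scheme are arbitrary choices):

import hashlib

class MultiHashBloomfilter(object):
    """Sketch of a Bloom filter with k salted hash functions."""
    def __init__(self, size, num_hashes=3):
        self.values = [False] * size
        self.size = size
        self.num_hashes = num_hashes

    def _indexes(self, value):
        # Derive k bit positions by salting a stable digest of the value.
        for seed in range(self.num_hashes):
            digest = hashlib.md5(('%d:%s' % (seed, value)).encode('utf-8'))
            yield int(digest.hexdigest(), 16) % self.size

    def add_value(self, value):
        for i in self._indexes(value):
            self.values[i] = True

    def might_contain(self, value):
        # "Maybe present" only if every one of the k bits is set.
        return all(self.values[i] for i in self._indexes(value))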
Let's see how this code behaves:
bf = Bloomfilter(10)
bf.add_value('dog')
bf.add_value('fish')
bf.add_value('cat')
bf.print_contents()
bf.add_value('bird')
bf.print_contents()
# Note: contents are unchanged after adding bird - it collides
for term in ['dog', 'fish', 'cat', 'bird', 'duck', 'emu']:
    print('{}: {} {}'.format(term, bf.hash_value(term), bf.might_contain(term)))
The output from one run (note that Python 3 randomizes hash() for strings per process, so the exact indices, and even which terms collide, may differ on your machine unless you set PYTHONHASHSEED):
[False, False, False, False, True, True, False, False, False, True]
[False, False, False, False, True, True, False, False, False, True]
dog: 5 True
fish: 4 True
cat: 9 True
bird: 9 True
duck: 5 True
emu: 8 False
First we create a Bloom filter with a capacity of 10.
Then we add three values, 'dog', 'fish', and 'cat'; the first print_contents() line above shows the filter at that point.
Next we add 'bird', and the contents do not change, because 'bird' happens to hash to the same slot as 'fish'.
Finally we check whether each of 'dog', 'fish', 'cat', 'bird', 'duck', and 'emu' has been indexed. 'duck' comes back True even though it was never added, because it happens to share a hash with 'dog', while 'emu' comes back False.
Tokenization
The next step is tokenization. Its goal is to split the text into the smallest searchable units, namely terms. Here we only handle English; Chinese segmentation requires natural-language processing and is much more involved, whereas English can essentially be split on whitespace and punctuation.
Here is the segmentation code:
def major_segments(s):
    """
    Perform major segmenting on a string. Split the string by all of the major
    breaks, and return the set of everything found. The breaks in this implementation
    are single characters, but in Splunk proper they can be multiple characters.
    A set is used because ordering doesn't matter, and duplicates are bad.
    """
    major_breaks = ' '
    last = -1
    results = set()

    # enumerate() will give us (0, s[0]), (1, s[1]), ...
    for idx, ch in enumerate(s):
        if ch in major_breaks:
            segment = s[last+1:idx]
            results.add(segment)

            last = idx

    # The last character may not be a break so always capture
    # the last segment (which may end up being "", but yolo)
    segment = s[last+1:]
    results.add(segment)

    return results
Major segmentation
Major segmentation splits on spaces. A real tokenizer has more delimiters; for example, Splunk's default major breakers include the following, and users can also define their own:
] < > ( ) { } | ! ; , ' " * \n \r \s \t & ? + %21 %26 %2526 %3B %7C %20 %2B %3D -- %2520 %5D %5B %3A %0A %2C %28 %29
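For illustration only, here is one way major_segments could be generalized to a caller-supplied breaker set (a sketch; note that real Splunk breakers can be multi-character sequences such as %2526, which this single-character version cannot handle):

def major_segments_with(s, major_breaks):
    """Variant of major_segments with caller-supplied single-char breakers."""
    last = -1
    results = set()
    for idx, ch in enumerate(s):
        if ch in major_breaks:
            results.add(s[last+1:idx])
            last = idx
    results.add(s[last+1:])
    return results

# Example: break on spaces, commas and semicolons
print(major_segments_with('a,b;c d', ' ,;'))  # {'a', 'b', 'c', 'd'}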
def minor_segments(s):
    """
    Perform minor segmenting on a string. This is like major
    segmenting, except it also captures from the start of the
    input to each break.
    """
    minor_breaks = '_.'
    last = -1
    results = set()

    for idx, ch in enumerate(s):
        if ch in minor_breaks:
            segment = s[last+1:idx]
            results.add(segment)

            segment = s[:idx]
            results.add(segment)

            last = idx

    segment = s[last+1:]
    results.add(segment)
    results.add(s)

    return results
Minor segmentation
Minor segmentation follows the same logic as major segmentation, except that it also captures the prefix from the start of the input up to each break. For example, the minor segments of "1.2.3.4" are 1, 2, 3, 4, 1.2, and 1.2.3, plus the full string itself, as the quick check below confirms.
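A quick check of that claim, assuming the minor_segments function above is in scope:

print(sorted(minor_segments('1.2.3.4')))
# ['1', '1.2', '1.2.3', '1.2.3.4', '2', '3', '4']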
def segments(event):
    """Simple wrapper around major_segments / minor_segments"""
    results = set()
    for major in major_segments(event):
        for minor in minor_segments(major):
            results.add(minor)
    return results
So the overall tokenization logic is: run major segmentation on the text, run minor segmentation on each major segment, and return all of the resulting terms.
Let's see what this code produces (sets are unordered, so the order of your output may vary):
for term in segments('src_ip = 1.2.3.4'):
    print(term)

The output:
src
1.2
1.2.3.4
src_ip
3
1
1.2.3
ip
2
=
4
Search
Good. With tokenization and the Bloom filter in hand, we can now implement search itself.
The code:
class Splunk(object):
    def __init__(self):
        self.bf = Bloomfilter(64)
        self.terms = {}  # Dictionary of term to set of events
        self.events = []

    def add_event(self, event):
        """Adds an event to this object"""

        # Generate a unique ID for the event, and save it
        event_id = len(self.events)
        self.events.append(event)

        # Add each term to the bloomfilter, and track the event by each term
        for term in segments(event):
            self.bf.add_value(term)

            if term not in self.terms:
                self.terms[term] = set()
            self.terms[term].add(event_id)

    def search(self, term):
        """Search for a single term, and yield all the events that contain it"""

        # In Splunk this runs in O(1), and is likely to be in filesystem cache (memory)
        if not self.bf.might_contain(term):
            return

        # In Splunk this probably runs in O(log N) where N is the number of terms in the tsidx
        if term not in self.terms:
            return

        for event_id in sorted(self.terms[term]):
            yield self.events[event_id]
1. Splunk represents an indexed collection of events with search capability.
2. Each collection contains a Bloom filter, an inverted term table (a dictionary), and an array storing all the events.
3. When an event is added to the index, the following happens:
4.     A unique id is generated for the event; here it is simply its sequence number.
5.     The event is tokenized, and every term is added to the inverted table, i.e., the mapping from each term to the ids of the events containing it. Note that one term may correspond to several events, so the values in the inverted table are sets (a peek at this table follows the list). The inverted table is the core of almost every search engine.
6. When a term is searched, the following happens:
7.     Check the Bloom filter; if it returns False, return immediately.
8.     Check the term table; if the searched term is not there, return immediately.
9.     Find all matching event ids in the inverted table and yield the contents of those events.
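To make the inverted table concrete, here is a quick peek at its contents (a sketch that reads the internal terms dictionary directly; it is not a public API):

s = Splunk()
s.add_event('src_ip = 1.2.3.4')
s.add_event('dst_ip = 1.2.3.4')

# '1.2.3.4' occurs in both events (ids 0 and 1); 'src_ip' only in event 0
print(s.terms['1.2.3.4'])  # {0, 1}
print(s.terms['src_ip'])   # {0}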
Now let's run a fuller example:
s = Splunk()
s.add_event('src_ip = 1.2.3.4')
s.add_event('src_ip = 5.6.7.8')
s.add_event('dst_ip = 1.2.3.4')

for event in s.search('1.2.3.4'):
    print(event)
print('-')
for event in s.search('src_ip'):
    print(event)
print('-')
for event in s.search('ip'):
    print(event)

The output:
src_ip = 1.2.3.4
dst_ip = 1.2.3.4
-
src_ip = 1.2.3.4
src_ip = 5.6.7.8
-
src_ip = 1.2.3.4
src_ip = 5.6.7.8
dst_ip = 1.2.3.4
Pretty neat, isn't it?
More Complex Searches
Going a step further, we would like to combine search terms with AND and OR to express more complex queries.
The code:
class SplunkM(object):
    def __init__(self):
        self.bf = Bloomfilter(64)
        self.terms = {}  # Dictionary of term to set of events
        self.events = []

    def add_event(self, event):
        """Adds an event to this object"""

        # Generate a unique ID for the event, and save it
        event_id = len(self.events)
        self.events.append(event)

        # Add each term to the bloomfilter, and track the event by each term
        for term in segments(event):
            self.bf.add_value(term)
            if term not in self.terms:
                self.terms[term] = set()

            self.terms[term].add(event_id)

    def search_all(self, terms):
        """Search for an AND of all terms"""

        # Start with the universe of all events...
        results = set(range(len(self.events)))

        for term in terms:
            # If a term isn't present at all then we can stop looking
            if not self.bf.might_contain(term):
                return
            if term not in self.terms:
                return

            # Drop events that don't match from our results
            results = results.intersection(self.terms[term])

        for event_id in sorted(results):
            yield self.events[event_id]

    def search_any(self, terms):
        """Search for an OR of all terms"""
        results = set()

        for term in terms:
            # If a term isn't present, we skip it, but don't stop
            if not self.bf.might_contain(term):
                continue
            if term not in self.terms:
                continue

            # Add these events to our results
            results = results.union(self.terms[term])

        for event_id in sorted(results):
            yield self.events[event_id]
Python's set intersection and union operations make it straightforward to support AND (intersection) and OR (union).
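For intuition, the two set operations at the heart of search_all and search_any (the event ids here are made up for illustration):

a = {0, 1}                 # ids of events matching the first term
b = {1, 2}                 # ids of events matching the second term
print(a.intersection(b))   # {1}        -> AND semantics
print(a.union(b))          # {0, 1, 2}  -> OR semantics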
The run and its result:
s = SplunkM()
s.add_event('src_ip = 1.2.3.4')
s.add_event('src_ip = 5.6.7.8')
s.add_event('dst_ip = 1.2.3.4')

for event in s.search_all(['src_ip', '5.6']):
    print(event)
print('-')
for event in s.search_any(['src_ip', 'dst_ip']):
    print(event)

The output:
src_ip = 5.6.7.8
-
src_ip = 1.2.3.4
src_ip = 5.6.7.8
dst_ip = 1.2.3.4
Summary
The code above is only meant to illustrate the basic principles of big-data search: Bloom filters, tokenization, and inverted tables. It is still a very long way from a real, usable search engine. All of this material comes from Splunk Conf 2017; if you are interested, you can watch the video and slides at the links below.
Video:
https://conf.splunk.com/files/2017/recordings/a-trip-through-the-splunk-data-ingestion-and-retrieval-pipeline.mp4
Slides:
https://conf.splunk.com/files/2017/slides/a-trip-through-the-splunk-data-ingestion-and-retrieval-pipeline.pdf
| Author: Naughty
| Source: OSChina (开源中国)