On Java crawlers and Python crawlers

Preface

Many people say that to learn data mining you should start with web crawlers. After working on projects large and small, I have found that acquiring the data is a crucial step before any modeling can begin. So here I want to summarize the basic crawler workflow, in both a Python version and a Java version.

Requesting a URL

The Java version:

public String call(String url) {
    StringBuilder content = new StringBuilder();
    try {
        URL realUrl = new URL(url);
        URLConnection connection = realUrl.openConnection();
        connection.connect();
        // The charset is hardcoded to GBK here; adjust it to match the target site.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "gbk"))) {
            String line;
            while ((line = in.readLine()) != null) {
                content.append(line).append("\n");
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return content.toString();
}

The Python version:

# coding=utf-8
import chardet          # third-party library that guesses the encoding of raw bytes
import urllib.request   # urllib2 was Python 2 only; Python 3 uses urllib.request

url = "http://www.baidu.com"
data = urllib.request.urlopen(url).read()   # raw bytes of the response body
charset = chardet.detect(data)
code = charset['encoding'] or 'utf-8'       # fall back to utf-8 if detection fails
content = data.decode(code, 'ignore')       # decode to str, skipping undecodable bytes
print(content)
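chardet guesses the encoding from the bytes themselves, but many servers already declare a charset in the HTTP Content-Type header, which the standard library can parse without any third-party package. A minimal sketch of that idea (the helper name `pick_charset` and its fallback behavior are my own assumptions, not from the original post):

```python
from email.message import Message

def pick_charset(content_type, body):
    """Decode a response body, preferring the charset declared in the
    Content-Type header and falling back to utf-8 with bad bytes ignored."""
    msg = Message()
    msg['Content-Type'] = content_type
    charset = msg.get_content_charset() or 'utf-8'
    try:
        return body.decode(charset)
    except (LookupError, UnicodeDecodeError):
        return body.decode('utf-8', 'ignore')

print(pick_charset('text/html; charset=gbk', '中文'.encode('gbk')))
```

In practice you would read the header from the response object (for example `resp.headers.get('Content-Type', '')` with `urllib.request`) and fall back to chardet only when no charset is declared.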

Regular expressions

The Java version:

public String call(String content) throws Exception {
    // Grab every `content":"..."` fragment from the response body.
    Pattern p = Pattern.compile("content\":\".*?\"");
    Matcher match = p.matcher(content);
    StringBuilder sb = new StringBuilder();
    String tmp;
    while (match.find()) {
        tmp = match.group();
        tmp = tmp.replaceAll("\"", "");      // drop the quotes
        tmp = tmp.replace("content:", "");   // drop the key, leaving only the value
        tmp = tmp.replaceAll("<.*?>", "");   // strip HTML tags (non-greedy, so it
                                             // removes each tag rather than everything
                                             // between the first '<' and the last '>')
        sb.append(tmp).append("\n");
    }
    return sb.toString();
}

The Python version:

import re
pattern = re.compile(regex_str)   # regex_str: the pattern to search for
matches = pattern.findall(text)   # returns a list of all matches in text
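To mirror what the Java method above does, here is a small concrete example (the sample text is made up for illustration): the pattern captures each `content":"..."` value with a group, and a second regex strips any HTML tags from the captured value.

```python
import re

# Made-up JSON-like text of the kind the Java snippet above parses.
text = '{"content":"first comment"},{"content":"second <b>comment</b>"}'

# A capturing group makes findall return just the values, so there is
# no need to strip the key and quotes afterwards as the Java code does.
pattern = re.compile(r'content":"(.*?)"')
comments = [re.sub(r'<.*?>', '', m) for m in pattern.findall(text)]
print(comments)  # ['first comment', 'second comment']
```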