Scraping web pages with HttpWebRequest and HtmlAgilityPack (no mojibake, no regular expressions)

 

Without further ado, straight to the requirement.

The company's website needs to scrape articles from other sites. The task wasn't actually assigned to me, but a colleague spent a whole afternoon on it without success. Having just joined the company and wanting to prove myself, I took the job over. I'd done this kind of thing before and figured it would be easy, but once I started, I hit a wall: the string returned by the HTTP request was mojibake. After much searching on Baidu (Google kept failing on me), I finally found the cause. The page I was scraping is served compressed, so what comes back over the wire is the compressed bytes, and they must be decompressed first; otherwise, no matter what encoding you try, the result is still garbage. Straight to the code:

public Encoding GetEncoding(string CharacterSet)
{
    switch (CharacterSet)
    {
        case "gb2312": return Encoding.GetEncoding("gb2312");
        case "utf-8": return Encoding.UTF8;
        default: return Encoding.Default;
    }
}
public string HttpGet(string url)
{
    string responsestr = "";
    HttpWebRequest req = WebRequest.Create(url) as HttpWebRequest;
    req.Accept = "*/*";
    req.Method = "GET";
    req.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1";
    using (HttpWebResponse response = req.GetResponse() as HttpWebResponse)
    {
        Stream stream;
        // The response body may be compressed; wrap the raw stream
        // in the matching decompressor before reading it.
        if (response.ContentEncoding.ToLower().Contains("gzip"))
        {
            stream = new GZipStream(response.GetResponseStream(), CompressionMode.Decompress);
        }
        else if (response.ContentEncoding.ToLower().Contains("deflate"))
        {
            stream = new DeflateStream(response.GetResponseStream(), CompressionMode.Decompress);
        }
        else
        {
            stream = response.GetResponseStream();
        }
        // Normalize the charset name so it matches the switch in GetEncoding;
        // the StreamReader disposes the underlying stream when it is disposed.
        using (StreamReader reader = new StreamReader(stream, GetEncoding(response.CharacterSet.ToLower())))
        {
            responsestr = reader.ReadToEnd();
        }
    }
    return responsestr;
}
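As an aside, .NET's HttpWebRequest can also handle gzip/deflate transparently via its AutomaticDecompression property, which makes the manual stream-wrapping above unnecessary. A minimal sketch (the target URL is just an example; encoding detection is still up to you):

```csharp
using System;
using System.IO;
using System.Net;

class AutoDecompressDemo
{
    static void Main()
    {
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://www.cnblogs.com/");
        // Let the framework send Accept-Encoding and decompress the body for us.
        req.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
        using (WebResponse response = req.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // The stream handed back here is already decompressed.
            Console.WriteLine(reader.ReadToEnd().Length);
        }
    }
}
```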


Call HttpGet and you have the page source. Once you have it, bring out the mighty HtmlAgilityPack to parse the HTML. Don't know regex? No problem — this library is a godsend. The boss need never worry about my regular expressions again.

As for how to use this gem, there are plenty of detailed articles on cnblogs already, so I won't repeat them here.

 

Here is how to scrape the article list from the cnblogs homepage:

string html = HttpGet("http://www.cnblogs.com/");
HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(html);
// Get the article list
var artlist = doc.DocumentNode.SelectNodes("//div[@class='post_item']");
foreach (var item in artlist)
{
    // A relative XPath (note the leading dot) searches within this node only,
    // so there is no need to re-parse item.InnerHtml into a new HtmlDocument.
    var html_a = item.SelectSingleNode(".//a[@class='titlelnk']");
    Response.Write(string.Format("Title: {0}, Link: {1}<br>", html_a.InnerText, html_a.Attributes["href"].Value));
}
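If XPath isn't your thing, HtmlAgilityPack also exposes a LINQ-style API through Descendants and GetAttributeValue. An equivalent sketch of the loop above, run here on a tiny inline snippet mimicking the cnblogs markup:

```csharp
using System;
using System.Linq;
using HtmlAgilityPack;

class LinqScrapeDemo
{
    static void Main()
    {
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml("<div class='post_item'><a class='titlelnk' href='http://example.com/'>Hello</a></div>");
        // Walk all <a> descendants and keep those whose class is 'titlelnk'.
        var links = doc.DocumentNode.Descendants("a")
                       .Where(a => a.GetAttributeValue("class", "") == "titlelnk");
        foreach (var a in links)
            Console.WriteLine("Title: {0}, Link: {1}", a.InnerText, a.GetAttributeValue("href", ""));
    }
}
```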

The result of running it is shown in the screenshot:

That's a wrap.

 

 

This was written in a hurry and my prose is nothing special, so if anything is unclear, feel free to roast me — venting is healthy.
