Chapter 37: Web Scraping in Practice
Chapter Objectives
After completing this chapter, you will be able to:
- Fetch web pages with requests
- Parse HTML with BeautifulSoup
- Use the Scrapy framework
- Handle anti-scraping mechanisms
Requests Basics
```python
import requests

# GET request
response = requests.get('https://api.github.com')
print(response.status_code)
print(response.json())

# GET with query parameters
params = {'q': 'python', 'page': 1}
response = requests.get('https://api.github.com/search/repositories', params=params)

# POST request
data = {'key': 'value'}
response = requests.post('https://httpbin.org/post', json=data)

# Set custom headers
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get('https://example.com', headers=headers)
```
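In real scrapers it also helps to set a timeout and check the response status before using the body. A minimal sketch along those lines (the URL and the timeout value are only illustrative):

```python
import requests

try:
    # timeout limits how long we wait for the server (seconds)
    response = requests.get('https://api.github.com', timeout=5)
    # raise_for_status() turns 4xx/5xx responses into an HTTPError
    response.raise_for_status()
except requests.exceptions.Timeout:
    print('The request timed out')
except requests.exceptions.HTTPError as err:
    print(f'HTTP error: {err}')
else:
    print(response.json())
```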
BeautifulSoup
```python
from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.content, 'html.parser')

# Find elements
title = soup.find('title')
links = soup.find_all('a')
divs = soup.find_all('div', class_='content')

# CSS selectors
items = soup.select('.item > .title')

# Get the text
print(title.get_text())
```
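As a small usage example, the result of find_all() can be turned into structured data, for instance collecting every link's text and URL (example.com here is just a placeholder target):

```python
from bs4 import BeautifulSoup
import requests

response = requests.get('https://example.com')
soup = BeautifulSoup(response.content, 'html.parser')

for link in soup.find_all('a'):
    # .get() returns None instead of raising KeyError when the attribute is missing
    href = link.get('href')
    text = link.get_text(strip=True)
    if href:
        print(f'{text} -> {href}')
```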
The Scrapy Framework
```python
# myspider.py
import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
            }
        # Follow the pagination link
        next_page = response.css('li.next a::attr(href)').get()
        if next_page:
            yield response.follow(next_page, self.parse)
```
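The spider above can be run without a full project using `scrapy runspider myspider.py -o quotes.json`, or from a plain Python script with CrawlerProcess. A sketch of the script approach, assuming myspider.py sits in the current directory so QuotesSpider can be imported, with illustrative setting values:

```python
from scrapy.crawler import CrawlerProcess

from myspider import QuotesSpider  # assumes myspider.py is importable

process = CrawlerProcess(settings={
    'FEEDS': {'quotes.json': {'format': 'json'}},  # write results to a JSON file
    'DOWNLOAD_DELAY': 1,                           # wait ~1 second between requests
    'ROBOTSTXT_OBEY': True,                        # respect robots.txt
})
process.crawl(QuotesSpider)
process.start()  # blocks until the crawl finishes
```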
Dealing with Anti-Scraping Measures
```python
import random
import time

import requests

# Use a proxy (replace proxy:port with a real proxy address)
proxies = {
    'http': 'http://proxy:port',
    'https': 'https://proxy:port',
}
url = 'https://example.com'  # placeholder target
response = requests.get(url, proxies=proxies)

# Add a random delay between requests
time.sleep(random.uniform(1, 3))

# Use a Session to keep cookies across requests
session = requests.Session()
session.get('https://example.com/login')
```
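These pieces can be combined into one Session that picks a User-Agent at random and retries transient errors automatically. A sketch, where the User-Agent strings, retry counts, and target URL are illustrative assumptions:

```python
import random

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Illustrative User-Agent strings; real scrapers often keep a longer list
user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
]

session = requests.Session()
session.headers.update({'User-Agent': random.choice(user_agents)})

# Retry on common transient statuses (429/5xx) with exponential backoff
retries = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503])
session.mount('https://', HTTPAdapter(max_retries=retries))
session.mount('http://', HTTPAdapter(max_retries=retries))

response = session.get('https://httpbin.org/get', timeout=10)
print(response.status_code)
```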
Hands-On Exercises
1. Scrape a news website
2. Scrape product listings
3. Implement automated login (see the sketch below)
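For exercise 3, a hypothetical starting point is shown below; the login URL and the form field names ('username'/'password') are assumptions and must be replaced with whatever the real login form expects (check the page source or your browser's developer tools):

```python
import requests

session = requests.Session()

login_url = 'https://example.com/login'                    # hypothetical endpoint
credentials = {'username': 'alice', 'password': 'secret'}  # hypothetical field names

# Submit the login form; the Session stores any cookies the server sets
response = session.post(login_url, data=credentials)
response.raise_for_status()

# Later requests on the same Session are sent as the logged-in user
profile = session.get('https://example.com/profile')       # hypothetical page
print(profile.status_code)
```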
Next chapter: Chapter 38: Data Analysis Project