Scrapy framework (I): basic use
2022-06-27 15:34:00 【User 8336546】
Preface
This article briefly introduces the basic use of the Scrapy framework, along with some problems encountered during use and their solutions.
Basic use of the Scrapy framework
Environment setup
1. Install wheel with the following command:
pip install wheel
2. Download Twisted
Download link: http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
Note: there are two points to watch when downloading:
- Download the file corresponding to your own Python version: the cpxx in the filename is the version number. (For example, my Python version is 3.8.2, so I download the cp38 file.)
- Download the file corresponding to your operating system's architecture: 32-bit systems download win32; 64-bit systems download win_amd64.
3. Install Twisted
In the directory containing the Twisted file downloaded in the previous step, run:
pip install Twisted-20.3.0-cp38-cp38-win_amd64.whl
4. Install pywin32 with the following command:
pip install pywin32
5. Install Scrapy with the following command:
pip install scrapy
6. Test the installation
Enter the scrapy command in the terminal; if no error is reported, the installation was successful.
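To confirm the installation more explicitly, the installed version can also be printed (the exact version shown will depend on what pip installed):

```shell
# print the installed Scrapy version; a version string confirms the install worked
scrapy version
```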
establish scrapy engineering
Here it is. PyCharm Created in the scrapy engineering
1. open Terminal panel , Enter the following instructions to create a scrapy engineering
scrapy startproject ProjectName
ProjectName is the project name; choose your own.
2. The following directory structure is generated automatically.
3. Create a spider file
First, enter the newly created project directory:
cd ProjectName
Then create a spider file in the spiders subdirectory:
scrapy genspider spiderName www.xxx.com
spiderName is the name of the spider file; choose your own.
4. Run the project
scrapy crawl spiderName
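Putting the steps above together, a typical session looks like this (ProjectName, spiderName, and www.xxx.com are the placeholders from the text; the -o flag is an optional extra that saves scraped items to a file):

```shell
# create the project skeleton
scrapy startproject ProjectName
cd ProjectName
# generate a spider named spiderName restricted to www.xxx.com
scrapy genspider spiderName www.xxx.com
# run the spider; -o optionally writes the scraped items to items.json
scrapy crawl spiderName -o items.json
```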
Modifying file parameters
To run the crawler project smoothly, some file parameters need to be modified.
1. spiderName.py
The generated spider file looks like this:
import scrapy

class FirstSpider(scrapy.Spider):
    # Spider name: the unique identifier of the spider source file
    name = 'spiderName'
    # Allowed domains: restricts which URLs in start_urls may actually be requested
    allowed_domains = ['www.baidu.com']
    # Initial URL list: Scrapy automatically sends requests to the URLs stored here
    start_urls = ['http://www.baidu.com/', 'https://www.douban.com']

    # Data parsing: the response parameter is the response object returned after a successful request
    def parse(self, response):
        pass

Notes:
The allowed_domains list restricts which URLs may be requested. In general it is not needed; just comment it out.
2. settings.py
1). ROBOTSTXT_OBEY
Find the ROBOTSTXT_OBEY keyword; its default value is True (i.e., the project obeys the robots protocol by default). For practice purposes, it can be temporarily changed to False.
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
2). USER_AGENT
Find the USER_AGENT keyword; it is commented out by default. Uncomment it and set its value to avoid UA-based anti-crawling.
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36 Edg/88.0.705.50'
3). LOG_LEVEL
To view the project's run results more clearly (by default a large amount of log information is printed), the LOG_LEVEL keyword can be added manually.
# Display only the specified type of log information
LOG_LEVEL = 'ERROR'  # Show only error messages
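The three changes above can be collected into a single settings.py fragment (the User-Agent string is the example from this article; in practice, copy your own browser's UA):

```python
# settings.py -- the three tweaks discussed above

# do not obey robots.txt while practicing (default is True)
ROBOTSTXT_OBEY = False

# identify as a regular browser to avoid simple UA-based anti-crawling
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36 Edg/88.0.705.50'

# suppress the verbose default logging; print only error-level messages
LOG_LEVEL = 'ERROR'
```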
Possible problems
1. Scrapy installed successfully, but after the spider file is created, import scrapy still reports an error.
My practice environment was a virtual environment created from Python 3.8, but when building the Scrapy project, pip install scrapy kept reporting errors.
At first I manually downloaded the Scrapy library from the official website (https://scrapy.org/) and installed it into the virtual environment's site-packages directory. Sure enough, import scrapy then looked normal and the program could run, but it still printed a lot of error messages. Whether the Scrapy library is included can be checked through PyCharm's Python Interpreter settings.
I tried several solutions, to no avail…
Finally I found that Anaconda ships with its own Scrapy library, so I created a virtual environment based on Anaconda, and it ran perfectly.
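For reference, creating a Scrapy-ready environment with Anaconda can look like the following (the environment name scrapy_env is arbitrary, and installing from the conda-forge channel is one common approach, not the only one):

```shell
# create a fresh Python 3.8 environment
conda create -n scrapy_env python=3.8 -y
# switch into it
conda activate scrapy_env
# install Scrapy and its compiled dependencies (Twisted, lxml, etc.) in one step
conda install -c conda-forge scrapy -y
```

This sidesteps the manual Twisted wheel download entirely, since conda provides prebuilt binaries.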
Conclusion
study hard