
Building a Web-Page Traffic Booster in Python

Source: 中文源码网 · Views: 179 · Date: 2024-05-17 08:05:52
Preparation
Required environment:
Python 3
Getting started
Let's start with a simple version. Straight to the code:
import urllib.request
import urllib.error

# Define a get method that opens a request and returns the HTTP status code
def get(req):
    code = urllib.request.urlopen(req).code
    return code

if __name__ == '__main__':
    # Set up some basic properties
    url = "http://shua.zwyuanma.com"
    user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36"
    headers = {'User-Agent': user_agent}
    req = urllib.request.Request(url, headers=headers)
    # Visit counter
    i = 1
    while 1:
        code = get(req)
        print('Visit ' + str(i) + ': ' + str(code))
        i = i + 1
Quick and dirty. But this only inflates page views (PV); the source IP never changes, so search engines can spot it easily. Let's improve it.
Adding proxy support
Add the following code to the get method (remember to add import random at the top):
    random_proxy = random.choice(proxies)
    proxy_support = urllib.request.ProxyHandler({"http": random_proxy})
    opener = urllib.request.build_opener(proxy_support)
    urllib.request.install_opener(opener)
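Put together, the modified get method might look like the sketch below: pick a random proxy, install an opener that routes HTTP traffic through it, then open the request as before.

```python
import random
import urllib.request

def get(req, proxies):
    # Pick a random proxy from the list for this request
    random_proxy = random.choice(proxies)
    # Install an opener that routes HTTP traffic through that proxy
    proxy_support = urllib.request.ProxyHandler({"http": random_proxy})
    opener = urllib.request.build_opener(proxy_support)
    urllib.request.install_opener(opener)
    # Open the request through the freshly installed proxy opener
    code = urllib.request.urlopen(req).code
    return code
```

Note that install_opener changes the opener globally, so every later urlopen call in the process goes through the last proxy installed.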
Then modify the main method:

if __name__ == '__main__':
    url = "http://shua.zwyuanma.com"
    # Proxy list; you can search Baidu for free proxies to extend it
    proxies = ["124.88.67.22:80","124.88.67.82:80","124.88.67.81:80","124.88.67.31:80","124.88.67.19:80","58.23.16.240:80"]
    user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36"
    headers = {'User-Agent': user_agent}
    req = urllib.request.Request(url, headers=headers)
    i = 1
    while 1:
        # Pass the proxy list as an extra argument
        code = get(req, proxies)
        print('Proxy visit ' + str(i) + ': ' + str(code))
        i = i + 1
Almost there, but one bug remains: if the page becomes unreachable or a proxy dies, the program simply exits. Next, let's add exception handling.
Exception handling
Define a mail method to send an email alert:
import smtplib
from email.mime.text import MIMEText

def mail(txt):
    _user = "your account"
    _pwd = "your password"
    _to = "recipient address"
    msg = MIMEText(txt, 'plain', 'utf-8')
    # Subject line
    msg["Subject"] = "Proxy failed!"
    msg["From"] = _user
    msg["To"] = _to
    try:
        # I'm using QQ Mail's SSL SMTP server here
        s = smtplib.SMTP_SSL("smtp.qq.com", 465)
        s.login(_user, _pwd)
        s.sendmail(_user, _to, msg.as_string())
        s.quit()
        print("Success!")
    except smtplib.SMTPException as e:
        print("Failed, %s" % e)
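As a quick sanity check, the message that mail builds can be inspected before anything is sent; no SMTP connection is needed. The addresses below are hypothetical placeholders.

```python
from email.mime.text import MIMEText

# Build the alert message exactly as mail() does, without sending it
msg = MIMEText("proxy 124.88.67.22:80 failed", 'plain', 'utf-8')
msg["Subject"] = "Proxy failed!"
msg["From"] = "sender@example.com"    # hypothetical sender
msg["To"] = "receiver@example.com"    # hypothetical recipient
raw = msg.as_string()                 # the exact string sendmail() would transmit
print(msg["Subject"])
```

Because the charset is utf-8, the body is transfer-encoded in raw; only the headers stay human-readable.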
Then modify the main method once more:

if __name__ == '__main__':
    url = "http://shua.zwyuanma.com"
    proxies = ["124.88.67.22:80","124.88.67.82:80","124.88.67.81:80","124.88.67.31:80","124.88.67.19:80","58.23.16.240:80"]
    user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.63 Safari/537.36"
    headers = {'User-Agent': user_agent}
    req = urllib.request.Request(url, headers=headers)
    i = 1
    while 1:
        try:
            code = get(req, proxies)
            print('Proxy visit ' + str(i) + ': ' + str(code))
            i = i + 1
        except urllib.error.HTTPError as e:
            print(e.code)
            # Call the mail method to send an alert
            mail(str(e.code))
        except urllib.error.URLError as err:
            print(err.reason)
            # Call the mail method to send an alert
            mail(str(err.reason))
Done!
Conclusion
The whole program is barely 50 lines, and there is still room to improve it:
for example, fetching the proxy list automatically, adding a GUI, or extending it with multithreading.
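The first of those improvements, loading the proxy list from a file instead of hardcoding it, takes only a few lines. This is a sketch; the filename proxies.txt is an assumption, not from the original.

```python
def load_proxies(path="proxies.txt"):
    # One "ip:port" entry per line; blank lines are skipped
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]
```

The returned list can be passed straight to get in place of the hardcoded proxies list.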
Finally, let me share another reader's version:
# Python 3 port of the original (which used the Python 2 urllib2 and thread modules)
import urllib.request
import threading
import time

i = 0
mylock = threading.Lock()

def test(no, r):
    global i
    url = 'http://blog.csdn.net'
    for j in range(1, r):
        req = urllib.request.Request(url)
        req.add_header("User-Agent", "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)")
        file = urllib.request.urlopen(req)
        print(file.getcode())
        # Protect the shared counter with a lock
        mylock.acquire()
        i += 1
        mylock.release()
        print(i)

def fast():
    threading.Thread(target=test, args=(1, 50)).start()
    threading.Thread(target=test, args=(2, 50)).start()

fast()
time.sleep(15)
In testing, the server starts returning 503 errors with more than two threads, so two threads is just right.
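When a 503 does come back, backing off and retrying is gentler than hammering the server. Here is a minimal sketch; the exponential-delay formula and retry count are assumptions, not from the original.

```python
import time
import urllib.request
import urllib.error

def backoff_delay(attempt, base=1.0, cap=60.0):
    # Exponential backoff: 1s, 2s, 4s, ... capped at 60s
    return min(base * (2 ** attempt), cap)

def fetch_with_backoff(url, retries=5):
    # Retry on 503, sleeping longer after each failure; re-raise other HTTP errors
    for attempt in range(retries):
        try:
            return urllib.request.urlopen(url).code
        except urllib.error.HTTPError as e:
            if e.code != 503:
                raise
            time.sleep(backoff_delay(attempt))
    return None
```

A thread that wraps its urlopen call in fetch_with_backoff will slow itself down instead of dying when the server is overloaded.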
