Exploits!
Hi. I just googled this list of exploits, Exploits.txt. Can any of you more experienced guys take a look at it and tell me if it's outdated? I'm planning on making a Python program that checks a certain site for all the exploits in this list… But before I do, I need to know if some of these actually work :P
Here come 2 more:
If you need a list of vulnerabilities, OWASP has a great one that has helped me several times in the past. There is a page for each attack consisting of a brief summary as well as examples of its use. Check it out.
http://www.owasp.org/index.php/Category:Attack
I'll post the list below for everyone to see. (Yes, even those of you who were too lazy to click the link.)
* Argument Injection or Modification
* Asymmetric resource consumption (amplification)
* Blind SQL Injection
* Blind XPath Injection
* Brute force attack
* Buffer overflow attack
* CSRF
* Cache Poisoning
* Code Injection
* Command Injection
* Comment Injection Attack
* Cross Site Tracing
* Cross-Site Request Forgery (CSRF)
* Cross-User Defacement
* Cross-site Scripting (XSS)
* Cryptanalysis
* Custom Special Character Injection
* Denial of Service
* Direct Dynamic Code Evaluation ('Eval Injection')
* Direct Static Code Injection
* Double Encoding
* Forced browsing
* Format string attack
* Full Path Disclosure
* HTTP Request Smuggling
* HTTP Response Splitting
* LDAP injection
* Man-in-the-browser attack
* Man-in-the-middle attack
* Mobile code: invoking untrusted mobile code
* Mobile code: non-final public field
* Mobile code: object hijack
* Network Eavesdropping
* One-Click Attack
* Overflow Binary Resource File
* Page Hijacking
* Parameter Delimiter
* Path Manipulation
* Path Traversal
* Relative Path Traversal
* Repudiation Attack
* Resource Injection
* SQL Injection
* Server-Side Includes (SSI) Injection
* Session Prediction
* Session fixation
* Session hijacking attack
* Setting Manipulation
* Special Element Injection
* Spyware
* Traffic flood
* Trojan Horse
* Unicode Encoding
* Web Parameter Tampering
* XPATH Injection
* XSRF
ranma wrote: [quote]Avlid wrote: Why not exploit the places you want yourself? I really think that's a good idea :)
- It can be offered to a 3rd party to test their own website
- Experience
- A perfunctory check on a website before you go in and do stuff yourself[/quote]
indeed =D
trying all those exploits by hand would take you ages!
The lists I posted are a bit outdated, yes I know :P But scrolling through them made me wonder, are there many sites that still use CGI?
From webopedia.com: A CGI program is any program designed to accept and return data that conforms to the CGI specification. The program could be written in any programming language, including C, Perl, Java, or Visual Basic.
I don't know if this is true but from what I've noticed web developers nowadays build their sites using PHP/ASP/.NET & SQL etc. It's really rare to see a site with a CGI directory anymore… Am I blind or is it truly so?
spyware wrote: [quote]Demons Halo wrote: I don't know if this is true but from what I've noticed web developers nowadays build their sites using PHP/ASP/.NET & SQL etc. It's really rare to see a site with a CGI directory anymore… Am I blind or is it truly so?
Perl is frequently used.[/quote]
I see :)
well I've compiled a list that contains many "url exploits". Now I was thinking of making a script that takes in every line in that list and tries it next to the site name. EX:
Site: www.hellboundhackers.org
First line: /.htaccess
Python tries: www.hellboundhackers.org/.htaccess
When Python tries that URL, some response will come back ofc. It might be: access denied or file not found etc. What is the best way to sort through all those "bad responses", capturing the ones I could use? As you know there could be hundreds of different responses, so I can't tell Python which ones to keep by hand. Is there some built-in way to sort through such stuff?
cheers
Demons Halo wrote: When Python tries that URL, some response will come back ofc. It might be: access denied or file not found etc. What is the best way to sort through all those "bad responses", capturing the ones I could use? As you know there could be hundreds of different responses, so I can't tell Python which ones to keep by hand. Is there some built-in way to sort through such stuff?
cheers
Anything but 404 is interesting.
Also, if you're serious about making a Nessus-like scanner, be prepared for years of research before you can even attempt something like this. If you want to produce a useful scanner, that is.
I doubt you know what you have to know to build this thing.
I doubt you know what you have to know to build this thing.
So do I :P Although it's something fun to do, even if I don't get it right, I'll for sure learn more about a library or 2 ^^
besides, now that you mention it, all I need to do is isolate stuff like 404's and write the rest into a file. It seems so easy when I think about it, but I'm sure it will be a lot harder ;P
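Something like this is what I'm picturing for the filtering part (just a rough sketch in Python 2 / urllib2 -- the check() function and the result messages are made up by me, not from the real script):

import urllib2

def check(address):
    try:
        urllib2.urlopen(address)
    except urllib2.HTTPError, e:
        # the server answered, but with an error code
        if e.code == 404:
            return None                                 # file simply isn't there -- boring
        return '%s -> HTTP %d' % (address, e.code)      # 403, 401, 500... worth a look
    except urllib2.URLError:
        return None                                     # couldn't connect / bad URL at all
    else:
        return '%s -> 200 OK' % address                 # definitely interesting

print check('http://www.hellboundhackers.org/.htaccess')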
If you have some tips, don't hesitate!
S1L3NTKn1GhT wrote: I feel another fuzzer script coming :p
To be honest I did not know what the word fuzzer meant, so I googled it:
A program used to generate random "fuzz" for testing purposes.
I must say THANKS! I found some useful stuff googling that word :D So far everything I found indicates that all I need to do is find a fast way to request a certain URL with each exploit from my list, filter the undesired ones, and save the possible exploits into a new file ^^
sounds like a fun project :P
@ranma: exactly!!! I like you :D let's get married (L)
ranma wrote: :love: How did you know I was single?
I just know pervert smile
On topic: The script is ready, but there seems to be one tiny little problem! With a smaller exploit list (like 20-30 items) the script runs decently fast, but when I use the big list (LOTS OF ITEMS :P) Python does not respond xD Now I was expecting this, so the question is: should I set a low timeout? Or is there a way to make the script check if a certain URL exists REALLY FAST?
:ninja:
Demons Halo wrote: The script is ready, but there seems to be one tiny little problem! With a smaller exploit list (like 20-30 items) the script runs decently fast, but when I use the big list (LOTS OF ITEMS :P) Python does not respond xD Now I was expecting this, so the question is: should I set a low timeout? Or is there a way to make the script check if a certain URL exists REALLY FAST?
If it times out on a long list of items, why not try separating them into more lists? Then parse the lists in order, maybe with some sort of pause after each list. As each list completes you should also store its results and then remove the list from memory. I don't know the details of your script, nor am I a Python master, so these are just my best guesses and advice that I can offer.
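Roughly this kind of structure is what I have in mind (just a sketch -- the chunk size, file names and the pause are all made-up placeholders, not anything from your script):

import time

CHUNK_SIZE = 50                       # however many exploits per smaller list feels right

exploits = [line.strip() for line in open("exploits.txt")]

for i in range(0, len(exploits), CHUNK_SIZE):
    chunk = exploits[i:i + CHUNK_SIZE]
    results = []
    for item in chunk:
        # ... do the urlopen() check here and append anything interesting to results ...
        pass
    # write this chunk's results out straight away, then let the list go
    out = open("results.txt", "a")
    for r in results:
        out.write(r + "\n")
    out.close()
    time.sleep(2)                     # small pause between chunks so the site gets a breather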
p4plus2 wrote: [quote]Demons Halo wrote: The script is ready, but there seems to be one tiny little problem! With a smaller exploit list (like 20-30 items) the script runs decently fast, but when I use the big list (LOTS OF ITEMS :P) Python does not respond xD Now I was expecting this, so the question is: should I set a low timeout? Or is there a way to make the script check if a certain URL exists REALLY FAST?
If it times out on a long list of items, why not try separating them into more lists? Then parse the lists in order, maybe with some sort of pause after each list. As each list completes you should also store its results and then remove the list from memory. I don't know the details of your script, nor am I a Python master, so these are just my best guesses and advice that I can offer.[/quote]
I was working on that while you wrote this post XD I've divided the big list into smaller lists, each with a certain type of exploit (SQL/PHP/etc.).
Yet I still need the script to go faster… any ideas? Here comes the most important part of the script:
from urllib2 import *

exploit_list = []
result_list = []
site = 'http://www.hellboundhackers.org'

# read the exploit paths, one per line
f = open("sql_exploits.txt", "r")
for line in f:
    exploit_list.append(line.strip())   # strip the newline so it doesn't end up in the URL
f.close()

outp = open("results.txt", "w")
x = 0
for item in exploit_list:
    address = site + item
    try:
        urlopen(address)
    except HTTPError:
        # server answered with an error code (404 etc.) -- skip it
        continue
    except URLError:
        # couldn't connect at all
        print address
    else:
        # the request went through -- keep it
        outp.write(address + "\n")
    x += 1
    print x
outp.close()
The script takes like 5 sec per exploit line until it times out and goes on to the next line in the file. So from experience, how many ms should I set as a timeout limit? 5 secs per exploit is damn much -_-
Also, what do you think about the code? This is not all of it, yet it's still the most important part ^^
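Right now urlopen() just waits with the default timeout. I'm guessing I could set one with something like this (not 100% sure -- setdefaulttimeout() should work on any Python 2.x, while the per-request timeout argument needs 2.6+ as far as I know):

import socket
socket.setdefaulttimeout(2)          # every request gives up after 2 seconds instead of hanging

# or per request (Python 2.6+):
# urlopen(address, timeout=2)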
Well you do realize it IS using internet requests so the speed will be based off of your internet connection… I imagine you could send multiple requests and THEN parse the info. That might make it so it doesn't seem to be taking forever for just one exploit.
Another fun thing to do might be to add a function that looks for paths or other error messages so you don't have to look through all of what you receive. That would be helpful for larger lists of exploits.
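If you want to try the multiple-requests idea, something roughly like this might do it (just a sketch, not your code -- worker(), the queues and the thread count are all made up, and the timeout argument to urlopen() needs Python 2.6+):

import threading, Queue, urllib2

site = 'http://www.hellboundhackers.org'

def worker(jobs, hits):
    while True:
        try:
            address = jobs.get_nowait()
        except Queue.Empty:
            return                                   # queue is empty, this thread is done
        try:
            urllib2.urlopen(address, timeout=2)
        except urllib2.HTTPError, e:
            if e.code != 404:                        # 404 = boring, anything else = keep it
                hits.put('%s -> HTTP %d' % (address, e.code))
        except urllib2.URLError:
            pass                                     # couldn't connect at all, skip
        else:
            hits.put('%s -> 200 OK' % address)

jobs, hits = Queue.Queue(), Queue.Queue()
for line in open('sql_exploits.txt'):
    jobs.put(site + line.strip())

threads = [threading.Thread(target=worker, args=(jobs, hits)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

out = open('results.txt', 'w')
while not hits.empty():
    out.write(hits.get() + '\n')
out.close()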
fashizzlepop wrote: Well you do realize it IS using internet requests so the speed will be based off of your internet connection… I imagine you could send multiple requests and THEN parse the info. That might make it so it doesn't seem to be taking forever for just one exploit.
Another fun thing to do might be to add a function that looks for paths or other error messages so you don't have to look through all of what you receive. That would be helpful for larger lists of exploits.
ofc… I'm using 24/1 so speed is not an issue (I guess :P). I thought about using multiple requests, but the problem is that the site would go crazy xD
well the exceptions above (especially HTTPError) are almost the only kind of response bothering me :P all other responses are worth looking at =D
stealth- wrote: Once you finish this, I'm sure it would make a great addition to the code bank.
I sure hope so :P
A script that takes a variable from a text file, and then makes a POST/GET request with that variable? Wonder why this page isn't slashdotted yet. Truly unique script!
hahaha we can worry about that type of stuff later on :P let me get this shit working properly first xD I'm not a pro you know -_-
Demons Halo wrote: [quote]stealth- wrote: Once you finish this, I'm sure it would make a great addition to the code bank.
I sure hope so :P
A script that takes a variable from a text file, and then makes a POST/GET request with that variable? Wonder why this page isn't slashdotted yet. Truly unique script!
hahaha we can worry about that type of stuff later on :P let me get this shit working properly first xD I'm not a pro you know -_-[/quote]
Uhh….I'm pretty sure Spyware was being sarcastic. Seriously people, his sarcasm isn't that difficult to comprehend. :angry:
ranma wrote: lol, you guys crack me up.
On another note, I don't understand what your script does exactly. Could you explain?
It takes one line at a time from a file full of URL exploits and tries each one on a given site. If the response is something other than a 404 or an invalid URL, it saves the address to a new text file called results.
@ spyware: It's amusing to see you pointing your dick @ ppl who are like you were a couple years ago xD I love you man… (L) :D
spyware wrote: [quote]ranma wrote: lol, you guys crack me up.
On another note, I don't understand what your script does exactly. Could you explain?
"Rank: HBH Guru"[/quote]
Oh shut up. From his posts I got the feeling he was just saving the urls. I didn't notice he said something other than 404. Makes sense now.
ranma wrote: [quote]spyware wrote: [quote]ranma wrote: lol, you guys crack me up.
On another note, I don't understand what your script does exactly. Could you explain?
"Rank: HBH Guru"[/quote]
Oh shut up. From his posts I got the feeling he was just saving the urls. I didn't notice he said something other than 404. Makes sense now.[/quote]
I think what spy meant was that a member with the rank "HBH Guru" should be able to understand what a simple Python-script does by reading the code. ;)