EvilZone
Hacking and Security => Topic started by: Warhawk on September 24, 2012, 03:23:13 AM
-
Hello All,
I would like to know how to get a listing of the files in a directory on a server, for example:
www.awebsite.com/csvfiles/
which contains files such as:
www.awebsite.com/csvfiles/f.zip
www.awebsite.com/csvfiles/u.zip
www.awebsite.com/csvfiles/a.zip
www.awebsite.com/csvfiles/anything.zip
I know the link to www.awebsite.com/csvfiles/a.zip, but I do not know the links to f.zip, u.zip, or any of the other files on the server, nor the names under which they are saved. How can I get the links to all the files under www.awebsite.com/csvfiles/?
Any suggestions would be highly appreciated. Thank you, mate.
Staff note: Normal-sized text is fine. There is no need for unnecessary bold and/or enlarged fonts.
-
The web server is configured to forbid directory listing, so you cannot bypass this unless you can remotely exploit some hole on the server that lets you execute "ls" or "dir" commands.
More info at: http://www.checkupdown.com/status/E403.html
You can also try to bruteforce filenames, but this will generate a lot of noise and will raise suspicion.
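A minimal sketch of what that bruteforcing looks like in Python, assuming the example directory from the post and a made-up wordlist (in practice you would read a proper wordlist from a file):

import urllib.error
import urllib.request

base = "http://www.awebsite.com/csvfiles/"            # example directory from the post
wordlist = ["a", "b", "f", "u", "data", "export"]     # placeholder guesses; use a real wordlist file

for name in wordlist:
    url = base + name + ".zip"
    req = urllib.request.Request(url, method="HEAD")  # HEAD is enough to see if the file exists
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status, url)                   # 200 means the guess hit a real file
    except urllib.error.HTTPError as e:
        if e.code != 404:
            print(e.code, url)                        # 403 and friends can still be interesting
    except urllib.error.URLError:
        pass                                          # connection trouble: skip the guess

Every one of those requests shows up in the server logs, which is exactly the noise mentioned above.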
-
https://www.owasp.org/index.php/Category:OWASP_DirBuster_Project
-
You could bruteforce, but as said, it generates a lot of noise for the admin, IDS, etc.
If the links are on the website somewhere, you could use a spider to pull all the <a> tags and find the links.
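Here is a rough single-page version of that idea in Python, assuming the example site from the post (no recursion, just the <a> tags of one page):

from html.parser import HTMLParser
import urllib.request

class LinkCollector(HTMLParser):
    # Collects every href value from the <a> tags of a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

url = "http://www.awebsite.com/"                      # example site from the post
html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "replace")

parser = LinkCollector()
parser.feed(html)

for link in parser.links:
    if link.endswith(".zip"):                         # only the zip links matter here
        print(link)

A real spider would also follow each collected link and repeat, but this shows the <a>-tag extraction part.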
An old-school method to find hidden/disallowed dirs is to check robots.txt, but that won't include direct file links, and it only works if the admin wants those dirs excluded by search engines.
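Checking it is trivial; a quick sketch, again using the example domain from the post:

import urllib.request

url = "http://www.awebsite.com/robots.txt"            # example site from the post
text = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "replace")

for line in text.splitlines():
    line = line.strip()
    if line.lower().startswith("disallow:"):
        print(line.split(":", 1)[1].strip())          # paths the admin does not want indexed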
You could craft some google dorks, or write up a quick script to bruteforce using Google, unless the admin disallows search engines from indexing the directories. Then again, the website would have to have been cached/spidered by Google beforehand.
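For instance, something along the lines of the following dork (using the example domain from the post) would show whatever Google has already indexed under that directory:

site:awebsite.com inurl:csvfiles

You can try narrowing it further with filetype:zip or a quoted ".zip", but that only helps if the files themselves were indexed.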