Author Topic: Web hacking Question  (Read 1531 times)


Offline Warhawk

  • NULL
  • Posts: 2
  • Cookies: 0
Web hacking Question
« on: September 24, 2012, 03:23:13 am »
Hello All,


I would like to know how to get a listing of the files in a directory on a server. For example, I know this file exists:


www.awebsite.com/csvfiles/a.zip

The same directory also holds files such as:
www.awebsite.com/csvfiles/f.zip
www.awebsite.com/csvfiles/u.zip
www.awebsite.com/csvfiles/a.zip
www.awebsite.com/csvfiles/anything.zip

I know about www.awebsite.com/csvfiles/a.zip, but I do not know the links to f.zip, u.zip, or any of the other files on the server, nor the names they are saved under. How can I get the links to all the files under "www.awebsite.com/csvfiles"?


Any suggestions would be highly appreciated. Thank you, mate.

Staff note: Normal sized text is well enough. No need for unnecessary 'bold' and/or enlarged fonts.
« Last Edit: September 24, 2012, 04:21:19 am by techb »

Offline s3my0n

  • Knight
  • **
  • Posts: 276
  • Cookies: 58
Re: Web hacking Question
« Reply #1 on: September 24, 2012, 10:37:51 am »
The webserver is set up to forbid directory listing, so you cannot bypass this unless you can remotely exploit some hole on the server that lets you execute "ls" or "dir" commands.
More info at: http://www.checkupdown.com/status/E403.html

You can also try to bruteforce filenames, but this will generate a lot of noise and will arouse suspicion.
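If you do try that, a rough Python sketch of a filename bruteforcer looks something like this (the wordlist name, the ".zip" guess and the one-second delay are just assumptions for the example; only the standard library is used):

Code:
import time
import urllib.request
import urllib.error

base = "http://www.awebsite.com/csvfiles/"

# one candidate name per line; supplying a decent wordlist is up to you
with open("wordlist.txt") as f:
    names = [line.strip() for line in f if line.strip()]

for name in names:
    url = base + name + ".zip"
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            print("found:", url, resp.status)
    except urllib.error.HTTPError as e:
        if e.code != 404:
            print("interesting:", url, e.code)
    time.sleep(1)  # throttle a little; every request still ends up in the logs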
« Last Edit: September 24, 2012, 10:38:50 am by s3my0n »
Easter egg in all *nix systems: E(){ E|E& };E

Offline 2r

  • Serf
  • *
  • Posts: 28
  • Cookies: 3
Re: Web hacking Question
« Reply #3 on: September 24, 2012, 06:09:56 pm »
You could bruteforce, but as said, that generates a lot of noise for the admin, IDS, etc.

If the links are on the website somewhere, you could use a spider to pull all the <a> tags and find the links.
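A minimal sketch of that idea with only the Python standard library (the starting page is just a guess here; point it at whatever pages might actually link to the files):

Code:
import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    # collects the href of every <a> tag it sees
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = "http://www.awebsite.com/"  # starting page is an assumption
html = urllib.request.urlopen(page).read().decode("utf-8", "replace")

parser = LinkCollector()
parser.feed(html)

for link in parser.links:
    if ".zip" in link:
        print(link)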

An old-school method to find hidden/disallowed dirs is to check robots.txt, but that won't include direct file links, and it only works if the admin wants those dirs to be excluded by search engines.
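Still, it takes seconds to check. A quick sketch (nothing target-specific, just fetch and filter):

Code:
import urllib.request

url = "http://www.awebsite.com/robots.txt"
text = urllib.request.urlopen(url).read().decode("utf-8", "replace")

# print only the Disallow entries, which sometimes reveal hidden directories
for line in text.splitlines():
    line = line.strip()
    if line.lower().startswith("disallow:"):
        print(line)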

You could craft some Google dorks, or write up a quick script to bruteforce using Google, unless the site disallows search engines from viewing the directories. Then again, the website would have to have been cached/spidered by Google beforehand.
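For example, a dork along these lines might surface anything Google has already indexed from that directory (just a guess at the query, not something tested against this site):

Code:
site:awebsite.com inurl:csvfiles filetype:zip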
Conscious Code - Blog
"Statistical fact, cops will never pull over a real man with a huge bong in his car. Why? They fear this man, they know he sees further than they and will blind them with ancient logics."