I've been using this little script for a while now, and today I decided to improve it and comment it more thoroughly.
It is a very simple URL extractor: you pass it either a URL or a file containing one URL per line, and the script
then extracts all links from the source and stores them.
You are able to iterate over the collected links and redirect the output to a file.
This script used to use a lib called 'urllib2'; Phage inspired me to switch to another lib called 'Requests'. It's an awesome lib
and more intuitive than urllib2, thanks Phage for bringing it to my attention!
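For anyone curious how the extraction works with Requests, here's a minimal sketch of the core idea (Python 3; the function name and the use of the stdlib HTMLParser are my own illustration, the actual script is in the gist below):

[code]
import requests
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects the href attribute of every <a> tag it encounters."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(url):
    """Fetch a page with Requests and return every link found in its source."""
    response = requests.get(url, timeout=10)  # the timeout is my assumption, not from the original
    parser = LinkParser()
    parser.feed(response.text)
    return parser.links
[/code]

You can then iterate over the returned list, print each link, and redirect that output to a file.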
Usage: extractor.py [-h] (-f FILE | -u URL) [-q]
optional arguments:
  -f FILE, --file FILE  A text file with urls to extract from
  -u URL, --url URL     The url which will be searched for links
  -q, --quiet           Don't print errors that occur, quiet mode.
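That usage string maps straight onto argparse's mutually exclusive groups; here's a sketch of how such a parser could be set up (the exact details are my assumption, not taken from the gist):

[code]
import argparse

parser = argparse.ArgumentParser(prog="extractor.py")
group = parser.add_mutually_exclusive_group(required=True)  # produces (-f FILE | -u URL)
group.add_argument("-f", "--file", help="A text file with urls to extract from")
group.add_argument("-u", "--url", help="The url which will be searched for links")
parser.add_argument("-q", "--quiet", action="store_true",
                    help="Don't print errors that occur, quiet mode.")
args = parser.parse_args()
[/code]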
List of dependencies:
[gist]Daxda/8518554[/gist]