All-in-one, fast, easy recon tool
- subdomain enumeration
- check live domains (see the probing sketch after this list)
- simple port scanner
- take screenshots of domains
- crawler:
  - parse js files
  - parse robots.txt
  - parse sitemap.xml
  - collect archived urls
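The live-domain check behaves like httprobe (credited below): each candidate subdomain is probed over HTTPS and HTTP, and anything that answers is kept. A minimal sketch of that idea, assuming the `requests` library and helper names of my own choosing rather than HydraRecon's actual code:

```python
# probe_live.py - sketch of an httprobe-style liveness check (illustrative only)
from concurrent.futures import ThreadPoolExecutor
import requests

def probe(host, timeout=1):
    """Return the first scheme://host that answers, or None."""
    for scheme in ("https", "http"):
        url = f"{scheme}://{host}"
        try:
            requests.get(url, timeout=timeout, allow_redirects=True)
            return url
        except requests.RequestException:
            continue
    return None

def live_hosts(subdomains, threads=10, timeout=1):
    # probe candidates concurrently and keep only the ones that respond
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = pool.map(lambda h: probe(h, timeout), subdomains)
    return [r for r in results if r]

if __name__ == "__main__":
    print(live_hosts(["www.example.com", "dead.example.com"]))
```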
you need to install chromedriver and add its binary to your PATH

git clone https://github.com/aufzayed/HydraRecon.git
cd HydraRecon
pip3 install -r requirements.txt
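Chromedriver is needed for the screenshot step: a headless Chrome instance loads each live host and saves an image. Roughly, the idea looks like the sketch below (written with `selenium` as an assumption about the mechanism; it is not HydraRecon's actual code):

```python
# screenshot.py - sketch of capturing a host with headless Chrome
# assumes selenium is installed and chromedriver is reachable via PATH
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def screenshot(url, out_path, width=1280, height=800):
    opts = Options()
    opts.add_argument("--headless")
    opts.add_argument(f"--window-size={width},{height}")
    driver = webdriver.Chrome(options=opts)  # locates chromedriver through PATH
    try:
        driver.set_page_load_timeout(10)
        driver.get(url)
        driver.save_screenshot(out_path)
    finally:
        driver.quit()

if __name__ == "__main__":
    screenshot("https://example.com", "example.com.png")
```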
usage:
HydraRecon methods:
1. basic ::
   - subdomain enumeration
   - scan common ports
   - screenshot hosts
   - html report
2. crawl ::
   - sitemap.xml
   - robots.txt
   - related urls
3. config :: configure HydraRecon
examples:
python3 hydrarecon.py --basic -d example.com
python3 hydrarecon.py --crawl -d example.com
python3 hydrarecon.py --config
optional arguments:
  -h, --help     show this help message and exit
  --basic        use basic recon module
  --crawl        use crawl module
  --config       initialize config file
  --session      generate report from session.json file
  -d, --domain   domain to crawl or recon
  -p, --ports    ports to scan (small | large | xlarge), default: small
  -T, --timeout  control http request timeout in seconds, default: 1s
  -t, --threads  number of threads, default: 10
  -o, --out      path to save the report, default: home directory
python3 hydrarecon.py --basic -d example.com is equivalent to python3 hydrarecon.py --basic -d example.com -t 10 -T 1 -o ~ -p small
python3 hydrarecon.py --crawl -d example.com is equivalent to python3 hydrarecon.py --crawl -d example.com -t 10 -o ~
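To make those defaults concrete: -t sets the number of worker threads, -T the per-connection timeout, and -p selects the port list. A rough sketch of how such a threaded TCP connect scan can be wired together (the port list and function names below are illustrative assumptions, not HydraRecon's internals):

```python
# portscan.py - sketch of a threaded TCP connect scan with a configurable timeout
# the "small" port list here is an assumption for illustration
import socket
from concurrent.futures import ThreadPoolExecutor

PORT_LISTS = {"small": [21, 22, 25, 80, 443, 8080, 8443]}

def is_open(host, port, timeout=1):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports="small", threads=10, timeout=1):
    plist = PORT_LISTS[ports]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        flags = pool.map(lambda p: is_open(host, p, timeout), plist)
    return [p for p, open_ in zip(plist, flags) if open_]

if __name__ == "__main__":
    print(scan("example.com"))
```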
- (optional) if you have a VirusTotal API key, add it with python3 hydrarecon.py --config
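The config step stores the key so subdomain enumeration can also pull results from VirusTotal. How HydraRecon queries the API is not shown in this README; one common way, using the public v3 subdomains endpoint, looks like this (a hedged sketch, not the project's code):

```python
# vt_subdomains.py - sketch of a VirusTotal v3 subdomain lookup
# which endpoint/version HydraRecon actually uses is an assumption here
import requests

def vt_subdomains(domain, api_key):
    url = f"https://www.virustotal.com/api/v3/domains/{domain}/subdomains?limit=40"
    headers = {"x-apikey": api_key}
    found = []
    while url:
        resp = requests.get(url, headers=headers, timeout=10)
        resp.raise_for_status()
        data = resp.json()
        found.extend(item["id"] for item in data.get("data", []))
        url = data.get("links", {}).get("next")  # follow pagination if present
    return found
```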
- --crawl option results depend on --basic results
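For context on what the crawl module gathers, here is a hedged sketch of the three passive sources named above: robots.txt rules, sitemap.xml `<loc>` entries, and archived URLs from the Wayback Machine CDX API (JS parsing is handled via LinkFinder/subjs, credited below, and is omitted here):

```python
# crawl_sources.py - sketch of the passive sources the crawl module lists
# endpoints are standard public ones; parsing details are my assumptions
import re
import requests

def robots_paths(domain):
    r = requests.get(f"https://{domain}/robots.txt", timeout=5)
    return re.findall(r"(?:Allow|Disallow):\s*(\S+)", r.text) if r.ok else []

def sitemap_urls(domain):
    r = requests.get(f"https://{domain}/sitemap.xml", timeout=5)
    return re.findall(r"<loc>(.*?)</loc>", r.text) if r.ok else []

def archived_urls(domain):
    # Wayback Machine CDX API: one original URL per line
    r = requests.get(
        "http://web.archive.org/cdx/search/cdx",
        params={"url": f"{domain}/*", "fl": "original", "collapse": "urlkey"},
        timeout=30,
    )
    return r.text.splitlines() if r.ok else []

if __name__ == "__main__":
    d = "example.com"
    print(len(robots_paths(d)), len(sitemap_urls(d)), len(archived_urls(d)))
```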
credits:
- httprobe, waybackurls by @tomnomnom
- hakrawler by @hakluke
- aquatone by @michenriksen
- LinkFinder by @GerbenJavado
- subjs by @lc