Install the required Python packages first: selenium, beautifulsoup4, pandas and numpy (for example, pip install selenium beautifulsoup4 pandas numpy).
Run run.py to scrape the data and build the dataset.
The scraper drives a browser (via Selenium) to fetch the HTML pages; Google Chrome is the default. You can change the browser in scrapeconfig.py. The supported values are 'chrome', 'firefox', 'opera' and 'safari'.
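As a minimal sketch of what the browser setting in scrapeconfig.py might look like, assuming it is a simple module-level variable (the actual variable name in the repo may differ):

```python
# scrapeconfig.py (sketch -- the real config may use a different variable name)
# Pick one of: 'chrome', 'firefox', 'opera', 'safari'
browser = 'chrome'
```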
The final dataset is "OSEBX_dataset.csv"
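Once the scrape has finished, the dataset can be loaded with pandas, for example:

```python
import pandas as pd

# Load the final dataset produced by run.py
df = pd.read_csv("OSEBX_dataset.csv")
print(df.shape)
print(df.head())
```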
Running the scraper also stores intermediate files locally, such as the downloaded HTML pages, in case you want to reuse them for other purposes.
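A small sketch of how one of the stored pages could be re-parsed with BeautifulSoup (the file path below is just a placeholder; point it at wherever the scraper actually saved the page):

```python
from bs4 import BeautifulSoup

# Hypothetical path to one of the locally stored HTML pages
with open("pages/example.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

# Pull something out of the parsed page, e.g. all table rows
rows = soup.find_all("tr")
print(len(rows), "table rows found")
```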
The Jupyter notebook contains outlier detection using unsupervised learning
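The notebook's exact method isn't described here; as one common unsupervised option, an Isolation Forest from scikit-learn (not listed among the dependencies above, so install it separately if you want to try this) could flag outliers in the dataset roughly like this:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Load the scraped dataset and keep only numeric columns
df = pd.read_csv("OSEBX_dataset.csv")
numeric = df.select_dtypes(include="number").dropna()

# Fit an unsupervised outlier detector; contamination is the assumed outlier share
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(numeric)  # -1 = outlier, 1 = inlier

outliers = numeric[labels == -1]
print(f"{len(outliers)} potential outliers out of {len(numeric)} rows")
```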