Data source: Mauer, Patryk, 2024, "Art auction result of prints", https://doi.org/10.18150/AEQF8C, RepOD, V1
This document outlines the steps for running the data processing pipeline.
To run the pipeline with Docker, clone the repository and build the image:
git clone https://github.com/PatrykMauer/art-data-prep-pipeline.git
docker build -t art-data-prep-pipeline .
Then run the container, changing 'results_2024_05_11' to reflect your filename. For the time being, only .xlsx is supported.
docker run -it --rm -v .:/app -w /app art-data-prep-pipeline /bin/bash ./data_processing.sh results_2024_05_11.xlsx
If you prefer not to use Docker, follow the steps below.
For Windows users, activate the local virtual environment by running the following command in the root folder:
.\.venv\Scripts\activate
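If the environment does not exist yet, a typical setup on Windows is the following (this procedure is an assumption, not documented by the repository; requirements.txt is provided in the project root):
python -m venv .venv
.\.venv\Scripts\activate
pip install -r requirements.txt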
Follow these steps in sequence to process your data:
Run the following command to process the raw data:
python src\data\process_data.py data\raw\results_2024_05_11.xlsx data\interim\results_2024_05_11.xlsx
Filter the processed data using this command:
python src\data\filter_data.py data\interim\results_2024_05_11.xlsx data\interim\filtered_results_2024_05_11.xlsx
If needed, filter the data by date. Usage:
python src\data\filter_by_date.py input_file.xlsx output_file.xlsx 2024-04-03
For example:
python src\data\filter_by_date.py data\interim\filtered_results_2024_05_11.xlsx data\interim\filtered_results_2024_05_11.xlsx 2024-04-03
Encode the filtered data with the following command:
python src\data\encode_data_const.py data\interim\filtered_results_2024_05_11.xlsx data\processed\encoded_results_2024_05_11.xlsx
To encode data for all combinations, use:
python src\data\encode_data.py data\interim\filtered_results_2024_05_11.xlsx --output_folder data\processed
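For intuition, these scripts combine ordinal and one-hot encodings of the categorical columns; the 'OrdinalOrdinalOneHotOneHot' suffix in later filenames records which combination was used. Below is a minimal sketch of the two encoding styles using scikit-learn; the column-to-encoder assignment is an assumption for illustration, and src\data\encode_data_const.py remains the authoritative implementation.

# Illustrative sketch only: the real encoders live in src\data\encode_data*.py,
# and the column choices below are assumptions.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.read_excel(r"data\interim\filtered_results_2024_05_11.xlsx")

# Ordinal encoding: each category is mapped to an integer code.
ordinal_cols = ["ARTIST", "TECHNIQUE"]  # assumed, based on the columns scaled later
df[ordinal_cols] = OrdinalEncoder().fit_transform(df[ordinal_cols].astype(str))

# One-hot encoding: each category becomes its own 0/1 indicator column.
onehot_cols = ["AUCTION HOUSE", "SIGNATURE"]  # hypothetical column names
df = pd.get_dummies(df, columns=onehot_cols)

df.to_excel(r"data\processed\encoded_results_2024_05_11.xlsx", index=False)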
Split the data into training and test sets with this command:
python src\data\split_data.py data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot.xlsx --output_folder data\processed
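Conceptually, the split is equivalent to the sketch below; the split ratio, seed, and output naming are assumptions (the _train/_test suffixes are implied by the scaled filenames used in later steps), and src\data\split_data.py defines the actual behavior.

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_excel(r"data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot.xlsx")

# An 80/20 split with a fixed seed for reproducibility (assumed parameters).
train, test = train_test_split(df, test_size=0.2, random_state=42)

train.to_excel(r"data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_train.xlsx", index=False)
test.to_excel(r"data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_test.xlsx", index=False)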
Finally, scale the train and test sets and save the scaler to the references folder using:
python src\features\feature_scaling.py filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot --output_folder data\processed --columns ARTIST TECHNIQUE "TOTAL DIMENSIONS" YEAR
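Fitting the scaler on the train set only, and then applying it to both sets, is what keeps the test set free of leakage; the saved scaler lets new inputs be scaled identically at prediction time. A minimal sketch, assuming a StandardScaler and hypothetical file names; src\features\feature_scaling.py is authoritative.

import joblib
import pandas as pd
from sklearn.preprocessing import StandardScaler  # assumed scaler type

cols = ["ARTIST", "TECHNIQUE", "TOTAL DIMENSIONS", "YEAR"]
train = pd.read_excel(r"data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_train.xlsx")
test = pd.read_excel(r"data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_test.xlsx")

# Fit on the train set only, then transform both sets with the same statistics.
scaler = StandardScaler().fit(train[cols])
train[cols] = scaler.transform(train[cols])
test[cols] = scaler.transform(test[cols])

# Persist the fitted scaler so future inputs can be scaled identically.
joblib.dump(scaler, r"references\scaler.joblib")  # hypothetical output name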
Connect the scaled train and test sets with the CNN features to ensure that there is no data leakage; the datasets are joined by ImageName (a minimal sketch of the join follows the command below). Usage:
python src\features\create_feature_price_datasets.py <train_file_path> <test_file_path> <features_file_path>
python src\features\create_feature_price_datasets.py data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_train_scaled.xlsx data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_test_scaled.xlsx data\interim\features.csv
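Conceptually this is an inner merge on ImageName, performed separately for the train and test sets so the original split is preserved and no test image leaks into training. A minimal sketch, assuming an inner join; src\features\create_feature_price_datasets.py defines the real behavior.

import pandas as pd

train = pd.read_excel(r"data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_train_scaled.xlsx")
test = pd.read_excel(r"data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_test_scaled.xlsx")
features = pd.read_csv(r"data\interim\features.csv")

# Merging each split with the CNN features separately preserves the split,
# so no test artwork can end up in the training data.
train_out = train.merge(features, on="ImageName", how="inner")  # assumed join type
test_out = test.merge(features, on="ImageName", how="inner")

train_out.to_csv(r"data\processed\train_features_price.csv", index=False)
test_out.to_csv(r"data\processed\test_features_price.csv", index=False)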
The dataset created with CNN features has fewer rows than the test set from the tabular-data approach. Equalize the row counts by removing the rows that were not used in the merge (see the sketch after these commands). Usage:
python src\data\equalize_rows_number.py <to_remove_from_file_path> <to_compare_file_path>
python src\data\equalize_rows_number.py data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_test_scaled.xlsx data\processed\test_features_price.csv
python src\data\equalize_rows_number.py data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_train_scaled.xlsx data\processed\train_features_price.csv
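The operation amounts to keeping only the rows whose ImageName also appears in the merged feature file. A minimal sketch, assuming ImageName is the comparison key; src\data\equalize_rows_number.py defines the real behavior, including where the result is written.

import pandas as pd

# The file to trim and the file to compare against, as in the commands above.
tabular = pd.read_excel(r"data\processed\filtered_results_2024_05_11_OrdinalOrdinalOneHotOneHot_test_scaled.xlsx")
merged = pd.read_csv(r"data\processed\test_features_price.csv")

# Drop the rows that were not matched during the merge.
equalized = tabular[tabular["ImageName"].isin(merged["ImageName"])]
equalized.to_excel(r"data\processed\test_scaled_equalized.xlsx", index=False)  # hypothetical output name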
art-data-prep-pipeline
==============================

A predictive model for art auction results.

Project Organization
------------
├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.readthedocs.io
Project based on the cookiecutter data science project template. #cookiecutterdatascience