Addition of DataUpload Script #769

Merged (4 commits, Feb 13, 2024)
40 changes: 40 additions & 0 deletions Dataupload/fetching.py
@@ -0,0 +1,40 @@
import os
import re
import csv

def extract_table_name(line):
    """Pull the table name out of a 'COPY public.<table> ...' header line."""
    match = re.search(r'COPY public\.(\w+)', line)
    if match:
        return match.group(1)
    return None

def process_sql_dump(input_file):
    """Split the COPY blocks of a PostgreSQL dump into per-table CSV files."""
    output_folder = "Data New"
    os.makedirs(output_folder, exist_ok=True)

    current_table = None
    current_csv_file = None
    csv_writer = None

    with open(input_file, 'r', encoding='utf-8') as file:
        for line in file:
            if line.startswith("COPY public."):
                current_table = extract_table_name(line)
                if current_table:
                    csv_file_path = os.path.join(output_folder, f"{current_table}.csv")
                    current_csv_file = open(csv_file_path, 'w', newline='', encoding='utf-8')
                    csv_writer = csv.writer(current_csv_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)

                    # Keep the "COPY public." header as the first line of the CSV
                    csv_writer.writerow([line.strip()])

            elif line.strip() == "\\." and current_csv_file:
                # A line containing only "\." terminates a COPY block
                current_csv_file.close()
                current_table = None
                current_csv_file = None
                csv_writer = None
            elif csv_writer:
                # COPY data rows are tab-separated; strip only the trailing
                # newline so empty leading fields are preserved
                values = line.rstrip('\n').split('\t')
                csv_writer.writerow(values)

    # Close the last file if the dump ended without a terminator
    if current_csv_file:
        current_csv_file.close()

if __name__ == "__main__":
    sql_dump_file = "vachan_prod_backup_31_01_2022.sql"
    process_sql_dump(sql_dump_file)
50 changes: 50 additions & 0 deletions Dataupload/readme.MD
@@ -0,0 +1,50 @@
# DataUpload

This Python script streamlines data uploads to a database. Instead of uploading individual files by hand, it automates the entire process behind a single command.

The workflow has three stages: the script extracts data from a specified CSV file, transforms it into a format suited to the database, and then transmits the prepared data to a designated endpoint.

### Contents:

Within this directory, you will find two Python scripts: `fetching.py` and `upload.py`.

#### Automated Data Extraction:

Manually extracting tables from large SQL dump files, particularly those around 2 GB in size, is daunting and time-consuming. To streamline this process, the `fetching.py` script systematically traverses the SQL dump, capturing each table's data and writing it to a CSV file named after the table, ensuring convenient access for future use.
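`fetching.py` keys on two sentinel lines in the dump: a `COPY public.<table> ... FROM stdin;` header opens a block, and a line containing only `\.` closes it. A minimal, self-contained illustration of the name-extraction step (the sample header below is invented):

```python
import re

def extract_table_name(line):
    # Same pattern as fetching.py: capture the word after "COPY public."
    match = re.search(r'COPY public\.(\w+)', line)
    return match.group(1) if match else None

print(extract_table_name("COPY public.users (id, name) FROM stdin;"))  # users
print(extract_table_name("SELECT 1;"))  # None
```

Everything between the header and the `\.` terminator is tab-separated row data, which the script rewrites as comma-separated values.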


#### Data Upload Utility:

Complementing the data extraction process, `upload.py` facilitates seamless data integration with specific endpoints through POST requests. Utilizing data extracted from the CSV files, this script enables efficient uploading to designated endpoints.

#### Database Schema Handling:

The Python script responsible for data upload incorporates functionality to upload data to a specified schema within the database. If the specified schema does not exist, the script dynamically creates it, ensuring data organization and accessibility.
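The schema step presumably amounts to a `CREATE SCHEMA IF NOT EXISTS` issued before any inserts. A sketch under that assumption (the schema name and connection are placeholders; with `psycopg2` from `requirements.txt`, the statement would run through a cursor):

```python
def create_schema_sql(schema_name):
    """Build a CREATE SCHEMA IF NOT EXISTS statement.

    Identifiers are quoted defensively here by hand; real psycopg2 code
    should prefer psycopg2.sql.Identifier for this.
    """
    safe = schema_name.replace('"', '""')
    return f'CREATE SCHEMA IF NOT EXISTS "{safe}"'

def ensure_schema(conn, schema_name):
    """Create the schema if it is missing (conn is a psycopg2 connection)."""
    with conn.cursor() as cur:
        cur.execute(create_schema_sql(schema_name))
    conn.commit()
```

`IF NOT EXISTS` makes the call idempotent, so it is safe to run on every upload.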




## Implementation Details

To run the upload script, make sure the virtual environment has been created and activated.

### Set up the Virtual Environment

```python3 -m venv vachan-ENV```

```source vachan-ENV/bin/activate```

```pip install --upgrade pip```

```pip install -r requirements.txt```

### Execution Commands

To run the fetching script:

```python3 fetching.py```

To run the upload script:

```python3 upload.py```

10 changes: 10 additions & 0 deletions Dataupload/requirements.txt
@@ -0,0 +1,10 @@
certifi==2021.10.8
charset-normalizer==2.0.12
idna==3.3
psycopg2
requests==2.27.1
urllib3==1.26.8
fastapi[all]==0.95.0
SQLAlchemy==2.0.9
jsonpickle==2.2.0
pytz==2023.3