Commit 19970c1 (parent 29f6340): 4 changed files with 744 additions and 0 deletions.
fetching.py (new file, 40 lines):
```python
import os
import re
import csv


def extract_table_name(line):
    # pg_dump COPY headers look like: COPY public.<table> (col1, col2, ...) FROM stdin;
    match = re.search(r'COPY public\.(\w+)', line)
    if match:
        return match.group(1)
    return None


def process_sql_dump(input_file):
    output_folder = "Data New"
    os.makedirs(output_folder, exist_ok=True)

    current_table = None
    current_csv_file = None
    csv_writer = None

    with open(input_file, 'r', encoding='utf-8') as file:
        for line in file:
            if line.startswith("COPY public."):
                current_table = extract_table_name(line)
                if current_table:
                    csv_file_path = os.path.join(output_folder, f"{current_table}.csv")
                    current_csv_file = open(csv_file_path, 'w', newline='', encoding='utf-8')
                    csv_writer = csv.writer(current_csv_file, delimiter=',', quotechar='"',
                                            quoting=csv.QUOTE_MINIMAL)

                    # Write the "COPY public." header line as the first row
                    csv_writer.writerow([line.strip()])

            elif line == "\\.\n" and current_csv_file:
                # "\." terminates a COPY data block in pg_dump output
                current_csv_file.close()
                current_table = None
                current_csv_file = None
            elif current_csv_file:
                # COPY data rows are tab-separated
                values = line.strip().split('\t')
                csv_writer.writerow(values)

    # Close the last file if the dump ended without a terminating "\."
    if current_csv_file:
        current_csv_file.close()


if __name__ == "__main__":
    sql_dump_file = "vachan_prod_backup_31_01_2022.sql"
    process_sql_dump(sql_dump_file)
```
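As a quick sanity check of the two parsing steps above, here is how the header regex and the tab split behave on an illustrative COPY block (the table and column names below are made up for the example, not taken from the actual dump):

```python
import re

# The pattern fetching.py uses to pull the table name out of a COPY header.
COPY_HEADER = re.compile(r'COPY public\.(\w+)')

# An illustrative pg_dump COPY header.
header = "COPY public.books (book_id, book_code, book_name) FROM stdin;\n"
match = COPY_HEADER.search(header)
print(match.group(1))  # books

# Data rows inside a COPY block are tab-separated; stripping the newline and
# splitting on '\t' yields the column values that get written to the CSV.
row = "1\tgen\tGenesis\n"
print(row.strip().split('\t'))  # ['1', 'gen', 'Genesis']
```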
README (new file, 50 lines):
# DataUpload

This Python project streamlines data uploads to a database. Instead of uploading individual files by hand, a single command runs the whole pipeline.

The workflow has three stages: extract the data from a SQL dump into CSV files, transform each CSV into a format suited for the database, and send the prepared data to a designated endpoint.
||
### Contents:

This directory contains two Python scripts: `fetching.py` and `upload.py`.
|
||
#### Automated Data Extraction:

Manually extracting tables from a large SQL dump (the production backup here is around 2 GB) is slow and error-prone. `fetching.py` automates this: it walks through the dump, captures each table's COPY data, and writes it to a CSV file named after the table for convenient later use.
#### Data Upload Utility:

Complementing the extraction step, `upload.py` reads the extracted CSV files and uploads their contents to the designated endpoints via POST requests.
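`upload.py` itself is not shown in this commit, but the pattern it describes can be sketched roughly as below. The helper names, field names, and endpoint URL are illustrative assumptions, not the script's actual code (the project pins `requests`, but this sketch uses the standard library to stay self-contained):

```python
import csv
import json
from urllib import request


def load_rows(csv_path):
    """Read an extracted CSV into a list of per-row dicts (illustrative helper)."""
    with open(csv_path, newline='', encoding='utf-8') as f:
        return list(csv.DictReader(f))


def post_rows(endpoint, rows):
    """POST the rows as a JSON body and return the HTTP status code."""
    data = json.dumps(rows).encode('utf-8')
    req = request.Request(endpoint, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status

# Usage (hypothetical path and endpoint):
#   rows = load_rows("Data New/books.csv")   # output of fetching.py
#   post_rows("http://localhost:8000/v2/books", rows)
```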
#### Database Schema Handling:

`upload.py` writes data into a specified schema within the database. If that schema does not exist, the script creates it on the fly, keeping the uploaded data organized and accessible.
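Creating a schema only when missing can be done with a single idempotent DDL statement. A minimal sketch against any DB-API connection such as psycopg2 (the function names and the identifier check are assumptions for illustration, not the script's actual code):

```python
import re


def ensure_schema_sql(schema):
    """Build an idempotent CREATE SCHEMA statement for a simple identifier."""
    # Accept only plain identifiers to avoid SQL injection via the name.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", schema):
        raise ValueError(f"unexpected schema name: {schema!r}")
    return f'CREATE SCHEMA IF NOT EXISTS "{schema}"'


def ensure_schema(conn, schema):
    """Create the schema if missing, using any DB-API connection (e.g. psycopg2)."""
    with conn.cursor() as cur:
        cur.execute(ensure_schema_sql(schema))
    conn.commit()
```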
## Implementation Details

To run the upload script, make sure a virtual environment has been created and activated.

### Set up Virtual Environment

```shell
python3 -m venv vachan-ENV
source vachan-ENV/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```
### Execution Commands:

To run the fetching script:

```shell
python3 fetching.py
```

To run the upload script:

```shell
python3 upload.py
```
requirements.txt (new file, 10 lines):
```
certifi==2021.10.8
charset-normalizer==2.0.12
idna==3.3
psycopg2
requests==2.27.1
urllib3==1.26.8
fastapi[all]==0.95.0
SQLAlchemy==2.0.9
jsonpickle==2.2.0
pytz==2023.3
```