
ONLINE BANKING ANALYSIS

This is the first project in which we worked with Apache Spark. We downloaded loan, customer credit card, and transaction datasets from Kaggle and cleaned the data. We then used tools and technologies such as Spark, HDFS, and Hive to implement several use cases on these datasets. Apache Spark is a framework that can quickly process large datasets.

The dataflow was as follows. First, we ingested the data: we downloaded the datasets from Kaggle, stored them in cloud storage, and imported them from MySQL into Hive using Sqoop. Second, we processed the large datasets in Hive. Finally, we analyzed the data with PySpark in a Jupyter notebook by implementing several use cases.
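As a rough illustration of the analysis step, the sketch below reads one of the Hive tables produced by the Sqoop import into a PySpark DataFrame and runs a simple aggregation. The table and column names (`banking.transactions`, `customer_id`, `amount`) are hypothetical, not the project's actual schema, and the example assumes a Spark build configured with Hive support.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical sketch: read a Hive table produced by the Sqoop import
# and compute a simple per-customer aggregate with PySpark.
spark = (
    SparkSession.builder
    .appName("online-banking-analysis")
    .enableHiveSupport()  # requires Spark configured with Hive support
    .getOrCreate()
)

# Table and column names are assumptions for illustration only.
transactions = spark.table("banking.transactions")

summary = (
    transactions
    .groupBy("customer_id")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
    )
    .orderBy(F.desc("total_amount"))
)

summary.show(10)
```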

TECHNOLOGIES USED:

- Spark SQL
- Spark
- HDFS
- Hive

ROLES AND RESPONSIBILITIES:

- Collaborated in a team of 6 members using version control with Git/GitHub.
- Utilized historical data from kaggle.com.
- Collected 3 datasets: online transactions, loan, and customer credit card.
- Implemented a SparkSession to load the data into DataFrames (see the sketch after this list).
- Used standalone cluster mode in the Spark environment to run Spark SQL queries.
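A minimal sketch of how such a Spark SQL query might be run: it creates a SparkSession against a standalone cluster, loads a CSV into a DataFrame, and queries it. The master URL `spark://master:7077`, the path `hdfs:///data/loan.csv`, and the column names `loan_status` and `loan_amount` are assumptions for illustration, not the project's actual configuration.

```python
from pyspark.sql import SparkSession

# Hypothetical sketch: SparkSession in standalone cluster mode,
# CSV loaded into a DataFrame, then queried with Spark SQL.
spark = (
    SparkSession.builder
    .appName("online-banking-analysis")
    .master("spark://master:7077")  # assumed standalone master URL
    .getOrCreate()
)

# File path, header option, and schema inference are illustrative choices.
loans = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("hdfs:///data/loan.csv")
)

loans.createOrReplaceTempView("loans")

# Example Spark SQL query: average loan amount per loan status.
spark.sql("""
    SELECT loan_status, AVG(loan_amount) AS avg_amount
    FROM loans
    GROUP BY loan_status
""").show()
```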

LICENSE

This project uses the following license: MIT License
