False Data Injection Attack (FDIA) Detection with a Long Short-Term Memory (LSTM) Model using Python

Get Started with Relevant Project Implementation

  • If you're looking for assistance with a project implementation that aligns with your needs, feel free to get in touch with us on LinkedIn.
  • To get in touch with us and discuss your project implementation needs, please send an email to [email protected].
  • Thank you for considering our services. We look forward to working with you!

The dependence on advanced information and communication technology increases the vulnerability of smart grids to cyber-attacks. Recent research on unobservable false data injection attacks (FDIAs) reveals a high risk to secure system operation, since these attacks can bypass current bad data detection mechanisms. To mitigate this risk, this project proposes a data-driven, learning-based Long Short-Term Memory (LSTM) algorithm for detecting unobservable FDIAs in such systems.
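As a rough illustration of this approach, the sketch below defines a small LSTM-based detector in Keras. The layer sizes, window length, and binary clean/attacked output are assumptions for illustration, not necessarily the exact architecture used in this repository.

```python
# Minimal sketch of an LSTM-based FDIA detector (assumed layer sizes and
# window length; not necessarily the exact architecture in this repository).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

def build_lstm_detector(window_length=24, n_features=1):
    """Binary classifier: an output near 1 means the window looks attacked."""
    model = Sequential([
        LSTM(64, input_shape=(window_length, n_features), return_sequences=True),
        Dropout(0.2),
        LSTM(32),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```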


Time Series: Adversarial Attacks Data (Dataset)

Libraries (APIs) used in this Project (a minimal import sketch follows this list):

  • keras
  • sklearn
  • tensorflow
  • matplotlib
  • seaborn
  • pandas
  • numpy
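
Such an import block might look like this; the aliases are the usual conventions, and the exact imports in the repository's code may differ.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow import keras                      # Keras ships with TensorFlow 2.x
from sklearn.preprocessing import MinMaxScaler    # feature scaling
from sklearn.model_selection import train_test_split
```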

Display Dataset: Adversarial Attacks Data (Time Series)
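
Loading and inspecting the time series might look like the following; the CSV file name and column names are placeholders, not necessarily the files shipped with this repository.

```python
import pandas as pd

# Placeholder file/column names -- substitute the repository's actual dataset.
df = pd.read_csv("adversarial_attacks_data.csv",
                 parse_dates=["timestamp"], index_col="timestamp")

print(df.head())      # first few measurements
print(df.describe())  # summary statistics per column
```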


Plot Dataset: Adversarial Attacks Data (Time Series)
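
One way to plot the series with matplotlib/seaborn, reusing the `df` loaded in the previous sketch:

```python
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style("whitegrid")

# Plot every measurement column of the DataFrame loaded above.
df.plot(subplots=True, figsize=(12, 6))
plt.suptitle("Adversarial Attacks Data (Time Series)")
plt.tight_layout()
plt.show()
```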


The Accuracy of the LSTM Model (Without FDIA) is Plotted


The Loss and Accuracy of the LSTM Model (Without FDIA) are Plotted
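
The training and plotting step might look roughly like this. It reuses `build_lstm_detector` from the sketch above and uses synthetic data and labels purely so the example runs end to end; the windowing, epochs, and batch size are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def make_windows(series, labels, window_length=24):
    """Slice a 1-D series into overlapping windows with one label per window."""
    X, y = [], []
    for i in range(len(series) - window_length):
        X.append(series[i:i + window_length])
        y.append(labels[i + window_length])
    return np.asarray(X)[..., np.newaxis], np.asarray(y)

# Synthetic stand-in data so the sketch is self-contained.
rng = np.random.default_rng(0)
series = rng.normal(size=2000)
labels = rng.integers(0, 2, size=2000)          # placeholder clean/attacked labels
X, y = make_windows(series, labels)

model = build_lstm_detector()                   # defined in the sketch above
history = model.fit(X, y, validation_split=0.2,
                    epochs=10, batch_size=64, verbose=0)

# Plot training/validation loss and accuracy curves.
fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(12, 4))
ax_loss.plot(history.history["loss"], label="train")
ax_loss.plot(history.history["val_loss"], label="validation")
ax_loss.set_title("Loss")
ax_loss.legend()
ax_acc.plot(history.history["accuracy"], label="train")
ax_acc.plot(history.history["val_accuracy"], label="validation")
ax_acc.set_title("Accuracy")
ax_acc.legend()
plt.show()
```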


Display Dataset With FDIA Attack: Adversarial Attacks Data (Time Series)
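
A simple illustration of how false data might be injected into a measurement series, e.g. by adding a gradually ramping bias to a chosen window. This attack model is an assumption for illustration, not necessarily how the repository's attacked dataset was generated.

```python
import numpy as np

def inject_fdia(series, start, length, magnitude=0.5):
    """Add a slowly ramping bias to series[start:start+length] and label it."""
    attacked = np.asarray(series, dtype=float).copy()
    attacked[start:start + length] += magnitude * np.linspace(0.0, 1.0, length)
    labels = np.zeros(len(attacked), dtype=int)
    labels[start:start + length] = 1            # 1 = attacked sample
    return attacked, labels

rng = np.random.default_rng(1)
clean = rng.normal(size=2000)
attacked_series, attacked_labels = inject_fdia(clean, start=800, length=300)
```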


The Accuracy of the LSTM Model (With FDIA) is Plotted


The Loss and Accuracy of the LSTM Model (With FDIA) are Plotted
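
One way the "with FDIA" step might look: build windows from the attacked series, train a second detector, and plot its loss and accuracy curves. This reuses `make_windows`, `build_lstm_detector`, and the attacked series from the sketches above and is purely illustrative.

```python
import matplotlib.pyplot as plt

# Windows and labels built from the attacked series of the previous sketch.
X_atk, y_atk = make_windows(attacked_series, attacked_labels)

model_fdia = build_lstm_detector()
history_fdia = model_fdia.fit(X_atk, y_atk, validation_split=0.2,
                              epochs=10, batch_size=64, verbose=0)

# Plot all four curves on one set of axes.
for metric in ("loss", "val_loss", "accuracy", "val_accuracy"):
    plt.plot(history_fdia.history[metric], label=metric)
plt.title("LSTM model trained on data with FDIA")
plt.legend()
plt.show()
```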


Comparison of Models: Normal LSTM Model and FDIA LSTM Model
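
A simple way to compare the two runs is by their final validation accuracy; the values here come from the training histories in the sketches above and are purely illustrative.

```python
import matplotlib.pyplot as plt

# Final validation accuracy of each run (from the sketches above).
final_val_acc = {
    "Normal LSTM": history.history["val_accuracy"][-1],
    "FDIA LSTM": history_fdia.history["val_accuracy"][-1],
}

plt.bar(list(final_val_acc), list(final_val_acc.values()))
plt.ylabel("Final validation accuracy")
plt.title("Normal LSTM vs. FDIA LSTM")
plt.show()
```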
