Update README.md
dilwolf authored May 29, 2024
1 parent 4672abc commit 7249ee0
Showing 1 changed file with 1 addition and 1 deletion.
README.md (1 addition, 1 deletion)

@@ -6,7 +6,7 @@ This repository demonstrates a simulation of the training process of an LSTM model

The following figure shows an overall comparison of the convergence of the LSTM model during training under federated and centralized learning on the same dataset, as presented in Table 3 of our paper. We trained the LSTM model for 200 rounds and evaluated the results using the RMSE metric (a minimal RMSE sketch follows the diff excerpt). Specifically, we first recorded the communication rounds required for both federated and centralized training.

- We also compared the convergence of the LSTM model across different numbers of clients (K) (e.g., K = 1 means centralized training) in a federated manner, as shown in Figure 5. We assessed the accuracy of our LSTM model in terms of RMSE. The convergence results indicate that the centralized LSTM model achieved the lowest RMSE of 0.79, followed by the FL-based LSTM models as the number of clients increased. One significant reason is that in FL-based training the dataset was distributed among clients, which influenced model performance.
+ We also compared the convergence of the LSTM model across different numbers of clients (K) (e.g., K = 1 means centralized training) in a federated manner. We assessed the accuracy of our LSTM model in terms of RMSE. The convergence results indicate that the centralized LSTM model achieved the lowest RMSE of 0.79, followed by the FL-based LSTM models as the number of clients increased (a FedAvg-style sketch follows the diff excerpt). One significant reason is that in FL-based training the dataset was distributed among clients, which influenced model performance.

While centralized training can produce better outcomes in these circumstances, it is crucial to acknowledge the distinct advantages of FL-based LSTM training regarding privacy and security. FL enables collaboration among parties while safeguarding data integrity, making it an ideal choice where data ownership and privacy are important. Considering these advantages, our research recommends adopting FL-based LSTM training when privacy is a priority, even though it showed some performance differences compared to centralized training.
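
For concreteness, the RMSE used in the comparison above is the standard root-mean-square error between predicted and observed values. Below is a minimal sketch using NumPy; the arrays and values are illustrative and not taken from this repository.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root-mean-square error: square root of the mean squared residual."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative usage with made-up values:
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
print(rmse(y_true, y_pred))  # ~0.1414
```

A lower RMSE means predictions sit closer to the observed series, which is how the 0.79 figure above should be read.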

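The federated setting with K clients described in the diff above is typically run as FedAvg-style communication rounds: each client trains a copy of the shared LSTM on its local data shard, and the server averages the returned weights. The sketch below is one minimal way to express such a round in PyTorch; `local_train` and `client_loaders` are hypothetical helpers, not names from this repository, and equal client weighting is assumed for brevity.

```python
import copy
import torch

def fedavg_round(global_model, client_loaders, local_train):
    """One FedAvg round: local training per client, then element-wise
    averaging of the returned state dicts (equal client weighting)."""
    client_states = []
    for loader in client_loaders:  # one loader per client's local shard
        local_model = copy.deepcopy(global_model)
        local_train(local_model, loader)  # hypothetical local optimizer loop
        client_states.append(local_model.state_dict())

    # Average every parameter tensor across the K clients.
    avg_state = {
        name: torch.stack([state[name].float() for state in client_states]).mean(dim=0)
        for name in client_states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model
```

With K = 1 there is a single shard and the averaging step is a no-op, which is why K = 1 corresponds to centralized training in the comparison above.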
