Greetings
I'm eager to showcase my projects and contribute to the exciting realm of AI4Finance. Your guidance on how to proceed would be greatly appreciated.
This project was done for learning purposes. The idea is to investigate various Reinforcement Learning algorithms applied to the domain of stock trading.
Originally posted by Aditya-dom in discussion #71, March 2, 2024.
1 - RL-Based Stock Trading
As part of learning, I read around 10-15 research papers covering many major and minor aspects of Deep Learning and Reinforcement Learning. We explored OpenAI Gym to gain insight into how to set up our stock trading environment.
The algorithms we discuss are:
Deep Q-Learning
Double Deep Q-Learning
Duelling Double Deep Q-Learning
Deep Deterministic Policy Gradient
Deep Recurrent Double Reinforcement Learning
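To make the difference between the first two concrete, here is a minimal sketch of how the DQN and Double DQN targets are computed (NumPy; q_net and target_net are hypothetical callables that return one Q-value per discrete action, not code from this project):

import numpy as np

def td_target(reward, next_obs, done, q_net, target_net, gamma=0.99, double=True):
    # One-step TD target for (Double) DQN; q_net / target_net are assumed to
    # return a 1-D array of Q-values for the given observation.
    if done:
        return reward
    if double:
        # Double DQN: the online network picks the action, the target network evaluates it.
        best_action = int(np.argmax(q_net(next_obs)))
        return reward + gamma * target_net(next_obs)[best_action]
    # Vanilla DQN: the target network both picks and evaluates the action.
    return reward + gamma * float(np.max(target_net(next_obs)))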
Stock trading is a really challenging RL problem, so instead of jumping straight to it, we proceed step by step: starting from low-dimensional control problems, moving to large-state-space Atari games, and finally arriving at stock trading.
Control Problems
We started with various control problems (small, low-dimensional state spaces) to test the architectures/algorithms and get a firm grip on the algorithms and Reinforcement Learning as a whole. The problems include:
Pendulum
Cart Pole
Mountain Car
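For illustration, the Gym interaction loop for these control problems looks roughly like this (a minimal sketch with a random policy standing in for the trained agent; it assumes the classic pre-0.26 Gym API):

import gym

env = gym.make("CartPole-v1")
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()  # placeholder for the agent's policy
    obs, reward, done, info = env.step(action)
    episode_return += reward
env.close()
print("episode return:", episode_return)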
Atari Games
Having tried our algorithms on these low-dimensional problems, we moved on to the Atari games, which have a really large state space.
The games include:
Breakout
Pong
The results are reported below.
We implemented both feedforward neural networks and convolutional networks, for the RAM and image versions of the games respectively, but due to a lack of computational resources we trained on the RAM version. Using OpenCV, we captured the game-play pixels and assembled them into a video, shown below.
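A rough sketch of how such a video can be produced (it assumes the classic Gym API with the Atari extras installed and OpenCV available; the project's actual recording code is not shown in this post):

import cv2
import gym

env = gym.make("Breakout-ram-v0")
env.reset()
frames, done = [], False
while not done:
    _, _, done, _ = env.step(env.action_space.sample())   # placeholder policy
    frames.append(env.render(mode="rgb_array"))            # capture the screen pixels
env.close()

height, width, _ = frames[0].shape
writer = cv2.VideoWriter("breakout.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (width, height))
for frame in frames:
    writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))   # OpenCV expects BGR ordering
writer.release()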
Our best score on Breakout was close to that reported by DeepMind, but certainly less than the best score reported by the OpenAI team, because we could not train for long given the limited computational resources and time.
Stock Trading
We now shift to the main part of the project, i.e., stock trading.
We started reading a lot of research papers and articles on algorithms and their applications. We also came across some articles and reports on stock trading using DRQN, DDPG and DDRPG algorithms. We explored various aspects of these algorithms and settled on the relevant ones.
We began by implementing the environment for stock trading using OpenAI Gym. This was a basic single-stock version of the environment for our first DDPG single-stock agent. We observed some suspiciously good performance that made us rethink the implementations of the agent and environment.
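A minimal sketch of what such a single-stock Gym environment can look like (the observation layout, trading rule, and reward below are illustrative assumptions, not the project's exact implementation):

import gym
import numpy as np
from gym import spaces

class SingleStockEnv(gym.Env):
    # Toy single-stock environment: the action in [-1, 1] is the fraction of
    # cash to spend on buying (if positive) or of holdings to sell (if negative).
    def __init__(self, prices, initial_cash=10000.0):
        super().__init__()
        self.prices = np.asarray(prices, dtype=np.float32)
        self.initial_cash = initial_cash
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        # Observation: [current price, shares held, remaining cash]
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(3,), dtype=np.float32)

    def reset(self):
        self.t, self.cash, self.shares = 0, self.initial_cash, 0.0
        return self._obs()

    def step(self, action):
        price = float(self.prices[self.t])
        a = float(np.clip(action[0], -1.0, 1.0))
        if a > 0:                    # buy with a fraction of available cash
            spend = a * self.cash
            self.shares += spend / price
            self.cash -= spend
        elif a < 0:                  # sell a fraction of current holdings
            sold = -a * self.shares
            self.cash += sold * price
            self.shares -= sold
        self.t += 1
        done = self.t >= len(self.prices) - 1
        next_price = float(self.prices[self.t])
        reward = self.shares * (next_price - price)   # change in portfolio value
        return self._obs(), reward, done, {}

    def _obs(self):
        return np.array([self.prices[self.t], self.shares, self.cash], dtype=np.float32)

An agent then interacts with it through the standard reset/step loop shown earlier.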
Having fixed the errors in the single-stock environment and the agent, we trained the agent on it and achieved decent performance, which is reported here.
Having gained some intuition of how things were going, we modularised our environment code to tackle the multi-stock scenario. We developed our DDPG agent with a better architecture and also modularised it so that it can work with any multi-stock environment. The model was trained on the environment and the results are reported here.
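For context, a compact sketch of the kind of actor-critic pair DDPG uses for a multi-stock action space (PyTorch, with illustrative layer sizes; not the project's actual architecture):

import torch
import torch.nn as nn

class Actor(nn.Module):
    # Maps a state to one continuous action per stock in [-1, 1].
    def __init__(self, state_dim, n_stocks, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_stocks), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    # Estimates Q(state, action) for the joint multi-stock action.
    def __init__(self, state_dim, n_stocks, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_stocks, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

The Tanh output keeps each per-stock action bounded, which is the usual way DDPG handles box action spaces.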
We planned to use recurrent layers in our model, and for that we completed the Sequence Modelling course offered by DeepLearning.ai on Coursera. We trained a DRQN model on the single-stock environment, since applying it to a multi-stock environment made the code really complex, which was not feasible in the limited number of days available. The results are reported here.
The basic idea of using recurrent layers was to let the agent remember past data so that it can use that information to make informed decisions; this is what an LSTM promises to do.
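A minimal sketch of a recurrent Q-network of the kind a DRQN agent uses (PyTorch; the sizes and the three-action buy/hold/sell set are illustrative assumptions):

import torch
import torch.nn as nn

class RecurrentQNetwork(nn.Module):
    # LSTM over a window of past observations, followed by a Q-value head.
    def __init__(self, obs_dim, n_actions=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=obs_dim, hidden_size=hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, hidden_state=None):
        # obs_seq: (batch, time, obs_dim) window of past market observations
        out, hidden_state = self.lstm(obs_seq, hidden_state)
        q_values = self.q_head(out[:, -1, :])  # Q-values from the last timestep's summary
        return q_values, hidden_state

# Example: Q-values for a batch of 8 sequences, each 30 steps of 3 features
net = RecurrentQNetwork(obs_dim=3)
q, _ = net(torch.randn(8, 30, 3))
print(q.shape)  # torch.Size([8, 3])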
Authors
2 - Quantitative Finance Strategies
Finance
Introduction
Welcome to 'Quantfinance with backtesting Strategies', a comprehensive collection of over 200 Python programs designed for quantitative finance enthusiasts and professionals. This repository is your go-to resource for gathering, manipulating, and analyzing stock market data, leveraging the power of Python to unlock insights in the financial markets.
Organization
Our repository is organized into several key sections:
find_stocks
Programs to screen stocks based on technical and fundamental analysis.
machine_learning
Introductory machine learning applications for stock classification and prediction.
portfolio_strategies
Simulations of trading strategies and portfolio analysis tools.
stock_analysis
Detailed analysis tools for individual stock assessment.
stock_data
Tools for collecting stock price action and company data via APIs and web scraping.
technical_indicators
Visual tools for popular technical indicators like Bollinger Bands, RSI, and MACD.
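As an example of what the technical-indicator scripts compute, here is a small Bollinger Bands sketch with pandas (the 'Close' column name and the 20-day / 2-standard-deviation parameters are conventional defaults assumed here, not necessarily what every script in the repository uses):

import pandas as pd

def bollinger_bands(close: pd.Series, window: int = 20, num_std: float = 2.0) -> pd.DataFrame:
    # Return the middle, upper, and lower Bollinger Bands for a price series.
    middle = close.rolling(window).mean()   # simple moving average
    std = close.rolling(window).std()       # rolling standard deviation
    return pd.DataFrame({
        "middle": middle,
        "upper": middle + num_std * std,
        "lower": middle - num_std * std,
    })

# Example usage with a DataFrame that has a 'Close' column:
# bands = bollinger_bands(df["Close"])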
Installation
To get started, clone the repository and install the required dependencies:
git clone https://github.com/aditya-dom/Quantfinance-with-backtesting.git
cd Finance
pip install -r requirements.txt
Usage
Detailed instructions on how to use each program can be found within their respective directories. Explore different modules to discover their functionalities.
Each script in this collection is stand-alone. Here's how you can run a sample program:
python example_program.py
Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
Authors
Acknowledgements
Disclaimer
The material in this repository is for educational purposes only and should not be considered professional investment advice.