Description: This is a comprehensive study and analysis of stocks using deep learning (DL) and machine learning (ML) techniques. Both machine learning and deep learning are types of artificial intelligence (AI). The objective is to predict stock behavior by employing various machine learning and deep learning algorithms. The focus is on experimenting with stock data to understand how and why certain methods are effective, as well as identifying reasons for their potential limitations. Different stock strategies are explored within the context of machine learning and deep learning. Technical Analysis and Fundamental Analysis are utilized to predict future stock prices using these AI techniques, encompassing both long-term and short-term predictions.
Machine learning is a branch of artificial intelligence that involves the development of algorithms capable of automatically adapting and generating outputs by processing structured data. Deep learning is a subset of machine learning that employs similar algorithms but with additional layers of complexity, enabling different interpretations of the data. The layered networks used in deep learning are known as artificial neural networks, which mimic the interconnected neural pathways of the human brain.
Deep learning and machine learning are powerful approaches that have revolutionized the AI landscape. Understanding the fundamentals of these techniques and the commonly used algorithms is essential for aspiring data scientists and AI enthusiasts. Regression, as a fundamental concept in predictive modeling, plays a crucial role in analyzing and predicting continuous variables. By harnessing the capabilities of these algorithms and techniques, we can unlock incredible potential in various domains, leading to advancements and improvements in numerous industries.
- Collecting/Gathering Data.
- Preparing the Data - load the data and prepare it for training.
- Choosing a Model.
- Training the Model.
- Evaluating the Model.
- Parameter Tuning.
- Make Predictions (see the sketch after this list).
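A minimal sketch of this workflow using pandas and scikit-learn. The file name `stock_prices.csv`, the lag features, and the random-forest model are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch of the machine learning workflow above, assuming a
# hypothetical CSV of daily prices with a 'Close' column.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# 1. Collect / gather data (hypothetical file name)
data = pd.read_csv("stock_prices.csv")

# 2. Prepare the data: simple lag features predicting the next close
data["Return"] = data["Close"].pct_change()
data["Lag1"] = data["Close"].shift(1)
data["Target"] = data["Close"].shift(-1)
data = data.dropna()

X = data[["Close", "Return", "Lag1"]]
y = data["Target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, shuffle=False, test_size=0.2
)

# 3. Choose a model
model = RandomForestRegressor(n_estimators=200, random_state=0)

# 4. Train the model
model.fit(X_train, y_train)

# 5. Evaluate the model
preds = model.predict(X_test)
print("Test MSE:", mean_squared_error(y_test, preds))

# 6. Parameter tuning and 7. final predictions would follow the same pattern,
#    e.g. with GridSearchCV and model.predict on new data.
```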
- Define the Model.
- Compile the Model.
- Fit the Model with the training dataset.
- Make Predictions (see the sketch after this list).
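A minimal sketch of the define/compile/fit/predict steps, assuming TensorFlow/Keras; the random data and layer sizes are placeholders, not this project's architecture.

```python
# Sketch of the deep learning workflow: define, compile, fit, predict.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Dummy data standing in for prepared stock features (shapes are illustrative).
X_train = np.random.rand(500, 3)
y_train = np.random.rand(500)
X_test = np.random.rand(100, 3)

# Define the model
model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(X_train.shape[1],)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])

# Compile the model
model.compile(optimizer="adam", loss="mse")

# Fit the model with the training dataset
model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)

# Make predictions
predictions = model.predict(X_test)
```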
- Categorical variable (Qualitative): label data or distinct groups. Example: location, gender, material type, payment method, highest level of education.
- Discrete variable (Class Data): numeric variables with a countable number of values between any two values. Example: customer complaints, number of flaws or defects, children per household, age (in whole years).
- Continuous variable (Quantitative): numeric variables with an infinite number of possible values between any two values. Example: length of a part, the date and time a payment is received, running distance, age (measured to any number of decimal places).
- Quantitative data can be summarized with all three centre measures (mean, median, and mode) and all spread measures.
- Class data can be summarized with the median and mode.
- Qualitative data can be summarized only with the mode (see the sketch after this list).
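A short pandas sketch of these rules on made-up values; the sample data is purely illustrative.

```python
# Centre measures for each data type (toy, made-up values).
import pandas as pd

quantitative = pd.Series([10.2, 11.5, 11.5, 12.8, 14.1])     # continuous
class_data   = pd.Series([0, 1, 1, 2, 3])                     # discrete counts
qualitative  = pd.Series(["tech", "energy", "tech", "bank"])  # categories

# Quantitative data: mean, median, and mode all apply
print(quantitative.mean(), quantitative.median(), quantitative.mode()[0])

# Class (discrete) data: median and mode
print(class_data.median(), class_data.mode()[0])

# Qualitative data: mode only
print(qualitative.mode()[0])
```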
- Classification (predict label)
- Regression (predict values)
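A small scikit-learn illustration of the difference, using made-up arrays: the classifier predicts a label (e.g. down/up) and the regressor predicts a value (e.g. the next price).

```python
# Classification predicts a label; regression predicts a numeric value.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])            # toy feature
up_down = np.array([0, 0, 1, 1])                       # labels (e.g. down/up)
next_price = np.array([101.0, 102.5, 104.0, 105.5])    # continuous target

clf = LogisticRegression().fit(X, up_down)   # classification
reg = LinearRegression().fit(X, next_price)  # regression

print(clf.predict([[2.5]]))   # predicted label
print(reg.predict([[2.5]]))   # predicted value
```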
- Bias is the difference between our actual and predicted values.
- Bias reflects the simplifying assumptions our model makes about the data in order to predict new data.
- These are assumptions made by a model to make the target function easier to learn.
- Variance is the opposite of bias.
- Variance is the variability of a model's prediction for a given data point; it tells us the spread of our predictions.
- If you train your model and obtain a very low error on the training data, but the error rises sharply when the data changes, the model has high variance (see the sketch below).
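One way to see the bias/variance trade-off is to compare training and validation error as model complexity grows. A hedged sketch with synthetic data follows; the polynomial degrees are arbitrary.

```python
# Bias/variance sketch: low-degree models underfit (high bias),
# high-degree models fit training data closely but vary on new data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 60)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 60)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    val_err = mean_squared_error(y_val, model.predict(X_val))
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  val MSE={val_err:.3f}")
```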
Overfitting occurs when the model memorizes the noise and fits too closely to the training set. A good fit is a model that learns the training dataset and generalizes well to the held-out dataset. Underfitting occurs when the model cannot establish the dominant trend within the data, resulting in training errors and poor model performance.
An overfitted model looks good on the training data, fitting at or very near each observation; however, the model misses the point and random noise is captured inside it. The model has low training error and high CV error, low in-sample error and high out-of-sample error, and high variance.
- High Train Accuracy
- Low Test Accuracy
- Early stopping - stop the training before the model starts learning the noise in the training data (a Keras sketch follows this list).
- Training with more data - adding more data can increase the accuracy of the model or help algorithms detect the signal better.
- Data augmentation - add slightly modified copies of existing training data to make the dataset more diverse.
- Feature selection - keep the most important features in the data and remove irrelevant ones.
- Regularization - penalize model complexity with regularization methods such as L1 (Lasso) regularization and dropout.
- Ensemble methods - combine predictions from multiple separate models such as bagging and boosting.
- Increase training data.
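A hedged Keras sketch of two of the remedies above, early stopping and dropout; the architecture and random data are placeholders, not this project's model.

```python
# Early stopping + dropout as overfitting remedies (illustrative only).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(1000, 10)
y = np.random.rand(1000)

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(10,)),
    layers.Dropout(0.3),               # regularization via dropout
    layers.Dense(32, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Stop training when validation loss stops improving (early stopping).
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
model.fit(X, y, validation_split=0.2, epochs=100,
          callbacks=[early_stop], verbose=0)
```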
- High Train Accuracy
- High Test Accuracy
An underfitted model fails to capture the underlying logic of the data. As a result, the model has weak predictive power and low accuracy. The model has a large training set error, a large in-sample error, and high bias.
- Low Train Accuracy
- Low Test Accuracy
- Decrease regularization - regularization methods such as L1 (Lasso) regularization and dropout reduce variance by penalizing parameters with large coefficients; lowering the amount of regularization lets the model fit the data more closely (a sketch follows this list).
- Increase the duration of training - stopping training too early can leave the model underfit, so train for longer.
- Feature selection - if not enough predictive features are present, adding more features or features with greater importance will improve the model.
- Increase the number of features - performing feature engineering
- Remove noise from the data
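A small scikit-learn sketch of two of these fixes, adding polynomial features and lowering the regularization strength; the data and alpha values are illustrative.

```python
# Underfitting fixes: richer features and weaker regularization (illustrative).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
y = 3 * X.ravel() ** 2 + rng.normal(0, 0.1, 200)  # nonlinear target

# Underfit: linear features, strong regularization
weak = Ridge(alpha=100.0).fit(X, y)

# Better fit: polynomial features, weaker regularization
strong = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=0.1)).fit(X, y)

print("underfit MSE:", mean_squared_error(y, weak.predict(X)))
print("improved MSE:", mean_squared_error(y, strong.predict(X)))
```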
Steps 1 through 8 are a review of Python.
After step 8, everything relates to data analysis, data engineering, data science, machine learning, and deep learning.
Here is the link to the Python tutorial:
Python Tutorial for Stock Analysis
- Linear Regression Model
- Logistic Regression
- Lasso Regression
- Support Vector Machines
- Polynomial Regression
- Stepwise Regression
- Ridge Regression
- Multivariate Regression Algorithm
- Multiple Regression Algorithm
- K Means Clustering Algorithm
- Naïve Bayes Classifier Algorithm
- Random Forests
- Decision Trees
- Nearest Neighbours
- ElasticNet Regression
- Reinforcement Learning
- Artificial Intelligence
- MultiModal Network
- Biologic Intelligence
Algorithms are processes and sets of instructions used to solve a class of problems. Additionally, algorithms perform computations such as calculations, data processing, automated reasoning, and other tasks. A machine learning algorithm is a method that enables systems to learn and improve automatically from experience, without being explicitly programmed.
Python 3.5+
Jupyter Notebook Python 3
Windows 7 or Windows 10
💻 Do not use this code for investing or trading in the stock market. However, if you are interested in the stock market, you should read 📖 books that relate to the stock market, investing, or finance. If you are into quant or machine learning, read books about 📖 machine trading, algorithmic trading, and quantitative trading. You should also read 📖 about Machine Learning and Deep Learning to understand the concepts, theory, and mathematics. In addition, you should read academic papers and do research online about machine learning and deep learning on 💻.