Welcome to the Alpha Academy!

The Alpha Academy is an open knowledge base for our global community of investors, software developers, quantitative researchers, students, and educators. Below is a growing list of educational content developed by industry experts in quantitative investing, machine learning, software development, blockchain technologies, and more.

Alpha Academy | Built with ❤️ by Alpha Vantage: Stock Market API Reimagined


Building a Stock Portfolio: Key Quantitative Factors

Image from artist Jackie Niam on Shutterstock

A portfolio construction framework is key for stock data analytics and trading strategy development, especially in high-volatility, high-inflation economic conditions. In this video tutorial, finance professor Randy Cohen walks us through the top quantitative factors to consider when building a robust stock portfolio: size, value, quality, momentum, and beta. While different traders may have different investment strategies and styles, certain quantitative factors tend to be universal across portfolio management practices. We hope this tutorial, combined with our wealth of stock market APIs, can empower you to craft stock portfolios that reliably and repeatedly outperform the broader market.

Go to project →

Keywords: quantitative investing, trading strategy development, portfolio management, equities, bonds



Introduction to AI for Finance

Image from artist Blue Planet Studio on Shutterstock

Artificial intelligence (AI) and machine learning are seeing wide adoption in the financial markets across asset classes such as stocks, ETFs, forex, commodities, and fixed income. Since the early 2000s, machines have been trading stocks alongside humans. Fast-forward to today, and a majority of stock trading activity in the United States is conducted by machines. This video tutorial will give you a deeper understanding of artificial intelligence, along with two important building blocks of AI systems: machine learning and deep learning. With this knowledge and the right tools, you’ll be able to harness their power to make informed decisions in the financial market.

Go to project →

Keywords: machine learning, deep learning, quantitative finance, stock market API



Predicting Stock Prices with Deep Neural Networks

Image from artist everything possible on Shutterstock

This project walks you through the end-to-end data science lifecycle of developing a predictive model for stock price movements with Alpha Vantage APIs and a powerful machine learning algorithm called Long Short-Term Memory (LSTM). By completing this project, you will learn the key concepts of machine learning & deep learning and build a fully functional predictive model for the stock market, all in a single Python file.

Go to project →

Keywords: python, data science, machine learning, stock API integration, quantitative investing



Building a Stock Visualization Website in Python/Django

Image from artist PopTika on Shutterstock

Data visualization is a key component of many modern software applications, especially in the financial technology (FinTech) domain. In this project, we will create an interactive stock visualization website with Python/Django and Alpha Vantage APIs. We will cover key software engineering and web development concepts such as HTML/Javascript/AJAX, server-side scripting, and database models - in fewer than 400 lines of code.

Go to project →

Keywords: web development, data visualization, HTML, Javascript/AJAX, server-side scripting, SQL, Python/Django


Stock Portfolio Construction: Key Quant Factors

Guest Expert: Randy Cohen (Finance Professor)
Keywords: quantitative investing, trading strategy development, portfolio management, equities, bonds

In this video tutorial, finance professor Randy Cohen walks us through the top quantitative factors to consider when building a robust stock portfolio: size, value, quality, momentum, and beta. While different traders may have different investment strategies and styles, certain quant factors tend to be universal across portfolio management practices. We hope this tutorial, combined with our wealth of stock market APIs, can empower you to craft stock portfolios that reliably and repeatedly outperform the broader market.


Let's dive right in!




Sign up for our Sunday Morning Markets newsletter to stay up to date with important financial & economic news around the world! It arrives in your email inbox every Sunday with pure market insights.


If you are interested in translating this project into a language other than English, please let us know and we truly appreciate your help!



Disclaimer: All content from the Alpha Academy is for educational purposes only and is not investment advice.

Introduction to AI for Finance

Created by: Alpha Academy Staff
Keywords: machine learning, deep learning, quantitative investing, data science

Artificial intelligence (AI) and machine learning are seeing wide adoption in the financial market. Since the early 2000s, machines have been trading stocks alongside humans. Fast-forward to today, and a majority of stock trading activity in the United States is conducted by machines. Not surprisingly, many academic and industry practitioners use our stock API as the primary data source for their machine learning algorithms. This short video tutorial will give you a deeper understanding of artificial intelligence, along with two important building blocks of AI systems: machine learning and deep learning. With this knowledge and the right tools, you’ll be able to harness their power to help you make informed decisions in the financial market.


Let's dive right in!









Predicting Stock Prices with Deep Neural Networks

Expert Instructor: Jingles (Hong Jing) (AI Researcher at Alibaba Group; Nanyang Technological University of Singapore)
Keywords: python, data science, machine learning, deep learning, quantitative investing, time series analysis


💡 Tip: If you are just getting started on AI and machine learning, you may want to check out our Introduction to AI for Finance video tutorial first before diving into this project!

Deep learning is part of a broader family of machine learning methods based on artificial neural networks, which are inspired by our brain's own network of neurons. Among the popular deep learning frameworks, Long short-term memory (LSTM) is a specialized architecture that can "memorize" patterns from historical sequences of data and extrapolate such patterns to future events.


(Illustration adapted from Wikipedia)

Since the financial market is naturally composed of historical sequences of equity prices, more and more quantitative researchers and finance professionals are using LSTM to model and predict market price movements. In this project, we will go through the end-to-end machine learning lifecycle of developing an LSTM model to predict stock market prices using Python (PyTorch) and Alpha Vantage APIs.

The project is grouped into the following sections, which are representative of a typical machine learning workflow:

❚ Installing Python dependencies

❚ Data preparation: acquiring data from Alpha Vantage stock APIs

❚ Data preparation: normalizing raw data

❚ Data preparation: generating training and validation datasets

❚ Defining the LSTM model

❚ Model training

❚ Model evaluation

❚ Predicting future stock prices

By the end of this project, you will have a fully functional LSTM model that predicts future stock prices based on historical price movements, all in a single Python file. This tutorial has been written in a way such that all the essential code snippets have been embedded inline. You should be able to develop, train, and test your machine learning model without referring to other external pages or documents.

Let's get started!


Installing Python dependencies

We recommend using Python 3.6 or higher for this project. If you do not have Python installed in your local environment, please visit python.org for the latest download instructions.

With Python installed, please go to the command line interface of your operating system and use the "pip install" prompts below to install NumPy, PyTorch, Matplotlib, and Alpha Vantage, respectively:

numpy: pip install numpy

PyTorch: pip install torch

matplotlib: pip install matplotlib

alpha_vantage: pip install alpha_vantage

Now, create a new Python file named project.py and paste the following code into the file:

If you have successfully installed all the Python dependencies above, you should see the text "All libraries loaded" after running the project.py file.

Now append the following code to the project.py file. Don't forget to replace "YOUR_API_KEY" with your actual Alpha Vantage API key, which can be obtained from the Alpha Vantage support page.
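The original inline snippet is not reproduced here, but a minimal sketch of the configuration portion might look like the following. All names (the config dictionary, its keys, and the window/split values) are illustrative choices for this tutorial, not a fixed API; "YOUR_API_KEY" remains a placeholder to be replaced with your actual key.

```python
# project.py -- run configuration used throughout the rest of the tutorial.
# Every name here is illustrative; adjust to taste.
config = {
    "alpha_vantage": {
        "key": "YOUR_API_KEY",                     # replace with your real key
        "symbol": "IBM",                           # the ticker we will model
        "outputsize": "full",                      # full-length daily history
        "key_adjusted_close": "5. adjusted close", # field name in the API response
    },
    "data": {
        "window_size": 20,         # predict day 21 from the past 20 closes
        "train_split_size": 0.80,  # 80/20 train/validation split
    },
}
```

Keeping all tunable values in one dictionary makes it easy to re-run the project on a different ticker or window size later.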

Over the course of this project, we will continue adding new code blocks to the project.py file. By the time you reach the end of the tutorial, you should have a fully functional LSTM machine learning model to predict stock market price movements, all in a single Python script. Please feel free to compare your project.py with the source code if you would like to have a "sanity check" anytime during the project.


Data preparation: acquiring data from Alpha Vantage stock APIs

In this project, we will train an LSTM model to predict stock price movements. Before we can build the "crystal ball" to predict the future, we need historical stock price data to train our deep learning model. To this end, we will query the Alpha Vantage stock API via a popular Python wrapper. For this project, we will obtain over 20 years of daily close prices for IBM from November 1999 to April 29, 2021.

raw output from Alpha Vantage stock data API

Append the following code block to your project.py file. If you re-run the file now, it should generate a graph similar to above thanks to the powerful matplotlib library.
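A sketch of what this data-acquisition block might look like is shown below. The `TimeSeries.get_daily_adjusted` call is from the alpha_vantage Python wrapper installed earlier and requires a valid API key and network access; `parse_adjusted_close` is a hypothetical helper name introduced here to separate the pure parsing logic from the network call.

```python
def parse_adjusted_close(time_series, field="5. adjusted close"):
    """Turn Alpha Vantage's {date: {field: value}} mapping into
    chronologically ordered (dates, prices) lists."""
    dates = sorted(time_series.keys())   # the API returns newest-first
    prices = [float(time_series[d][field]) for d in dates]
    return dates, prices

def download_data(api_key, symbol="IBM"):
    """Fetch the full daily adjusted close history for `symbol`."""
    from alpha_vantage.timeseries import TimeSeries
    ts = TimeSeries(key=api_key)
    data, _ = ts.get_daily_adjusted(symbol, outputsize="full")
    return parse_adjusted_close(data)
```

With `dates` and `prices` in hand, a call such as `plt.plot(dates, prices)` followed by `plt.show()` reproduces a chart like the one above.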

Please note that we are using the adjusted close field of Alpha Vantage's daily adjusted API to remove any artificial price turbulences due to stock splits and dividend payout events. It is generally considered an industry best practice to use split/dividend-adjusted prices instead of raw prices to model stock price movements.


Data preparation: normalizing raw financial data

Machine learning algorithms (such as our LSTM algorithm) that use gradient descent as the optimization technique require data to be scaled. This is because unscaled feature values affect the step size of gradient descent, potentially skewing the LSTM model in unexpected ways.

This is where data normalization comes in. Normalization can increase the accuracy of your model and help the gradient descent algorithm converge more quickly. By bringing the input data on the same scale and reducing its variance, none of the weights in the artificial neural network will be wasted on normalizing tasks, which means the LSTM model can more efficiently learn from the data and store patterns in the network. Furthermore, LSTMs are intrinsically sensitive to the scale of the input data. For the above reasons, it is crucial to normalize the data.

Since stock prices can range from tens to hundreds or even thousands of dollars ($40 to $160 per share in the case of IBM), we will normalize the stock prices to standardize the range of these values before feeding the data to the LSTM model. The following code snippet rescales the data to have a mean of 0 and a standard deviation of 1.

Append the following data normalization code block to your project.py file.
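A minimal sketch of such a normalization block is shown below (the class name `Normalizer` and its method names are illustrative). It implements exactly the z-score rescaling described above, plus the inverse mapping we will need later to turn normalized predictions back into dollar prices.

```python
import numpy as np

class Normalizer:
    """Z-score normalization: rescale a series to mean 0, std dev 1."""
    def fit_transform(self, x):
        self.mu = np.mean(x)
        self.sd = np.std(x)
        return (x - self.mu) / self.sd

    def inverse_transform(self, x):
        # map normalized values back to the original price scale
        return x * self.sd + self.mu
```

Storing `mu` and `sd` on the object is important: the same statistics fitted on the data must be reused when un-normalizing the model's outputs.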


Data preparation: generating training and validation datasets

Supervised machine learning methods such as LSTM learn the mapping function from input variables (X) to the output variable (Y). Learning from the training dataset can be thought of as a teacher supervising the learning process, where the teacher knows all the right answers.

In this project, we will train the model to predict the 21st day's close price based on the past 20 days' close prices. The window length of 20 days was selected for a few reasons:

❚ When LSTM models are used in natural language processing, the number of words in a sentence typically ranges from 15 to 20 words

❚ Gradient descent considerations: attempting to back-propagate across very long input sequences may result in vanishing gradients (more on this later)

❚ Longer sequences tend to have much longer training times

After transforming the dataset into input features and output labels, the shape of our X is (5388, 20), with 5388 being the number of rows and each row containing a sequence of past 20 days' prices. The corresponding Y data shape is (5388,), which matches the number of rows in X.

We also split the dataset into two parts, for training and validation. We split the data into 80:20 - 80% of the data is used for training, with the remaining 20% for validating our model's performance in predicting future prices. (Alternatively, another common practice is to split the initial data into train, validation, and test set in a 70/20/10 allocation, where the test dataset is not used at all during the training process.) The following graph shows the portion of data for training and validation - roughly speaking, data before ~2017 are used for training and data after ~2017 are used for model performance validation.

data split

Append the following code block to your project.py file. If you re-run the file now, it should generate a graph similar to above, where the training data is colored in green and validation data is colored in blue.
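A sketch of the windowing and splitting logic is shown below (the helper names `make_windows` and `train_val_split` are illustrative). It produces exactly the shapes described above: each row of X holds 20 consecutive prices, and each entry of y is the price on the following day.

```python
import numpy as np

def make_windows(prices, window_size=20):
    """Slice a 1-D price series into supervised (X, y) pairs:
    X[i] = window_size consecutive prices, y[i] = the next day's price."""
    x, y = [], []
    for i in range(len(prices) - window_size):
        x.append(prices[i:i + window_size])
        y.append(prices[i + window_size])
    return np.array(x), np.array(y)

def train_val_split(x, y, train_ratio=0.80):
    """Chronological 80/20 split: earlier rows train, later rows validate."""
    split = int(len(x) * train_ratio)
    return (x[:split], y[:split]), (x[split:], y[split:])
```

Note that the split is chronological rather than shuffled: for time series data, validating on rows that come after the training rows avoids leaking future information into the model.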

We will train our models using the PyTorch framework, a machine learning library written in Python. At the heart of PyTorch's data loading utility is the DataLoader class, an efficient data generation scheme that leverages the full potential of your computer's Graphics Processing Unit (GPU) during the training process where applicable. DataLoader requires the Dataset object to define the loaded data. Dataset is a map-style dataset that implements the __getitem__() and __len__() protocols, and represents a map from indices to data samples.

Append the following code block to your project.py file to implement the data loader functionality.
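A sketch of a map-style dataset is below. Implementing `__len__` and `__getitem__` is all that `torch.utils.data.DataLoader` requires of a map-style dataset, so this plain class (the name `TimeSeriesDataset` is our own) can be wrapped directly in a DataLoader.

```python
import numpy as np

class TimeSeriesDataset:
    """Map-style dataset: indices -> (window, next-day price) samples."""
    def __init__(self, x, y):
        # add a trailing feature dimension: (num_rows, window_size, 1),
        # the input shape the LSTM layer expects
        self.x = np.expand_dims(x, 2).astype(np.float32)
        self.y = y.astype(np.float32)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]
```

Typical usage would then be `DataLoader(TimeSeriesDataset(x_train, y_train), batch_size=64, shuffle=True)`, letting PyTorch handle batching for the training loop.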


Defining the LSTM model

With the training and evaluation data now fully normalized and prepared, we are ready to build our LSTM model!

As mentioned before, LSTM is a specialized artificial neural network architecture that can "memorize" patterns from historical sequences of data and extrapolate such patterns to future events. Specifically, it belongs to a group of artificial neural networks called Recurrent Neural Networks (RNNs).

LSTM is a popular artificial neural network architecture because it manages to overcome many technical limitations of plain RNNs. For example, RNNs fail to learn when the input sequence is longer than roughly 5 to 10 steps due to the vanishing gradient problem, where the gradients become vanishingly small, effectively preventing the model from learning. LSTMs can learn long sequences of data by enforcing constant error flow through self-connected hidden layers, which contain memory cells and corresponding gate units. If you are interested in the inner workings of LSTMs and RNNs, this is an excellent explainer for your reference.

Our artificial neural network will have three main layers, with each layer designed with a specific logical purpose:

❚ linear layer 1 (linear_1): to map input values into a high dimensional feature space, transforming the features for the LSTM layer

❚ LSTM (lstm): to learn the data in sequence

❚ linear layer 2 (linear_2): to produce the predicted value based on LSTM's output

We also add Dropout, where randomly selected artificial neurons are ignored during training, therefore regularizing the network to prevent overfitting and improving overall model performance. As an optional step, we also initialize the LSTM's model weights, as some researchers have observed that it could help the model learn more efficiently.

Append the following code block to your project.py file to specify the LSTM model.


Model training

The LSTM model learns by iteratively making predictions given the training data X. We use mean squared error as the cost function, which measures the difference between the predicted values and the actual values. When the model is making bad predictions, the error value returned by the cost function will be relatively high. The model will fine-tune its weights through backpropagation, improving its ability to make better predictions. Learning stops when the algorithm achieves an acceptable level of performance, where the error value returned by the cost function on the validation dataset is no longer showing incremental improvements.

We use the Adam optimizer, which updates the model's parameters based on the learning rate through its step() method. This is how the model learns and fine-tunes its predictions. The learning rate controls how quickly the model converges. A learning rate that is too high can cause the model to converge too quickly to a suboptimal solution, whereas a learning rate that is too small requires more training iterations and may prolong the time it takes the model to find the optimal solution. We also use the StepLR scheduler to reduce the learning rate during the training process. You may also try the ReduceLROnPlateau scheduler, which reduces the learning rate when the cost function has stopped improving for a "patience" number of epochs. Choosing the proper learning rate for your project is both art and science, and is a heavily researched topic in the machine learning community.

Append the following code block to your project.py file and re-run the file to start the model training process.
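A sketch of the training loop, combining the MSE cost function, the Adam optimizer, and the StepLR scheduler discussed above, might look like the following. The function name `train_model` and the scheduler hyperparameters (`step_size=40, gamma=0.1`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_model(model, train_loader, val_loader, num_epochs=100, lr=0.01):
    """Train with MSE loss + Adam, decaying the learning rate via StepLR."""
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)

    for epoch in range(num_epochs):
        model.train()
        train_loss = 0.0
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()          # backpropagate the error
            optimizer.step()         # update the weights
            train_loss += loss.item()
        scheduler.step()             # decay the learning rate on schedule

        model.eval()                 # evaluate on the validation split
        with torch.no_grad():
            val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)
        print(f"Epoch[{epoch + 1}/{num_epochs}] | "
              f"loss train:{train_loss / len(train_loader):.6f}, "
              f"test:{val_loss / len(val_loader):.6f} | "
              f"lr:{scheduler.get_last_lr()[0]:.6f}")
    return model
```

The print statement mirrors the per-epoch log format shown below, so you can watch both losses and the learning rate evolve as training proceeds.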

After running the script, you will see something similar to the following output in your console:

Epoch[1/100] | loss train:0.063952, test:0.001398 | lr:0.010000
Epoch[2/100] | loss train:0.011749, test:0.002024 | lr:0.010000
Epoch[3/100] | loss train:0.009831, test:0.001156 | lr:0.010000
Epoch[4/100] | loss train:0.008264, test:0.001022 | lr:0.010000
...
Epoch[97/100] | loss train:0.006143, test:0.000972 | lr:0.000100
Epoch[98/100] | loss train:0.006267, test:0.000974 | lr:0.000100
Epoch[99/100] | loss train:0.006168, test:0.000985 | lr:0.000100
Epoch[100/100] | loss train:0.006102, test:0.000972 | lr:0.000100

Using mean squared error as the loss function to optimize our model, the log output above shows the step-by-step "loss" values calculated based on how well the model is learning. A smaller loss value after each epoch indicates that the model is learning well, and 0.0 would mean that no mistakes were made. The train loss gives an idea of how well the model is learning, while the test loss shows how well the model generalizes to the validation dataset. A well-trained model is identified by training and validation losses that decrease to the point of negligible difference between the two final values (at this stage, we say the model has "converged"). Generally, the loss values of the model will be lower on the training dataset than on the validation dataset.


Model evaluation

To visually inspect our model's performance, we will use the newly trained model to make predictions on the training and validation datasets we've created earlier in this project. If we see that the model can predict values that closely mirror the training dataset, it shows that the model managed to memorize the data. And if the model can predict values that resemble the validation dataset, it has managed to learn the patterns in our sequential data and generalize the patterns to unseen data points.

training result

Append the following code block to your project.py file. Re-running the file should generate a graph similar to the figure above.
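A sketch of the evaluation step is shown below. The helper name `predict_series` is our own; the plotting call assumes you have kept the dates, actual prices, and train/validation arrays from the earlier data-preparation steps.

```python
import numpy as np
import torch
import matplotlib
matplotlib.use("Agg")   # render off-screen; remove this line for interactive use
import matplotlib.pyplot as plt

def predict_series(model, x):
    """Run the trained model over an entire (num_rows, window_size, 1) array
    and return its predictions as a NumPy array."""
    model.eval()
    with torch.no_grad():
        return model(torch.tensor(x, dtype=torch.float32)).numpy()

def plot_predictions(dates, actual, predicted, label):
    """Overlay predicted prices on actual prices for visual inspection."""
    plt.plot(dates, actual, color="black", label="Actual prices")
    plt.plot(dates, predicted, label=label)
    plt.legend()
    plt.savefig("evaluation.png")
```

Remember to pass predictions through the normalizer's inverse transform before plotting, so both curves are in dollar terms rather than normalized units.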

From our results, we can see that the model has managed to learn and predict on both training (green) and validation (blue) datasets very well, as the "Predicted prices" lines significantly overlap with the "Actual prices" values.

Let's zoom into the chart and look closely at the blue "Predicted price (validation)" segment by comparing it against the actual prices values.

training result zoomed in

Append the following code block to your project.py file and re-run the script to generate the zoomed-in graph.

What a beautiful graph! You can see that the predicted prices (blue) significantly overlap with the actual prices (black) of IBM.

It is also worth noting that model training & evaluation is an iterative process. Please feel free to go back to the model training step to fine-tune the model and re-evaluate the model to see if there is a further performance boost.


Predicting future stock prices

By now, we have trained an LSTM model that can (fairly accurately) predict the next day's price based on the past 20 days' close prices. This means we now have a crystal ball in hand! Let's supply the past 20 days' close prices to the model and see what it predicts for the next trading day (i.e., the future!). Append the following code to your project.py file and re-run the script one last time.
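A sketch of this final prediction step might look like the following. The function name is illustrative, and `scaler` is assumed to be the normalization object fitted during data preparation (so the model's output can be mapped back to dollars).

```python
import torch

def predict_next_day(model, recent_prices, scaler):
    """Feed the last 20 normalized close prices to the trained model and
    return the predicted next-day close in dollar terms."""
    model.eval()
    # shape (1, window_size, 1): one batch row, one feature per day
    x = torch.tensor(recent_prices, dtype=torch.float32).reshape(1, -1, 1)
    with torch.no_grad():
        prediction = model(x).item()
    return scaler.inverse_transform(prediction)
```

Calling `predict_next_day(model, normalized_prices[-20:], scaler)` yields the red dot shown in the graph below.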

Running the script will generate a prediction graph similar to the one below:

price prediction graph

The red dot in the graph is what our model predicts for IBM's close price on the next trading day.

Is the prediction good enough? How about other stocks such as TSLA, AAPL, or the Reddit-favorite GameStop (GME)? What about other asset classes such as forex or cryptocurrencies? Beyond the close prices, is there any other external data we can feed to the LSTM model to make it even more robust - for example, one of the 50+ technical indicators from the Alpha Vantage APIs?

Now that you have learned the fundamentals of machine learning for financial market data, the possibilities are limitless. We hereby pass the baton to you, our fearless reader!




References

Full project.py source code: link

To submit your questions or comments via GitHub Issues: link

To run the script on a Google Colab Jupyter notebook with access to GPU: link

To run the script on your local Jupyter Notebook:

git clone https://github.com/jinglescode/time-series-forecasting-pytorch.git
pip install -r requirements.txt







Build a Stock Visualization Website in Python/Django

Expert Instructor: Daniel Petrow (Alpha Vantage)
Keywords: web development, data visualization, HTML, Javascript/AJAX, server-side scripting, SQL, Python/Django


Data visualization is a must-have feature for more and more web, mobile, and IoT applications. This trend is even more salient in the domain of financial market data, where data visualization is often used for market intelligence, technical charting, and other key business and/or quantitative scenarios such as exploratory data analysis (EDA).

In this project, we will create an interactive stock visualization website (screenshot below) with Python/Django and Alpha Vantage APIs. We will cover key software engineering and web development concepts such as AJAX, server-side scripting, and database models - in fewer than 400 lines of code.

stock visualizer homepage mockup

The project is comprised of the following sections:

❚ Install dependencies and set up project

❚ Create database model

❚ Create frontend UI

❚ Create backend logic

❚ Set up Django URL routing

❚ Run the web application locally

To minimize your cognitive load, we have included all the necessary code scripts and command line prompts directly in this document. By the time you finish this tutorial, you will have a stock data visualization website with frontend, backend, and database all ready for prime time. Let's get started!


Install dependencies and set up project

We recommend Python 3.6 or higher. If you do not yet have Python installed, please follow the download instructions on the official python.org website.

Once you have Python installed in your environment, please use your command line interface to install the following Python libraries:

Django: pip install django

requests: pip install requests

The pip installer above should already be included in your system if you are using Python 3.6 or higher downloaded from python.org. If you are seeing a "pip not found" error message, please refer to the pip installation guide.

Please also obtain a free Alpha Vantage API key here. You will use this API key to query stock market data from our REST stock APIs as you develop this stock visualization web application.


Now, we are ready to create the Django project!

Open a new command line window and type in the following prompt:

(home) $ django-admin startproject alphaVantage

You have just created a blank Django project in a folder called alphaVantage.

Now, let's switch from your home directory to the alphaVantage project directory with the following command line prompt:

(home) $ cd alphaVantage

For the rest of this project, we will be operating inside the alphaVantage root directory.


Now, let's create a stockVisualizer app within the blank Django project:

(alphaVantage) $ python manage.py startapp stockVisualizer

We will also create an HTML file for our homepage. Enter the following 4 command line prompts in order:

Step 1: create a new folder called "templates"

(alphaVantage) $ mkdir templates

Step 2: go to the "templates" folder

(alphaVantage) $ cd templates

Step 3: create an empty home.html file inside the templates folder

If you are using Mac or Linux:
(templates) $ touch home.html

If you are using Windows:
(templates) $ type nul > home.html

Step 4: return to our alphaVantage root directory

(templates) $ cd ../


At this stage, the file structure of your Django project should look similar to the one below. You may want to import the project into an IDE such as PyCharm, Visual Studio, or Sublime Text to visualize the file structure more easily.


Let's take a closer look at some of the key files:

manage.py: A command-line utility that lets you interact with this Django project in various ways. You can read all the details about manage.py in django-admin and manage.py.

__init__.py: An empty file that tells Python that this directory should be considered a Python package. Please keep it empty!

settings.py: Settings/configuration for this Django project.

urls.py: The URL declarations for this Django project; a “table of contents” of your Django-powered site.

models.py: this is the file you will use to define database objects and schema

views.py: this is where all the backend logic gets implemented and relayed to the frontend (views)

home.html: the HTML file that determines the look and behavior of the homepage


Specify the database model

Databases are essential components of most modern web and mobile applications. For our stock visualization website, we will create a simple (two-column) database model to store stock market data.

Before we create the database model, however, let's open the settings.py file and quickly modify the following 3 places in the script:

1. Near the top of settings.py, add import os


2. Inside the INSTALLED_APPS, add the stockVisualizer app:


3. Inside TEMPLATES, include the templates directory we've created earlier in this project:
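Taken together, the three edits might look like this in context. The surrounding defaults come from a freshly generated Django project and may differ slightly across Django versions; only the commented lines are the changes you make.

```python
# settings.py -- the three edits described above, shown in context.
import os                                       # 1. add near the top of the file
from pathlib import Path

# In the generated settings.py, BASE_DIR is already defined near the top as
# Path(__file__).resolve().parent.parent; we use a stand-in here.
BASE_DIR = Path(".").resolve()

INSTALLED_APPS = [
    "stockVisualizer",                          # 2. register our new app
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [os.path.join(BASE_DIR, "templates")],  # 3. our templates folder
        "APP_DIRS": True,
        "OPTIONS": {},  # trimmed here; keep the generated OPTIONS block as-is
    },
]
```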


Now, let's define a Django database model called StockData in models.py.

The model has two fields:

❚ a symbol field that stores the ticker string of the stock

❚ a data field that stores the historical prices and moving average values for a given ticker
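A sketch of the corresponding models.py is below. The field names follow the two bullets above; the exact field types are an assumption (simple text fields, with the price/SMA payload stored as a JSON string), since the original snippet is not reproduced here.

```python
# stockVisualizer/models.py
from django.db import models

class StockData(models.Model):
    symbol = models.TextField(null=True)  # ticker string, e.g. "AAPL"
    data = models.TextField(null=True)    # JSON string of prices and SMA values
```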


With models.py updated, let's notify Django about the newly created database model with the following command line prompts:

(alphaVantage) $ python manage.py makemigrations

(alphaVantage) $ python manage.py migrate


At this stage, your file structure should look similar to the one below:


The db.sqlite3 file indicates that we have registered our StockData model in the local SQLite database. As its name suggests, SQLite is a lightweight SQL database frequently used for web development (especially in local test environment). SQLite is automatically included in Django/Python, so there is no need to install it separately :-)


There are now only two steps left before completing the website:

❚ Set up the homepage file (home.html) so that we can visualize stock prices and moving averages of a given stock

❚ Create the backend server logic (views.py) so that we have the proper stock data to feed into the frontend UI

Let's proceed!


Create frontend UI

Before we dive into the code implementation, let's first summarize the expected behavior of our homepage (screenshots below) at a high level:

❚ Upon loading, the page will display the adjusted close prices and simple moving average (SMA) values of Apple (AAPL), covering the most recent 500 trading days.

❚ When the user enters a new stock ticker in the textbox and hits "submit", the existing chart on the page will be replaced with the adjusted close and SMA data of the new ticker.


When the homepage is first loaded:

stock visualizer homepage mockup

When the user enters a new symbol (such as GME):

stock visualizer homepage mockup with user input

With the above page layout and behavior defined, let's implement the homepage frontend accordingly.

Open the (empty) home.html and paste the following content into it:


This is a chunky block of code! Don't worry - let's break it down into the following 4 functional groups:

1. Load the JavaScript dependencies

https://cdn.jsdelivr.net/npm/chart.js@3/dist/chart.min.js is the powerful Chart.js library for data visualization.

https://code.jquery.com/jquery-3.6.0.min.js loads the jQuery library, which streamlines common frontend development tasks.


2. Define the page layout

<input type="text" id="ticker-input"> is the input text box for a user to enter a new stock ticker

<input type="button" value="submit" id="submit-btn"> is the submit button

<canvas id="myChart"></canvas> is the canvas on which Chart.js will generate the beautiful graph for stock data visualization


3. Define the behavior upon page loading

The $(document).ready(function(){...} code block specifies the page behavior when it's first loaded:

❚ First, it will make an AJAX POST request to a Django backend function called get_stock_data to get the price and simple moving average data for AAPL. AJAX stands for Asynchronous JavaScript And XML, a popular JavaScript design pattern that enables a developer to (1) update a web page without reloading it, (2) request data from a backend server after the page has loaded, and (3) receive data from a backend server after the page has loaded, among other benefits.

❚ Once AAPL's data is returned by the backend to the frontend (the data is stored in the res variable in success: function (res, status) {...}), it is parsed by several lines of JavaScript code into three lists: dates, daily_adjusted_close, and sma_data.

❚ These three lists are then truncated to 500 entries each (i.e., data for the trailing 500 trading days) and visualized by Chart.js. Specifically, the values in dates are used for the X axis; the values in daily_adjusted_close and sma_data are used for the Y axis.


4. Define the behavior for when a new ticker is submitted by the user

The $('#submit-btn').click(function(){...} code block specifies the page behavior when a user enters a new ticker symbol:

❚ First, it will make an AJAX POST request to a Django backend function called get_stock_data to get the price and simple moving average data for the ticker entered by the user. The line var tickerText = $('#ticker-input').val(); takes care of extracting the ticker string from the input textbox.

❚ Once the ticker's data is returned by the backend to the frontend (the data is again stored in the res variable in success: function (res, status) {...}), it is parsed by several lines of JavaScript code into three lists: dates, daily_adjusted_close, and sma_data.

❚ These three lists are then truncated to 500 entries each (i.e., data for the trailing 500 trading days) and visualized by Chart.js. Specifically, the values in dates are used for the X axis; the values in daily_adjusted_close and sma_data are used for the Y axis.


As you can see, the get_stock_data backend function is now the only missing piece in the frontend-backend communication loop. Let's implement it right away!


Create backend server logic (views.py)

Now, let's update views.py to the following. Don't forget to replace the my_alphav_api_key string with your actual Alpha Vantage API key.
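Your exact listing may differ; the sketch below is one implementation consistent with the walkthrough in this section. The 'prices'/'sma' key names in output_dictionary are assumptions, and the sketch assumes Django 3.x (request.is_ajax() was removed in Django 4.0).

```python
# stockVisualizer/views.py: a sketch consistent with the walkthrough below
import json

import requests
from django.http import HttpResponse
from django.shortcuts import render

from .models import StockData

APIKEY = 'my_alphav_api_key'  # replace with your actual Alpha Vantage API key
DATABASE_ACCESS = True  # True: check the local database before calling the APIs


def home(request):
    # render the homepage template
    return render(request, 'home.html', {})


def get_stock_data(request):
    if request.is_ajax():
        # the ticker string sent by the frontend ('AAPL' on first page load)
        ticker = request.POST.get('ticker', 'null').upper()

        if DATABASE_ACCESS:
            # Django's equivalent of: SELECT * FROM StockData WHERE symbol = ticker
            if StockData.objects.filter(symbol=ticker).exists():
                entry = StockData.objects.filter(symbol=ticker)[0]
                return HttpResponse(entry.data, content_type='application/json')

        # query Alpha Vantage's Daily Adjusted and SMA APIs
        price_series = requests.get(
            f'https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED'
            f'&symbol={ticker}&apikey={APIKEY}&outputsize=full').json()
        sma_series = requests.get(
            f'https://www.alphavantage.co/query?function=SMA&symbol={ticker}'
            f'&interval=daily&time_period=10&series_type=close&apikey={APIKEY}').json()

        # package both JSON dictionaries into a single output dictionary
        output_dictionary = {'prices': price_series, 'sma': sma_series}

        # save the data to the database so we can recycle it next time
        StockData.objects.create(symbol=ticker, data=json.dumps(output_dictionary))

        # return the data to the AJAX POST loop for charting at the frontend
        return HttpResponse(json.dumps(output_dictionary),
                            content_type='application/json')
    return HttpResponse('Not an AJAX request', content_type='text/plain')
```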


Let's look at the above backend server-side code a bit closer.

The variable DATABASE_ACCESS = True means the get_stock_data function will first check if there is existing data in the local database before making API calls to Alpha Vantage. If you set DATABASE_ACCESS = False, the script will bypass any local database lookups and proceed directly to calling Alpha Vantage APIs when a new ticker is queried. We have included a section titled Food for thought #2: data freshness vs. website speed trade-off under the References section of this tutorial to discuss the nuances of getting data from a local database vs. querying an external API.

The function def home(request) is the standard way for a Django backend to render an HTML file (in our case, home.html).


The function def get_stock_data(request) takes an AJAX POST request from the home.html file and returns a JSON dictionary of stock data back to the AJAX loop. Let's unpack it here:

❚ The segment if request.is_ajax() makes sure the request is indeed an AJAX POST request from the frontend.

❚ ticker = request.POST.get('ticker', 'null') obtains the ticker string from the AJAX request. The ticker string is always AAPL when the page is first loaded, but will change to other strings based on user input at the frontend.

❚ The code block under if DATABASE_ACCESS == True checks if the data for a given ticker already exists in our local database. If yes, the get_stock_data function will simply get the data from the database and return it back to the AJAX loop. If not, the script continues to the next steps. (If you are familiar with SQL, the code StockData.objects.filter(symbol=ticker) is Django's way of saying SELECT * FROM StockData WHERE symbol = ticker.)


❚ The part price_series = requests.get(f'https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED&symbol={ticker}&apikey={APIKEY}&outputsize=full').json() queries Alpha Vantage's Daily Adjusted API and parses the data into a JSON dictionary through the .json() routine. Below is a sample JSON output from the daily adjusted API:
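The response follows the general shape sketched below (the dates and numbers are illustrative placeholders; a real outputsize=full response contains many years of daily bars):

```python
# Illustrative shape of a TIME_SERIES_DAILY_ADJUSTED response (values are made up)
sample = {
    "Meta Data": {
        "1. Information": "Daily Time Series with Splits and Dividend Events",
        "2. Symbol": "AAPL",
    },
    "Time Series (Daily)": {
        "2022-01-04": {
            "1. open": "182.6300",
            "2. high": "182.9400",
            "3. low": "179.1200",
            "4. close": "179.7000",
            "5. adjusted close": "179.7000",
            "6. volume": "99310438",
            "7. dividend amount": "0.0000",
            "8. split coefficient": "1.0",
        },
    },
}

# extract the split/dividend-adjusted close for a given trading day
adj_close = float(sample["Time Series (Daily)"]["2022-01-04"]["5. adjusted close"])
print(adj_close)  # 179.7
```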

Please note that we are primarily interested in the adjusted close field (5. adjusted close) of Alpha Vantage's daily adjusted API, as it removes artificial price turbulence due to stock splits and dividend payout events. It is generally considered an industry best practice to use split/dividend-adjusted prices instead of raw close prices (4. close) to model stock price movements.

❚ The part sma_series = requests.get(f'https://www.alphavantage.co/query?function=SMA&symbol={ticker}&interval=daily&time_period=10&series_type=close&apikey={APIKEY}').json() queries Alpha Vantage's Simple Moving Average (SMA) API and parses the data into a JSON dictionary through the .json() routine. Below is a sample JSON output from the SMA API:
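The SMA response likewise splits into a metadata section and a time series (the dates and values below are illustrative placeholders):

```python
# Illustrative shape of an SMA response (values are made up)
sma_sample = {
    "Meta Data": {
        "1: Symbol": "AAPL",
        "2: Indicator": "Simple Moving Average (SMA)",
        "3: Last Refreshed": "2022-01-04",
    },
    "Technical Analysis: SMA": {
        "2022-01-04": {"SMA": "176.2580"},
        "2022-01-03": {"SMA": "175.5100"},
    },
}

# collect the moving average values (newest first, matching the API ordering)
sma_values = [float(v["SMA"]) for v in sma_sample["Technical Analysis: SMA"].values()]
print(sma_values)  # [176.258, 175.51]
```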


The remainder of the get_stock_data function (1) packages up the adjusted close JSON data and the simple moving average JSON data into a single dictionary output_dictionary, (2) saves the newly acquired stock data to the database (so that next time we can recycle the data from the database without querying the Alpha Vantage APIs again), and (3) returns the data back to the AJAX POST loop (via HttpResponse(json.dumps(output_dictionary), content_type='application/json')) for charting at the frontend.
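In plain Python, the packaging and serialization steps can be sketched like this (the 'prices'/'sma' key names are assumptions; in views.py the database write goes through the StockData model and the serialized string is wrapped in an HttpResponse):

```python
import json

# stand-ins for the parsed API results from the previous steps
price_series = {"Time Series (Daily)": {"2022-01-04": {"5. adjusted close": "179.7000"}}}
sma_series = {"Technical Analysis: SMA": {"2022-01-04": {"SMA": "176.2580"}}}

# (1) package both JSON dictionaries into a single output dictionary
output_dictionary = {"prices": price_series, "sma": sma_series}

# (2) in views.py, the serialized string is also saved to the database, e.g.
#     StockData.objects.create(symbol=ticker, data=json.dumps(output_dictionary))
# (3) ...and returned to the AJAX POST loop via
#     HttpResponse(json.dumps(output_dictionary), content_type='application/json')
payload = json.dumps(output_dictionary)
print(json.loads(payload)["prices"] == price_series)  # True
```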


This is it! We have implemented both the frontend (home.html) and the backend (views.py). These components can now "talk" to each other seamlessly and perform read/write interactions with the local SQLite database.


URL routing

Just one last thing: let's update urls.py with the latest URL routings for the views we just created in views.py:

path("", stockVisualizer.views.home) makes sure the home function in views.py is called when a user visits the homepage in their web browser

path('get_stock_data/', stockVisualizer.views.get_stock_data) makes sure the get_stock_data function in views.py is called when home.html makes an AJAX POST request to the /get_stock_data/ URL.
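Putting the two routes together, urls.py might look like the sketch below (the admin route is the default created by startproject and is an assumption):

```python
# alphaVantage/urls.py
from django.contrib import admin
from django.urls import path

import stockVisualizer.views

urlpatterns = [
    path('admin/', admin.site.urls),  # default route from startproject
    path('', stockVisualizer.views.home),
    path('get_stock_data/', stockVisualizer.views.get_stock_data),
]
```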


Running the website locally

Now we are ready to run the website in your local environment. Enter the following prompt in your command line window (please make sure you are still in the alphaVantage root directory):

(alphaVantage) $ python manage.py runserver


If you go to http://localhost:8000/ in your web browser (e.g., Chrome, Firefox, etc.), you should now see the website in full action!




References

Project source code: link


Food for thought #1: richer data visualization

The current web application supports the visualization of adjusted close prices and simple moving average (SMA) values for a given stock. We will leave it to your creativity to enrich the visualization. For example, how about plotting cryptocurrency prices alongside the stock prices? How about adding more technical indicators to the chart, or enabling users to draw support/resistance lines directly on it? We look forward to your creation!


Food for thought #2: data freshness vs. website speed trade-off

In the get_stock_data function of views.py, we first search the local database for existing data of a given ticker before querying the Alpha Vantage APIs. In general, accessing data via local databases or in-memory caches is often faster (and computationally cheaper) than querying an online API. On the other hand, querying the Alpha Vantage API will guarantee you the latest data for a given ticker, while data in the local database is static in nature and will gradually become outdated over time. The trade-off (fresh data but lower website speed vs. stale data but faster website speed) is a common dilemma faced by developers of data-intensive applications. Can you find a way out of this dilemma? For example, could you write a cron job that refreshes the database daily so that the local data never becomes outdated? Overall, database optimization is one of the most challenging yet rewarding tasks in the software development process.
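As one illustration of the cron-job idea, you could schedule a nightly refresh via crontab (refresh_stock_data is a hypothetical Django management command you would write yourself, and the path is a placeholder):

```
# m h dom mon dow  command
0 6 * * * cd /path/to/alphaVantage && python manage.py refresh_stock_data
```

Inside such a command, you would loop over the tickers already stored in StockData and re-run the same Alpha Vantage queries used by get_stock_data, overwriting the stale rows.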


Sign up for our Sunday Morning Markets newsletter to stay up to date with important financial & economic news around the world! It arrives in your email inbox every Sunday with pure market insights.


If you are interested in translating this project into a language other than English, please let us know and we truly appreciate your help!



Disclaimer: All content from the Alpha Academy is for educational purposes only and is not investment advice.