Computation of Camarilla Pivot Points as Static Inputs for Training a Trading Bot with AI Algorithms

valuezone 25 March 2023

Can we train an AI model to accurately forecast the stock market? The answer is no. The reason is simple: if such a model existed, the person who developed it would take all other traders’ money, because trading is a zero-sum game. That situation couldn’t last, as the other traders aren’t knuckleheads; they would adjust their strategies to stop the counterparty from continuing to take easy money from them.

But suppose we change the question: Can we train an AI model to forecast the stock market? Then the answer is yes. Everyone can forecast, regardless of whether the forecasts turn out right or wrong, and AI is no exception. The real question is: Do you take the AI’s forecast seriously and trade accordingly? It is like playing chess while a friend stands behind you, watching the game and offering advice. Would you like such a friend supporting you while you are trading? I think this is the correct mindset for using an AI-powered trading bot. You can take the AI’s advice with a grain of salt, just as you would treat that friend’s advice. The benefit of using AI, however, is the psychological support it offers when your discretionary judgment aligns with its judgment. Knowing you are not alone helps you execute a trade firmly and stay in a winning trade confidently, which goes a long way toward avoiding psychological pitfalls. Conversely, when your forecast contradicts the AI’s forecast, you can, for the sake of risk management, skip the trade, stay patient, and hold flat until the next good opportunity.

With that said, this article introduces part of the feature-engineering content of my book Day Trade with AI, which is to be released soon. Its purpose is to prepare features, to the best of our knowledge, that help the AI algorithms converge.

A converged AI algorithm means the model has learned something from the data. Although convergence can be achieved simply by using a large neural network, that is not what we want here, because a large network will simply memorize the training data, causing the familiar overfitting problem: the model performs poorly when it sees new data. The convergence we want is a decrease in loss on the training data, the validation data, and the testing data (during both backtesting and forward testing). To achieve this, we must select meaningful features for training the AI algorithms.

What are meaningful features? In my previous article, “Embedding for Feature Engineering of Stock Symbols using PyTorch”, I introduced ticker names as an important feature. If you have accumulated sufficient experience in discretionary trading, you will have realized that there is no single strategy that fits all stocks. Which stock we are trading plays an important role in deciding which strategy to use, so we must make sure the AI model we are developing is aware of this phenomenon as well.

Similarly, camarilla pivot points are an important feature in day trading because many strategies are based on them. When people believe these levels will act as support and resistance, they really do jump into the market at these levels with their hard-earned money, which adds liquidity and genuinely moves the market. I refrained from saying which direction it moves the market, because we really don’t know. This is one way trading resembles gambling: whereas betting on the head or tail of a coin is a one-time guess, trading is a continuous guessing process at every technical level and every price action, which is stressful for many people. With a well-trained AI, the output probabilities at each technical level can inform us whether to execute the trade. I personally would love to have such an AI friend sitting behind me and giving me advice during trading. What about you?

To give you a concrete example of how camarilla pivot points can aid day trading, let’s study the screenshot of ticker NVAX on 12/15/2022, taken while I was practicing my discretionary trading skills in a simulator yesterday.


My trades of the NVAX 12/15/2022 ticker show the prices were contained within camarilla pivot points s3 and s4

As you can see, the stock price was rejected by camarilla pivot point s3 twice in the premarket, at 9:05 am and 9:27 am, and a bounce occurred at camarilla pivot point s4 at 9:36 am. The intuition would be: given the stock’s premarket performance, it appears to respect the s3 level very well, so it may respect the s4 level too. That respect turned out to be the bounce, as we found out later.

The screenshot also contains the entries and exits of my three winning trades and one losing discretionary trade. My first trade was a short at the market open. As you can see, I was unable to hold the winning trade until the price hit the s4 level, because the price action tricked me into losing confidence in what that level might do. We hope that a well-trained AI model can give us a relatively confident signal that the s4 level will be the turning point. Imagine that your intuition says s4 is likely to do something, and the AI also tells you s4 will be the turning point. What would you do? Stay in the trade and add to the position to maximize the profit, wouldn’t you?

Okay, now that we understand the importance of carefully selected features for training the AI algorithms, let’s explore how to implement the computations.

Camarilla Pivot Points as Static Inputs

Note that static inputs are parameters that won’t change during our trading time frame. For day trading, typical static parameters that many day traders have their charting software plot are yesterday’s high (yh), yesterday’s low (yl), yesterday’s close (yc), the day before yesterday’s high (yyh), the day before yesterday’s low (yyl), and the day before yesterday’s close (yyc). These 6 parameters are important historical levels for day trading. They are static because we know exactly where they are before the market opens, and they stay the same throughout the day. This contrasts with dynamic parameters such as the volume-weighted average price (VWAP) and simple moving averages (SMAs), which are time-dependent: we don’t know where they will be in the future.
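For contrast, here is a minimal sketch (my own illustration, not code from the book) of how the two dynamic features mentioned above could be computed from 1-minute OHLCV bars. The column names o, h, l, c, v match the dataframes we build later; using the typical price (h+l+c)/3 for VWAP is an assumption on my part, as some platforms use the close instead.

```python
import pandas as pd

def vwap(df: pd.DataFrame) -> pd.Series:
    """Running volume-weighted average price from 1-minute bars.
    Recomputed on every new bar, hence a *dynamic* feature."""
    typical = (df['h'] + df['l'] + df['c']) / 3  # typical price per bar
    return (typical * df['v']).cumsum() / df['v'].cumsum()

def sma(df: pd.DataFrame, window: int = 9) -> pd.Series:
    """Simple moving average of the close; also time-dependent."""
    return df['c'].rolling(window).mean()
```

Unlike the static levels below, both series change with every incoming bar, which is exactly why they need separate treatment in the feature pipeline.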

Camarilla pivot points are another set of static parameters we would like to use as the inputs to train our AI models. They are computed using the previous trading day’s volatility spread. More specifically, the formulas to compute the 12 camarilla pivot points are as follows:


r1 = yc1 + 1.1*(yh1 - yl1)/12
r2 = yc1 + 1.1*(yh1 - yl1)/6
r3 = yc1 + 1.1*(yh1 - yl1)/4
r4 = yc1 + 1.1*(yh1 - yl1)/2
r5 = r4 + 1.168*(r4 - r3)
r6 = (yh1/yl1)*yc1
s1 = yc1 - 1.1*(yh1 - yl1)/12
s2 = yc1 - 1.1*(yh1 - yl1)/6
s3 = yc1 - 1.1*(yh1 - yl1)/4
s4 = yc1 - 1.1*(yh1 - yl1)/2
s5 = s4 - 1.168*(s3 - s4)
s6 = yc1 - (r6 - yc1)

where r6, r5, r4, r3, r2, r1, s1, s2, s3, s4, s5, and s6 are the camarilla pivot points from high to low. The prefix r denotes resistance and the prefix s denotes support. yh1 is yesterday’s all-hours high, yl1 is yesterday’s all-hours low, and yc1 is yesterday’s all-hours close.

Note that computing camarilla pivot points requires the previous trading day’s all-hours high, low, and close prices; that is, the premarket and after-market hours are included. That’s why I label them yh1, yl1, and yc1, to distinguish them from the commonly provided yesterday’s high (yh), low (yl), and close (yc) over the regular market hours from 9:30 am to 4:00 pm, as found in many data providers such as Yahoo Finance.
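Putting the formulas together, a small self-contained helper could look like the following sketch. The function name camarilla_levels is mine; the loop simply mirrors the divisors 12, 6, 4, and 2 used for the first four levels on each side of the close.

```python
def camarilla_levels(yh1: float, yl1: float, yc1: float) -> dict:
    """Compute the 12 camarilla pivot points from the previous
    trading day's all-hours high (yh1), low (yl1), and close (yc1)."""
    rng = yh1 - yl1  # previous day's volatility spread
    levels = {}
    # r1..r4 and s1..s4 share the 1.1 multiplier with divisors 12, 6, 4, 2
    for i, div in zip((1, 2, 3, 4), (12, 6, 4, 2)):
        levels[f'r{i}'] = yc1 + 1.1 * rng / div
        levels[f's{i}'] = yc1 - 1.1 * rng / div
    # the outer levels are extrapolated from the inner ones
    levels['r5'] = levels['r4'] + 1.168 * (levels['r4'] - levels['r3'])
    levels['s5'] = levels['s4'] - 1.168 * (levels['s3'] - levels['s4'])
    levels['r6'] = (yh1 / yl1) * yc1
    levels['s6'] = yc1 - (levels['r6'] - yc1)  # mirror of r6 about the close
    return levels
```

Note that r6 and s6 are symmetric about yc1 by construction, which is a handy sanity check for the implementation.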

Demo Computations

Now let’s implement the computations with Python and the following dependencies:

1. Finnhub: You can find the documentation of this API via this link. The beauty of this API is that it gives you live 1-minute premarket data and allows 30 API calls per minute for a free account. You need to register for a free API key to run the subsequent code.

2. Yfinance: You can find the documentation of this API via this link. The downside of the yfinance API is that it does not provide data outside the regular market hours, so you can’t use it to compute the previous trading day’s all-hours high, low, and close prices. The benefits are that there are no explicit limits on API calls, and you can use it to retrieve other commonly used static features, such as yesterday’s and the day before yesterday’s high, low, and close prices. That’s why we use it in these demo computations.

3. We will also need Pandas, NumPy, and Matplotlib, which are commonly installed packages in any data science project.
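Because the free Finnhub tier allows only 30 calls per minute, a simple client-side throttle can keep a batch download from hitting the limit. The wrapper below is my own sketch, not part of the Finnhub client:

```python
import time

def throttled(fn, calls_per_minute: int = 30):
    """Wrap an API-calling function so successive invocations are
    spaced out to respect a rate limit (e.g. Finnhub's free tier)."""
    min_interval = 60.0 / calls_per_minute
    last = [0.0]  # monotonic timestamp of the previous call

    def wrapper(*args, **kwargs):
        wait = min_interval - (time.monotonic() - last[0])
        if wait > 0:
            time.sleep(wait)  # pause just long enough to stay under the limit
        last[0] = time.monotonic()
        return fn(*args, **kwargs)

    return wrapper
```

You would wrap each candle request, e.g. safe_candles = throttled(finnhub_client.stock_candles), when looping over many tickers or dates.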

First of all, let’s import the libraries.

import datetime as dt
import finnhub
import yfinance as yf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
from matplotlib.ticker import AutoMinorLocator

Then we download the data. Note that I have hidden my API key. You need to replace the key with yours.

finnhub_api_key = 'YOUR API KEY HERE' # replace this with your key
ticker = 'NVAX'
start = dt.datetime(2022,12,15,0,0,0)
end = dt.datetime(2022,12,15,9,59,0)

def get_1m_data(ticker, start, end, finnhub_api_key):
    finnhub_client = finnhub.Client(api_key=finnhub_api_key)
    res = finnhub_client.stock_candles(ticker, '1', int(start.timestamp()), int(end.timestamp()))
    df = pd.DataFrame(res)
    df['t'] = df['t'].apply(dt.datetime.fromtimestamp)  # Unix seconds -> local datetime
    df.set_index('t', inplace=True)
    df = df[['o', 'h', 'l', 'c', 'v']]
    return df

def get_yy_data(ticker, end, finnhub_api_key):
    start = (end - dt.timedelta(days=7)).strftime("%Y-%m-%d")
    end = end.strftime("%Y-%m-%d")
    df_yf = yf.download(ticker, start=start, end=end)
    df_yf.rename(columns={'Open': 'o', 'High': 'h', 'Low': 'l', 'Close': 'c', 'Volume': 'v'}, inplace=True)
    df_yf = df_yf[['o', 'h', 'l', 'c', 'v']]
    # the last row of df_yf is the previous trading day; fetch its
    # full all-hours 1-minute data from Finnhub
    start_dt1 = dt.datetime(df_yf.index[-1].year, df_yf.index[-1].month, df_yf.index[-1].day, 0, 0, 0)
    end_dt1 = dt.datetime(df_yf.index[-1].year, df_yf.index[-1].month, df_yf.index[-1].day, 23, 0, 0)
    finnhub_client = finnhub.Client(api_key=finnhub_api_key)
    res = finnhub_client.stock_candles(ticker, '1', int(start_dt1.timestamp()), int(end_dt1.timestamp()))
    df_fh = pd.DataFrame(res)
    return df_yf, df_fh

df = get_1m_data(ticker,start,end,finnhub_api_key)
df_yf,df_fh = get_yy_data(ticker,end,finnhub_api_key)



With the downloaded data from Finnhub and Yfinance, I created three dataframes: df, df_yf, and df_fh. The dataframe df holds the current trading day’s 1-min OHLCV data; it is used for visualization and for computing dynamic features, which will be discussed in my next article. The dataframe df_yf is used to retrieve yesterday’s and the day before yesterday’s high, low, and close prices. To make the code robust, we retrieve the past 7 days of data from Yahoo, because the stock market is never closed for more than 7 consecutive days; for any given trading day, the date in the last row of that dataframe tells us what the previous trading day was. And since yfinance does not provide a trading day’s all-hours data for computing the camarilla pivot points, we query Finnhub again with the date of the previous trading day to obtain that day’s all-hours data and find yh1, yl1, and yc1.
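The previous-trading-day lookup can be illustrated without any network call. The daily index below is hand-made to mimic what yf.download returns for the week before 12/15/2022 (the close prices are placeholders, not real NVAX data); the point is that the last two rows give the two previous trading days even across a weekend:

```python
import datetime as dt
import pandas as pd

# Hypothetical daily index for 2022-12-08 .. 2022-12-14; the weekend
# of 12/10-12/11 produces no rows, just as in a real download.
idx = pd.DatetimeIndex(['2022-12-08', '2022-12-09',
                        '2022-12-12', '2022-12-13', '2022-12-14'])
daily = pd.DataFrame({'c': [20.1, 19.8, 18.9, 18.2, 15.6]}, index=idx)

# The last row is the previous trading day ("yesterday"); the
# second-to-last is the day before yesterday, weekends and holidays
# handled automatically.
prev_day = daily.index[-1].date()       # 2022-12-14
prev_prev_day = daily.index[-2].date()  # 2022-12-13
```

This is exactly why gen_static_data below can safely use iloc[-1] and iloc[-2] for the y* and yy* features.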

def gen_static_data(ticker, df, df_yf, df_fh):
    static_data = {}
    static_data['ticker'] = ticker
    # regular-hours levels from yfinance
    static_data['yh'] = df_yf['h'].iloc[-1]
    static_data['yl'] = df_yf['l'].iloc[-1]
    static_data['yc'] = df_yf['c'].iloc[-1]
    static_data['yyh'] = df_yf['h'].iloc[-2]
    static_data['yyl'] = df_yf['l'].iloc[-2]
    static_data['yyc'] = df_yf['c'].iloc[-2]
    # all-hours levels from Finnhub
    yh1 = df_fh[['c', 'h', 'l', 'o']].max().max()  # yesterday's all-hours real high
    yl1 = df_fh[['c', 'h', 'l', 'o']].min().min()  # yesterday's all-hours real low
    yc1 = df_fh['c'].iloc[-1]  # yesterday's after-hours real close
    static_data['yh1'] = yh1
    static_data['yl1'] = yl1
    static_data['yc1'] = yc1
    # camarilla pivot points
    r4 = yc1 + (yh1 - yl1) * 1.1 / 2
    r3 = yc1 + (yh1 - yl1) * 1.1 / 4
    r2 = yc1 + (yh1 - yl1) * 1.1 / 6
    r1 = yc1 + (yh1 - yl1) * 1.1 / 12
    s1 = yc1 - (yh1 - yl1) * 1.1 / 12
    s2 = yc1 - (yh1 - yl1) * 1.1 / 6
    s3 = yc1 - (yh1 - yl1) * 1.1 / 4
    s4 = yc1 - (yh1 - yl1) * 1.1 / 2
    r5 = r4 + 1.168 * (r4 - r3)
    r6 = (yh1 / yl1) * yc1
    s5 = s4 - 1.168 * (s3 - s4)
    s6 = yc1 - (r6 - yc1)
    static_data['r6'] = r6
    static_data['r5'] = r5
    static_data['r4'] = r4
    static_data['r3'] = r3
    static_data['r2'] = r2
    static_data['r1'] = r1
    static_data['s1'] = s1
    static_data['s2'] = s2
    static_data['s3'] = s3
    static_data['s4'] = s4
    static_data['s5'] = s5
    static_data['s6'] = s6
    static_data['weekday'] = df.index[0].weekday()
    return static_data

static_data = gen_static_data(ticker,df,df_yf,df_fh)

In this demo, the static features we computed are the ticker name, yh, yl, yc, yyh, yyl, yyc, the 12 camarilla pivot points, and the weekday. We store this information in a dictionary called static_data. By the way, why is the weekday computed? I believe people’s trading psychology differs depending on which weekday they are trading. You have probably heard of Black Monday and, of course, Black Friday. The logic is that important news may break over the weekend and move the market substantially when it reopens on Monday. Black Friday, the day after Thanksgiving, has nothing to do with the stock market; however, I believe people tend to trade cautiously on Fridays so as not to let a red Friday ruin their weekend. That’s why I also include the weekday as a static input.
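As a side note, df.index[0].weekday() follows Python’s convention of Monday = 0 through Sunday = 6. If you prefer not to let the model treat the weekday as an ordered number, one common option (my suggestion, beyond what the demo above does) is to one-hot encode it:

```python
import datetime as dt

def weekday_feature(day: dt.date) -> list:
    """One-hot encode the trading weekday (Mon=0 .. Fri=4) so the
    model does not impose an artificial ordering on the days."""
    onehot = [0] * 5  # five trading weekdays
    onehot[day.weekday()] = 1
    return onehot

# 2022-12-15, the NVAX session above, was a Thursday -> index 3 is hot
```

Whether the raw integer or the one-hot vector works better is an empirical question you can settle during validation.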

Visualization of the Computed Levels

Did we compute the camarilla pivot points correctly?

To find out whether we have computed the camarilla pivot points correctly, we can compare our numbers with the levels displayed by charting software. Since most retail traders are purely discretionary and make their decisions by examining price action around the technical levels their charting software gives them, we need to make sure we feed the AI model the levels that human traders actually see.
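A quick way to run this sanity check programmatically is to assert that every computed level agrees with the charting software within a small tolerance. The helper and the two-cent tolerance below are my own sketch; the example values are the s3/s4 comparison reported at the end of this article:

```python
def levels_match(computed: dict, charted: dict, tol: float = 0.02) -> bool:
    """Return True when every charted level is matched by our computed
    level to within `tol` dollars."""
    return all(abs(computed[k] - charted[k]) <= tol for k in charted)

# NVAX 2022-12-15: our computation vs. the charting software's display
computed = {'s3': 14.46, 's4': 13.51}
charted = {'s3': 14.465, 's4': 13.500}
```

Small residual differences are expected, since charting software may round the previous day’s high, low, and close differently than the raw API data.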

def plot_ohlc(df, ax, xtick_interval, static_data):
    upper_lim = df['h'].max() * 1.02
    lower_lim = df['l'].min() * 0.98
    # draw 1-minute candles: a thin black wick plus a thick colored body
    for ind in df.index:
        ax.vlines(x=ind.strftime('%H:%M'), ymin=df.loc[ind, 'l'], ymax=df.loc[ind, 'h'], color='black', linewidth=1)
        if df.loc[ind, 'o'] > df.loc[ind, 'c']:
            ax.vlines(x=ind.strftime('%H:%M'), ymin=df.loc[ind, 'c'], ymax=df.loc[ind, 'o'], color='red', linewidth=6)
        elif df.loc[ind, 'o'] < df.loc[ind, 'c']:
            ax.vlines(x=ind.strftime('%H:%M'), ymin=df.loc[ind, 'o'], ymax=df.loc[ind, 'c'], color='green', linewidth=6)
        else:
            ax.vlines(x=ind.strftime('%H:%M'), ymin=df.loc[ind, 'o'], ymax=df.loc[ind, 'c'], color='black', linewidth=6)

    ax.tick_params(axis='x', rotation=70)
    ax.set_ylim(lower_lim, upper_lim)
    xtick_range = np.arange(df.index[0], df.index[-1], dt.timedelta(minutes=xtick_interval))
    xtick_range = [str(xtick_range[i])[11:16] for i in range(len(xtick_range))]  # keep only HH:MM
    ax.xaxis.set_ticks(xtick_range)
    ax.yaxis.set_minor_locator(AutoMinorLocator())
    ax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))

    # overlay the static levels that fall inside the plotted price range
    technical_levels = static_data.copy()
    del technical_levels['ticker']
    del technical_levels['weekday']
    for key in technical_levels:
        technical_levels[key] = round(technical_levels[key], 2)
    technical_levels = {k: v for k, v in sorted(technical_levels.items(), key=lambda item: item[1])}  # sort dict by value
    for k, v in technical_levels.items():
        if (v < upper_lim) & (v > lower_lim):
            ax.axhline(v, lw=0.5)
            ax.text(xtick_range[0], v, f'{k}: {v}')
    return ax

fig,ax = plt.subplots(1,1,figsize=(10,6))
ax = plot_ohlc(df[df.index.hour>=9],ax,5,static_data)
plt.title('NVAX 2022-12-15 Fluctuates \nBetween Camarilla Pivot Points S3 and S4')
plt.show()

Concluding Remarks

By plotting the levels on our self-generated chart, we find that our computed s3 and s4 levels of 14.46 and 13.51 contain the price pattern as they should. The charting software gave me s3 and s4 levels of 14.465 and 13.500, respectively, essentially no different from our results. Great!

Now we have the camarilla pivot points as part of the static inputs we need to train our AI. Please stay tuned for further articles on processing the data to feed the algorithms. If you have not followed my channel, please do so now so that you won’t miss any such future articles. You can also find the complete details of the whole process in my book Day Trade with AI when it is released. To get notified when the book is available, join the waitlist.