JAFX: Results of the JAEA Automated Forex Trading System
The JAFX automated trading system and auto-trader robot consists of a combination of multiple signal-generating strategies designed to take advantage of market inefficiencies characterized as price action imbalances. To better understand what this means, one must approach financial markets as imperfect, and treat price action as a living organism that is not ruled by traditional supply and demand forces. Instead, we have tackled liquidity, volume, sentiment and speculative positioning as the real drivers behind price development. The JAFX forex broker team has implemented sound statistical and inductive methodologies within an epistemological framework of seasonality theorisation. Despite how complex this may sound, the process includes very simple steps such as:
- Reconceptualization of Forex markets.
- Modeling of the main drivers behind price action.
- Design and modeling of potential trade sources (strategies).
- Design and modeling of compatibility rules for the different sources.
- Acquisition and correction of price ticks for the last macro seasonality window of volatility, i.e., from 2009 to 2015.
- Development of our cutting-edge in-house statistical and modeling software called “Dexter”.
- Incorporation of artificial intelligence (AI) modes into “Dexter”.
- Optimization of sources through our models of price drivers already implemented in “Dexter”.
- Optimization of our intra-source and intra-asset rules designed to enhance the overall performance of the system.
- Modeling of advanced risk management features such as “risk per signal”, “profit securitization”, “profit multiplicator”, etc., that allow for exponential growth.
The ultimate goal of any trading system is to maximize profit when optimal market conditions occur. Although profit maximization may seem the most straightforward of ideas, achieving it efficiently is anything but simple.
Traditional Economic Theories of Trading
Allow me to answer the two simplest but perhaps most important questions every trader must at some point ask themselves:
1) What are financial markets?
2) What are the real intrinsic forces behind price action?
After asking these two questions of as many colleagues as I could throughout the years, I have come to the conclusion that, despite the indisputable conviction with which they have answered, none of them truly grasps the complexity of the matter; and, what is worse, none has seriously noticed that proper thinking is the only way to actually attempt some degree of prediction.
Trading financial markets carries economic consequences and requires a decision-making process oriented towards risk minimization and profit maximization (“efficiency enhancement”). Economically speaking, “risk” does not relate to the potential loss to be inflicted upon the decision-maker but to the level of uncertainty surrounding the outcome of any given decision, whether positive or negative. Similarly, “profit maximization” can be described through the derivative of the profit curve; in other words, it is reached at the level where increasing risk no longer corresponds to an increase in profit, i.e., at equilibrium.
An example can better illustrate the above. Imagine that you are driving to the hospital at midnight with an injured relative lying on the backseat of your car when a red light pops up. Legal and ethical considerations aside, your mind naturally starts a decision-making process as to whether to stop the car or to ignore the sign.
Remember, either decision carries (economic) consequences that are unknown to you at that moment but that you can nevertheless try to rationally anticipate.
If you stop the car, you can not only praise yourself for being a good citizen but will also avoid having to pay a traffic fine; however, by doing so you will be prolonging the pain and suffering of your loved one, which could also have disastrous consequences for their health. On the other hand, you could prioritize the health of your relative by ignoring the traffic light, but this would also increase the risk of collision. Despite the complexity of the dilemma with which you are now confronted, you have a few cognitive tools at your disposal provided by your own experience and knowledge. For instance, you can attempt to evaluate the seriousness of your relative’s condition by simply taking a look at their appearance; and by observing your surroundings you can establish whether or not there are surveillance cameras or other cars nearby. Regardless of the final decision you make, in the end your rationality, together with the information collected by your perspicacity, will allow you to behave efficiently.
Graph 1. Pictorial representation of an efficient decision-making process.
Trading financial markets is to some extent similar to the example of deciding whether or not to respect the traffic light; in both cases the decisions you make are the result of your rational attempt to balance risk against expected profit efficiently, i.e., to reach the level where additional risk does not increase profit, or increases it only marginally. Other factors, such as emotions or a risk-loving bias in the subject’s personality, may well lead to totally different outcomes, but these could never be efficient in the economic sense. However, it is undeniably true that trading is by far one of the most complex activities there is, a reality that makes it practically unreachable for most people. Its complexity derives from the difficulty of obtaining high-quality information and tools, which jeopardizes the decision-making process of the trader.
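The risk/profit equilibrium described above can be sketched numerically. The concave profit curve below is purely illustrative (the function, its coefficients and the step size are my assumptions, not part of the system): we simply scan risk levels until additional risk no longer adds profit.

```python
# Sketch: finding the "efficiency equilibrium" on a hypothetical concave
# profit curve, i.e. the risk level where additional risk stops adding profit.
# profit(r) = a*r - b*r**2 is an illustrative assumption only.

def profit(risk, a=10.0, b=2.0):
    """Hypothetical profit as a concave function of risk taken."""
    return a * risk - b * risk ** 2

def equilibrium_risk(a=10.0, b=2.0, step=0.001):
    """Scan risk levels and stop where marginal profit turns non-positive."""
    r = 0.0
    while profit(r + step, a, b) > profit(r, a, b):
        r += step
    return r

r_star = equilibrium_risk()
print(round(r_star, 2))  # close to the analytic optimum a/(2b) = 2.5
```

Beyond this point, taking more risk only reduces profit, which is exactly the equilibrium the text describes.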
Inherent Deficiencies of Traditional Information-Collection Methods
I would like to explain what type of information is made available to most traders and how we designed our unique method of market prediction. Those who start the process of becoming “expert traders” are very soon lectured on what is sold as essential and indisputable, namely that market analysis consists of a two-tier technical and fundamental approach.
“Technical analysis” is typically performed through a vast number of tools that include but are not limited to the so-called “technical indicators” and the drawing of lines, waves, channels, circles, inflection levels (support and resistance), etc., all of which are supposed to “read” historical price development in an attempt to “predict” its future direction, intensity and real scope.
“Fundamental analysis” is usually described as the process of understanding and applying economic theory “likely” to be condensed as “economic data”; i.e., by “reading” the economic conjuncture of a given country, fundamental analysts claim that they can portray the level of appreciation market participants give its currency.
Although I do not deny the relative importance of the “traditional” analytical approach, throughout the years I have become more aware of its inaccuracies as my understanding of the “hidden forces” grew more robust. Put differently, the technique of plugging in a bunch of “technical indicators”, drawing a few lines on the charts, or grasping a few macro-economic concepts such as growth, employment, inflation or the monetary policy carried out by central banks lacks sufficient predictive power unless other sources of information are included; otherwise any third-year student of economics could become a millionaire.
Moreover, all retail traders have regularly been confronted with the shortcomings of these traditional tools.
We need a new perspective to better understand the complexity of Forex and to explain why most (retail) traders fail. First of all, trading implies a decision-making process whose efficiency depends on the degree to which the collected information is both accurate and sufficient. Second of all, whether technical or fundamental, “traditional” market analysis is nothing but a method to gather information. The problem is that information related to liquidity, volume, sentiment and speculative positioning, among others, is not only vital but constantly dismissed by such methodologies. I admit that objectifying this information is rather complicated (it has taken my team and me over a year), which is why most traders make the common mistake of blindly trusting analyses, commentaries, forecasts, etc., performed by ego-swollen fellows who love to cynically indulge themselves with titles such as “senior market analyst”, “expert trader”, etc. Although some analysts do account for all market drivers, one cannot forget where their payrolls come from, namely, market-maker brokers and bank and hedge fund cartels, to mention just a few. Unfortunately, the entire industry seems to sponsor a permanent bombardment of insufficient and contradictory information with the sole purpose of printing fresh batches of liquidity into the markets; liquidity that is afterwards captured by just a few traders. To sum up, there is nothing “paranormal” about trading, and I also forthrightly reject conspiracy-style theories. Instead, we have a problem of biased, scarce and conflicting information being provided to the masses through concerted schemes directly correlated to its complexity.
I have prepared three event-risk scenarios to exemplify how traditional analytical methodologies prove insufficient to predict future price action. While making the selection I realized how difficult it was to find cases where successful trading would have been possible through ordinary means. The three examples I am about to show refer to “high-impact” pieces of economic data (i.e., data with the true potential of generating high levels of volatility) that fall within most traders’ predilection, namely the US Non-Farm Payrolls (NFP) and the monetary policy meetings held by the Reserve Bank of Australia (RBA) and the European Central Bank (ECB). I will first highlight the inconsistencies of traditional technical and fundamental methodologies before introducing the importance of using other sources of information.
Table 1. Analytical comparison of the NFPs’ impact on the EUR-USD over a period of 13 months.
The so-called NFPs consist of different types of data, the most relevant of which are the NFP itself (the number of new jobs created in the last period), the Unemployment Rate (the percentage of people who have not found a job), Private NFP (the number of jobs created by non-governmental agencies) and the Participation Rate (how many people were actively seeking employment). Each data series includes figures for the previous, forecasted and actual releases.
Understanding the data is crucial. The first problem traders must deal with is deciding which type of data is the most important. For example, an actual NFP having beaten expectations can be regarded as a clear sign of improvement in the labor market if and only if unemployment decreases; however, unemployment can actually increase despite better-than-expected job creation if participation grows. The second most common problem is how to measure the robustness of the data, i.e., its clarity as a trend generator. For simplicity, I have established a proxy I call “deviation”, which is nothing but the percentage change between the actual release and the average of the previous (updated) and forecasted figures. I also include three types of “intensity” values, measured as the pip-based fluctuation in price after the publication of the data: “contrarian” (against the expected direction), “matching” (in line with the expected direction) and “daily” (the range built on the trading day the event takes place). In addition, I have included qualitative information I call “Price Bias Afterwards”, which corresponds to the direction in which price is expected to develop (“Expected”), the direction that occurs after near-term volatility settles down, i.e., once all market participants have digested the given data (“Lasting”), and the “trending implication” of the release, i.e., whether it had the power to generate or continue a trend in price (“Trending Reaction”).
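The “deviation” proxy just defined is simple enough to express directly. The sketch below follows the definition in the text (percentage change between the actual release and the average of the previous and forecasted figures); the sample numbers are illustrative, not real releases.

```python
def deviation(actual, previous, forecast):
    """"Deviation" proxy as defined in the text: percentage change between
    the actual release and the average of the previous (updated) and
    forecasted figures."""
    baseline = (previous + forecast) / 2.0
    if baseline == 0:
        raise ValueError("baseline of zero: deviation undefined")
    return (actual - baseline) / baseline * 100.0

# Illustrative example: an NFP print of 250k jobs against a previous of
# 190k and a forecast of 210k gives a baseline of 200k, hence +25%.
print(deviation(250, 190, 210))  # 25.0
```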
Table 1 clearly shows how what should make sense actually does not. For instance, one could expect that the releases with the highest deviations would in turn be those with the highest intensities and the strongest trending power. However, only the release with the highest deviation figure (occurring on Nov. 11th 2015) had the highest daily and matching intensities, although for the next three highest the matching intensity is higher than the contrarian one. I am referring to those values above |50| which are colored in green. Only 38% of the time did NFPs have significant trending power, and only the release with the highest deviation figure had true trending implications (though only bearish continuation). Moreover, the NFP with the highest daily intensity (June 5th 2015) had a deviation barely above the average while having a contrarian trending impact on price. Even more concerning is that 75% of the NFPs in 2015 set up what the literature describes as traps (either bullish or bearish), where neither technical nor fundamental considerations really mattered. In conclusion, the event risk that is considered the “holy grail” of the trading world offered very few opportunities for profit-making.
Graph 2. NFP on February 6th 2015 on a 4-hour time-frame EUR-USD chart.
Graph 2 shows how the NFP with the lowest deviation in the last 13 months nevertheless managed to generate the third-highest volatility levels, with a daily intensity of 174.6 pips. However, its lack of momentum led to a contrarian bullish constructive pattern, a typical bearish trap for mid-to-long-term traders. A better understanding of “hidden” forces such as seasonality, liquidity, positioning, sentiment and prevailing themes could have helped inexperienced traders avoid expecting a follow-through that would never happen.
Graph 3. NFP on February 6th 2015 on a weekly time-frame EUR-USD chart.
Allow me to briefly explain these drivers. As Graph 3 reveals, seasonality theory suggests, on the one hand, that the huge price plummet from the 1.40 area in May 2014 down to 1.10 in Feb 2015 had already consolidated two major volatility windows; and, due to the lack of clarity of the actual data print (a poor deviation reading), it was more likely for price to follow a consolidation-constructive pattern than to resume a bearish continuation trend. On the other hand, historical analysis advises that the second and fourth quarters of the year are the ones with the most market liquidity, and, considering the extremely bearish sentiment (positioning) of retail traders, February did not offer the best conditions to anticipate a follow-through. Similarly, the 1.10 area was a very congested confluence area matching long-term multi-year reversal and extension Fibonacci levels (see the cluster on the graph). Put differently, all the above indicated a lack of momentum as well as the imperative need for a fresh batch of liquidity.
Graph 4. NFP on May 8th 2015 on a 4-hour time-frame EUR-USD chart.
April offers another great example of a trading trap prompted by the distortion of logic and the insufficiency of traditional market analysis. Despite the data being clearly bearish, with a deviation value of 44.33%, the contrarian intensity was more than three times higher than the matching one; additionally, the lasting and trending reaction was clearly bullish. As a matter of fact, price needed less than three trading days to resume the upward trend, which, as Graph 5 shows, was also circumscribed within a mid-term constructive pattern.
Graph 5. NFP on May 8th 2015 on a weekly time-frame EUR-USD chart.
Once again, a better comprehension of the “hidden” market forces proved useful to avoid the bearish trap. On the one hand, the event under scrutiny occurred in the middle of a seasonality-window transition from the second to the third quarter of the year, which means that volatility was already severely subdued. Seasonality also suggested the need for pattern completion up to the top of the previous support, now resistance, area at around 1.10. The reader should also remember relative monetary policy as the major overall theme, and the balanced retail positioning of the time. Back then, whereas the ECB had already deployed most of its easing artillery and market participants could not anticipate an even more dovish stance, the Fed’s expected normalization move had already been fully priced in. Needless to say, it took imprisoned traders 10 days to escape the trap victorious, thanks to the mid-term constructive (but bearishly biased) pattern.
Graph 6. ECB on December 3rd 2015 on a 4-hour time-frame EUR-USD chart.
The second example refers to the worst bearish trap of 2015. On December 3rd 2015, Mr. Draghi, the President of the European Central Bank, offered, with his unique and well-known unclear and perhaps non-transparent rhetoric, another clear example of what a central bank should not do, in clear violation of its mandate. Everything started during the October (22nd) meeting, when the ECB made it clear that more efforts were likely to be made as a means to contain risk and credit imbalances across the Euro Area as well as to fight deflationary pressures. Since then, markets took advantage of every major event risk to price in a relative monetary policy theme enhanced by Mr. Draghi’s words. The problem is that, despite the ECB having lowered the Deposit Facility Rate from -0.20% to -0.30% as forecasted, the pair skyrocketed an astonishing 460 pips, which became the deepest bullish correction (rally) since 2009. What captured my attention was the very prolonged silence from market analysts and dedicated websites, which conveniently explained the move as “normal” due to the ECB not having “sufficiently” met expectations. I dare say that the ECB could have actually mortgaged the EU’s finances for the next ten generations and price would have developed in the same way (though perhaps less violently).
Graph 7. ECB on December 3rd 2015 on a weekly time-frame EUR-USD chart.
So what were the hidden forces behind such a move? First of all, the resumption of a trend that implies the breakout of a multi-year low requires more than convincing data; it also needs the right seasonality, positioning and liquidity conditions. Historical seasonality analysis suggests not only that December tends to be less benevolent to the USD but also that volatility drains out along with liquidity and market participation. This is due to the accounting consolidations typically performed by big players around this time. In addition, a bearish trap was a good way to induce fresh batches of liquidity and a flip-over repositioning of retail traders (also known as non-commercials).
To the best of my knowledge, going beyond the traditional analytical tools could have helped traders easily avoid such a trap. Moreover, I must proudly say that my system successfully traded this event with remarkable precision (see Graph 8) while many traders incurred heavy losses.
Graph 8. How my “Draghi-proof” system successfully traded a bearish trap with remarkable precision on December 3rd 2015.
The third example comes courtesy of another member of the worldwide neoliberal banking cartel, the Reserve Bank of Australia (RBA). Throughout the year, the RBA delivered unclear and sometimes contradictory statements that were used by the few with inside information to catch non-commercial traders off guard. Once again, traditional analytical tools alone proved insufficient to successfully trade Australia’s monetary policy event risks.
Graph 9. RBA Monetary Policy Meetings from January to April 2015 on a 4-hour time-frame AUD-USD chart.
Graph 9 shows how January’s unexpectedly good employment data was soon forgotten when Mr. Stevens, the Governor of the RBA, once again insisted on the Australian Dollar’s overvaluation. That is at least what is commonly described as the fundamental trigger of the next bearish leg from 0.83 to 0.76. The strange part is that markets did not forecast the rate cut at the February (3rd) Meeting, when in reality that dovish tone was the main driver (the only driver, actually) of that 600-pip move. In any case, the “unexpected” rate cut drained liquidity to exhaustion right at the 100% Fibonacci extension level. As a result, 24 hours afterwards price had completely retraced back to pre-event levels, entering a multi-week ranging, although bearishly biased, pattern.
Graph 10. RBA Monetary Policy Meetings from April to July 2015 on a 4-hour time-frame AUD-USD chart.
A succession of bearish and bullish traps characterized the subsequent RBA Meetings in the second and third quarters of 2015. For example, on April 7th, markets used Mr. Stevens’ dovish tone to form a triple-bottom reversal pattern that was followed by a retest of the 0.81 confluence area, although markets had forecasted a further cut in interest rates, which materialized at the May Meeting, i.e., yet another bearish but short-lived trap within a larger constructive pattern. By the middle of May (a seasonality window coinciding with the second quarter of the year, which typically favors major-theme-based trend developments), comments from Federal Reserve officials served to articulate a USD-bullish speculative repositioning (concerted across the board). At the June (2nd) Meeting, the RBA’s neutral approach was regarded by market analysts as the beginning of a long-term upward constructive pattern; nevertheless, it ended a few days later after having been seconded by a 260-pip move. New batches of dovish commentary from RBA officials triggered the resumption of the bearish trend, though with less intensity this time.
Graph 11. RBA Monetary Policy Meetings from August to December 2015 on a 4-hour time-frame AUD-USD chart.
As Graph 11 shows, the development of the new multi-year bear leg down to the 0.70 area was not prompted by any event risk but by inertia (bearishly biased sentiment). In fact, the RBA September (1st) Meeting was used as a pretext to induce fresh flows of liquidity by flipping over the already squeezed bear positioning of non-commercial traders, which in turn coincided with an exhaustion in price. The October (6th) Meeting determined no change in policy-making, although price continued to develop a short-lived bullish constructive pattern up to the previous support, now resistance, extension area at around 0.73. Similarly, the November (3rd) Meeting, followed by the October NFP, resulted in well-elaborated bullish and bearish traps in succession.
Implemented Methodology to Overcome Inefficiencies
As “imperfect” as financial markets are, a scientific approach to price action requires the implementation of a unique methodology that goes beyond the traditional descriptive means of the supply and demand laws while accounting for the real underlying forces. If there is one thing my vast experience as a professional trader has taught me, it is that liquidity, seasonality, speculative positioning and trading sentiment (among others) are the essential market forces. I have spent the last two years designing a methodology to objectivize these drivers into a workable strategy by incorporating advanced statistical tools, artificial intelligence modes for robust pattern recognition and setting optimization, and, of course, the acknowledgment of trading costs.
Step I. Implementation of Gann’s Seasonality Theory.
Describing my system as probabilistic is itself explanatory of the implemented methodology. The first step was to procure the correct infrastructure to achieve statistically robust results. Statistical robustness is only achievable with accurate, well-documented and vast data samples.
Graph 12. Examples of price ticks for the EUR-USD
Seasonality theory was applied to establish the correct time-frame within which to perform robust statistical analysis. I have acquired price ticks for 14 assets, including all majors and the most liquid cross-pairs, from 2009 to 2015, which corresponds to a “macro-window” that is expected to include, at least from a theoretical viewpoint, all possible market conditions, from extremely flat to highly volatile. As Graph 12 shows, the ticks include relevant cost-related information such as spread and slippage as well as the bid and ask prices.
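A minimal sketch of what such a tick record might look like is given below. The CSV layout (`timestamp,bid,ask`) and field names are my assumptions for illustration, not the actual file format behind Graph 12; the point is simply that each tick carries the bid/ask pair from which cost information such as spread is derived.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Tick:
    """Minimal tick record mirroring the fields mentioned in the text
    (bid, ask, and spread derived from them). Illustrative only."""
    time: datetime
    bid: float
    ask: float

    @property
    def spread(self) -> float:
        """Spread in price units (ask minus bid)."""
        return self.ask - self.bid

def parse_tick(line: str) -> Tick:
    """Parse one 'timestamp,bid,ask' CSV line into a Tick."""
    ts, bid, ask = line.strip().split(",")
    return Tick(datetime.fromisoformat(ts), float(bid), float(ask))

tick = parse_tick("2015-02-06T13:30:00,1.14702,1.14714")
print(round(tick.spread * 10000, 1))  # spread expressed in pips: 1.2
```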
Step II. Back-testing and Analytical Software nicknamed “Dexter”.
The second step was to develop a backtesting and analytical software we have nicknamed “Dexter”, which in Latin means “right hand”. Designed to accurately process billions of price ticks, Dexter is fully written in high-performance C++, including meta-programming and CUDA C; it provides a zero-cost abstraction layer over the core, so that strategy programming no longer involves dealing with tick files and candle generation. The program runs on a 24-core server, which has enabled us to perform a vast number of runs simultaneously. We have also data-mined each run through a Python/R program to process the results and further optimize our strategy settings in a professional fashion. Graphs 13 and 14 show Dexter’s unique features.
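To give a flavor of the candle-generation layer that Dexter abstracts away, the sketch below aggregates raw (timestamp, price) ticks into fixed-interval OHLC candles. This is a simplified Python illustration of the general idea, not Dexter’s C++ implementation; the tick values are made up.

```python
# Sketch: aggregating ticks into fixed-interval OHLC candles, the kind of
# plumbing a backtesting engine hides from strategy code.
from collections import OrderedDict

def ticks_to_candles(ticks, interval_s=4 * 3600):
    """ticks: iterable of (unix_time, price) pairs.
    Returns an ordered {bucket_start: OHLC dict} mapping."""
    candles = OrderedDict()
    for t, price in ticks:
        bucket = t - (t % interval_s)  # start of the candle's time bucket
        if bucket not in candles:
            candles[bucket] = {"open": price, "high": price,
                               "low": price, "close": price}
        else:
            c = candles[bucket]
            c["high"] = max(c["high"], price)
            c["low"] = min(c["low"], price)
            c["close"] = price
    return candles

candles = ticks_to_candles([(0, 1.10), (60, 1.12), (120, 1.09), (14400, 1.11)])
print(len(candles))  # 2 candles: one per 4-hour bucket
```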
Graph 13. Dexter’s Core Engine. Programming an Automatic GUI generation from given settings.
Graph 14. Programming Process of “Dexter”.
Step III. Modeling of Real Market Drivers.
The third step consisted in modeling what I describe as the real forces of price action, i.e., liquidity, volatility, sentiment, and speculative and non-speculative positioning. We confirmed that understanding such drivers prevents trading traps such as those in the three examples above. I must admit that this was the most complex stage of the research and development process. With the assistance of a world-class professor of economics, it took me over two thousand formulas to cover all drivers. Graph 15 shows a tiny fraction of how I modeled volatility.
Graph 15. Mathematical Algorithms to Model Volatility.
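The actual models behind Graph 15 are not disclosed, but one simple, standard way to proxy volatility is the annualised standard deviation of log returns over a window. The sketch below shows that textbook measure under illustrative prices; it stands in for, and is much simpler than, the formulas the text alludes to.

```python
# Sketch: realized volatility as the annualised standard deviation of log
# returns. A standard textbook proxy, not the system's actual model.
import math

def realized_volatility(prices, periods_per_year=252):
    """Annualised realized volatility from a list of closing prices."""
    rets = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)  # sample variance
    return math.sqrt(var) * math.sqrt(periods_per_year)

vol = realized_volatility([1.10, 1.11, 1.09, 1.12, 1.10])
print(vol > 0)  # a choppy series yields strictly positive volatility
```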
Step IV. Sources of Signals.
The fourth step was the development of what I call “sources of trades”, i.e., sound trading ideas on how to objectivize market inefficiencies (i.e., temporary price imbalances resulting from external factors) as price clusters (areas whose means are inflection price levels) over short-to-mid-term periods. Graph 16 shows an example of one of the six sources of trading I have so far designed, on a 4-hour time-frame. It is worth mentioning that different sources can generate contradictory trading opportunities simultaneously and that such opportunities (i.e., signals) can refer to call or put scenarios (i.e., signals to buy or sell a given asset); similarly, signals are generated on different time-frames and have varying “intensities” (likelihoods of success).
Graph 16. An example of cluster recognition on a 4-hour time-frame for the EUR-USD.
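The “price cluster” idea above can be sketched very simply: group nearby price levels and use each group’s mean as a candidate inflection level. The tolerance threshold and the input levels below are illustrative assumptions, not the system’s actual cluster-recognition parameters.

```python
# Sketch: grouping nearby price levels into clusters whose means serve as
# candidate inflection levels. Tolerance and inputs are illustrative.
def cluster_levels(levels, tolerance=0.0015):
    """Group sorted price levels lying within `tolerance` of the previous
    level; return the mean of each group as a candidate inflection level."""
    clusters, group = [], []
    for lvl in sorted(levels):
        if group and lvl - group[-1] > tolerance:
            clusters.append(sum(group) / len(group))
            group = []
        group.append(lvl)
    if group:
        clusters.append(sum(group) / len(group))
    return clusters

levels = [1.0998, 1.1003, 1.1001, 1.1352, 1.1349]
print(cluster_levels(levels))  # two clusters, near 1.1001 and 1.13505
```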
Step V. Artificial Intelligence and Conciliatory Rules
The fifth step consisted in the implementation of artificial intelligence (AI) modes for pattern recognition and setting optimisation. Put simply, AI helped us reconcile conflicting sources of signals within multi-period layers by establishing a pattern-wise probabilistic grading system. It also proved essential for applying and optimizing my intra-asset and risk-management rules. Graph 17 below describes machine learning, perhaps the most accessible AI mode.
My “intra-asset” and “intra-source” rules enable the system to process signals generated simultaneously by two or more sources and/or in different currency pairs. Put differently, they allocate the portion of the initially taken, and then deleveraged-in-profit, risk from an existing signal into emerging opportunities. For example, if two or more signals happen to be detected at the same time, the system places the one with the highest value (i.e., probabilistic rating) while discarding the other(s); new signals on the same and/or a different time-frame, and/or generated by another source, will only be traded after all or part of the risk of the placed position has been discharged. Thus, signals cannot be categorized as “trending” or “reversal” ones; instead, the system permanently scans the markets for trading opportunities and, through a grading process, combines them in a compatible fashion.
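The conciliation rule just described reduces, at its simplest, to keeping only the highest-rated of the simultaneous signals. The sketch below captures that selection step; the signal fields and values are illustrative, not the system’s real signal schema.

```python
# Sketch: when several signals fire at once, keep only the one with the
# highest probabilistic rating and discard the rest.
def select_signal(signals):
    """signals: list of dicts with 'pair', 'direction' and 'rating' keys.
    Returns the highest-rated signal, or None if there are no signals."""
    if not signals:
        return None
    return max(signals, key=lambda s: s["rating"])

best = select_signal([
    {"pair": "EUR-USD", "direction": "sell", "rating": 0.62},
    {"pair": "AUD-USD", "direction": "buy", "rating": 0.71},
])
print(best["pair"])  # AUD-USD
```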
Graph 17. A pictorial description of “Machine Learning”, one of the several artificial intelligence modes implemented.
It is time to combine all these ideas into something tangible. To that end I will present the different optimization stages, showing for each stage the cumulative results on the EUR-USD over 7 years.
Graph 18. Results of a typical “white strategy” on the EUR-USD over 7 years.
The results introduced in Graph 18 correspond to those a good trader would have reached over 7 years after applying a typical “white strategy”, i.e., a trading system that lacks any statistical robustness. Needless to say, neither technology nor advanced trading knowledge has been implemented, except for a couple of technical indicators and a certain logic of how financial markets develop. Despite having reached a reasonably good profit over time, this strategy carries severe risks due to its several sharp drawdowns.
Graph 19. Results after applying liquidity proxies on the EUR-USD over 7 years.
After applying some proxies of liquidity, results improve, as shown in Graph 19; not only are drawdowns less sharp (i.e., a lower risk of burnout) but the periods of stagnation (i.e., time intervals where the profit curve does not reach “higher highs”) shorten, while “sacrificing” just a few trades in the process. Nevertheless, the implied risks are still high, suggesting more work was required.
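The two risk metrics discussed here, drawdown and stagnation, can be computed directly from an equity curve. The sketch below follows the definitions in the text (a drawdown is a fall from a peak; stagnation is a stretch without a new high); the sample curve is illustrative.

```python
# Sketch: maximum drawdown (deepest fall from a running peak) and longest
# stagnation (most consecutive points without a new equity high).
def max_drawdown(equity):
    """Deepest peak-to-trough fall, in account-currency units."""
    peak, worst = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        worst = max(worst, peak - v)
    return worst

def longest_stagnation(equity):
    """Longest run of points that fail to set a new 'higher high'."""
    peak, run, longest = equity[0], 0, 0
    for v in equity[1:]:
        if v > peak:
            peak, run = v, 0
        else:
            run += 1
            longest = max(longest, run)
    return longest

curve = [100, 120, 110, 105, 125, 122, 121, 130]
print(max_drawdown(curve), longest_stagnation(curve))  # 15 2
```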
Graph 20. Results after accounting for trading costs as optimization filters on the EUR-USD over 7 years.
Dynamic trading costs such as spread and slippage are very sensitive to sharp near-term fluctuations in price, typically associated with temporary dysfunctionalities caused by economic and/or political event risks. Perhaps the most troublesome consequence traders must face in such scenarios is the increased possibility that such fluctuations, sometimes remaining unnoticed (usually described as “noise”), activate their pending orders, thus invalidating what could otherwise have been sound setups. Whereas spread is the floating fee charged by brokers, slippage can be defined as the price difference between two consecutive ticks; both costs can be used as proxies to efficiently detect and avoid the described problem. Graph 20 shows how accounting for trading costs filters out less than 4% of the trades, which were responsible for 17.5% of unrealized profit (from $157,959 to $134,426); moreover, removing only 94 such situations (in 7 years) lowered the depth of the worst drawdown by about 50% and reduced the risk of burnout from high to moderate.
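The cost filter described here can be sketched as a simple screen on observed spread plus slippage. The 3-pip threshold and the trade records below are my illustrative assumptions, not the system’s calibrated values.

```python
# Sketch: discard setups whose combined spread + slippage (in pips) exceeds
# a threshold, using cost spikes as a proxy for event-driven "noise".
def cost_filter(trades, max_cost_pips=3.0):
    """trades: list of dicts with 'spread' and 'slippage' in pips.
    Keeps trades whose combined cost stays at or below the threshold."""
    return [t for t in trades
            if t["spread"] + t["slippage"] <= max_cost_pips]

trades = [
    {"id": 1, "spread": 1.2, "slippage": 0.3},
    {"id": 2, "spread": 4.5, "slippage": 2.1},  # event-driven spike: filtered
]
print([t["id"] for t in cost_filter(trades)])  # [1]
```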
Graph 21. Results after adding speculative positioning optimization filters on the EUR-USD over 7 years.
As I have repeatedly explained throughout this text, speculative positioning (SP) is one of the most relevant drivers of price action; it is also the most complex to model.
Graph 22. Optimization of low-volatility trades on the EUR-USD over 7 years.
One of the many purposes served by artificial intelligence was to recognize the optimal threshold pattern used to group trades into high- and low-volatility classes.
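A toy version of that threshold search is sketched below: try each candidate split of the trades’ volatility readings and keep the one minimising total within-group variance (an Otsu-style criterion). This is a deliberately simple stand-in for the AI-driven method the text describes; the sample volatilities are made up.

```python
# Sketch: choose a volatility threshold that best separates trades into
# low- and high-volatility groups by minimising within-group variance.
def _variance(xs):
    if len(xs) < 2:
        return 0.0
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def best_threshold(vols):
    """Try each split point of the sorted volatilities and return the
    midpoint threshold with the lowest total within-group variance."""
    vols = sorted(vols)
    best_t, best_score = vols[0], float("inf")
    for i in range(1, len(vols)):
        t = (vols[i - 1] + vols[i]) / 2
        low, high = vols[:i], vols[i:]
        score = len(low) * _variance(low) + len(high) * _variance(high)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

t = best_threshold([0.3, 0.4, 0.5, 1.8, 1.9, 2.1])
print(0.5 < t < 1.8)  # the threshold lands in the gap between the groups
```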
Graph 23. Optimization of high-volatility trades on the EUR-USD over 7 years.
Graph 23 shows how optimization is better enhanced in the high-volatility group. This group contains fewer trades than the low-volatility class due to the impact that the trading-cost proxies had on the sample, which is actually a very good indication that over-fitting was properly tackled.
Graph 24. Aggregated low and high volatility results separately optimized on the EUR-USD over 7 years.
The optimization process my team and I went through consisted of two main chapters. Whereas the first part included the design of filters abstracted as descriptive proxies of market forces and their implementation through statistical and AI methodologies, the second part referred to risk-management rules developed to enhance exponential growth. Put differently, the first part is about collecting accurate and reliable information to help regular traders reach an efficient outcome in a decision-making process. The second part, by contrast, refers to a process of massive data analysis only performable by an automated forex trading system. Graph 24 shows the final stage of the first part of the R&D journey.
Risk Management And User Interaction With Examples
Any degree of trading automation implies, by definition, limiting the freedom with which users are able to interact with the given system. Although some level of predetermination is unavoidable, features related to risk sentiment should be kept open to all users. All EA’s I am aware of
Therefore, those who use my system will obtain
How The JAEA Will Work
After collecting the information provided by the user, the EAR sends it to our matrix EA “Generator” (EAG). The EAG processes this information and sends tailored signals back to the EAR previously installed in the MT4. It is actually the EAR that places all trades. The entire communication process between the EA’s takes only a few milliseconds, which ensures complete reliability.
The advantages of this structure go beyond allowing users to print their own risk sentiment on the system; they also include the protection of my intellectual property rights, as the structure keeps strict control over who is paying for my signals, and the full externalization of liability onto the customer. My duty of care implies providing sufficient information about all the different variables users are permitted to customize; however, I am not allowed to bias anybody’s free will by recommending any given setup. Consequently, I cannot disclose the settings I use in my live accounts, where the public is able to gauge the true power of my system. The time will come when we as a company take the next step and achieve all the regulatory approvals required to manage funds from third parties. Until then, users must be aware of the possibility of losing all their investment money if they decide upon settings that carry an extreme exposure to risk.
I will now introduce the different variables that are subject to customization by users:
Users have the possibility of selecting the assets that best match their own preferences among a wide range including the most liquid major assets, i.e., the EUR-USD, GBP-USD, USD-JPY,
This feature relates to the actual risk each user is willing to undertake for all open trades combined in relation to their equity. Imagine you choose a 10% risk. Your first signal, if lost, will lower your equity by 10%, accounting for how much of your profit is secured (i.e., “profit securitization” and “risk multiplicator”) and your sentiment settings. If part of the signal is deleveraged in profit (for example) and your current risk is only 6% of your new equity, a new generated signal from another asset (for example, but it could also be
This feature allows users to preserve dynamism (i.e., by trading more generated signals) while preventing them from losing more than their risk willingness suggests. It is important to warn you that our models suggest a risk higher than 35% is only suitable for risk-loving users, as the possibility of a losing streak is relatively high and carries the potential of lowering the given equity to a level of no return. I started with a very low risk of only 5% and have progressively increased it to 10%, where it is now. I am also planning to gradually raise this benchmark to 15-20% once exponential growth kicks in. Another suitable option is to start with 10% and slowly increase it up to 25%, while keeping profit securitization at 40% at the start before gradually lowering it to 20% once exponential growth kicks in.
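The combined-risk cap described above can be sketched as a simple headroom calculation: open trades consume part of the allowed risk, and a new signal may only use whatever remains. The function name and figures are illustrative assumptions of mine, not the actual EAG logic:

```python
# Hypothetical sketch of the "risk per signal" headroom logic: total
# open risk is capped at the user's chosen percentage of equity, so a
# new signal may only use the remaining headroom. Numbers are examples.

def signal_risk_headroom(equity, risk_pct, open_risk_amounts):
    """Return the dollar risk available for the next incoming signal."""
    cap = equity * risk_pct / 100.0    # maximum combined risk in dollars
    used = sum(open_risk_amounts)      # risk already tied up in open trades
    return max(cap - used, 0.0)

# With $10,000 equity, a 10% cap and $600 (6%) already committed,
# a new signal may risk at most $400 (4%).
print(signal_risk_headroom(10_000, 10, [600]))  # 400.0
```

When the cap is fully consumed, the headroom is zero and a new signal would simply be skipped until open risk is released.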
Graph 25. An example of a high-risk setting.
Graph 25 provides an example of how a high-risk setting generates very sharp fluctuations in the profit curve, reaching over $2 billion in profit at the peak before plummeting in just a few trades. This is of course not the ideal scenario, as I personally prefer smaller but steadier yields.
Profit Securitization and Investment Multiplier.
Perhaps the most amazing characteristic of my “high-leagues” system is that it capitalizes winning streaks into what I call “exponential growth” of the equity while preventing losing streaks from wiping earnings and/or balance off. Whereas exponential growth is primarily empowered by the variable “risk” as it increases the leverage (
PS is the percentage of the booked profit that is protected (i.e., neglected) by the system when calculating the leverage of the next incoming signal in accordance
Now, the level of profit protection gradually increases from zero up to the elected PS benchmark which coincides
The user must balance the importance of protecting his profits against the feasibility of exponential growth. To put it into perspective, the more profit you secure, the more difficult it will be for your equity to grow exponentially when winning streaks occur, as the available leverage remains subdued. As I have repeatedly explained, users should have a mid-to-long term approach. My models suggest that winning streaks are common and it will only be a matter of patience until users benefit from the true strength of my system.
Graph 26. A Descriptive Representation of the relation between protection of profit and exponential growth/profit
However, readers should also be aware of the diminishing returns of the efficiency curve when accounting for the side effects of PS and IM. Put differently, a high level of protection at the start will delay exponential growth, but not in a linear progression; i.e., beyond some point the added protection will also enhance growth steadiness, as the impact of losing streaks is diminished (see Graph 26 above).
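Based on the PS description above, the trade-off can be sketched as follows: the PS fraction of booked profit is excluded from the equity base used to size the next signal, keeping leverage subdued. The formula and the function name `sizing_base` are assumptions I have inferred from the text, not the actual JAFX model:

```python
# A minimal sketch (assumed formula) of how "profit securitization" (PS)
# might subdue the leverage base: a PS fraction of booked profit is
# shielded and ignored when sizing the next incoming signal.

def sizing_base(initial_investment, booked_profit, ps_pct):
    """Equity base used to size the next signal, with ps_pct% of profit shielded."""
    exposed_profit = booked_profit * (1 - ps_pct / 100.0)
    return initial_investment + max(exposed_profit, 0.0)

# With $10,000 initial, $5,000 booked profit and PS at 30%,
# only $3,500 of profit remains exposed to compounding.
print(sizing_base(10_000, 5_000, 30))  # 13500.0
```

At PS = 100% the system would never compound profit at all, while at PS = 0% every dollar of profit is re-leveraged; the user's choice sits somewhere between those extremes.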
Since I favor neutrality in my decisions, I have chosen a PS of 30% and an IM of 100;
“Sentiment” refers to how dynamic a given user wishes the system to be. A trading system is more or less dynamic depending on the amount,
Graph 27. A Descriptive Representation of the inverse correlation between “Sentiment” and dynamism.
I have abstracted the above-explained inverse correlation as a scale from 1 to 5, 1 being the most “dynamic” and 5 the least; needless to say, this increases the level of customization even more, as users can in a simple fashion reflect their true “risk sentiment” preferences. It is worth mentioning that I have chosen the best strategy settings for each number on the scale, making sure that users will always achieve a great performance regardless of the level he or she selects. However, readers must also be aware that each asset has its own peculiarities and that consequently all variables subject to customization, especially “sentiment”, require a special calibration for each one of them.
Consistent with my moderate character I have chosen sentiment No. 3, which implies a well-balanced calibration of the stop loss and take profit so that price has enough room for consolidation without sacrificing risk control and dynamism.
Customization would not be feasible without a user-friendly platform for simulating the wide range of possible combinations of variables.
Beyond the desirability of personal settings’ customization, I am fully aware of the complexity of trading as well as of how certain concepts are simply unreachable for some users. Consequently, the EAR contains 4 different templates which once selected inhibit all the other variables.
I have categorized the templates as “conservative”, “neutral”, “liberal” and “risk-loving”. Please consider that the templates are not designed to classify users or to offer a bullet-proof replacement of anybody’s free will; they are only my personal and subjective way of understanding the wide and complex nature of “typical” traders. The following graphs introduce an example of each kind for the EUR-USD.
Graph 29. A pictorial representation of the “conservative” template’s performance for the EUR-USD.
Trading “conservatively” by definition implies low risk,
Graph 30. A pictorial representation of the “neutral” template’s performance for the EUR-USD.
As graph 30 shows, a “neutral” approach (my favorite) enlarges yields (approx. $9
Graph 31. A pictorial representation of the “liberal” template’s performance for the EUR-USD.
The transition from “playing safe” to a more gambling-oriented spirit is well represented by graph 31, which contains “liberal” settings. Higher risk and lower profit securitization
Graph 32. A pictorial representation of the “risk-loving” template’s performance for the EUR-USD.
Even though I do not recommend it, I admit that some users do have a risk-loving character. Graph 32 shows how profit reaches an astonishing $8 Billion level before dramatically plummeting in a matter of a few weeks; at the
The following graphs show the impact of changing just one of the customizable variables, i.e., risk, while keeping profit securitization, risk multiplicator and sentiment unchanged. I have chosen rather balanced settings, namely those belonging to the “liberal template”. For the sake of
Graph 33. Profit curve with “liberal template”,
As graph 33 shows, patience is highly rewarded by my system. By setting risk as low as 5%, users can avoid sharp fluctuations of profit,
Graph 34. Time-based analytical description of “liberal template” with risk at 5%, EUR-USD from 2009 to 2015.
However, the side effect of keeping risk so low is that profit stagnation periods last longer. As shown in Graph 34, the 5th year experienced tough market conditions that resulted in a sharp reduction of signals and a negative value of the expected aggregated averaged monthly return in relation to the previous month, i.e., profit stagnation. It must be said that even if such a situation happens at the start, a low risk reduces the risk of burnout to almost none.
Graph 35. Profit curve with “liberal template”,
The effect of doubling risk to 10% is to multiply profit by nearly 6 times, from $600k to $3.5 million over 7 years, as shown in graph 35. Moreover, a 10% risk can still be considered highly conservative while efficiently enhancing exponential growth and reducing the side effects of losing and neutral cycles.
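The disproportionate jump in profit when risk doubles follows from multiplicative compounding: over a run of winning signals, equity grows roughly like (1 + r)^n, so doubling r far more than doubles the outcome. A toy illustration with assumed numbers (40 winning signals, each paying one unit of risk), not the JAFX engine:

```python
# Toy compounding model (assumed numbers) showing why doubling the
# per-signal risk more than doubles total profit: growth over n wins
# scales like (1 + r)^n, which is non-linear in r.

def compounded_profit(initial, risk, wins):
    """Net profit after `wins` winning signals, each paying 1R."""
    return initial * (1 + risk) ** wins - initial

p5 = compounded_profit(10_000, 0.05, 40)   # 5% risk over 40 wins
p10 = compounded_profit(10_000, 0.10, 40)  # 10% risk over the same wins
print(round(p10 / p5, 1))  # 7.3  -- far more than 2x
```

The exact multiple depends on the win/loss sequence, which is why the document's back-tested figure (roughly 6x) differs from this idealized all-wins run; the non-linearity is the point.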
Graph 36. Time-based analytical description of “liberal template” with risk at 10%, EUR-USD from 2009 to 2015.
Another great consequence of having
Graph 37. Profit curve with “liberal template”,
With risk efficiency being reached at any level between 25 and 30%, a 15% risk starts showing two side effects, i.e., more pronounced profit fluctuations and the real possibility of lowering equity to levels from which recovery starts to become a problem. However, as graph 37 shows, the closer to the
Graph 38. Time-based analytical description of “liberal template” with risk at 15%, EUR-USD from 2009 to 2015.
Graph 38 clearly indicates that at a 15% risk the sharpness of profit fluctuations decreases but the depth of draw-downs increases, as the impact of both pros and cons widens. For example,
Graph 39. Profit curve with “liberal template”,
A 20% risk is undoubtedly the last redoubt of what a risk-neutral trader may regard as a safe haven. I certainly recommend this value if and only if PS is very high and RM rather low. But what is the true inherent risk of such a selection? Our models tell us that the worst losing streak in 7 years of collected data is 6 completely leveraged positions, which means a decrease of the initial investment of 73.78%. Since no system is able to predict the future, and our models by no means rule out worse losing streaks, users must consider the possibility of such a dramatic draw-down happening right at the start. As Graph 39 shows, the worst situation happened in the 5th year, where a draw-down of 44.7% in relation to the highest equity took place.
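The 73.78% figure quoted above can be verified with simple compounding arithmetic: six consecutive fully leveraged losses at 20% risk each leave (1 − 0.20)^6 of the equity standing:

```python
# Worked check of the losing-streak figure quoted above: six consecutive
# fully leveraged losses at 20% risk compound multiplicatively.
risk = 0.20
streak = 6
remaining = (1 - risk) ** streak   # fraction of equity left: 0.8^6
drawdown = 1 - remaining           # fraction of equity lost
print(f"{drawdown:.4%}")  # 73.7856%
```

This matches the 73.78% stated in the text, and makes clear why the same six-loss streak at lower risk settings is far less damaging (at 10% risk it would cost only about 47% of equity).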
Graph 40. Time-based analytical description of “liberal template” with risk at 20%, EUR-USD from 2009 to 2015.
As Graph 40 shows, efficiency is almost reached at 20% risk if one considers the low divergence between the highest equity and the consolidated profit.
Graph 41. Profit curve with “liberal template”,
What may seem like a nice picture to those who already anticipate recalibrating settings at one of the peaks in the profit curve shown by graph 41, in
Graph 42. Profit curve with “liberal template”,
Put differently, the greater the risk taken, the deeper and more intense the impact of cycles becomes, whether winning, losing or neutral. As explained above, the crucial point at which such an impact begins to soften is where exponential growth gears up; a good example of this is depicted in graph 42, where after a great start profit plummets by nearly 90% from the highest peak.
Graph 43. Time-based analytical description of “liberal template” with risk at 25%, EUR-USD from 2009 to 2015.
“expected aggregated average monthly return in relation to the previous month” concept. The side effect is of course that the risk of burnout becomes “moderate” for the first time; once more a confirmation that efficiency has been reached.
Graph 44. Profit curve with “liberal template”,
Risk at 30% is analyzed because it still falls into the category described as “liberal”, more due to the risk-reward ratio than to the inherent risks of facing a losing or neutral cycle right at the start. Personally, I consider this level highly attractive at a later stage of the process, when profit has already reached many times the initial investment. Users must be aware that selecting such a high risk at the beginning is only suitable where PS is very high (i.e., anything above 60%) and RM is very low (i.e., anything below 20).
Graph 45. Time-based analytical description of “liberal template” with risk at 30%, EUR-USD from 2009 to 2015.
One of the most commonly heard
Graph 46. Profit curve with “liberal template”,
Graph 47. Profit curve with “liberal template”,
The only reason why I decided to include graphs No. 46 and 47 is to show how profit performs at
Glossary of Terms
Initial Investment. The stake initially deposited by any given user.
Risk. One of the risk-management variables subject to customization by the user, “risk” refers to the percentage-wise amount of the current equity that could potentially be lost by trading each incoming signal.
Profit Securitization. One of the risk-management variables subject to customization by the user, “profit securitization” (PS) refers to
Risk Multiplicator. One of the risk-management variables subject to customization by the user, “risk multiplicator” (also risk multiplier or RM) refers to
Risk Sentiment. One of the risk-management variables subject to customization by the user, “risk sentiment” refers to the abstracted layer that comprises the level of dynamism a given user prints into the system, on a scale from 1 to 5, 1 being the most dynamic and 5 the least. In general terms, a system is more or less dynamic depending on the actual combination of
Daily Average of Trades. It is the total number of trades divided by the actual number of trading days in any given period. The number of signals does not necessarily equal the number of trades, as 1 signal could comprise more than 1 trade.
Consolidated Profit. It is the USD value by which the equity grows over any given period; i.e., the net final profit.
Monthly Averaged Profit. It is the USD value that results from dividing the net realized profit over any given period by the number of months it comprises.
Expected Aggregated Averaged Monthly Return in Relation to the Initial Investment. It is the percentage-wise value of the “monthly expected profit” calculated in relation to the initial investment.
Best Monthly Profit Increase In relation to the Initial Investment. It is the percentage-wise value of the best performing month in any given period in relation to the initial investment.
Best Monthly Profit Increase In relation to the Previous Month. It is the percentage-wise value of the best performing month in any given period in relation to the previous month.
Total No. of Lots Used. It is the total number of lots required to achieve the net consolidated profit in any given period.
Consolidated Averaged Profitability per Lot Used (Lot Cost $10). Assuming that the costs of trading 1 lot
Total No. of Pips Earned. It is the net amount of all trading pips booked over any given period.
Risk-Reward Ratio. It is the indexed value of two variables, i.e., the taken risk and the actual realized profit.
Expected Time to Double Investment (In Months). It is
Expected Lots to Double Investment. It is
Expected Months for Exponential Growth (Risk Multiplicator). It is the number of months required to achieve exponential growth, considering a conservative level of 100% profit in relation to the initial investment.
Worst Consecutive Period (No. of Sequential Losses). It is a way to describe losing cycles in terms of the total number of consecutive losing signals over any given period.
Risk of Burnout (Qualitative). It is a qualitative assessment of the intensity of losing cycles, the moment of occurrence (i.e., whether they compromise generated profits and/or the initial investment) and the speed in which drawdowns can recover (i.e., profit stagnation periods) over any given period.
Worst Drawdown in Relation to the Highest Equity (in %). It is the percentage-based value of the worst drawdown in relation to the highest equity ever reached over any given period.
Worst Drawdown in Relation to the Realized Profit (in USD). It is the USD value of the worst drawdown in relation to the current equity before such drawdown occurred over any given period.
Profit Fluctuation Sharpness (No. of Drawdowns > than “Risk”). It is the number of equity drawdowns whose intensity is higher than the chosen risk percentage over any given period.
Highest Equity (In USD). It is the USD value of the best equity ever reached over any given period.
Lowest Equity (In USD). It is the USD value of the worst equity ever reached over any given period.
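For readers who want to reproduce the drawdown-related glossary metrics on their own equity data, here is one possible (unofficial) implementation. The definitions above leave some details open, so the episode-counting logic is my own assumption:

```python
# Illustrative (assumed, not official) implementations of two glossary
# metrics, computed from an equity curve sampled after each closed signal.

def worst_drawdown_pct(equity_curve):
    """Worst drawdown in relation to the highest equity, as a percentage."""
    peak = equity_curve[0]
    worst = 0.0
    for e in equity_curve:
        peak = max(peak, e)
        worst = max(worst, (peak - e) / peak)
    return worst * 100

def fluctuation_sharpness(equity_curve, risk_pct):
    """Count drawdown episodes deeper than the chosen risk percentage."""
    count = 0
    peak = equity_curve[0]
    depth = 0.0
    for e in equity_curve[1:]:
        if e >= peak:                  # recovered: close the episode
            if depth > risk_pct:
                count += 1
            peak, depth = e, 0.0
        else:                          # still under water: track depth
            depth = max(depth, (peak - e) / peak * 100)
    if depth > risk_pct:               # an episode still open at the end
        count += 1
    return count

curve = [10_000, 12_000, 9_000, 13_000, 11_000]
print(worst_drawdown_pct(curve))       # 25.0
print(fluctuation_sharpness(curve, 10))  # 2
```

In the example, the dip from 12,000 to 9,000 is a 25% drawdown, and both dips exceed a 10% risk setting, so the sharpness count is 2.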
JAFX EA Live Results
After such a long journey, I am proud to reveal real evidence of how my system performs on a live trading account. With absolute confidence I have plugged my EA into a real-money account I opened with the true ECN broker USGFX, mainly because of their low spreads and direct access to liquidity providers. My initial investment of 10,000 USD is by no means indicative of any required benchmark; on the contrary, any investment size is possible. I plugged in the most liquid currency pairs on October 22nd, 2015. Due to my holidays and low liquidity during the Christmas recess, the system was disconnected from December 14th, 2015 until January 17th, 2016. I must add that the system has responded as well as our models anticipated.
Graph 48. Overall results of the JAFX real trading account.
Graph 49. Profit results of the JAFX real trading account
As graph 49 shows, due to liquid market conditions, the equity of my account never went below my initial investment. By looking at the profit curve, it is also possible to appreciate the different trading styles that I have incorporated in my system.
Graph 50. Statistical analysis of JAFX real trading account’s performance.
Advanced statistics offer several interesting features to consider. For example, my system is very selective with the trades it places (only 20 in 25 trading days) but is nonetheless able to generate a good deal of volume (62 lots). Also notable are the total pips earned (440 in total), the average win of 22 pips per trade, and of course the very high standard deviation value, which helps to prevent margin calls from triggering (i.e., which proves the suitability of my system for all types of users).
As mentioned above, keeping risk tightly under control is perhaps the most attractive feature of my system; as shown by graph 51, the risk of burnout at all levels is lower than 0.01% (i.e., almost nonexistent).
Graph 52. MAE/MFE risk-reward analysis of JAFX real trading account
The MAE/MFE value is a good gauge of risk-reward. As shown in graph 52, all but one trade (i.e., an outlier) fall within the 1:1 RW condition, which is essential when measuring the long-lasting profitability of any system.
Graph 53. Relation of all performed trades on JAFX real trading account.
Graph 53 shows the list of all trades performed by my system over only 5 weeks. I would