


Winter 1997

Table of Contents

 

Journal Editor & Reviewers

1: Combining Departure Studies into Power-Gram
Jon Knudsen, CMT

2: A Comparative Study of Simple Rate of Change versus Weighted Rate of Change Indicator
Thomas L. Hardin, CFP, CMT

3: Volume-Weighted Relative Strength Index
Russell R. Minor, CMT

4: Dividends Matter. Does Anyone Care?
Gregory J. Roden

5: Market Noise. Can It Be Eliminated and At What Cost?
Jeremy J. A. du Plessis, CMT

6: Sector Analysis Using New Highs and New Lows
Frank Teixeira, CMT


 

Journal Editor & Reviewers

Editor

Henry O. Pruden, Ph.D.
Golden Gate University
San Francisco, California

Associate Editor

David L. Upshaw, CFA, CMT
Lake Quivira, Kansas

Manuscript Reviewers

Connie Brown, CMT
Aerodynamic Investments Inc.
Gainesville, Georgia

John A. Carder, CMT
Topline Investment Graphics
Boulder, Colorado

Ann F. Cody
Invest Financial Corporation
Tampa, Florida

Don Dillistone, CFA, CMT
Cormorant Bay
Winnipeg, Manitoba

Charles D. Kirkpatrick, II, CMT
Kirkpatrick and Company, Inc.
Exeter, New Hampshire

John McGinley
Technical Trends
Wilton, Connecticut

Robert I. Webb, Ph.D.
Associate Professor and Paul Tudor Jones II Research Fellow
McIntire School of Commerce, University of Virginia
Charlottesville, Virginia

Michael J. Moody, CMT
Dorsey, Wright & Associates
Pasadena, California

Richard C. Orr, Ph.D.
ROME Partners
Beverly, Massachusetts

Kenneth G. Tower, CMT
UST Securities
Princeton, NJ

 

Publisher

Market Technicians Association, Inc.
One World Trade Center, Suite 4447
New York, New York 10048



 

1: Combining Departure Studies into Power-Gram

Jon Knudsen, CMT

Introduction

This paper will review the fundamentals of cycle analysis and show how to combine several departure studies into the development of a new indicator called Power-Gram. The construction and characteristics of Power-Gram will be examined in detail, concluding with a review of ongoing research into applications.

Background

Cycles are everywhere. They come in many forms and are typically stated as fixed periods, but they actually represent averages of historical observations. Commonly accepted cycles are tides, revolutions of the moon, heartbeats, and the four seasons of the year. Markets are driven in the long term by cyclical behavior in the fundamental factors affecting a particular market (inventories, inflation, and interest rates). In the short term, cycles reflect shifts in crowd psychology as the majority oscillates between excessive optimism and overbearing pessimism. The Foundation for the Study of Cycles (FSC) states that "cycle researchers calculate that most markets are 80-85% cyclical and 15-20% random." Therefore the application of cycle techniques may offer insight into market movements past, present, and future.

A quick way to discover a cycle is to first perform a visual inspection or to apply an Ehrlich Cycle Finder (an accordion-type tool) to indicate the most likely dominant cycles. This procedure should be repeated several times to yield several cycles of varying length. The visually detected periods are used to detrend the data. One of the most popular ways of detrending is to perform a departure analysis. Departure analysis is accomplished by applying a centered moving average of a length equal to the visually detected period. The moving average is subtracted from the close and the remainder is plotted as a histogram. The highs and lows of the histogram will reveal cycle highs and lows. Several departure analyses will have to be performed to identify and verify the dominant cycles. Other more mathematically intense methods are Fast Fourier Transforms (available on Computrac) or Spectral Analysis (FSC has a program available called Basic Cycle Analysis).
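
A centered departure study of this kind is straightforward to compute. The following minimal Python/pandas sketch assumes closing prices in a pandas Series; it is a platform-neutral illustration, not the author's code:

```python
import pandas as pd

def centered_departure(close: pd.Series, period: int) -> pd.Series:
    """Close minus a centered moving average of the same length.

    Plotted as a histogram, its highs and lows approximate the crests
    and troughs of the visually detected cycle.
    """
    # center=True aligns the average on the middle of the window; this is
    # the source of the lag discussed later for trading applications.
    return close - close.rolling(window=period, center=True).mean()

# departure = centered_departure(prices["close"], period=23)  # hypothetical data
```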

Once the cycles are verified, the qualities of the cycles should be examined. The key qualities are amplitude, period, and phase. Amplitude is a measurement of the height of a cycle from low (trough) to high (crest). Period measures the time between lows. Phase determines the time location of a wave (rising, cresting, falling, troughing). A cycle’s strength is determined by dividing the amplitude by the period. The higher the number, the more significant the cycle is in explaining price movement.

Statistical measures may be performed upon the selected cycles to determine statistical significance and validity. These tests are the F-Ratio (standard detection), Chi-Square (phase consistency), and Bartels (phase and amplitude). Of these various tests, Bartels is the single best measure of a cycle's authenticity.

One of the primary cycle principles is the Principle of Summation, which states that price movement is the simple addition of all active cycles. Most markets have at least five dominant cycles. John Murphy classified these dominant cycles and their periodicity as long-term (2+ years); seasonal (1 year); primary/intermediate (9-26 weeks); and trading (4 weeks). Walt Bressert has discovered that some markets have a 1/2 primary cycle, whose periodicity lies between the primary and trading cycles. In addition, Bressert states that the trading cycle breaks down into 2 cycles called alpha and beta. Richard Mogey has classified four types of key cycles. These are Beat, Timing, Swing, and Trend. In the stock indices the periods (in market days) for these cycles are Beat - 5.39; Timing - 14.99; Swing - 23.1; Trend - 77.

Power-Gram

While centered departure analysis is useful for determining precise cycle highs and lows, its inherent time lag due to the centering operation reduces its timeliness as a trading indicator. Non-centered departure analysis (NCDA) provides current information with no lag, similar to a basic momentum study. We can combine several NCDAs into a single indicator, adhering to the Principle of Summation, called Power-Gram. Power-Gram provides information about cycle amplitude and dominance, divergence, and overbought/oversold extremes in the market. However, Power-Gram does not provide a basis upon which forecasts of successive turning points into the future may be made. Rather, Power-Gram serves as a reactive indicator that will alert the analyst/trader to significant shifts in the cyclical behavior or potential directional changes in the market under study.

Power-Gram is plotted by summing the current strength readings of the dominant cycles. A cycle's strength is measured by dividing its amplitude (or NCDA) by its period. The formula CS7 = (close - ma7) / 7, where CS7 = Cycle Strength of the 7-day cycle, close = today's close, ma7 = 7-day moving average of the close, and 7 = the moving average period used, determines the strength of a seven-day cycle. Power-Gram is the addition of all selected dominant cycles' strengths: PG = CS7 + CS13 + CS23, where the dominant cycle periods are 7, 13, and 23 (see Table below). Power-Gram is plotted as a histogram. The name arose because we are following the cycles' net power and plotting it as a histogram.
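
Expressed in code, the construction might look like the following sketch (pandas assumed; the author specifies only the formulas, not an implementation):

```python
import pandas as pd

def cycle_strength(close: pd.Series, period: int) -> pd.Series:
    """CS_n = (close - n-day moving average of close) / n."""
    return (close - close.rolling(window=period).mean()) / period

def power_gram(close: pd.Series, periods=(7, 13, 23)) -> pd.Series:
    """Sum the strength readings of the selected dominant cycles."""
    return sum(cycle_strength(close, p) for p in periods)

# pg = power_gram(prices["close"])  # plot as a histogram
```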

Plotting NCDAs based on each individual dominant cycle gives great insight into the phasing and amplitude of each cycle independently, and when viewed as a complete picture (Power-Gram) the cycle controlling the present price movement may be easily detected. First, observe each cycle individually and look for oscillation around zero. If the observed cycle is oscillating, check the longer term cycles' behavior. If all observed cycles are in conflicting phases (some positive and others negative) or oscillating, then a trading range is underway and Power-Gram will exhibit short term oscillations around zero (Chart 1). Price will usually follow the strength of the longer term cycles; however, while in a trading range, if the longer term cycles' strengths are diminished, be alert for the short-term cycles' failure to oscillate as an early indication of a developing trend. But if a majority of cycles are in a positive or negative phase (each cycle's NCDA above or below zero), then a trend is in progress and Power-Gram will remain above or below zero for an extended period (Chart 2). Therefore, explosive movements (all dominant cycles in synchronization) and choppy trading ranges (opposing cycles) may be more easily explained and anticipated. Trend strength can be determined by examining the longer cycles once a trend has commenced. As a trend develops, extend the cycle focus length. Once a lessening in trend strength has been detected, begin to focus on shorter duration cycles for price indication. Power-Gram's direction (up/down) can be useful for verifying the short term trend, while positive or negative values reflect the longer term trend.

Divergence may be easily seen in both the NCDA of each observed cycle and Power-Gram (Chart 3). The short term NCDA will function as an early warning system, while divergence in Power-Gram indicates a potential major trend change. Amplitude extremes in NCDAs and Power-Gram are useful as overbought/oversold indicators. Once an extreme has been reached, one should be alert for a divergence or reversal to develop (Chart 4).

Conclusion

Current research is focused on using Power-Gram as a base upon which to construct a trading system. Power-Gram is attractive since it provides an oscillating price curve with a majority of noise removed and a minimum amount of lag. Power-Gram can be smoothed by applying a simple moving average to minimize the occasional whipsaws, and a momentum indicator on the moving average can provide a leading indicator (reflecting slope changes in the moving average) (Chart 5). One may also apply Gerald Appel's moving average convergence-divergence technique (MACD), with Power-Gram as the price input, using a histogram of the difference between the averages as a leading indicator. Initial research shows promise, with Power-Gram identifying turning points with a high degree of accuracy and a smoothed Power-Gram functioning similarly to Stochastics' %D-Slow but reflecting a combination of cycles rather than a single cycle (Stochastics). However, one must be careful in selecting cycles to track. Too many short cycles will result in a high frequency of trades. If the frequency of trades is too great, adjust the cycle composition to focus on longer term cycles by either adjusting the weights or utilizing longer term cycles. My preference is to utilize longer term cycles, avoiding an over-optimization trap.
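
As a rough sketch of the smoothing described above (the smoothing and momentum lengths here are illustrative guesses, not values from the paper):

```python
import pandas as pd

def smoothed_power_gram(pg: pd.Series, smooth: int = 5, mom: int = 3):
    """Return a smoothed Power-Gram and the momentum of that smoothing."""
    pg_ma = pg.rolling(window=smooth).mean()  # damps the occasional whipsaw
    pg_mom = pg_ma.diff(mom)                  # leads by reflecting slope changes
    return pg_ma, pg_mom
```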

 

Footnotes

  • Foundation for the Study of Cycles, Basic Cycle Analysis software manual, Copyright 1991, pg. Appendix C-1.

  • J.M. Hurst, The Profit Magic of Stock Transaction Timing, Copyright 1970, Published by Prentice-Hall, Inc., pg. 32.

  • John J. Murphy, Technical Analysis of the Futures Markets, Copyright 1986, Published by New York Institute of Finance, Prentice-Hall, Inc., pg. 430.

  • Murphy, pg. 430.

  • Foundation for the Study of Cycles, pg. Appendix C-8.

References

Analyzing and Forecasting Futures Prices, by Anthony Herbst. Copyright 1992. Published by John Wiley & Sons.

Basic Cycle Analysis software manual, by Richard Mogey and Foundation for the Study of Cycles, Inc. Copyright 1991. Published by The Foundation for the Study of Cycles, Inc.

Investing with a Computer, by Herb Brooks. Copyright 1984. Published by Petrocelli Books.

Mesa and Trading Market Cycles, by John F. Ehlers. Copyright 1992. Published by John Wiley & Sons, Inc.

The Profit Magic of Stock Transaction Timing, by J.M. Hurst. Copyright 1970. Published by Prentice-Hall, Inc.

Real Time Futures Trading, by Al Gietzen. Copyright 1992. Published by Probus.

Technical Analysis of the Futures Markets, by John J. Murphy. Copyright 1986. Published by New York Institute of Finance, Prentice-Hall, Inc.

Jon E. Knudsen, CMT

Jon Knudsen is a Vice President of Alternative Investments within The Chase Manhattan Bank's private bank. Mr. Knudsen joined Manufacturers Hanover Futures, Inc. (a division of Manufacturers Hanover Bank, which later merged into Chemical Bank, which then merged into The Chase Manhattan Bank) in 1986, working on the trading floor of the Chicago Mercantile Exchange (the CME). From 1988 to 1989, Mr. Knudsen managed the Eurodollar Options Trading Desk and was a member of the Index and Options Market Division of the CME. In 1989, Mr. Knudsen transferred to New York to manage futures-based, yield-enhancement fixed income portfolios within the private bank. Mr. Knudsen also managed offshore commodity funds and coordinated derivative investment strategies for the private bank's investment management areas. Mr. Knudsen received a Bachelor of Science in Business Administration in Finance from the University of Denver.

 



 

2: A Comparative Study of Simple Rate of Change
versus Weighted Rate of Change Indicator

Thomas L. Hardin, CFP, CMT

Introduction

One fact upon which market technicians can agree is that markets and securities go through periods during which their prices move in trends. For years, market researchers have studied such trends and cycles from many different perspectives. As a result, many studies have been published that quantify the success of using different trending indicators to help predict the direction of price changes before they occur. The vast use of computers during the '80s and '90s has made price behavior much easier to evaluate and monitor.

One technical indicator used to monitor this behavior is the rate of change, or momentum, indicator. The general purpose of this indicator is to determine when a stock and/or index has begun moving rapidly in one direction or another, or to detect the slowing or reversal of a previously strong trend. Theoretically, this is a trend indicator pointing to both strong uptrends and strong downtrends. The purpose of this paper is to explore some adaptations to the Simple Rate of Change indicator with a goal of improving its long-term success.

Calculation

Before we discuss the calculation for the Weighted Rate of Change indicator, we must first have a common understanding of how to calculate a Simple Rate of Change. The most common calculation compares the current price of a stock or index to its price at some time in the past. The following formula is used:

Simple Rate of Change = Current price / Previous price

The result of this equation is a number that fluctuates around 1.0. Any number above 1.0 indicates an upward trending stock or index and, conversely, any number below 1.0 indicates a downward trending stock or index over the time period used for the rate of change measurement.

Simple Rate of Change (SROC)

Many books have been written about the use of this indicator. The most basic use is to buy a stock when the rate of change is above 1.0 and sell when it drops below 1.0.
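
A minimal sketch of the calculation and this basic rule in Python/pandas (the series name is hypothetical; the study discussed next uses weekly data):

```python
import pandas as pd

def simple_roc(price: pd.Series, periods: int) -> pd.Series:
    """Current price divided by the price `periods` bars ago."""
    return price / price.shift(periods)

# weekly = ...  # weekly index closes (hypothetical data source)
# sroc = simple_roc(weekly, periods=31)
# long = sroc > 1.0  # the basic rule: long above 1.0, in cash below
```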

According to a study presented in The Encyclopedia of Technical Market Indicators, a 31-week period was the optimum. This result was reached after testing periods ranging from 1 to 78 weeks using the New York Stock Exchange Composite Index from January 5, 1968, to December 31, 1986. The benchmark comparison was made to a 40-week simple moving average crossover rule. The 31-week rate of change did outperform this benchmark, but suffered substantially worse drawdowns. An explanation given for the volatility in results was the fact that the indicator put too much weight on the price in the past. For example, if the current price was rising moderately but the price 31 weeks ago was rising rapidly, the rate of change value would drop. This could be a false signal.

Another disadvantage of using a trend indicator for investing is that, by the nature of its construction, investors miss the beginning of the uptrend. The trend must first turn positive and persist for some time before trend indicators move investors into the market. In addition, sell signals come some time after the market has turned south. This can have a significant effect on profits. This problem suggests that trend indicators should possibly be used in conjunction with other indicators that could indicate oversold markets for better entry points and overbought markets for better exit points.

Regardless of the above discussed drawbacks, this indicator does work successfully. In addition, another benefit is its simplicity. The calculation is easy to perform, all technical analysis software has the indicator available, and the buy and sell decision is very clear and easy.

Weighted Rate of Change (WROC)

The purpose of my work was to create an adaptation of the Simple Rate of Change indicator with the goal of improving performance. To do so, tests were performed using a variation of the Simple Rate of Change calculation. The variation was created to attempt to eliminate the problem of too much weight being placed upon the price in the past. The following formula was used:

The above formula does not just look at one price in the past; it looks at every price for the number of periods back to the current price. In addition, the most weight is placed on the most recent price, and the weight diminishes to the oldest price used. An example of the calculation follows for a 5-week Weighted Rate of Change on current S&P 500 Index data.
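
The printed formula and worked example did not survive reproduction here. One plausible reading of the description, the current price divided by a linearly weighted average of the prior n prices with the heaviest weight on the most recent, can be sketched as follows; this is a reconstruction, not necessarily the author's exact formula:

```python
import numpy as np
import pandas as pd

def weighted_roc(price: pd.Series, periods: int) -> pd.Series:
    """Current price / linearly weighted average of the prior `periods` prices."""
    weights = np.arange(1, periods + 1)  # 1 (oldest) ... n (most recent)
    wma = (
        price.shift(1)  # use only prices before the current bar
        .rolling(window=periods)
        .apply(lambda x: np.dot(x, weights) / weights.sum(), raw=True)
    )
    return price / wma
```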

To further improve the indicator, after the rate of change value was calculated, a moving average was applied to the indicator. This helped eliminate an excessive number of trading signals and whipsawing in and out of the market.

This \Veighted Rate of Change indicator was first spot tested using various time periods for the indicator. The data tested were weekly S&P 500 prices starting in January 1970 to June 1995, more than 25 years. Spot testing indicated that a very short time period and very long time periods were the least successful. As a result, an optimization was performed using a M’eighted Rate of Change from 25 to 40 weeks, and moving averages on each from one-third to hro thirds the rate of change period. For example, for the 30.week period N’eighted Rate of Change, moving averages from 10 periods (l/3 of 30) to 20 periods (2/3 of 30) were tested. A total of 189 tests were performed. See Appendix B for results of these tests.
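
The grid can be sketched as follows, reusing the weighted_roc() reconstruction above; weekly is assumed to be a pandas Series of weekly S&P 500 closes, and run_backtest() is a hypothetical stand-in for the author's equity calculation:

```python
results = {}
for p in range(25, 41):                       # WROC periods of 25-40 weeks
    for ma in range(p // 3, 2 * p // 3 + 1):  # MA lengths of 1/3 to 2/3 of p
        smoothed = weighted_roc(weekly, p).rolling(window=ma).mean()
        # results[(p, ma)] = run_backtest(weekly, smoothed > 1.0)  # hypothetical
```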

The results of this optimization are provided in the following table and graph. As the information indicates, the WROC outperformed the SROC in the lengthy periods. The third column of the above table shows the results of using the crossing of the zero line by the moving average as the buy/sell trigger. The results were enhanced significantly.

The optimum results were achieved using a 34-week Weighted Rate of Change with a 15-week moving average. This combination resulted in an ending equity position of $5,591 from a starting point of $1,000 in January 1970, for a compounded annual return of 6.98%. This is a respectable result for a single indicator. Commissions and taxes were ignored during these back tests.
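
As an arithmetic check, January 1970 to June 1995 is 25.5 years, and ($5,591 / $1,000)^(1/25.5) = 5.591^0.0392 ≈ 1.0698, which confirms the 6.98% compounded annual return.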

The Weighted Rate of Change indicator improves upon the Simple Rate of Change calculation. It can be practically applied to both stocks and indexes. Because using only this trading rule results in returns less than a buy-and-hold strategy, it is suggested that it be used in conjunction with other indicators. As discussed above relative to the Simple Rate of Change indicator, success can be improved by using an indicator that can help an investor determine oversold and overbought markets. This gets the investor into the market before too much of the uptrend is missed, and gets the investor out of the market before losses have been incurred after a reversal in the trend.

In conclusion, the Weighted Rate of Change indicator has improved upon the Simple Rate of Change. As with nearly any other indicator, it should be used in conjunction with other indicators to improve upon its success.

Appendix A

"Overall, (simple) rate of change must be judged as too erratic relative to some other indicators, such as moving averages. The reason for this seems to be due to the over dependence of rate of change on the oldest data. For example, for a 31-week rate of change, if the price for this week was up moderately but the price 31 weeks ago was rising steeply, then the rate of change calculation would fall sharply this week. Obviously, such a severe rate of change plunge would have nothing to do with the market's current trend. Despite that, a sell signal could be given, assuming that the previous week's rate of change calculation was only slightly positive. Thus, this dependence on old data may explain the more erratic behavior of the optimization curve for the rate of change indicator."

The Encyclopedia of Technical Market Indicators
Robert W. Colby & Thomas A. Meyers

Appendix B

The following table shows the results of back testing the Weighted Rate of Change for several weekly periods and several moving averages per period. The equity value is for June 1995. It is the result of $1,000 invested in January 1970.


Appendix C

The following graphs show several stocks' weekly price graphs. Below each price graph is the Weighted Rate of Change (WROC) indicator used in conjunction with a 15-week moving average.

Buy decisions were triggered when the 15-week moving average of the WROC crossed above the 1.0 line. Conversely, sell decisions were triggered when the 15-week moving average of the WROC crossed below the 1.0 line.


Bibliography

Robert W. Colby and Thomas A. Meyers, The Encyclopedia of Technical Market Indicators, New York, NY: Irwin, Inc., 1988

Thomas L. Hardin, CFP, CMT

Thomas Hardin is a Director and a Senior Portfolio Manager with the Portfolio Management Group division of Smith Barney. He has been in the investment industry since 1977: first at J.C. Bradford; then at Prescott, Ball & Turben; and in 1982 at E.F. Hutton which, through mergers, became Smith Barney. Tom became a portfolio manager through Hutton Portfolio Management in 1987. Mr. Hardin began managing global balanced portfolios in 1994. Tom is currently writing a book on international and global investing, "The Conservative Global Investor," due out by the end of 1997. He attended the University of Louisville and received a BS/BA from Skidmore College in Saratoga Springs, NY.

 


 


 

3: Volume-Weighted Relative Strength Index


Russell R. Minor, CMT

Introduction

In this paper I propose to examine On Balance Volume (OBV) and the Relative Strength Index (RSI) format and to create a Volume-Weighted Relative Strength Index (VWRSI).

The first principle that must be fully explained and understood is that an oscillator, either price or volume derived, is subordinate to a basic trend analysis. "A trend is a time measurement of the direction in price levels covering different time spans. There are many trends, but the three that are most widely followed are primary, secondary, and short-term." 1 There is no universal law governing the time frame which these three trends must follow. There is a tendency for certain time periods to be more common than others: "Dow Theory, for example, classifies the primary trend as being in effect for longer than a year. Dow defined the intermediate, or secondary, trend as three weeks to many months. The short-term trend is usually defined as anything less than two or three weeks. Each trend becomes a portion of its next larger trend." 2

When securities or markets are trending, trendlines and moving averages are valuable tools to trade with. When a sideways environment develops, the technician must resort to another tool in order to detect sudden changes in direction. Sideways movements occur more often than technicians would like: "research indicates that markets generally move in trading ranges and trend less frequently." 3 It is in this instance that the oscillator is very helpful.

An oscillator measures the rate of change of prices as opposed to the actual price levels themselves. When first discovered, oscillators appear to be a magical device to the novice technician. The more experience the technician gains with oscillators, the less likely it becomes that oscillators appear infallible.

In this paper, the oscillator being examined is the Relative Strength Index (RSI), originally created by J. Welles Wilder:

RSI = 100 - [100 / (1 + RS)]

where "RS is the ratio of the exponentially smoothed moving average of n-period gains divided by the absolute value (i.e., ignoring sign) of the exponentially smoothed moving average of n-period losses." 4 The RSI is then plotted on a vertical scale from zero to one hundred. As originally defined (14 days for n), movements of the RSI above 70 are considered to be overbought, and movements below 30 identify an oversold condition. It usually pays to wait for the RSI to exit or enter the overbought/oversold region. "Any strong trend, either up or down, usually produces an extreme oscillator reading before too long. In such cases, claims that a market is overbought or oversold are usually premature, and can lead to an early exit from a profitable trend. In strong uptrends, overbought markets can stay overbought for some time. Just because the oscillator has moved into the upper region is not enough to liquidate a long position (or, heaven forbid, short into the strong uptrend)." 5
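
For reference, a common way to compute Wilder's RSI in Python/pandas is sketched below; the ewm(alpha=1/n) call approximates Wilder's recursive smoothing and is an implementation choice, not something specified in the paper:

```python
import pandas as pd

def rsi(series: pd.Series, n: int = 14) -> pd.Series:
    """Wilder's Relative Strength Index of an arbitrary series."""
    delta = series.diff()
    gains = delta.clip(lower=0)
    losses = -delta.clip(upper=0)
    avg_gain = gains.ewm(alpha=1 / n, adjust=False).mean()
    avg_loss = losses.ewm(alpha=1 / n, adjust=False).mean()
    return 100 - 100 / (1 + avg_gain / avg_loss)
```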

Volume holds additional information that must be watched closely; price and volume are two important pieces of the technical puzzle. "First, the matter of volume. It is always to be watched as a vital part of the total picture. The chart of trading activity makes a pattern just as does the chart of price changes." 6 It is believed by some (not all) technicians that the internal strength of the market or a stock is manifested in volume first, then in price; in other words, volume leads price. The fact that some technicians have this belief led me to examine whether a volume derived oscillator provides any better signals than a price derived oscillator. The Relative Strength Index is a price based oscillator.

Formulating The Volume Weighted Relative Strength Index

The format used was On Balance Volume (OBV) in the RSI formula, producing a Volume-Weighted Relative Strength Index. Joseph Granville's OBV is defined as:

OBV = OBV(previous) + [(C - Cp) / |C - Cp|] x V

where C is the current period's closing price, Cp is the previous period's closing price, |C - Cp| is the absolute value of the difference between the two closing prices, and V is the current period's volume.

"The total volume for each day is assigned a plus or minus value depending on whether prices close higher or lower for that day. A higher close causes the volume for that day to be given a plus value, while a lower close counts for a negative value. A running cumulative total is then maintained by adding or subtracting each day's volume based on the direction of the market close." 7 The level of the OBV line is not important, rather the direction of the OBV line itself. The theory behind the OBV line is that volume activity should be directly proportional to price. When divergence develops between price and volume, the technician should regard this as a warning signal of a possible change in trend. "Volume usually goes with trend; i.e., volume advances with a rising trend of prices and falls with a declining one. This is a normal relationship, and anything which diverges from this characteristic should be considered a warning sign that the prevailing trend may be in the process of reversing." 8

Divergence occurs when the price chart makes a new high or low that is not accompanied by a new high or low in the oscillator. The concept of divergence is the basis of the Dow Theory. "The movements of both the railroad and industrial stock averages should always be considered together. The movement of one price average must be confirmed by the other before reliable inferences may be drawn. Conclusions based on the movement of one average, unconfirmed by the other, are almost certain to prove misleading." 9

The formula for the Volume-Weighted Relative Strength Index (VWRSI) is the RSI formula with OBV in place of price:

VWRSI = 100 - [100 / (1 + RS)]

where RS is now the ratio of the exponentially smoothed moving average of n-period OBV gains to the absolute value of the exponentially smoothed moving average of n-period OBV losses.

The VWRSI is then plotted on a vertical scale from zero to one hundred. J.J. Murphy outlines the three most useful qualities of an oscillator: "when the oscillator reaches an extreme, when it diverges from price, and when it crosses the neutral line if present." 10
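
Putting the two pieces together, a sketch of the VWRSI might read as follows, reusing the rsi() helper above (again an illustration, not the author's code):

```python
import numpy as np
import pandas as pd

def on_balance_volume(close: pd.Series, volume: pd.Series) -> pd.Series:
    """Running total of volume, signed by the direction of the close."""
    direction = np.sign(close.diff()).fillna(0)  # +1 up day, -1 down day
    return (direction * volume).cumsum()

def vwrsi(close: pd.Series, volume: pd.Series, n: int = 14) -> pd.Series:
    """RSI computed on the OBV line instead of on price."""
    return rsi(on_balance_volume(close, volume), n)
```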

Testing

In order for an oscillator to be valuable, it should be applicable in more than one situation. The VWRSI was tested against the RSI on a daily basis over a two-year period and on a weekly basis over a six-year period. The original 14-day period was used for both the RSI and VWRSI. This was then compared to a 9-day period. Marc Chaiken found that a 9-day period is more effective in trading stocks: "For trading stocks, I found that a 9-day RSI is much more effective with the 30 and 70 break points for trading." 11 Swings are wider on a 9-day formula, and some technicians employ 80/20 in place of the 70/30 values.

Criteria Continued
(See Tables 1-6)

All tests were conducted net of commissions for every criterion. At a glance, Tables 1 through 6 yielded ambiguous and inconsistent results. I decided to isolate one study (U.S. Steel) wave by wave, for each scenario studied on a daily and weekly basis. Each case was analyzed on a percentage return per trade. I offer the results of this analysis in two ways: first, in the tables below, and second, in graph form (page 21). Finally, I compared cumulative average percentage returns for all three cases studied daily and weekly (pages 22-23).


 

 

 

Conclusion

The value of the VWRSI is that it is volume derived. Volume provides additional information not found in a price-derived oscillator. The empirical data provided on graph 1 illustrate that the VWRSI was superior to the RSI in certain cases studied on a daily basis. In the weekly data, the RSI proved superior and generated lower losses, as illustrated on graph 2. Taking a cumulative average of all the signals studied for both the RSI and VWRSI (graphs 3 & 4), the VWRSI outperformed the RSI on both a weekly and daily basis for the 14-day time period studied.

I feel that this evidence substantiates the claim that a volume derived oscillator will enhance a price derived oscillator. Simultaneous volume and price oscillator divergence from the price chart itself should be strongly heeded. No oscillator signal should be acted on without respect for the underlying trend. The actual inception and application of the VWRSI can be followed on pages 22-23.


Footnotes

  1. Martin J. Pring, Technical Analysis Explained: Third Edition (NY: McGraw-Hill Inc., 1991), p. 13.

  2. John J. Murphy, Technical Analysis of The Futures Markets (NY: New York Institute of Finance, 1991), p. 36.

  3. Thomas R. DeMark, The New Science of Technical Analysis (NY: John Wiley & Sons, Inc., 1994), p. 129.

  4. Robert W. Colby and Thomas A. Meyers, The Encyclopedia Of Technical Market Indicators (NY: Richard D. Irwin, Inc., 1988), p. 43.

  5. John J. Murphy, Technical Analysis of The Futures Markets (NY: New York Institute of Finance, 1991), p. 300.

  6. Robert D. Edwards and John Magee, Technical Analysis of Stock Trends: New Enhanced Edition (Boston, MA: International Technical Analysis Publishers, 1991), p. 51.

  7. John J. Murphy, Technical Analysis of The Futures Markets (NY: New York Institute of Finance, 1991), p. 185.

  8. John J. Murphy, Technical Analysis of The Futures Markets (NY: New York Institute of Finance, 1991), p. 300.

  9. Robert Rhea, The Dow Theory (Burlington, VT: Fraser Publishing, 1993), p. 14.

  10. John J. Murphy, Technical Analysis of The Futures Markets (NY: New York Institute of Finance, 1991), p. 277.

  11. Marc Chaiken, Technical Analysis of Stocks & Commodities Magazine: January 1994 (Seattle, WA), p. 24.

Bibliography

  1. Robert W. Colby and Thomas A. Meyers, The Encyclopedia of Technical Market Indicators, New York, NY: Irwin, Inc., 1988.

  2. Thomas R. DeMark, The New Science of Technical Analysis, New York, NY: John Wiley & Sons, Inc., 1994.

  3. Robert D. Edwards and John Magee, Technical Analysis of Stock Trends: New Enhanced Edition, Boston, MA: International Technical Analysis Publishers, 1991.

  4. Joseph Granville, Strategy of Daily Stock Market Timing, Englewood Cliffs, NJ: Prentice-Hall, 1960.

  5. John J. Murphy, Technical Analysis of The Futures Markets, New York: New York Institute of Finance, 1986.

  6. John J. Murphy, Intermarket Technical Analysis, NY: John Wiley & Sons, Inc., 1991.

  7. Martin J. Pring, Technical Analysis Explained, Third Edition, NY: McGraw-Hill, 1991.

  8. Robert Rhea, The Dow Theory, Burlington, VT: Fraser Publishing, 1993.

  9. Richard Russell, The Dow Theory Today, Burlington, VT: Fraser Publishing, 1981.

  10. Jack D. Schwager, The Market Wizards, NY: New York Institute of Finance, 1989.

Periodicals

1. Technical Analysis of Stocks & Commodities, Technical Analysis Inc., 4757 California Ave. S.W., Seattle, WA 98116-4499, January 1994.

2. Technical Analysis of Stocks & Commodities Market Timing, 2nd Edition, Volume 6, Technical Analysis, Inc., 3517 S.W. Alaska St., Seattle, WA 98126-2700.

Software

1. Supercharts: Omega Research 2.1, Miami, FL, 33513.

Russell R. Minor, CMT

For the past four years, Russell Minor has been an institutional equity trader at RBC Dominion Securities in Toronto. Prior to that, he was an institutional equity salesman at BBN James Capel, a research boutique. He started in the investment business as a retail salesman after receiving an economics degree from York University, Downsview, Ontario. His first exposure to technical analysis occurred when he was a retail salesman.

 


 


 

4: Dividends Matter. Does Anyone Care?


Gregory J. Roden

"They say I'm crazy, got no sense, but I don't care."

From year end 1925 to year end 1995, the Dow Jones Industrial Average (hereafter DJIA) increased from 156.66 to 5117.12, a mean rate of appreciation of 5.29% per year. The rate of appreciation in the DJIA is well below the 10% total return for the DJIA during the same period because the index does not include the reinvestment of dividends as do the total return figures. Given that dividends and their reinvestment provide half of equities' total return, it could be argued that low dividend yields in and of themselves will reduce future stock total returns and, likewise, high dividend yields will increase future stock total returns. This paper will first explore and quantify the relationship between dividend yields and subsequent total rates of return. The author will then ponder this relationship in light of the current low dividend yield and will speculate as to when the historical relationship will come to fruition.

In addition to its direct effect on the total return, the Dividend Yield is widely quoted as a means of measuring the value in the market on a historical basis. Low dividend yields have accompanied major market tops: from top to bottom, the 1929 market lost 89.2% and the 1973 market 45.1%.

Though bull and bear market moves are both randomly accompanied by low and high dividend yields, secular bear markets do tend to follow low dividend yields. The two charts below illustrate this tendency. The top chart represents the dividend yield on the S&P 500 at each quarter end from 1926 through the fourth quarter of 1980. The bottom chart is the annualized rate of return (dividends reinvested and returns inflation-adjusted) for the 15-year holding period subsequent to the indicated quarter, e.g. for the fourth quarter of 1980 the time period is 12/31/80 - 12/31/95. The similarity of the two charts is indicative of a correlation between dividend yield and subsequent rate of return.

To further explore this possible correlation, the 15-year rates of return are matched with the dividend yield for the initial quarter, which yields the scatter diagram above. With the wide distribution of returns on this scatter diagram, it is obvious that the relationship is not a strict linear one. The general upward slope of this chart again indicates a positive correlation, which is confirmed by the dividend yield and 15-year rates of return having a correlation coefficient of .715. With the correlation coefficient having a possible range from -1 to +1, the measurement here of .715 represents a good, positive correlation. Another measure of correlation is r² (RSQ), which has a possible range of 0 to 1. The 15-year rates of return and the dividend yields have an r² of .526.

The above table examines the correlation for several other holding periods. There is a general increase in the correlation numbers as the holding period increases, up to 15 years. As it turns out, the 15-year time period has the highest correlation of the holding periods examined, as measured by r² and the correlation coefficient. At the same time, there is a substantial decrease in the standard deviation of the returns for the respective holding periods. Then, after 15 years, the correlations begin to decrease even though the standard deviations are also continuing to decrease.

The question as to how "good" a correlation has been demonstrated by r² and the correlation coefficient is subjective, as said measures of correlation are biased towards straight linear relationships. Consider the chart below, a hypothetical relationship between dividend yield and the rates of return for a fictional holding period. The straight line on the chart depicts a linear relationship wherein the rate of return increases 0.02% for each one percentage point increase in the dividend yield. This straight-line relationship has a perfect correlation, with an r² and a correlation coefficient both equal to 1. Unfortunately, the relationship is of no significance as the return figures scarcely increase. The minute changes in the return numbers can be measured by the standard deviation of said numbers, which is a minuscule .003.

The curved line on the chart represents a geometric relationship between the dividend yield and another hypothetical holding period, with the total return going up geometrically as the dividend yield increases. Although this geometric relationship is obviously more significant than the linear relationship, its r² is .793 and its correlation coefficient is .890, not a perfect linear relationship. The geometric relationship has a standard deviation for the return numbers of .307, a much more volatile group of numbers than the linear ones. The moral of the story is that r² and the correlation coefficient alone are not enough to judge the significance of a statistical relationship. So, including the standard deviation of the return numbers in this analysis is helpful to gauge the significance of the relationship.

Considering the foregoing, the decreases in the holding period correlation numbers after 15 years, even as the standard deviations continue to decline, take on increased significance. An explanation could be that valuations might have a 30-year cycle, with 15 years from crest to trough. Thereby, after 15 years valuations start to turn into the next cycle. This would result in a decline in the correlations between dividend yield and rates of return after 15 years even as the standard deviations decrease.

With this in mind, an investor with a 15-year time frame would want to ascertain what rate of return he could expect over the next 15 years given the prevailing dividend yield. In order to quantify this relationship, it will be necessary to take some sort of average of the various plots on the previous scatter diagram.

For this study, the dividend yield prevailing at the end of each quarter will be the independent variable, and the subsequent 15-year rate of return will be calculated for those rates of return that correspond to similar dividend yields; in other words, the 15-year scatter diagram will be segmented along the X axis and the mean of the Y values in each segment will be calculated. The X axis will be segmented by grouping the dividend yields as follows: less than or equal to 3%; greater than 3% and less than or equal to 3.5%; greater than 3.5% and less than or equal to 4%; greater than 4% and less than or equal to 4.5%; greater than 4.5% and less than or equal to 5%; greater than 5% and less than or equal to 6%; greater than 6% and less than or equal to 7%; and greater than 7%, designated as 8% on the chart below.

The resulting values on the chart are the mean real total rates of return for equity investments made at a calendar quarter end, having a 15-year holding period, with the dividend yield at the quarter end within the segmented dividend yield range. Based on the market experience since 1926, these values are what investors with a 15-year holding period could expect to receive in the future for an annual return at a given dividend yield. Hereinafter, these values will be referred to as the Total Return Expectation.

This Total Return Expectation, as it might involve overlapping time periods or totally different time periods, is not a historical measure of market return in the usual sense. Rather, it is the mean return received during similar conditions as determined by the dividend yield at the start of the holding period, and is what investors might thereby expect in the future under similar conditions, i.e. similar dividend yields.
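
In modern terms, the Expectation calculation amounts to a group-by over yield buckets; a sketch (with assumed column names and yields expressed as decimal fractions) might read:

```python
import numpy as np
import pandas as pd

def expectation(df: pd.DataFrame) -> pd.Series:
    """Geometric-mean 15-year real return by starting dividend-yield bucket.

    df is assumed to hold one row per quarter end with columns:
      "yield" - dividend yield at the quarter end (decimal fraction)
      "ret15" - annualized real total return over the following 15 years
    """
    bins = [0, 0.03, 0.035, 0.04, 0.045, 0.05, 0.06, 0.07, 1.0]
    bucket = pd.cut(df["yield"], bins=bins)
    # geometric mean of (1 + return) within each bucket, minus 1
    return df.groupby(bucket, observed=True)["ret15"].apply(
        lambda r: np.exp(np.log1p(r).mean()) - 1
    )
```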

The Total Return Expectation makes use of the geometric mean, which provides a truer indication of what the "average" rate of return would be than what might be expected from looking at a scatter diagram. For example, if a given dividend yield had only two corresponding values of +.50 and -.50, a casual observer might conclude that these two observations average out to 0.0. But that would not be the actual result, because rates of return are compounded. Consequently, if a portfolio has a 50% gain one year and a 50% loss the next year (or vice versa), the net result is a 25% loss over two years (1.5*0.5=0.75).
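
Equivalently, the geometric mean return in that example is (1.5 * 0.5)^(1/2) - 1 = 0.75^0.5 - 1 ≈ -13.4% per year, which compounds back to the 25% two-year loss, whereas the arithmetic mean misleadingly suggests 0%.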

Scatter diagrams for other time periods can similarly be brought into focus using this same Total Return Expectation methodology. The table at the top of the following page enumerates the Total Return Expectations for various holding periods.

This Expectation method has yielded some surprising results; first, consider that the 5-year time frame now has the highest correlation. Note also that all the correlation numbers have increased and the standard deviations decreased. This is because rather than measuring the correlation and standard deviations for all the time periods corresponding to a given dividend yield, the Total Return Expectation instead measures the mean of all the time periods corresponding to a given dividend yield.

The correlation numbers increase substantially from the 1-year to the 5-year holding period, but then remain fairly constant through the 30-year holding period. The standard deviation numbers of the Total Return Expectations drop significantly from the 1-year holding period through the 5-year holding period. Then, they remain fairly constant through the 15-year holding period, after which they resume decreasing. That is to say, the Total Return Expectation numbers for a 1-year holding period show a large difference from the smallest expected return at a yield of 3% to the highest expected return at a yield of 8%. This difference decreases through the 5-year holding period. Then this difference remains significant and relatively constant through the 15-year holding period. After the 15-year holding period, the difference narrows. Again, this is indicative of a 30-year cycle in valuations.

In today's market, investors are told not to worry about valuations and corrections, but rather to invest for the long term. This study of Total Return Expectation illustrates that the "long term" might end up being a "bridge too far" for many investors. This may be so because, at a dividend yield of 3% or less, it is not until the 20-year holding period that the expected rate of return is greater than 2.2% annually, which is approximately equal to the inflation-adjusted rate of return for U.S. intermediate government bonds.

Investors with a holding period less than 30 years at this dividend yield who expect to earn the historical average are probably in for some disappointment. This study also suggests a 30-year cycle in stock valuations, as an investor would have to hold stocks for 30 years to reach from crest to crest. A 30-year cycle could also help explain why stocks can stay in overvalued territory for so long.

How might the Expectation method work in actual practice? A comparison can be made between the annual return predicted by the Expectation, based on the prevailing dividend yield at the end of a calendar quarter, and the actual market return subsequent to that quarter. The first such comparison will use the 15-year holding period returns. Also, a comparison will be made using a common benchmark, the trailing average market return, and how well it predicted the subsequent performance of the market. Specifically, the trailing 15-year average return at the end of a calendar quarter will also be compared to the subsequent 15-year annual return, as illustrated in the chart below.

An analysis of the 15-year holding period comparison reveals that the Expectation method and the subsequent returns have an r² of .839 and a correlation coefficient of .916. The trailing average has a weak correlation with the subsequent rate of return, with an r² of .312; unfortunately, it is an inverse correlation, as the correlation coefficient is -.559. So, as the trailing 15-year average has increased over time, the subsequent return has decreased, and vice versa. Accordingly, the phrase "past performance is no guarantee of future performance" is more than just a disclaimer. The Expectation method, by contrast, provides a much better indication of future performance.

The same sort of comparison can be done for the 5-year holding periods; the 5-year Expectation prediction and the 5-year trailing average can be compared to the subsequent 5-year return. The winner is the Expectation method, with an r² of .339 and a correlation coefficient of .582. The 5-year trailing average has no correlation with the subsequent 5-year rate of return, with an r² of .005 and a correlation coefficient of -.068. It is worth noting that the standard deviation of the 5-year subsequent returns was .0754, considerably higher than the 15-year returns' .0383. In light of the large variance in the 5-year returns, the Expectation method's correlation numbers take on greater significance.

The Expectation method compares favorably with trailing averages of equal holding periods, but how about against the total real "return since 1926"? To make this comparison, the return since 1926 through every quarter from 1950 to the last quarter of 1995 will first be calculated. Then, the 15-year and 5-year subsequent returns will be compared to the "return since 1926" for the starting quarter. The results: the 15-year subsequent return and the "return since 1926" have an r² of .691 (not bad), but a correlation coefficient of -.831 (bad); the 5-year subsequent return and the "return since 1926" have an r² of .437 (not bad again), but a correlation coefficient of -.668 (bad again).

At the risk of "beating a dead horse," one final comparison was made of the "return since 1926" and the 1-year subsequent return. The comparison turned in an r² of .021 and a correlation coefficient of -.143. Whereas, comparing the 1-year subsequent return to the 1-year Expectations yields an r² of .075 and a correlation coefficient of .275. In the above examples, the Expectation method is a superior estimator of future performance to the method of assuming the historical averages will continue in a linear fashion.

So, why are the "return since 1926" and the trailing returns used so frequently? The main reason is that the rates of return for the last few years have been far above those predicted by the Expectation method, indeed even exceeding the historical averages. Furthermore, the dividend yield has been at historic lows, at or less than a 3% dividend yield, since the first quarter of 1992, a couple of thousand DJIA points ago, causing some to question the validity of historic valuation measures, such as the dividend yield. As Alan Abelson has wryly written, "(E)veryone knows in this New Era-New Age-New Paradigm market, history is pretty much bunk."

“You see I’m sort of independent, of a clever age descendant”

It is time now to switch gears, to put the r²'s and correlation coefficients aside (for a while) and ponder this "New Era." By numerous measures besides the dividend yield, this market has gone to extremes in valuation: the Price-To-Book Value is 3.8, a new record; the Yield Ratio of the 30-year Treasury bond yield to the earnings yield on the S&P 500 is 1.4, exceeded at the peak of only one other bull market, at 2.2 in August of 1987; Tobin's Q ratio (the ratio of stock market value to corporate net assets at replacement cost) is 1.4, a new high; the dollar value of the stock market is now 101% of nominal GDP, a new record; the number of weeks it takes a manufacturing employee to buy one S&P share is 1.25+, approximately equal to the previous record set in 1929. The market having gone to such extremes in and of itself calls into question the validity of measures of valuation. Professor James Tobin received a Nobel Prize for developing his Q ratio, which has a normal valuation around 80%, causing some to observe that the market is making a $3 trillion bet that he should return his prize (i.e. should the market return to normal valuations, the total market valuation of stocks would be reduced by $3 trillion). Accordingly, the problem with valuation models is that this bull market has pushed valuations beyond historic levels. That is to say, history is bunk.

Be that as it may, this is a historic era for the stock market in terms of lofty valuations. However, though this may be a historic era for stock market valuations, it does not necessarily follow that the recent market performance will continue indefinitely, in keeping with the "New Era." Rather, the recent bull market may prove to be an anomaly brought about by various factors which, when they subside, will result in the market returning to average levels of valuation (or less). A couple of recent examples of historic markets that regressed to the mean after reaching historic highs would be the U.S. bond market in the early 1980's, which produced the highest interest rates in the history of this republic, and the Japanese stock market of the late 1980's, which reached extraordinary levels of valuation. The real question then is not whether history is bunk, rather it is which history is bunk - recent history or ancient history. Consequently, in contemplating this "New Era," particular attention will be paid to some factors given credit for bringing it about, in an effort to debunk it.

Conventional wisdom says that dividends do not matter anymore in this new era. An argument against using the dividend yield as a measure of valuation is that the payout ratio of earnings is very low, which makes the yield lower than it would be under an average payout ratio. The payout ratio on any stock is set by its management and its board of directors (hereinafter management). The payout ratio is therefore subject to the management's long-term view of the company's and its industry's earnings prospects. This author does not know of any statistical studies showing that management is more often wrong than it is right in setting the payout ratio. On the contrary, there is evidence that returns received by stock-picking strategies based on a company's dividend yield (relative to the market's dividend yield) exceed the overall market returns. The implication is that management is more often right than wrong in setting the payout ratio. Consequently, the low payout ratios cannot be shown to be a "New Paradigm"; rather, they are an indication that management believes the current healthy business cycle to be transitory.

Conventional wisdom also holds that companies are smarter for buying back their own stock, with its inherent "goodwill" value, than paying out dividends, which are subject to double taxation. First of all, purchases of treasury stock are not tax deductible by corporations either, so in this respect they are no different from the payment of dividends. Also, the increase in capital gains (if any) accruing thereby to the remaining shareholders is not tax exempt, but tax deferred and subject to a capped capital gains rate at the federal level. Furthermore, the tax consequences are irrelevant to all those shares in retirement plans, which are tax deferred. Lastly, some companies are buying back their own stock at an average 80% premium to the replacement value of net assets, a premium ostensibly for "good will." (If business is so good, why don't they simply expand production by buying additional plants and equipment at an 80% discount?) As "good will" is a premium paid for companies with above-average future earnings potential, how could it be possible that the average company is 80% above average? Ergo, the argument that companies are smarter to buy back their own stock at historically high levels of valuation is just plain silly.

The argument being made here is not that stock buybacks cannot inflate the price of the stock. Rather, it is that the stock buyback programs are of a transitory nature, predicated upon companies having excess liquidity to engage in such buybacks. In turn, this excess liquidity is conditioned upon a healthy and growing economy, as recessions tend to dry up the excess liquidity in the economy. Consequently, during a recession, companies will have less liquidity to buy back their stock, regardless of their intelligence quotient. It could be then that these stock buyback programs might accentuate the cyclical swings of the stock market along with the ups and downs of the business cycle - that, of course, remains to be seen.

Stock buyback programs might be contributing to the levels of valuation along with other forces of demand in the market. But, before looking at other forces of demand, it would be helpful to review the market itself with these forces of demand in mind. The stock market is an ongoing public auction of ownership interests in corporations, although the volume of shares traded daily is only a small fraction of the total shares outstanding. At the end of each day, every portfolio containing shares that could trade in the stock market is revalued based on the closing price for those shares that did trade in the market. Consequently, the $6.9 trillion of total market capitalization is arrived at by a daily auction valued at less than 1/400th of the total market capitalization. Forces of supply and demand in the daily trading therefore have a tremendous leverage on the total market valuation.

A familiar reason given for this bull market is the aging Baby Boom generation. The Boomers are said to have a need to save for retirement. This need has been postulated to power the market through the year 2010. The basic demographics for this argument are supported by the U.S. Bureau of the Census and illustrated at the top of the next page. This chart illustrates over time the changes in various age groups as measured by those groups' percentage of the population as a whole. The chart illustrates the Baby Boom generation causing the 45-54 year age group to peak around the year 2010. A Federal Reserve study found that stock ownership as a percentage of financial assets peaks around the retirement age of 65, which means a peak in stock ownership for Boomers around 2025. However, this rosy scenario cannot be demonstrated to have any historical justification. Of all the age groups, the group with the highest correlation to the DJIA is the 35 to 44 year age group (population numbers rather than percentage of the whole), with an r² of .855 for the years 1914 through 1995, as charted on the left. Other population groups have no greater correlation than that of total population itself, with an r² of .65. As can be seen from the chart of the 35 to 44 year age group, the major increase in this age group has coincided with this bull market, although this age group is topping out and will decrease after the year 2000. This age group could have a greater correlation with the stock market because it offers greater net buyers in the daily market auction. Though older age groups could hold more stock, holders do not drive the market; buyers and sellers do. Seemingly, Boomers shouldn't trust anyone over 50 - to drive the stock market up, that is. (It should also be noted that the market can experience significant corrections even as the 35-44 year age group increases. For example, the severe bear markets of the 1930's occurred even as this age group increased, albeit at a slower rate of growth than in the 1920's.) While the significant recent increase in this age group could alone explain our historic valuations in the stock market, there is another demand force that bears examining, that being liquidity.

The Federal Reserve has an army of followers with various disciplines because of the Fed's ability to control monetary policy and influence interest rates. Lately some have argued that foreign central-bank purchases of U.S. Treasuries also add to our liquidity and, therefore, increase demand in the stock market, in addition to other economic effects.[20] The market has appeared to have confirmed this theory by having a good correlation with the Fed's Monetary Base, the Fed's reported "Foreign holdings of U.S. Debt" and the sum of the two (hereinafter Dollar Liquidity).[21] The chart on the bottom left illustrates the correlation between the DJIA and dollar liquidity (the sum of the monetary base and foreign holdings of U.S. debt), an r² of .935 for the time period 1941-1995. As a result of this high correlation, it could be reasonable to assume the bull market will continue until liquidity dries up, and all one need do is monitor liquidity. The fault with this line of reasoning is revealed in a close inspection of the 1972-75 time period. During this bear market, the DJIA can be seen dipping significantly below the dollar liquidity line while dollar liquidity is shown to be increasing. By measuring the DJIA and the three basic monetary measures discussed above with a trailing 25-year time frame, the present high correlation is shown to be a fairly recent phenomenon. As shown on the second chart on the bottom left, the last time there was as close a correlation was during the last bull market. Unfortunately for the Fed watchers, the correlation broke down when in 1969-70 the DJIA corrected while dollar liquidity kept on increasing.

Again, in the absence of a cogent argument identifying some forces in the market disrupting the relationship between liquidity and the stock market, the occasional and present correlation might just be statistical flukes. If liquidity does affect demand in the public auction of stocks, then perhaps the resolution to the monetary model problem lies in examining supply.

 

“My star is on the ascendant”

If increases in liquidity increase demand for stocks, the price of stocks should go up, as they have recently. The wild card that needs to be taken into consideration is the supply of stocks reaching the market. Ignoring the effects that new issues have, just a scant increase in sellers can greatly expand the supply of stocks available to soak up any conceivable amount of liquidity. An example is provided in the top chart left, a comparison of the DJIA to 10-day average volume (as measured by advancing plus declining volume) on the NYSE. The August to October 1973 rally saw the DJIA increase 15.5% during that time period while 10-day average volume increased 102%. Based on the price change alone, it would take 15% more money to move the market any given percentage. For example, to increase a $100 stock 1% would take a purchase at $101, an increase of $1 in absolute dollars. But, should that stock increase 15% from $100 to $115, a 1% further increase would require a purchase at $116.15. This represents an increase of $1.15 in absolute dollars and 15% more money than at the $100 level. Then, to compound that, with volume expanding 102%, it would take roughly 130% more money (2.02 x 1.15, about 2.32 times the original amount) to move the market any given percentage; therefore supply overwhelms demand.
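To make that arithmetic concrete, the following sketch works through the combined effect of the price rise and the volume expansion, using the figures quoted above; the exact product is about 2.32, which the text rounds to 130% more money:

    # Money needed to move the market, using the 1973 figures quoted above.
    price_multiplier = 1.15    # a 1% move in a $115 stock costs 15% more than in a $100 stock
    volume_multiplier = 2.02   # 10-day average volume up 102%

    combined = price_multiplier * volume_multiplier
    print(f"Combined multiplier: {combined:.2f}")             # ~2.32
    print(f"Extra money needed: {(combined - 1):.0%} more")   # ~132%, rounded to ~130% in the text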

The hypothesis that supply is being increased by holders turning into sellers is confirmed by the second chart on the left, titled Cumulative Volume, a running total of the NYSE daily up volume minus down volume, equivalent to cumulative daily volume breadth. From its peak of 341,790,000 on 12/11/72 (where Cumulative Volume = 0 on 1/1/70), the highest it could rise to in the October '73 rally was 207,080,000 on 10/12/73, nearly 40% lower than the December '72 high. As for the DJIA, its peak at 987.05 on 10/26/73 was 6.1% lower than the 1/11/73 peak of 1,051.70. Whereas, the 10-day average volume showed a marginal increase in the October '73 rally over the previous high in November of '72. This preponderance of selling going into the '73-74 bear market is illustrated by the top chart above comparing the 10-Day Average Volume to the Cumulative Volume.
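As a minimal sketch of how such a Cumulative Volume series is built (the up/down volume figures here are hypothetical; the actual NYSE data is not reproduced):

    def cumulative_volume(up_volume, down_volume):
        """Running total of daily up volume minus down volume -
        the cumulative daily volume breadth described above."""
        total, series = 0, []
        for up, down in zip(up_volume, down_volume):
            total += up - down
            series.append(total)
        return series

    # Hypothetical three days of NYSE up/down volume (thousands of shares).
    print(cumulative_volume([420, 150, 300], [180, 390, 260]))  # [240, 0, 40]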

As for the current market, the Cumulative Volume bottom chart above shows the tremendous strength in this market, and it has recently reached a new high. As long as the investors in this market remain holders, the forces of demand will continue to levitate the market. Although an allusion here will not be made to the "tulip-mania,"[23] another Low Country legend seems appropriate, that of the Little Dutch Boy with his finger in the dike (daily trading volume) holding back the rising waters (total market capitalization). What might cause the dam to burst remains to be seen, although rising interest rates are quite possible.[24] Whatever the cause, its effects on Cumulative Volume will help to gauge the flow of selling.

“That’s why I don’t care.”

In summation, as market valuations are determined by the daily market action, in which less than 1/400th of the total market capitalization actually trades hands, valuations can be driven to extremes in times of great demand for stock. With this amount of leverage on market valuations, it is difficult to argue that measures of valuation, such as the dividend yield, place any sort of upward limit on the stock market. The thin float on which market valuations rest is a double-edged sword. Should a small fraction of the holders, who on the average day outnumber active traders 399 to 1, decide to become sellers, they would overwhelm this thin float, driving prices down towards (and possibly past) the mean valuations.

Valuation models, such as the Dividend Expectation, can be very helpful in maintaining a perspective on the market. By their nature, however, valuation models used alone would result in investors getting out too early, missing out on the latter bull rallies when stocks get even more overvalued, and getting back in too soon, while stocks become even cheaper. For investors using valuation models, patience is not a virtue, it is a necessity. Alternatively, investors might look to the tape and the forces of supply and demand acting on the market for clues as to when the forces driving the market to extremes in valuation have been expended and the pendulum starts to swing back to the mean.

Footnotes

  1. From the song "I Don't Care," Harry O. Sutton (1905).

  2. E.g. John P. Hussman, Hussman Econometrics, May 3, 1995 (see also "Market Watch," Barron's, May 15, 1995, at 42).

  3. As investments are a form of savings, a store of wealth for future consumption, the author believes that real total rate of return provides the best measure of the increase or decrease in an asset's purchasing power, especially given the study period of 1926 through 1995, which includes periods of significant inflation and deflation.

  4. A cursory review of Federal Reserve data on households' securities as a percentage of total assets would indicate a similar 30-year cycle, which could be a causal factor in the valuations cycle, though time and space do not allow for a thorough review and analysis of said data.

  5. "The number E(X) is also called the expected value of X or the mean of X, and the terms expectation, expected value, and mean can be used interchangeably." Morris H. DeGroot, Probability and Statistics, page 144 (1975).

  6. Ibbotson Associates, SBBI 1995 Yearbook.

  7. Alan Abelson, "Up and Down Wall Street," Barron's, November 18, 1996, at 5.

  8. Harry O. Sutton, ibid.

  9. Patrick McGeehan, "Abreast of the Market," The Wall Street Journal, June 3, 1996, at C1; Alan Abelson, "Up and Down Wall Street," Barron's, June 17, 1996, at 5, May 27, 1996, at 3, November 27, 1995, at 3, and March 25, 1996, at 3; Andrew Bary, "Untested Waters," Barron's, November 23, 1996, at 17.

  10. Alan Abelson, "Up and Down Wall Street," Barron's, May 27, 1996, at 4.

  11. Abby Joseph Cohen interview, "Still Bullish," Barron's, October 14, 1996, at 26.

  12. Anthony E. Spare, Relative Dividend Yield, page 15 (1992).

  13. Anthony E. Spare interview, "A Dividend's Yield," Barron's, October 28, 1996, at 30; Andrew Bary, "World-Beater," Barron's, February 12, 1996, at 15.

  14. Kathryn M. Welling, "Boring Doubles," Barron's, June 10, 1996, at 22.

  15. Alan Abelson, "Up and Down Wall Street," Barron's, June 17, 1996, at 5.

  16. Ibid.

  17. "New York Stock Exchange Data Bank," Barron's, November 25, 1996, at MW106.

  18. Harry S. Dent, The Great Boom Ahead, page 34 (1993).

  19. Federal Reserve Bulletin, October 1994.

  20. Randall W. Forsyth, "Pumped Up," Barron's, March 18, 1996, at 15; John Mueller, "The Real Conspirators Behind High Gas Prices," The Wall Street Journal, May 8, 1996, at A14.

  21. "Federal Reserve Data Bank," Barron's; Federal Reserve Bulletin; Annual Statistical Digest, 1970/79 and 1980/89 editions, Federal Reserve System; Banking and Monetary Statistics, 1941-1970, Federal Reserve System.

  22. Harry O. Sutton, ibid. "I Don't Care" was made popular by Eva Tanguay, the "Girl Who Made Vaudeville Famous," who shocked audiences with her scanty costumes and risqué songs. See Volume 11, The New Encyclopaedia Britannica, 15th Ed., at 542.

  23. Alexandre Dumas, The Black Tulip, page ix (Girard ed. 1960).

  24. E.g. John P. Hussman, Hussman Econometrics, June 6, 1996 (see also "Market Watch," Barron's, June 17, 1996, at 44).

  25. Harry O. Sutton, ibid. Eva Tanguay lost her fortune during the stock market crash of 1929 and spent the last two decades of her life a bedridden recluse. The New Encyclopaedia Britannica, supra note 22.

Gregory J. Roden

Gregory Roden is a Trust Officer at Norwest Bank Minnesota, N.A. in Bloomington, MN. He received a law degree from South Texas College of Law in 1991 and a B.B.A. in Actuarial Science from the University of Wisconsin - Madison in 1979.

 

Return to Table of Contents


 

5: Market Noise. Can It Be Eliminated And At What Cost?


Jeremy J. A. du Plessis, CMT

Introduction

As the proverbial rustling of a candy wrapper during a Beethoven piano concerto distorts the music for the listener, so noise in market data distorts the analyst's perception of the trend in the price.

But what is market data noise? The nature of fluctuations about a mean value is a common experience and is often referred to as noise. Noise in data is the minor random price movement that distorts the underlying trend, making it difficult to detect the direction of the price at any time. However, this is rather a vague definition, as it does not help to define what is noise and what is significant price movement. What may be noise to one analyst is 'music' to another. Part of the answer lies in the analyst's analysis method and time horizon. Some Technical Analysis techniques rely on even the smallest price movement. Furthermore, the longer the time horizon, the greater the proportion of price movement that can be defined as noise. This, however, brings subjectivity into noise definition and adds to the complex job of defining it before it can be eliminated. But should we attempt to eliminate noise, or can we only reduce it?

Chart A shows a daily close line chart of BAA (British Airports Authority) from October 1993 to February 1996. The price changes every day: one day it is up, the next it is down. An inexperienced eye may have some difficulty in deciding the direction of the trend of BAA at any time. This uncertainty is caused by noise - those price movements, often against the prevailing trend, that tend to disguise the trend.

Technical Analysis is essentially to do with trend recognition, so anything that makes that task more difficult must be investigated and, if possible, eliminated. The elimination of market data noise raises many questions, the main one being that in order to eliminate it, it must be discretely defined and identified. Because identification of market noise is somewhat subjective, it is virtually impossible to discretely identify it for each instrument. In view of this, noise elimination must be considered impossible, making noise reduction a more realistic target to aim for.

This article will, therefore, investigate traditional methods of noise reduction as employed by Technical Analysts, and look at the advantages and disadvantages of each. The article cannot, however, cover the digital filtering processes employed by Electrical Engineers to filter noise. Whilst these may be applicable to price data, it is not the purpose of this article to become highly mathematical. The majority of Technical Analysts employ charts to determine price direction and wish to have a relatively simple method of reducing noise. For those readers who wish to investigate the route of signal processing, cybernetics, spectral analysis and digital filtering, however, several references are included at the end of this article.

Once some of the traditional methods employed by Technical Analysts have been discussed, the article will then look at a simple, innovative method of objectively removing noise from data, and will investigate whether the results warrant it as a method that non-mathematicians can employ.

Is Noise Bad?

Before we investigate noise reduction, it should be noted that not all Technical Analysts want to reduce noise. Elliotticians, for example, use even the smallest, seemingly random, price movement for their wave counts, and so prefer to use the raw price data and not data that has had noise reduced. The assumption in this article, however, is that noise is bad and, therefore, its brief is to investigate ways of reducing it. The article will show that there are significant benefits in reducing noise.

Elements of the Pricing System

Carlson describes the contamination that occurs in the course of signal transmission within an electrical communication system in terms of three effects: distortion, interference and noise. These contaminations are analogous to those found in market price movements, which cause certain unwanted and undesirable effects to take place. They alter the shape of the price pattern, making analysis more difficult. The three categories can be described as follows:

Distortion is the least important of the three and is due to imperfections in the pricing process. For example, some price recording systems use the mid-price - the price between the bid and offer - rather than the price at which a particular share or commodity has traded. This can lead to distortions because it is possible that no trading may have taken place at the mid-price quoted at the end of the trading period. The arguments for and against the use of mid-prices as opposed to traded prices are equally strong. Proponents of the mid-price system believe that the system prevents the occasional rogue traded price from being recorded as the last trade of the day. Whichever method one subscribes to, the result will be some distortion in the price, although consistent use of one or other method will lessen the effect.

Interference is contamination of the price by extraneous factors. The price recording system itself may be deficient. Some price recording systems are initiated by a manual process, where human error can have an effect on the data. Like distortion, interference can be eliminated by improving the recording system.

Noise, on the other hand, is random and unpredictable, and is caused by the pricing process - the interaction between thousands of buyers and sellers with differing and ever-changing perceptions about the value of the instrument that they are buying and selling. These changing perceptions can cause the price to move against the prevailing trend, often disguising it. What makes noise unique, however, is that it can never be completely eliminated - even in theory.

Traditional Ways in which Technical Analysts Reduce Noise

There are many ways by which Technical Analysts attempt to reduce noise. We will look at three important methods in depth - Moving Averages, Filtered Wave Charts and Point & Figure Charts. We will then comment on other methods, such as Gann angles, Speed/Resistance lines and Regression lines, without going into depth.

Moving Averages

One of the most common ways of reducing noise and exposing the trend is the use of a moving average. This can either be a simple arithmetic average or a weighted exponential average, but the function of each is the same - to average out the price movement, thereby reducing the noise.
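As a sketch of the two forms of average (a 20-day simple average, as in Chart B, would use n = 20; the factor-based exponential form reappears in the least squares method proposed later in this article):

    def simple_ma(prices, n):
        """n-day simple arithmetic moving average (undefined for the first n-1 days)."""
        return [sum(prices[i - n + 1:i + 1]) / n if i >= n - 1 else None
                for i in range(len(prices))]

    def exponential_ma(prices, f):
        """Exponential moving average with smoothing factor f (0 < f <= 1);
        f = 2 / (n + 1) corresponds to an n-day average."""
        ema = [prices[0]]
        for p in prices[1:]:
            ema.append(f * p + (1 - f) * ema[-1])
        return ema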

Chart B shows the price of BAA superimposed with a 20-day simple arithmetic moving average. Notice how the average eliminates some of the minor price movements and corrections, making it easier to see the direction of the price at any stage.

Chart C shows the price of BAA with a 20-day exponential moving average. The result is similar to that of the simple average. Because of its weighting to the latest data, however, the exponential average tends to turn more sharply than the simple average and is thus preferred by some analysts.

Let us now remove the price from the chart and replace it with the moving average alone. Chart D shows the 20-day exponential average on its own.

The noise in the data has certainly been reduced, making it easier to see the direction of the trend at any stage. But how do we know we have removed the right amount of noise? Perhaps we have removed too much - perhaps too little. Let's look at an extreme situation. Chart E shows a 200-day exponential average on its own. One can see that not only has noise been removed, but many of the trend changes have been removed as well.

So, moving averages can reduce noise, but how efficient are they in doing so, and what are the advantages and disadvantages?

At first observation, they are fairly good at reducing noise. They certainly do expose the underlying trend. The disadvantage is that the period selected for the moving average is subjective. In the example above, a 20-day average was chosen, but it could just as well have been a 10-day or 40-day or even a 200-day as shown in Chart E. Each period of average eliminates a different amount of the noise - or genuine price movement. The greater the moving average period, the more noise and trend are removed. The lower the period, the less noise and trend are removed, until a moving average period of 1, which is the original price itself, is used and no noise is removed at all.

But when does this process move out of the bounds of noise reduction and into the domain of price movement removal? A 200-day moving average removes most of the intra-month price movement and exposes the long-term trend of the market. The question of what period to use and how much noise should be removed continually plagues analysts, because there is a trade-off. The purpose of removing noise is to replace the original price line with a new noise-reduced line. In this case, it is the moving average line. But can we trade using this line instead of the price line, because that is the goal?

Observing Charts B & C, you will see that although the moving average exposes the underlying trend very effectively, there is a time lag. In other words, there is a lag of a number of days or weeks before the moving average indicates a change in trend. The greater the moving average period, the more noise/trend is removed and the greater the time lag.

Filtered Wave Lines

Another way to reduce noise is to replace the price with a series of straight lines. These lines, called Filtered Waves, are drawn from intermediate highs and lows, avoiding all the price movement between them. Unlike moving averages, filtered waves use the actual price at turning points, making them useful for trend line analysis. They are ideal for drawing trend lines and Gann angles, because the minor price movements are excluded without excluding the major movements and turning points.



If the intention is to eliminate noise between intermediate turning points, then filtered waves do the job very well. Once again, however, they are subjective, because the analyst must decide how much movement he wants to eliminate. Chart F shows a 10% filtered wave chart of BAA. This means that any movement from an intermediate high or low which is less than 10% is ignored. The result is a very effective chart, which removes much of the uncertainty in the trend.

Chart G shows a 10-point filtered wave chart. The calculation is similar to that of the 10% chart above, but instead of ignoring price movement which is less than 10% from an intermediate high or low, it ignores any which is less than 10 price units - points or cents or pence. In both cases, the actual price turning points are shown. This distinguishes the method from the moving average method. It is easy to see how the filtered wave method reduces noise. It replaces all the minor price movements between intermediate highs and lows with straight lines that go straight through the price movement. It is important to note, however, that the final straight line cannot be drawn until the price has reversed by the required amount. Some analysts draw the latest plot to the latest price, but this is not correct and can be very deceptive, because it looks like a reversal has taken place, when in fact it has not. This writer prefers to draw a horizontal line (see Chart F) from the last intermediate high or low until the latest price moves the required percentage or points away from it. There is, therefore, a built-in lag in filtered wave charts, in that the last plot can only be made once the price has moved by the filter (points or percentage) amount. This is obviously a disadvantage in one sense and an advantage in another. The disadvantage is that the chart is only up to date when the price reversal is complete, which makes it difficult for the analyst to complete his analysis. But in another sense, this is exactly what noise reduction is all about. We must assume that any price movement below the reversal percentage is noise; and, therefore, the analyst should be unconcerned by this movement until it breaks the reversal barrier.
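A minimal sketch of the percentage-filter logic just described; the point filter is identical except that the reversal test uses a fixed number of price units instead of a percentage:

    def filtered_wave_turns(prices, pct):
        """Indices of confirmed intermediate highs and lows, ignoring any
        reversal smaller than pct percent from the last running extreme."""
        turns = []
        extreme_i = 0      # index of the current running extreme
        direction = 0      # +1 in an upswing, -1 in a downswing, 0 unknown
        for i, p in enumerate(prices):
            extreme = prices[extreme_i]
            if direction >= 0 and p <= extreme * (1 - pct / 100):
                turns.append(extreme_i)        # high confirmed; downswing begins
                extreme_i, direction = i, -1
            elif direction <= 0 and p >= extreme * (1 + pct / 100):
                turns.append(extreme_i)        # low confirmed; upswing begins
                extreme_i, direction = i, 1
            elif (direction >= 0 and p > extreme) or (direction < 0 and p < extreme):
                extreme_i = i                  # new running extreme, not yet confirmed
            # the final leg is only drawn once the price has reversed by
            # the filter amount, exactly as described in the text above
        return turns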

It has been said that filtered wave charts are no good for trading, because they are drawn with the benefit of hindsight. This is not entirely true. If they are constructed as described above, then it is possible to trade using the filtered wave chart by waiting for the chart to turn. For example, Chart F shows the price, and hence the filtered wave line, reached a high in July at 535 and then turned down, reaching a low of 465.5 in October. Since then, although the price has risen, it has not risen by more than 10% from the low point of 465.5, so the trader is still short, waiting to go long if the price does rise by more than 10% from the low. Of course, if the price falls below the 465.5 low, a new line from the 535 high will be drawn to the new low. (The percentages used here are simply to illustrate a possible trading strategy and are by no means optimised for the chart in question.)

So, what are the advantages of filtered wave charts? They are simple to read and identify the trend exceptionally well. They can be used for objective trading by selecting a filter percentage (or points) and waiting for the waves to turn. But there is another area where wave charts are better than moving averages, and that is for trend lines, speed/resistance lines, Gann angles, retracements - in fact, any straight-line analytical technique. The reason for this is that, unlike moving averages which alter the original turning points, filtered wave charts use the actual data for turning points.

The disadvantage, however, is the fact that the filtered wave does use straight lines, which may not be the best way to reduce noise. Chart H shows a 10% filtered wave chart of BAA again, but this time it has been superimposed over the actual close price. Notice how, unlike moving averages, the straight lines go straight through the price. There is no offset to the right, which occurs with moving averages. The straight line turns on the intermediate highs and lows and it, therefore, appears that there is no lag, but this is deceiving. Remember that the straight lines can only be drawn once the price has reversed by the appropriate amount - 10% from the high or low in this example - so there is a considerable lag.

Point & Figure Charts

Because of their construction, Point & Figure charts have an in-built noise reduction capability. This is manifested in two ways: by altering the box size as well as the box reversal. Before drawing a Point & Figure chart, the analyst must decide how many price units are to be assigned to each box. The greater the box size, the greater the amount of noise removed. This is because the price must move by a larger amount before the chart changes. The reversal size also plays a part in eliminating noise by ignoring retracements which are less than the value of the reversal size.

Let us look at the example below to see how this works in practice.

A 10 x 3 Point & Figure chart is one where the box size or value is 10 price units and the reversal size is 3 boxes or 30 price units. To see how the filtering works, let us assume that the price is in an uptrend at 100, signified by the column of X's. In order to plot the next up-X, the price must move to 110. If it reaches 109, no plot will be made. On the reversal side, the price must fall by 3 boxes or 30 price units to 70. If it falls to 71, a fall of 29 points, the reversal will not be complete and the chart will not change. So, this means that the price can fluctuate between 71 and 109 without it affecting the chart. Assume, however, that the price does fall to 70 and a column of O's is plotted, signifying a downtrend. Once this happens, the filtering mechanism changes. In order for a new O to be plotted, the price must fall by 10 price units to 60, but for a new column of X's to be plotted, the price must rise by 3 boxes, to 100 again. So once again we have our range of prices, 61 to 99, between which there will be no change to the chart. In this way, noise or minor price changes are removed from the chart, but not the data. The original data remains unaltered. Altering the box size and the reversal alters the amount of noise which will be removed from the chart. Although this method is a similar technique to that of the Filtered Wave chart, it has two important differences. The first is that the filter in Point & Figure charts is different in the direction of the trend from that against the trend. As we saw in the example above, the filter in the direction of the trend is 10 points, but against the trend it is 30 points. The second difference is that Point & Figure charts take no account of time. Although this does not appear to have any significance in the noise reduction process, some analysts prefer not to use charts without a time dimension.
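A sketch of that column logic in code, using the 10 x 3 example just described (box and reversal values are parameters):

    def point_and_figure(prices, box=10, reversal=3):
        """Build P&F columns as lists of box levels, alternating X (rising)
        and O (falling) columns; a new column needs reversal * box against
        the trend, while continuation needs only one box."""
        cols = [[prices[0] // box * box]]   # seed the first column on a box level
        rising = True
        for p in prices[1:]:
            last = cols[-1][-1]
            if rising:
                while p >= last + box:               # extend the X column upward
                    last += box
                    cols[-1].append(last)
                if p <= last - reversal * box:       # e.g. 100 -> 70 in the text
                    cols.append([last - box])        # start an O column one box down
                    rising = False
                    while p <= cols[-1][-1] - box:
                        cols[-1].append(cols[-1][-1] - box)
            else:
                while p <= last - box:               # extend the O column downward
                    last -= box
                    cols[-1].append(last)
                if p >= last + reversal * box:       # e.g. 70 -> 100 in the text
                    cols.append([last + box])        # start an X column one box up
                    rising = True
                    while p >= cols[-1][-1] + box:
                        cols[-1].append(cols[-1][-1] + box)
        return cols

    # The worked example above: no change between 71 and 109, reversal at 70.
    print(point_and_figure([100, 109, 71, 70]))  # [[100], [90, 80, 70]]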

Chart HH shows a 2 x 3 Point & Figure chart of BAA. Each X and O represents 2 pence, and the price must reverse by 3 boxes, or 6 pence, in order to change columns. The chart should be compared and contrasted with the line and filtered wave charts of BAA shown earlier.

We have discussed three traditional ways of reducing noise, but we need to acknowledge other methods which are used by some Technical Analysts.

Regression Analysis

Noise distortions can be removed from a price series by using Ordinary Least Squares Regression Analysis. With this method, a series of price data is replaced by a mathematical best-fit straight line, which best represents the price action. The line and its slope are directly influenced by the data under consideration, so that choosing the section of the price data to be analysed has a major effect on the regression line.

The regression calculation needs to be re-run frequently as new data is added to the end of the time series. The angle of the regression line then changes accordingly. The analyst will choose a starting point in the time series and continue drawing regression lines on a regular basis so that the latest direction of the price can be determined.

Chart HHH of BAA shows a number of regression lines that have been drawn each month from a common starting point of 7/3/95. Each regression line shows the trend of the share price from 7/3/95 to the end of the month in question. The fan-like appearance shows that the steepness of the trend has decreased over the months up to the current date. An analyst using this method will use the angle of the regression line to assist in determining the underlying trend.
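A sketch of this anchored-regression 'fan,' assuming numpy is available; the anchor index and end-point indices stand in for the 7/3/95 start date and the successive month-ends:

    import numpy as np

    def regression_fan(prices, anchor, end_points):
        """One least-squares line per end point, each fitted from the same
        fixed anchor; the changing slopes trace out the fan in Chart HHH."""
        lines = []
        for end in end_points:
            x = np.arange(anchor, end + 1)
            slope, intercept = np.polyfit(x, prices[anchor:end + 1], 1)
            lines.append((slope, intercept))
        return lines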

Gann Angles, 1/3-2/3 Speed Resistance Lines

As stated earlier, Technical Analysis is to do with trend recognition, so if a method can be found that identifies the trend without being ‘confused’ by noise, then the method must be regarded as a noise reduction technique.

Gann Angles are objective angles, drawn from a high or low point at a number of price/time ratios, which objectively identify the trend and the points at which the trend can be expected to change. In this way Gann analysts can ignore much of the price movement unless one of the Gann angle lines is intersected.

Although Speed/Resistance lines have a different basis for their construction, they perform a similar trend recognition task. They are also drawn from low or high points and indicate the levels at which the price can be expected to be supported or resisted.

A few traditional noise reduction techniques employed by Technical Analysts have now been discussed. Although this writer is aware that there may be many more ways in which the noise in the data can be reduced, the methods discussed so far are sufficient to give some background to this complex problem before proposing a novel method. Before doing so, however, let us now look at how these traditional methods perform when traditional Technical Analysis techniques are employed.

Applying Traditional Analysis Techniques to Noise Reduced Charts

Having identified a number of noise reduction techniques, we now need to apply Technical Analysis techniques such as trend lines, moving averages and oscillators to charts which have been subjected to noise reduction.

Trend lines on Noise-Reduced Charts

This writer is sceptical about the value of using trend lines, or any straight-line analysis technique, on moving averages. This is because moving averages no longer have the original data. There are no original turning points, hence there is some doubt as to whether trend lines, 1/3-2/3 speed resistance lines, Gann lines, etc. can be used on these charts. However, there are those analysts who do use straight lines on derived charts with no hesitation.

On the other hand, the filtered wave method of reducing noise makes the charts ideal for any straight-line analytical technique. This is because the turning points are the original turning points, with only the data between the turning points being altered.

Chart I shows a 7.5% wave chart of British Steel (BS) with the original price superimposed. A set of Gann lines is drawn from the November 1993 low. Notice how the Gann 1 x 1 (45°) line offers support to the uptrend of the filtered wave line and the price. This is because the filtered wave uses original price turning points. This would not happen with a moving average, because the data is altered by calculation and the position of the turning points is altered.

Point & Figure charts lend themselves to trend line analysis, particularly 45° lines, because they are constructed with boxes. The diagonal line joining the corner of one box to the next produces a 45° line. These 45° lines indicate the direction of the bull or bear trend and provide support or resistance to the trend.

Moving Averages On Noise-Reduced Charts

Moving averages are often used to generate buy and sell signals. When the price crosses above the moving average, a buy signal is generated; when it crosses below, a sell signal is generated. This article is not required to comment on the effectiveness of this technique, but simply to comment on whether moving averages can be used on data that has had the noise reduced.

Chart J shows the original price of Guinness (GUIN) with a 50-day simple moving average. Chart K shows the 10% wave chart of GUIN with a 50-day average. At first glance, the wave chart with the moving average performs far better than the actual price with the moving average. The crossover signals are earlier and there are no whipsaws, but this is an extremely dangerous observation. There is a built-in lag in the wave chart in that it does not turn until the reversal has been reached. So it looks as if the averages have crossed on time, but they have not, because of the reversal delay. This really disqualifies filtered wave charts from being used with moving averages.

Chart L shows a 20-day exponential average of GUIN - in effect a noise-reduced chart - with a 50-day average superimposed. The main line is smoother because the noise has been reduced, so there are far fewer false signals and whipsaws. There is of course a trade-off for this, and that is the signal lag. It is, however, not a constant lag; for example, there is an 8-day lag in the sell signal in March 1993, but only a 4-day lag in the buy signal in March 1995.

Point & Figure charts do not lend themselves to moving average crossover analysis.

Oscillators

The purpose of reducing noise is to make charts more readable and more reliable in giving signals, but can the noise-reduced data be used to draw oscillators?

Chart L1 shows the price of GUIN. Chart L2 shows a standard 14-day RSI (Relative Strength Index) of the price. Chart L3 shows a 14-day RSI of the smoothed data (the 20-day exponential average of the price).
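A sketch of Wilder's RSI calculation; feeding it the raw closes gives the standard RSI of Chart L2, while feeding it the 20-day exponential average gives the noise-reduced RSI of Chart L3:

    def wilder_rsi(prices, n=14):
        """Wilder's n-day Relative Strength Index of any price series."""
        gains, losses = [], []
        for prev, cur in zip(prices, prices[1:]):
            change = cur - prev
            gains.append(max(change, 0.0))
            losses.append(max(-change, 0.0))
        avg_gain = sum(gains[:n]) / n
        avg_loss = sum(losses[:n]) / n
        rsi = []
        for g, l in zip(gains[n:], losses[n:]):
            avg_gain = (avg_gain * (n - 1) + g) / n   # Wilder's smoothing
            avg_loss = (avg_loss * (n - 1) + l) / n
            rs = avg_gain / avg_loss if avg_loss else float("inf")
            rsi.append(100 - 100 / (1 + rs))
        return rsi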


For some, the original RSI is quite difficult to read, because of the sharp turns and vigorous oscillations. Chart L3 is far smoother - obviously because it is an RSI of the 20-day noise-reduced data. But does the noise-reduced RSI give better signals than the original?

  1. Certainly the smoothness of the noise-reduced RSI makes it easier to read. The turns are easier to judge, because there is no noise creating uncertainty about them.

  2. RSI’s are used for divergence and this is where there is an improvement with the noise-reduced RSI. The original RSI shows no divergence during the November 1993 to December 1994 top, nor does it show it during mid 1995. The noise-reduced RSI, on the other hand, does show these divergences, albeit marginally.

  3. The greater extremes in the noise-reduced chart make it easier to detect overbought and oversold conditions.

  4. As with all noise reduction, however, there is a considerable lag in the turning points.

In order to show that drawing an RSI of a 20-day moving average is not the same as drawing a 20-day moving average of a 14-day RSI, Chart L4 shows a 20-day average of the 14-day RSI. This shows a smoothed RSI of the original data instead of an RSI of smoothed data, which is really what we are aiming for.

This writer does not see the object of using data from filtered wave charts to draw oscillators, because only the turning points are original. The rest of the data is calculated to create the straight lines. Hence, no attempt has been made to draw an RSI of a filtered wave chart.

It is, of course, possible to draw Point & Figure charts of the RSI, but this does not reduce the noise in the price data, only the noise in the RSI, which is similar to drawing a moving average of an RSI as described above.

The three traditional methods of reducing noise discussed above, namely moving averages, filtered wave charts and Point & Figure charts, suffer from a number of deficiencies, as already explained. Although filtered wave charts are an excellent way of reducing noise for clearer straight-line analysis, they cannot be used for other forms of analysis and must therefore be discarded. Of the three, therefore, moving averages are the most flexible, but the problem is the determination of a suitable moving average period. How does the analyst determine the period without being too subjective?

The purpose of reducing noise is to replace the original data with a new set that has had the noise reduced. This new data set must perform like the original in every way, with the exception that any noise should be reduced, in the hope that this increases the reliability of the signals. An objective way of doing this is, therefore, required and is proposed below.

The Proposal

  1. Take a stream of data and smooth it with a moving average.

  2. Calculate the trend of the smoothed data (the moving average) at every point.

  3. Once calculated, the trend can be extrapolated to forecast the next period’s data point.

  4. This forecast is then compared with the actual price for the next period and the difference taken.

  5. Repeat this process for every datapoint throughout the data stream and square the differences to remove negatives.

  6. Sum the squared differences to obtain a total of the squared differences.

The smaller this total is, the more effective the moving average is in forecasting the next period's price. In other words, the moving average that produces this least squared error is the best for replacing the original data. Noise reduction, therefore, becomes swift, simple and objective.
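A minimal sketch of steps 1 to 6; the article does not spell out how the trend of the smoothed line is calculated, so the one-step slope used here for the extrapolation is an assumption:

    def squared_forecast_error(prices, f):
        """Smooth with an exponential average (factor f), extrapolate its
        trend one period ahead, and sum the squared forecast errors."""
        prev_ema = ema = prices[0]
        total = 0.0
        for p in prices[1:]:
            forecast = ema + (ema - prev_ema)   # step 3: extend the trend
            total += (p - forecast) ** 2        # steps 4 and 5
            prev_ema, ema = ema, f * p + (1 - f) * ema
        return total                            # step 6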

The Model

The model of the proposal described above uses an exponential moving average for two reasons. Firstly, it is a weighted average and so is affected to a greater extent by the latest prices. Secondly, and perhaps more importantly, it is possible to have fractional periods. Whereas a simple arithmetic average can only have whole numbers as the period of the average, exponential averages are calculated using a factor which translates into a period by the following formula:

n = 2/f - 1

where n is the moving average period and f is the exponential smoothing factor. In other words, a smoothing factor of 0.3 equates to a moving average of 5.67 days. The mathematical model is defined as follows (see also Appendix B): the factor f is chosen to minimise the sum of the squared forecast errors,

E(f) = Σ (P(t+1) - F(t+1))²

summed over every data point t, where P(t+1) is the actual price and F(t+1) is the forecast obtained by extrapolating the trend of the exponentially smoothed data, as set out in steps 1 to 6 above.

In practice, it has been found that it is best to start with a factor of 0.5 and test factors either side of this until the least squared error is found. A computer performs this task very quickly. Once the factor is found, an exponential average using this factor is drawn to replace the original price.
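The search itself can be a simple scan of candidate factors; a sketch, reusing squared_forecast_error from the block above:

    def best_factor(prices, lo=0.05, hi=1.0, step=0.001):
        """Return the smoothing factor giving the least squared error."""
        factors = [lo + i * step for i in range(round((hi - lo) / step) + 1)]
        return min(factors, key=lambda f: squared_forecast_error(prices, f))

The chosen factor then defines the exponential average that replaces the original price; re-running the search on that smoothed series gives the 2nd reduction factor, and so on.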

Now that we have the model, let us look at some examples to see whether this method can be relied on to give superior results. For consistency, we will first look at BAA and then at some wider markets.

For comparison, chart R1 shows the price of BAA with a new noise-reduced line in chart R2. As explained above, this line is an exponential average using a factor determined by the least squares difference method. In this case the factor was found to be 0.527, which equates to a moving average period of 2.795 days. Comparing the two charts, it is easy to see the effect that it has had on the data. Many of the minor price oscillations have been removed without destroying the trend; even the short-term trends are still intact. It seems that the quality of the data has been improved by this reduction of noise. The line is smoother and less erratic and there is generally no lag. Vertical lines have been drawn from various highs and lows to show this.

We now need to look at whether noise can be removed again from data which has already been 'noise-reduced' and, if so, whether it continues to improve the data.

Chart R3 shows the result of applying the noise reduction model to the data a second time. This is done by applying the noise-reduction model to the data which has already had the noise reduced once. The 2nd reduction factor comes out at 0.817, or 1.447 days. This means the chart is constructed by taking a 1.447-day exponential average of the 2.795-day exponential average of the close price. Once again, the appearance of the chart is improved. The minor one-day corrections have been removed, but the important minor trends are still in place. Once again, however, the turning point lag has not changed, as seen by the vertical lines. Chart R4 shows the result of applying the noise reduction a third time.

It is significant to note that the chart patterns appear much clearer.

For example:

  1. There are two double tops, one in January/February 1994 and the other in July/September 1995.

  2. There is a head & shoulders pattern in October/November 1994.

Although these patterns are visible on the original price line in chart R1, there is no doubt that they are easier to pick out in the noise-reduced charts. One may continue removing noise in this way. Each time, the noise reduction calculation is performed on the previous set of noise-reduced data. At some stage, however, additional reductions will not remove additional noise. The factors produced for the three noise-reduced BAA charts are as follows:

  • 1st reduction factor of 0.527 equates to a moving average of 2.795 days.

  • 2nd reduction factor of 0.817 equates to a moving average of 1.447 days.

  • 3rd reduction factor of 0.921 equates to a moving average of 1.171 days.

Therefore, in order to obtain chart R4, a 1.171-day exponential average is calculated from a 1.447-day exponential average of a 2.795-day exponential average of the original price. This allows the analyst some subjectivity in removing noise according to his time horizon. After each noise reduction, the chart can be inspected to see whether sufficient noise has been removed and the appearance of the chart has been improved.

Let us look at the results on other markets to see whether the same amount of success can be achieved.

The charts above and on the following page show:

S1 Dow original data
S2 Dow with one reduction
S3 Dow with two reductions
S4 Dow with three reductions
S5 Dow with four reductions
S6 Dow with five reductions
S7 Dow with ten reductions
S8 Dow with twenty reductions


Table 1, right, shows the factors required to produce the reductions. Notice how, as the noise is reduced, the factor required to reduce further noise becomes larger, meaning the moving average is becoming smaller. The first reduction requires a 2.663-day moving average, whereas the 5th reduction only requires a 1.026-day average. The reason for this is that the line is becoming smoother, so a shorter-period moving average is required to reduce further noise. Notice too that the factor increases sharply up to the 5th reduction, when it reaches 0.987. It then increases slowly over the next 5 reductions until it reaches 1.000 and remains at that level thereafter. A factor of 1.000 indicates a moving average of 1 and that no further smoothing is undertaken.

The charts show that none of the reductions have lost any of the essential patterns in the price, which determine the trend, and that there is virtually no lag.

Table 2 lists the number of days of lag for every reduction on the Dow, taking the two turning points in January 1994 and December 1994.

In order to see how the reduction factors are distributed for a number of similar shares, two factors were extracted for each constituent of the FT-SE 100 index. That is to say, the model was used on the original price to produce the 1st factor, and then on the smoothed data after applying the 1st factor to produce the 2nd factor. Table X in Appendix A shows the result.

The 100 constituent shares are listed in ascending order of the first factor. It should be noted that the second factor is listed in the ascending order of the first factor and is, therefore, randomly distributed. The third column shows the difference between the two factors.

Fig. 1 on the next page shows the distribution of the 1st reduction factor for all 100 constituents, from the low of 0.468 to the high of 0.673. This should be compared and contrasted with Fig. 2, which shows the distribution of the 2nd reduction factor for the same 100 stocks. The factors do not increase evenly from stock #1 to stock #100, like the 1st reduction chart, although the trend is generally up, as shown by the superimposed trend line. This means that, just because a stock has a higher 1st factor than another stock, it does not follow that its 2nd factor will be higher as well, although the tendency is that it will be. This is confirmed by the varying differences between the 1st and 2nd factors shown in Fig. 3. The superimposed trend line, however, shows that the differences generally decrease as the 1st factor increases. The maximum difference between the 1st and 2nd factor is 0.348 and the minimum is 0.195. Table 3 shows the statistical analysis of the factors and the difference between them.

Let us start by looking at the analysis of the first factor. The median is 0.578 and the mode is 0.573. The mean is almost the same value as the median, differing by only 0.00033. This indicates that the factors are all concentrated around the value of 0.578. In fact, the range between the highest and lowest is only 0.205. This shows a high level of consistency amongst these data series, which are related to one another because they are the top 100 shares in the U.K. The fact that the factors are so similar indicates that the stocks have a similar amount of noise.

The 95% confidence interval is 0.00691, which means that we can be 95% confident that the factor will occur between 0.5705 and 0.5846.

Before looking at the 2nd factor, let us look at the 1st Factor Frequency Distribution Histogram below. As expected, it is bell-shaped and slightly skewed to the left, with a negative skewness factor of -0.0203.

This should be compared to the Frequency Distribution Histogram of the 2nd factor above.

It is also a bell-shaped curve, and negatively skewed with a factor of -0.16432. The statistical analysis of the 2nd factor shows a higher level of consistency than the 1st. The range, at 0.09, is far smaller. The standard deviation is also much smaller, at 0.0183. The mean and median are almost identical. The 95% confidence level is half that of the first factor. Also shown below is the Frequency Distribution Histogram of the difference between the two factors. The statistical analysis and the histogram show that the difference between the two factors is fairly constant.


Having looked at the consistency of the model across similar instruments, such as the FT-SE 100 constituents, we need to see how it performs on other markets. Two reductions were conducted on a variety of markets. Table 3 shows the 1st and 2nd reduction factors obtained for World Indices, a selection of UK equities, UK Futures, US Futures and Commodities, and a selection of intraday hourly data. Notice that the Dow and S&P 500 are the same.

The few UK market indices provide an interesting insight into the noise reduction process. The factor for the FT-SE 100 index is lowest, next is the FT-SE 350, then the FT-SE 250, then the FT-SE Small Cap index. This may be explained in terms of the smoothness of the data. Charts S9, S10, S11 and S12 show the original data for the FT-SE Small Cap, FT-SE 250, FT-SE 350 and FT-SE 100 indices respectively. By simple inspection it is easy to see that the FT-SE Small Cap is the smoothest, the next is the FT-SE 250, then the FT-SE 350, and finally the FT-SE 100 is the noisiest. This is no surprise, because we have already seen that the second reduction factor, which is obtained from smoother data, is always higher than the first. Data smoothness can also be measured in terms of volatility, as measured by the annualised standard deviation of each data set.

Therefore, the smoother, less noisy and less volatile the data, the higher the reduction factor.

Under the heading UK equities, although there is a mix of FT-SE 100 constituents and lesser shares, it is impossible to say which is which by simply looking at the factors. In fact, the first 16 on the list are constituents and the rest are not. This means that the capitalisation and tradability of the share has no visible effect on the reduction factors.

The next heading in the table is FT-SE 100 Futures. Various contracts were investigated. The factors are slightly higher than those for shares and spot indices.

Various US Futures contracts were also investigated. As with the FT-SE 100 future, the US futures factors are generally higher.

Because the least squares method can be used on any time series - daily and weekly as well as intraday data - a number of reductions were performed on various intraday data sets. The factors for hourly equities - Allied Domecq, ASDA, British Airports, British Airways, British Telecom and Dixons - are all lower than the equivalent daily factors, indicating a greater amount of noise in hourly data. The hourly Dow and FT-SE 100 factors are also lower than the daily factors.

So, what are the advantages and disadvantages of the least squares method? Is it better than ordinary moving averages or filtered wave charts?

Let us compare the method with ordinary simple moving averages.

  1. The least squares method allows for fractional periods, which simple moving averages do not. Even if exponential averages were used subjectively, the analyst would not know what fractional period to use.

  2. The least squares method is objective. The exponential factor is obtained by using back history to calculate the smoothing factor, whereas with ordinary moving averages, the analyst must decide on the moving average period by inspection. However, the least squares method does allow the analyst to be subjective, by re-running the noise reduction algorithm on the already noise-reduced data until such time as he is satisfied with the resultant chart, or he can keep running it until the reduction factor equals 1.0.

  3. There is less lag with the least squares method, especially if more than one level of reduction is undertaken. In order to remove additional noise with ordinary averages, the period of the average is increased, whereas with the least squares method, additional noise is removed by applying a varying short-term average to data which has already had its noise reduced.

Applying Traditional Techniques

Having proposed and discussed the least squares method, we now need to see whether traditional Technical Analysis techniques can be applied to the noise-reduced data.

Moving Averages

Chart T1 shows the supermarket chain ASDA with a 20-day moving average. The 20-day average tracks the price well, with a number of false penetrations. Chart T2 shows a 1st noise reduction of ASDA with a 20-day average. Remember that this is a 20-day average of the noise-reduced data and not of the original data. The result is that the chart signals are improved. The line is smoother and some of the minor whipsaws are removed. However, the 'buy' and 'sell' signals given by the moving average still occur on the same day. Charts T3 & T4 show the result of imposing a 20-day moving average on the 2nd and 3rd noise reductions respectively. As more noise is removed, the false signals disappear completely, albeit at the expense of a slight signal lag. This is because the noise reduction model reduces the range of the price, so sharp movements are 'rounded off.' For many, these more reliable signals are of great benefit, because of the additional confidence obtained by avoiding false signals.

Oscillators

Welles Wilder's RSI oscillator is one of the world's most popular momentum indicators, so let us look at whether its readability can be increased. Chart U1 shows Guinness (GUIN) with an original 14-day RSI (Chart U2), right. Chart U3 shows a 14-day RSI of data from the 1st noise reduction, and Chart U4 shows a 14-day RSI of data from the 2nd noise reduction.

The first point to notice is that there is no lag with either the 1st or 2nd noise-reduced RSI lines. Furthermore, none of the original RSI pattern is lost. The essential RSI patterns and trends, although smoother, are still visible. But what is important is that the divergences are still there, and in most cases they are improved. Look at the lower price lows that occurred in December, January and March, and the higher lows in the original RSI (Chart U2), marked with line AB. All the noise-reduced RSI's show this divergence as well, but it is only marginally clearer.

However, the important price high in January 1994 hardly shows any divergence on the original RSI line, marked with line CD. The noise-reduced chart U3, on the other hand, shows it very clearly, and it is shown even more decisively by the 2nd noise-reduced line, chart U4.

The price high in September 1995 creates a problem. Chart U2, the original RSI, shows confirmation rather than divergence - marked with line EF. Chart U3, the 1st noise-reduced RSI, shows marginal divergence, but the 2nd noise-reduced RSI, chart U4, shows divergence very clearly.

The next aspect to note about the noise-reduced RSI's is that they extend to greater extremes, making it much easier to pick the traditional turns above 70 and below 30. The range of the original RSI is from 17 to 86, the 1st reduction is from 10 to 94 and the 2nd reduction is from 7 to 95.

It is important to note that an RSI drawn using noise-reduced data is not the same as drawing a moving average on original RSI data. Charts U5 and U6 show a 3-day and a 6-day exponential average of the 14-day RSI. In both cases the range is reduced rather than increased. Although some of the divergences show, the important high in September 1995 is shown as a confirmation in both charts.

Volume Indicators

Can noise reduction improve volume-based indicators like Granville's On-Balance Volume (OBV)?

Chart V1 shows the price of United Biscuits (UBIS) with a regular OBV shown in Chart V2. The uptrend of the OBV chart shows general accumulation of the share over the period under consideration, but generally during this time the price was in a downtrend. Particularly disturbing is the period between July 1995 and November 1995, when the OBV showed strong accumulation during a strong price decline - normally a bullish analysis. Chart V3 shows an OBV drawn using 1st stage noise-reduced data. This is done by reducing the noise in the price data and then using the noise-reduced data to determine whether the noise-reduced price is up or down from the previous day. The smoother data will result in more consecutive days in the same direction. The resultant OBV chart is entirely different. It shows marginal accumulation between July 1994 and the beginning of 1995, during which time there was a marginal improvement in the price. However, the noise-reduced OBV started to decline sharply during 1995, in line with the actual price. There is a slight improvement in the behaviour of the OBV when it is drawn using the 2nd stage noise-reduced data, chart V4.
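A sketch of the OBV construction; passing the noise-reduced price series in place of the raw closes gives the variant shown in Chart V3:

    def on_balance_volume(prices, volumes):
        """Granville's OBV: add the day's volume when the close is up from
        the previous day, subtract it when down, carry it when unchanged."""
        obv, series = 0, []
        for i in range(1, len(prices)):
            if prices[i] > prices[i - 1]:
                obv += volumes[i]
            elif prices[i] < prices[i - 1]:
                obv -= volumes[i]
            series.append(obv)
        return series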

We have looked at a number of examples of noise-reduced data using the least squares technique. It is also important to see if the reduction factor changes over time. Table 5 shows the 1st and 2nd reduction factors for 600 days of the FT-SE 100 and Dow indices, taken at 6-monthly intervals from December 1984 to December 1995.

The tables show that the factors over a ten-year period are consistent for both the Dow and the FT-SE 100 index. Looking at the 1st factors first, we can see that the 1st factor for the FT-SE 100 has the same mean, median and mode of 0.527, with a very low standard deviation of 0.00191 and a confidence level of 0.00869. The range is only 0.05. The 1st factor for the Dow is marginally less consistent. Its mean, median and mode are each different. Its standard deviation and 95% confidence level are both higher than those of the FT-SE 100.

The consistency measures show an almost identical consistency for the 2nd factors of the Dow and FT-SE 100. Although their means are different, their standard deviations and confidence levels are similar.

This consistency of both factors over the 10-year period includes factors obtained during the sharp correction in 1987. Notice that when the factor is obtained from data which includes the 1987 correction period, the factor is lower, indicating that a greater moving average period is required to smooth the data. Considering the sharpness of the correction, however, it is remarkable that the difference is so slight.

Although the statistics show that the reduction factors are fairly constant over time, the amount of variance that is present is to be expected, as the period under consideration has a number of completely different trends - from sharp corrections, to long consolidations, to long uptrends.

Conclusion

The assumption behind this article is that noise is the enemy of the Technical Analyst. On the other hand, it has been acknowledged that some Technical Analysis disciplines, such as Elliott Wave Analysis, do not favour tampering with the data to remove noise, because ‘noise’ is important to the analysis method. It has been shown that it is difficult to eliminate noise from data if the decision as to what constitutes noise is subjective. If noise cannot be defined in absolute terms, it cannot be eliminated. It has, however, been shown above that noise can be reduced to such an extent that it can significantly improve trading signals. Traditional methods of noise reduction, namely moving averages, filtered waves and point & figure charts, were discussed and evaluated. Although they are all able to reduce noise, it is felt that they are too subjective in their operation.

The Least Squares Noise Reduction Method proposed and evaluated in this article does give better results than these purely subjective methods. Part of the reason is that there is a mixture of objectivity as well as subjectivity in the technique. The ‘run’ through the Model to achieve what has been called the first noise reduction factor is objective. Through calculation, the Model produces an exponential factor, which is then used to draw a new smoothed line. The analyst is then able to inspect the new line according to individual subjective criteria and has the opportunity of deciding whether sufficient noise has been removed from the data. If so, the line may then be used to replace the original data, and normal Technical Analytical techniques may be employed as if it were the original data itself. If not, more noise may be removed using the objective technique again and again until the analyst is satisfied with the result.
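The Model itself is specified earlier in this article; purely to illustrate the loop just described, the following rough Python sketch rests on two assumptions of mine: that the one-step forecast of each price is the previous exponentially smoothed value, and that the factor is found by a simple grid search over the normalised squared forecast error (see footnote 5). The names and the grid step are illustrative, not the author's.

```python
import numpy as np

def exp_smooth(prices: np.ndarray, f: float) -> np.ndarray:
    """Exponential smoothing: S[t] = S[t-1] + f * (P[t] - S[t-1])."""
    s = np.empty(len(prices))
    s[0] = prices[0]
    for t in range(1, len(prices)):
        s[t] = s[t - 1] + f * (prices[t] - s[t - 1])
    return s

def best_factor(prices: np.ndarray) -> float:
    """Grid-search the factor that minimises the sum of normalised
    squared one-step forecast errors (forecast = prior smoothed value)."""
    best_f, best_err = 0.01, np.inf
    for f in np.arange(0.01, 1.001, 0.01):
        s = exp_smooth(prices, f)
        err = np.sum(((prices[1:] - s[:-1]) / prices[1:]) ** 2)
        if err < best_err:
            best_f, best_err = f, err
    return best_f

def reduce_noise(prices: np.ndarray, max_runs: int = 3):
    """One 'run' per pass: find the factor, smooth, and repeat until
    the analyst is satisfied or the factor reaches 1.0 (smoothing
    with f = 1 returns the series unchanged)."""
    series, factors = prices.astype(float), []
    for _ in range(max_runs):
        f = best_factor(series)
        factors.append(f)
        if f >= 1.0:
            break
        series = exp_smooth(series, f)
    return series, factors
```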

It has been shown, however, that there is an effective limit to the number of times that the data can be passed through the Model, as the reduction factor reaches 1.0 after about 3 ‘runs.’ This is a major advantage of this method over all other methods, as a factor of 1.0 indicates that the forecast matches the price, suggesting that all the noise has been removed. But this seems to contradict the statement made at the beginning of this article, that noise cannot be removed because it cannot be defined. However, if we now define noise as that part of the data which prevents the next period’s price from being forecast by the Least Squares Noise Reduction method, then we can pronounce that all the noise has been removed when the reduction factor reaches 1.0.

Another benefit of this technique is that, unlike normal smoothing techniques where a larger average period is used in order to remove additional noise, the least squares method uses varying, very short-term averages which are applied to previously smoothed data. This incremental technique appears to give better results, as it allows a pause for inspection between each noise reduction. It also preserves the essential patterns in the data.

We have seen that the reduction factor increases as the data becomes smoother, indicating that a shorter-term average is required to reduce further noise. This increase continues until the reduction factor reaches 1.0.

It has also been seen that if the chart of the original data appears smoother to the eye, the factor produced by the least squares algorithm is higher. This was shown by comparing the noise reduction factors and charts of various UK market capitalisation indices.

As the ‘mood’ of markets changes from one year to the next, it was significant to see that over time the reduction factors change very little. This was illustrated using the FT-SE 100 and the DOW index.

The title of this article asked whether noise can be eliminated, and if so, at what cost. It has already been stated that noise elimination is possible if noise is defined as that part of the data which prevents the next period’s price from being forecast by this technique. But what of the cost? The only cost is the turning-point lag that is present in any averaging technique, and the Least Squares Noise Reduction method is such a technique. However, the amount of lag is so small that it should not be regarded as a negative factor. In fact, a certain amount of lag may be beneficial because, although the signals are delayed, their reliability is increased.

It is, therefore, concluded that the Least Squares Noise Reduction method proposed and outlined in this article is superior to the more traditional methods of smoothing data.

Finally, this article has merely scratched the surface of noise reduction. Its purpose, however, is to present a simple, objective method of reducing noise to those Technical Analysts who do not wish to be involved with high-level mathematical techniques. Further reading is listed for those who wish to delve deeper into the subject of communication theory and signal noise.

References and Further Reading

Carlson, A. B., Communication Systems: An Introduction to Signals and Noise in Electrical Communication, McGraw-Hill Inc., 1975.

Frost, A. J. & Prechter, R. R., Elliott Wave Principle: Key to Stock Market Profits, New Classics Library Inc., New York, 1978.

Granville, J., New Key to Stock Market Profits, Prentice Hall, Englewood Cliffs, NJ, 1963.

Harris, R. W. & Ledwidge, T. J., Introduction to Noise Analysis, Pion Limited, London, 1971.

Hurst, J. M., The Profit Magic of Stock Transaction Timing, Prentice Hall Inc., Englewood Cliffs, NJ, 1970.

Kaufman, P. J., The New Commodity Trading Systems and Methods, John Wiley & Sons Inc., New York, 1987.

McLaren, W., Gann Made Easy: How to Trade Using the Methods of W. D. Gann, Gann Theory Publishing Company, Corpus Christi, Texas, 1986.

Murphy, J. J., Technical Analysis of the Futures Markets, New York Institute of Finance, New York, 1986.

Pierce, J. R., Symbols, Signals and Noise, Hutchinson & Co. Ltd., London, 1962.

Pike, E. R. & Lugiato, L. A., Chaos, Noise and Fractals, Adam Hilger, Bristol, 1987.

Pring, M. J., Technical Analysis Explained, McGraw-Hill Inc., New York, 1983.

Spiegel, M. R., Theory and Problems of Statistics, McGraw-Hill Inc., 1961.

Tufte, E. R., The Visual Display of Quantitative Information, Graphics Press, Cheshire, Connecticut, 1983.

Wainstein, L. A. & Zubakov, V. D., Extraction of Signals from Noise, Prentice Hall Inc., Englewood Cliffs, NJ, 1962.

Wheelan, A., Study Helps in Point & Figure Technique, Morgan, Rogers & Roberts Inc., 1954.

Wilder, W. J., New Concepts in Technical Trading Systems, Trend Research, Greensboro, NC, 1978.

Wonnacott, T. H. & Wonnacott, R. J., Introductory Statistics, John Wiley & Sons, 1977.

Footnotes

  1. Carlson, A. B., Communication Systems: An Introduction to Signals and Noise in Electrical Communication, McGraw-Hill Inc., 1975, p. 4.

  2. Filtered Waves were introduced by Arthur Merrill over 20 years ago and have been duplicated and renamed by other analysts.

  3. Wilder, W. J., New Concepts in Technical Trading Systems, Trend Research, Greensboro, NC, 1978.

  4. Although f = 2/(n+1) is the most common translation formula, some analysts prefer to use f = 2/n. However, this yields a factor of 1 when the period n = 2. It is for this reason that f = 2/(n+1) uses a divisor of n+1.

  5. It is possible that, over wide-ranging data, using the difference between the forecast and the actual price will tend to distort the results. When conducting the process over a long time series, it is therefore better to normalise the difference by dividing it by the price. The difference between these two methods is shown in Appendix C.

  6. Ibid.

  7. Granville, J., New Key to Stock Market Profits, Prentice Hall, Englewood Cliffs, NJ, 1963.

When a long time series is studied, such as the DOW from 1952 to 1996, the factor that produces the least sum of the squared error is different from the factor that produces the least sum of the normalised squared error. However, when a shorter time series is studied, such as the 600 days of the DOW, the factors are the same.

Jeremy J. A. du Plessis, CMT

Jeremy du Plessis is the Managing Director of Indexia Research Limited in the United Kingdom, specialising in technical analysis software. He is responsible for program specification and development as well as indicator research. He has used and researched technical analysis for over 17 years.

 

Return to Table of Contents


 

6: Sector Analysis Using New Highs and New Lows


Frank Teixeira, CMT

Introduction

New High and New Low data are widely used in analyzing the stock market on a technical basis. New Highs and New Lows, much like advances and declines, illustrate the level of participation in the market, and are a guide to the internal strength or weakness of the market. In fact, New High/New Low data are often considered superior to advance/decline data because, to register on the New High/New Low list, a stock usually has to travel farther, whereas a stock only has to be up or down 1/8 to register as an advancing or declining issue. Thus, internal strength is accompanied by an expanding list of New Highs, while internal weakness is accompanied by an expanding list of New Lows.

This paper will explore the use of Sector New High and New Low data for spotting changes in sectors of the market. I use some divergence analysis, momentum and/or relative strength analysis in an attempt to quantify buy and sell signals. I believe this study is pioneering and gives technicians another tool for analyzing the markets.

Divergence Analysis

Turning points in the markets are often led by changes in various momentum indicators, including rate of change analysis, relative strength, advance/decline measures and, of course, New High minus New Low measures. New High minus New Low oriented indicators may provide better entry and exit points to the stock market because, in my opinion, they have a tendency to lead the broad market averages. For example, as a bull market matures, a majority of stocks will stop posting New Highs, while a narrow list of stocks continues to carry the market averages and/or sector indices to a higher level. When a momentum indicator diverges, we receive preliminary warning of a possible trend change. A divergence occurs when a price index makes a new high that is unconfirmed by the underlying indicator, or a new low that is unconfirmed by the underlying indicator. The first is a bearish divergence, while the latter is a bullish divergence. Divergence analysis will be used in this paper to some extent with respect to New Highs and New Lows. It is an important element of technical analysis, but interpretation sometimes involves a high degree of subjectivity.
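Because divergence spotting is acknowledged above to be subjective, any mechanical version is only one possible rendering. As an illustration (my own, not the paper's method), successive swing highs could be compared as follows:

```python
import numpy as np

def swing_highs(x: np.ndarray, w: int = 5) -> list:
    """Indices whose value is the maximum of a +/- w bar window."""
    return [i for i in range(w, len(x) - w)
            if x[i] == x[i - w:i + w + 1].max()]

def bearish_divergences(price: np.ndarray, indicator: np.ndarray, w: int = 5) -> list:
    """Pairs of successive price swing highs where price makes a higher
    high but the indicator makes a lower high. Applying the mirrored
    test to swing lows yields bullish divergences."""
    peaks = swing_highs(price, w)
    return [(a, b) for a, b in zip(peaks, peaks[1:])
            if price[b] > price[a] and indicator[b] < indicator[a]]
```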

Momentum & Relative Strength

There are two ways of looking at momentum: as a measure of rate of change and as a measure of internal market vitality. Rate of change is more suitable for measuring a price average, while certain measures of internal strength are better applied to monitoring market indicators.

One of the tools used frequently in technical analysis is relative strength. Relative strength measures how well a stock and/or market is performing relative to another stock and/or market. A rising relative strength ratio implies improving performance, while a declining relative strength ratio implies weakening performance. In this paper, I use this concept of “relative momentum” to generate buy and sell signals using Sector New Highs and New Lows. This, to my knowledge, is the first time relative strength has been used in conjunction with New High/New Low data.

New High/New Low data have usually been used as a momentum measure. In other words, as a market moves higher, New Highs should expand and New Lows should contract. If a market makes a new high and the New High list contracts, this is regarded as a sign that internal momentum is slowing. The rate at which the New High list expands is also critical in determining a market’s internal strength. During the initial stages of an advance, New Highs should expand rapidly. The longer the duration of the expansion in New Highs, the longer or more durable the rally.

The New High/New Low list offers many possibilities for analysis; Investor’s Business Daily’s sector New High/New Low data, examined next, is one of them.

Sector Analysis Using New High & New Low Data

Investor’s Business Daily divided its New High/New Low list into various sectors (30 in all) starting on July 26, 1991. The limited history is a shortcoming, but it is still enough to draw some reasonable conclusions. Four sectors will be analyzed: Banks, Energy, Medicals, and Utilities.

The form of this analysis is twofold. First, a 10-day moving average was taken of the Net New Highs for each sector (this will be referred to as the NNH throughout the paper). Divergence analysis is then used to gain some insight. Admittedly, the insight is limited because divergence analysis tends to be more subjective than objective, but the NNH was still useful in some cases.

Second, I introduce a new term called the RSH, which is the relative strength of a sector’s New Highs. It is calculated by dividing the number of New Highs in a sector by the number of New Highs in the market. A 10-day moving average of the RSH is then taken and used for the analysis.

Lastly, I also introduce another term called the RWL, which is the relative weakness of the New Lows in a sector relative to the total New Lows in the market. A 10-day moving average of the RWL is then taken and used for the analysis.

The term “relative strength” in the RSH refers to a rising number of New Highs in a sector relative to the total number of New Highs in the market. The term “relative weakness” in the RWL refers to a rising number of New Lows in a sector relative to the total number of New Lows in the market. The RSH and the RWL more accurately measure the percentage of New Highs or New Lows in a particular sector relative to the total number of New Highs or New Lows in the market.
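A minimal Python sketch of the two ratios as just defined, assuming the daily New High/New Low counts arrive as pandas Series; the handling of days with zero market New Highs or New Lows is my assumption, as the paper does not address it:

```python
import numpy as np
import pandas as pd

def rsh(sector_highs: pd.Series, market_highs: pd.Series, window: int = 10) -> pd.Series:
    """Sector New Highs as a percentage of market New Highs,
    smoothed with a 10-day moving average."""
    # avoid division by zero on days with no market New Highs (assumption)
    pct = 100.0 * sector_highs / market_highs.replace(0, np.nan)
    return pct.rolling(window).mean()

def rwl(sector_lows: pd.Series, market_lows: pd.Series, window: int = 10) -> pd.Series:
    """Sector New Lows as a percentage of market New Lows,
    smoothed the same way."""
    pct = 100.0 * sector_lows / market_lows.replace(0, np.nan)
    return pct.rolling(window).mean()
```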

The RSH is used to generate the buy and sell signals, while the RWL is used to gauge the degree of weakness in a sector. The RSH rises quickly when a sector is strong and descends quickly when a sector corrects. This happens because stocks of a particular group tend to rally and correct together. Corrections are healthy as long as New Lows do not expand to any great degree.

The four sectors chosen are presented with a chart of the sector index, a 10-day moving average of the sector Net New Highs (NNH), a 10-day moving average of the sector New Highs relative to the Market New Highs (RSH), and a 10-day moving average of the sector New Lows relative to the Market New Lows (RWL). The parameters used for the studies are as follows:

  • The NNH will be used mostly for subjective divergence analysis. The divergences in some instances precede buy and sell signals given by the RSH.

  • The RSH gives a trading buy signal by moving to or above 7. In other words, when 7% of all New Highs in the past 10 days on average belong to a particular sector, a buy signal is given. A trading sell signal is triggered when the RSH moves to or under 4 (4%) after being at or above 7.

  • The RWL will trigger an alert when it moves above 10, or when 10% of all the New Lows in the past 10 days on average belong to a particular sector. The RWL above 10, however, does not supersede the RSH in the buy mode. The RWL above 10 does mean that the risk in a sector is rising and that a more selective stock selection approach should be used for that sector.

  • Aggressive trading parameters are also established using the RSH. In this case, the RSH has to generate a new buy signal (the RSH has to move back to or above 7 from a level at or below 4). Once in buy mode, an aggressive trade is only activated when the RSH moves above 15 and drops by half. For example, if the RSH rises to a peak of 18.4, then a sell signal is given when the RSH drops to or under 9.2. It is only a sell for aggressive traders. The original position is not closed out for investors until the RSH is at or below 4. (A sketch implementing these rules appears after this list.)
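The rules read naturally as a small state machine. The sketch below is my rendering, not the author's code; the RWL alert is omitted because it never opens or closes a position:

```python
def generate_signals(rsh_values):
    """Walk the 10-day RSH series and emit the signals described above:
    buy at or above 7, investor sell at or below 4, and an aggressive
    exit once the RSH has exceeded 15 and then fallen to half its peak."""
    signals = []
    in_buy = False           # investor position open?
    aggressive_live = False  # aggressive exit still available this cycle?
    peak = 0.0
    for t, v in enumerate(rsh_values):
        if not in_buy:
            if v >= 7:                       # new buy signal
                in_buy, aggressive_live, peak = True, True, v
                signals.append((t, "buy"))
        else:
            peak = max(peak, v)
            if aggressive_live and peak > 15 and v <= peak / 2.0:
                aggressive_live = False      # original position stays open
                signals.append((t, "aggressive sell"))
            if v <= 4:                       # investor exit
                in_buy = False
                signals.append((t, "sell"))
    return signals
```

For example, a peak of 18.4 arms the aggressive exit at 9.2, while the investor position remains open until the RSH falls to 4 or below.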

An explanation of the parameters is appropriate before going on to the results. The parameters were calculated as follows:

  1. The RSH benchmark was developed by dividing 100% by the 30 sectors. The rationale is that if all 30 market sectors were to post the same number of new highs at the same time, each sector would make up 3.33% of the total Market New High list. This 3.33% gives us a benchmark number on which to base the RSH, even though it is understood that it would be a rare event, even impossible, for all 30 sectors to post the same number of New Highs simultaneously.

  2. The 7 parameter of the RSH was developed by doubling 3.33% to 6.66% and rounding up to 7. The 7 is more than twice the 3.33 benchmark percentage and offers more satisfying proof of sector strength.

  3. The 4 parameter of the RSH was developed by simply rounding the 3.33 benchmark percentage up to 4. Rounding up enables the investor to sell before the sector drops under 3.33, to retain a better portion of the profits or to limit the losses.

  4. The 15 for the aggressive trades was chosen because 15% of the total number of New Highs is more than four times the 3.33 benchmark percentage and is considered very strong momentum. Once that peak number drops by half, enough of the momentum has been exhausted for the purpose of aggressive trading.

  5. The RWL of 10 was selected simply as a round number, representing 3/30ths of the sectors. Ten percent is significant weakness for any particular sector and is about 3 times the 3.33 benchmark percentage.
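Collected as arithmetic, every threshold flows from the 3.33 benchmark; the rounding choices are the author's, as explained in the list above:

```latex
\begin{align*}
\text{benchmark}  &= 100\% \div 30 \text{ sectors} \approx 3.33 \\
\text{RSH buy}    &= \lceil 2 \times 3.33 \rceil = \lceil 6.66 \rceil = 7 \\
\text{RSH sell}   &= \lceil 3.33 \rceil = 4 \\
\text{aggressive} &= 15 > 4 \times 3.33 \approx 13.33 \\
\text{RWL alert}  &= 10 \approx 3 \times 3.33
\end{align*}
```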

Results

The results for the four sectors, the Bank sector, the Energy sector, the Medical sector and the Utility sector, are presented in the following pages. Note that since the Investor’s Business Daily database started on 7/26/91, three of the sectors were already in buy mode at the inception (see tables 1, 3 and 4). Note also that the results are as of 8/30/96.

Bank Sector

The Bank sector has given six buy signals and six sell signals. At the inception of the database, the RSH indicator for the Banks was in buy territory. The sell signal was activated on 12/21/91. The first buy signal was on 12/26/91 and the first sell signal on 9/14/92. The profit was 7.80%. The results for this sector are summarized in table 1.

Chart 1 of the Bank index appears with the accompanying three aforementioned indicators: the NNH, the RSH, and the RWL.

The NNH depicts a few divergences that were helpful in the early detection of some corrective pressures for the Banks. In the three instances indicated on the NNH chart, the divergences led to corrections and were ultimately confirmed by sell signals given by the RSH. These instances are pointed out in the chart using the parallel lines. The subjective nature of divergence analysis makes it difficult to come up with consistent conclusions, but used in conjunction with the RSH it presents a vital guideline. The NNH can come before a sell signal, but it is not the sell signal itself. Note that the divergences only preceded 3 of the 5 sell signals given by the RSH.

The RSH chart depicts the indicator used to generate the buy and sell signals. By way of review, a move to or above 7 from a level at or below 4 generates a buy signal, while a move to or below 4 from a level at or above 7 generates a sell signal.

The chart can also be used to show how dominant a sector has or hasn’t been. The Banks, for example, have had five instances since 1991 where the RSH was above 15. This means that the Banks made up at least 15% of the total number of New Highs on average over a 10-day period on five occasions in the past five years. They have been leaders.

The RWL can be used to search for deeper problems in a sector. For example, if a sector dominates the New Low list, it is perhaps indicative of more serious structural problems. A reading above 10 on the RWL is indicative of a sector that should be approached more carefully. The Banks have not had such a reading of 10 on the RWL since the inception of this database, further accentuating the sector’s strength.

The results of the buy and sell signals are detailed in table 1. Three out of six trades (50%) resulted in profits. The average return on the Bank Index for all six trades was 12% versus 6.9% on the S&P 500 for the same period.

The aggressive trading results were even more impressive. By way of review, aggressive trades are activated when the RSH rises above 15 and drops by half from the peak level. Four of the five (80%) instances had positive results. More importantly, however, each aggressive trading result beat the S&P 500 for the same comparable period. This is a 100% track record and is true for each sector analyzed in this paper. The average profit for the aggressive trades in the Banks was 14.3% versus 4.1% for the S&P 500.



Energy Sector

The Energy sector was an interesting sector to analyze because it gave more buy/sell signals (9) than any of the other sectors reviewed in this paper.

The Energy index and accompanying NNH, RSH and RWL are found in chart 2. The NNH provided insight into two of the nine sell signals generated. A look at the NNH, however, also shows a noticeable expansion in the total number of Net New Highs in 1993 and again from mid-1995 to the present. The Energy sector has been among the best performing in 1996.

The RSH provided the buy and sell signals. Chart 2 shows why there were so many signals given. Many of the holding periods were short and not very profitable. In fact, notice that the Energy trades only outperformed the S&P 500 in 4 of the 9 (44%) instances.

The RWL moved above 10 on six different occasions since 7/26/91. Notice that the RWL has not been above 10 since mid-1995, further accentuating the Energy sector’s strength in 1996.

The results of the buy and sell signals are detailed in table 2. Five of the nine (56%) trades resulted in profits, for an average gain of 2% versus 1.4% for the S&P 500. The total return of the nine buy signals was 0.12% versus 1.25% for the S&P 500. The results are not very compelling for the Energy sector, but the S&P 500 did not fare much better.

The better results, as in the Bank sector, are the aggressive trading results. The RSH indicator moved above 15 twice. Both instances yielded profits that outpaced the S&P 500. The average aggressive trading return for the Energy sector was 8% versus 0.4% for the S&P 500.



Medical Sector

The Medical sector, because of its growth orientation, has very different charts from the other sectors analyzed. The Medical sector has given four buy signals and five sell signals (see details in table 3).

A review of the NNH (see chart 3) reveals no divergence which preceded a sell signal. This lack of a divergence, however, is not a reason to disregard the NNH altogether. Note that the NNH expanded broadly in 1995. At the very least, the NNH can be used as reinforcement of the strength in the RSH.

The RSH chart was used to generate the buy and sell signals (see chart 3). Note that four times since 1991, the Medicals encompassed better than 15% of all New Highs being posted on average over a 10-day period, as opposed to five times for the Banks. When the Medical sector had its highest RSH, it took almost two years to generate a sell signal (7/8/94 - 6/20/96) and produced a return of 82.3%.

The RWL is where the most notable chart differences show up. The RWL was above 10 on several different occasions. In fact, the RWL went above 10 twice during that two-year period where the RSH did not give a sell signal. Recall that the RWL above 10 does not generate a sell signal. The RWL is merely used to gauge whether or not the risk in a sector is increasing. A reading above 10 is used to consider more careful stock selection in the sector.

The results in the Medical sector are displayed in table 3. Four of the five trades generated profits, for an average gain of 31.3%. The average return of the five trades was 24% vs. 12% for the S&P 500.

The aggressive trading results were, once again, impressive. The RSH was above 15 on four occasions. When the RSH dropped by half from its peak value, a sell signal was given. The average gain was 16.5% versus 0.6% for the S&P 500.

The reason there are only two aggressive trades, even though the RSH was above 15 on four different occasions, is that only two of the four instances above 15 were part of a new buy signal. Notice that the aggressive trade was closed out on 1/10/95, while the longer term trade was closed out on 6/26/96, about 1 1/2 years later. In other words, after the aggressive trade was closed out, there was no sell signal followed by a new buy signal.

The Medical sector had the highest and most prevalent RWLs above 10 (14 times). The growth characteristics of the group make selloffs more violent. The higher betas and the greater focus on earnings are reasons for the deeper selloffs. Banks and Utilities have lower betas and are interest sensitive to some extent; Utilities more so than Banks during this period. Energy stocks are greatly influenced by energy commodity prices and to a lesser degree, earnings.



Utility Sector

Lastly, I reviewed the Utility sector. The Utilities had six buy signals and seven sell signals (see results in table 4). None of the returns proved to be outstanding, but 4 out of the 6 (67%) were positive.

A review of the NNH showed two divergences which led to declines. Indeed, the second divergence preceded a severe decline of 30%. While divergence analysis is difficult to quantify, an analyst can at least be somewhat alerted to the fact that something in a sector may be changing, as in this case the 30% bear market decline in the Dow Jones Utility Index.

The RSH chart shows something different, not seen as clearly in our previous sector observations. During the deep selloff in the sector in 1994 and the subsequent stabilization period, notice that the RSH started to perk up (see points A, B, and C in the RSH section of chart 4). The implication is that while the New Low list was being dominated by Utilities, some were actually showing up on the New High list as well. These stocks were the early leaders out of a deeply oversold condition and indicated that the sector had seen its worst. Afterwards, the NNH started to show higher lows and the RWL dropped off dramatically. Both indicated that downside pressure on the group had eased.

The results of the buy and sell signals, much like those of the Energy sector, were not very compelling. Five of the seven trades were profitable. The average return of the seven

The aggressive results, however, were a bit better. There were three instances where the RSH was above 15. The average aggressive trading gain for the Utilities was 3.6% versus 2% for the S&P 500.

Conclusion

The four sectors shown here are 4 out of 30 such sectors in Investor’s Business Daily. These sectors were chosen because of their different characteristics: Banks are characterized as value/growth, Energies are considered value/growth, Medicals are considered growth, and Utilities are considered value.

The work I have done here is in its infancy. Divergence analysis, while subjective, will always be used by technical analysts to gain additional insight. The RSH and the RWL, however, attempt to quantify meaningful changes in a sector. There are certain weaknesses to the methods outlined in this paper. First, it is possible that different parameters work better for different sectors.

Second, the buy and sell signals don’t always come early. The signals are usually timely enough to generate a profit or keep losses to a minimum, but in some instances a lot is left on the table.


Lastly, the sectors were developed by Investor’s Business Daily and are somewhat arbitrary, or different from what other analysts might deem correct. The 3.33% used in my study is not representative of 3.33% of the market and should not be interpreted as such. The number of issues in each sector is not equal, the market values are different as well, and the economic impact of each sector can also change over time.

These weaknesses are obvious, but I feel they are countered by the strengths. First, having a mechanical trading system using New Highs and New Lows is useful. The charts are useful because they offer a quick reference for spotting changes in a particular group. The charts offer a way of seeing the upside leaders, as well as the downside leaders, more readily than by just a cursory review of the daily New High/New Low list.

Another strength is the aggressive trading results. The results were positive in 11 of the 12 (92%) aggressive trades and beat the S&P 500 in all 12 (100%). The average profit for the total combined aggressive trades was 10.9% versus 2.4% for the S&P 500. The average profit for the 27 trades was 7.7% versus 4.9% for the S&P 500. These returns do not include dividends. The addition of dividends would enhance the returns already mentioned, particularly in the case of the Utilities.

New Highs and New Lows offer internal insight into the markets. Charting them by sector enables us to gain even greater insight. I welcome any suggestions and/or ideas for creating additional parameters for the RSH and the RWL. I am currently working on overbought and oversold ideas, as well as some methods for mathematically combining the RSH and the RWL for generating buy and sell signals.

References

  1. Martin J. Pring, Technical Analysis Explained, McGraw-Hill, New York, 1985.

  2. John J. Murphy, Technical Analysis of the Futures Markets, New York Institute of Finance, a Prentice-Hall company, New York, 1986.

  3. Investor’s Business Daily, Los Angeles, CA, July 26, 1991 - August 30, 1996.


Frank Teixeira, CMT

Frank Teixeira recently joined the Technical Department at Wellington Management Company in Boston. Frank was formerly a Vice President of Merrill Lynch and had been with the Market Analysis Department since 1991. At Merrill Lynch he published two research reports bi-weekly designed to give perspectives on specific sectors of the market. He also provided market commentary and advice to the retail and institutional sales forces throughout the Merrill Lynch system on a daily basis. Previously, Frank spent two years with the compliance division of The New York Stock Exchange. He received a B.A. in finance from St. John’s University and is an M.B.A. candidate at Hofstra University.

 

Return to Table of Contents
