Wednesday, December 19, 2007

Tracking the Order Book


As this article points out, trend following or using technical indicators in a vacuum can be doomed without analysis of the order book. When we want to buy into a market move, we want to make sure the order book supports the direction indicated by the technical signals and is not about to start a reversal.

Processing the order book from snapshot to snapshot, we can determine:
  • ratio of bid to ask interest
  • bid interest up/down
  • ask interest up/down
  • bid aggressed (inferred from the last traded price and a reduction in bid size)
  • ask aggressed (inferred from the last traded price and a reduction in ask size)
Going further, one can look at the complexion of the orders at each level, determining what sort of players are behind them. Knowing this can add further bias to the directional weighting.
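As a rough sketch of this snapshot-to-snapshot processing, a minimal Python version might look like the following. The snapshot layout and the aggression heuristics (a trade at or through the best bid plus shrinking bid interest, and vice versa) are my own assumptions, not a production implementation:

```python
def book_signals(prev, curr, last_trade):
    """Compare two order-book snapshots. Each snapshot is
    {'bid': [(price, size), ...], 'ask': [(price, size), ...]},
    best level first; last_trade is the last traded price."""
    bid = sum(s for _, s in curr['bid'])
    ask = sum(s for _, s in curr['ask'])
    prev_bid = sum(s for _, s in prev['bid'])
    prev_ask = sum(s for _, s in prev['ask'])
    best_bid = curr['bid'][0][0]
    best_ask = curr['ask'][0][0]
    return {
        'bid_ask_ratio': bid / ask,
        'bid_delta': bid - prev_bid,
        'ask_delta': ask - prev_ask,
        # a trade at the bid with shrinking bid interest suggests sellers hit the bid
        'bid_aggressed': last_trade <= best_bid and bid < prev_bid,
        # a trade at the ask with shrinking ask interest suggests buyers lifted the offer
        'ask_aggressed': last_trade >= best_ask and ask < prev_ask,
    }
```

A real version would of course track this per level and handle order replacement, but the directional bookkeeping is the same.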

Saturday, December 15, 2007

Shannon's Investment Strategy

Was reading the book 'Fortune's Formula', which I highly recommend. Claude Shannon, the genius of information theory fame, came up with an approach to investing in the market using an interesting variant of Kelly's betting approach.

Assuming a market with constant mean (no drift / trend over time):
  1. Invest 1/2 of your capital in an asset
  2. Periodically rebalance
  3. If the market went up, sell enough units of the asset to have exactly 1/2 of your capital invested
  4. If the market went down, buy enough units of the asset to return to exactly 1/2 invested
This is an effective scheme (assuming no transaction costs). Why?
  1. rebalancing implicitly executes a mean reversion strategy
  2. losses reduce the capital in the market
  3. wins increase the capital in the market
In effect, this is a ratcheting investment approach. As was pointed out, most assets are not constant mean over time. This would imply a strategy that trades mean reversion around a longer term drift in the mean. How might such a strategy work?
  1. since drift might be upwards or downwards, fundamental position should be long or short
  2. rebalancing should take into account the expected movement of the mean so that the ratio of cash to position will depend on this
This is referred to as a Constant Rebalanced Portfolio (CRP). Thomas Cover later extended this concept, with non-even allocations, in his Universal Portfolio.
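Shannon's 50/50 scheme above is easy to simulate; here is a toy sketch (assuming no transaction costs, as noted):

```python
def shannon_rebalance(prices, capital=1.0):
    """Keep exactly half the portfolio in the asset and half in cash,
    rebalancing at every price observation."""
    cash = capital / 2.0
    units = (capital / 2.0) / prices[0]
    for p in prices[1:]:
        total = cash + units * p
        cash = total / 2.0            # sell into rallies, buy into dips
        units = (total / 2.0) / p
    return cash + units * prices[-1]
```

On a mean-zero oscillating series like 1, 2, 1, 2, 1 this finishes around 1.27x the starting capital even though buy-and-hold returns nothing — the ratchet in action.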

Sunday, December 9, 2007

Price Path Probability

What is the probable path of a security over the next 1 second, 5 seconds, 30 seconds?

I attended a quantitative algorithmic trading seminar 3 weeks ago where one presenter was discussing fill probability (in general terms). The presenter claimed that their model predicts the price path over the next few minutes to determine how best to trade a VWAP strategy.

While I don't believe it is possible to predict a specific price path, it is possible to determine the probability of any given price path. If we can determine the probability of any given path through time from the current price to some final price in N seconds or minutes, we can compute the expected probability of going through a price level within some period of time.

The expected probability through a node at time Tn at price level Pa on a multinomial tree will simply be the sum of the probabilities of all sub-paths from Ts to Tn that pass through Pa. That part may be simple, but accurately determining the likely paths / probabilities is a hard research problem.

Given that the number of paths grows exponentially with time, the farther out we look the more time it takes to compute a precise expectation. We must use a Monte Carlo analysis, sampling a calibrated time-series model, to approximate the expectation function.
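A sketch of that Monte Carlo approach, using a driftless Gaussian walk as a stand-in for the calibrated time-series model (the model choice, parameters, and function name are all placeholders):

```python
import random

def touch_probability(s0, level, horizon_s, sigma_s, n_paths=20000, seed=1):
    """Estimate the probability that price touches `level` within
    `horizon_s` one-second steps. A driftless Gaussian walk with
    per-second volatility `sigma_s` is a placeholder for a real
    calibrated time-series model."""
    rng = random.Random(seed)
    up = level >= s0                      # are we watching an upside level?
    hits = 0
    for _ in range(n_paths):
        p = s0
        for _ in range(int(horizon_s)):
            p += rng.gauss(0.0, sigma_s)  # one simulated 1-second move
            if (up and p >= level) or (not up and p <= level):
                hits += 1                 # path touched the level: stop early
                break
    return hits / n_paths
```

Nearer levels should come back with higher touch probabilities than farther ones, which is a useful sanity check on whatever model is plugged in.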

Determining the time-series function that accurately reflects the market is a very hard research problem. Alas, if I told you my approach I would have to kill you ;)

Wednesday, November 21, 2007

Trading Signals

I'm putting together a framework to evolve, test, and optimize signals using a genetic algorithm approach. Signals with statistically significant results will be further combined into Bayesian networks and fed back into this testing framework.

Determining the conditionality of one signal on another requires insight and guesswork, or evaluating permutations of networks. With a GA optimizer and enough computing power, we should be able to find networks that amplify the combination of signals into one that is correct more often than not (or at least makes more in the profitable situations than it loses in the unprofitable ones).

The universe of events and indicators that have potential to be significant is large, as are their parameterizations. Choosing the search space will be a challenge, as is sometimes having access to the required data.

I plan to overlay our tick UI with trading-signal indicators, showing a probability weighting whenever a signal crosses a non-neutral threshold. Should be interesting to visualize the results.

Thursday, November 8, 2007

Long / Short

I noticed that BIDU, one of the stronger stocks in the Chinese market, has consistently outperformed FXI on both upside and downside market moves. In other words, when the market is in a strong downward trend (as it is now), BIDU has gone down less than FXI; when the market was appreciating, BIDU outperformed FXI.

Assuming this relationship holds in the near term, a speculative play: long BIDU, short FXI. Should yield a return in either an upward- or downward-trending market.

Wednesday, November 7, 2007

Spread Plays

The China vs. Taiwan spread trade has done well in the short term, with 7% gains over the last two days, but not for the reasons I discussed in the previous post. I noticed that the markets are cointegrated, but moves in the China market (both positive and negative) have a steeper slope, so positive and negative moves are bigger for FXI than for EWT.

Given the current downward trend for FXI, the FXI - EWT spread contracted, yielding 7%. Of course, should the trend reverse and FXI recover, I would expect the spread to flip back to a widening phase.

I think a better trade at this point is a view towards continued growth in the Indian market accompanied by a deflation of the Chinese market bubble. Speculative trade: short FXI, long INP.

Friday, November 2, 2007

Spread play: China vs HK market?

If one looks at the annual performance of FXI (China index) versus EWH (Hong Kong index), we see a widening spread over the last year, but a lot of similarity in the chart patterns. I suspect these markets are cointegrated, but with a scaling factor (a difference in slope).

This article indicates that Chinese investors will soon be able to invest in the HK market. With the huge flow of speculative money in China soon able to find alternative venues (such as HK), we should expect to see some of this buying pressure move into the HK market.

A speculative play: looking for a contracting FXI - EWH spread. Trade: buy EWH, sell FXI.

A Bad Deal?

I was looking at investments offered in my offshore account and came across the following structure:
  • 5 year investment into china fund
  • capital preservation (built-in floor at initial investment level)
  • max 55% total return across the 5 years; the bank pockets any excess above that
For someone not in the financial business this may seem a good deal (seeing the 55% and the capital preservation). I think it is a relatively poor deal, though: the continuously compounded effective annual rate is only ~8.8%, and that is achieved only if the market does indeed appreciate the full 55%.

I began thinking about how closely I could replicate this structure on my own, but with a much higher max payoff. Though the payoff function I am going to describe is not perfect (I can go under my initial capital if the timing of my protection is not right), I would do as follows:

Initially
  • buy into FXI index
  • allow some appreciation and then buy a 1-month put option struck at the initial point of entry
On an ongoing basis:
  • roll the put option at the initial investment point + the cost of option premiums thus far, maybe with a longer maturity
  • if FXI drops below the initial investment, sell FXI and sell the option; the put's coverage should be close to offsetting
  • as and if FXI approaches the entry point, buy in again and buy protection
  • repeat
Of course one can structure this more advantageously:
  • additional protection (by adjusting the strike upwards as FXI gains)
  • re-entering the trade at a lower level than the initial investment level, should FXI fall

The cost of the options is paid for out of the returns or, in the worst case, through the adjusted strike price. That said, the options will be increasingly deep out of the money if FXI continues to be a good investment (meaning cheaper hedging costs).
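A toy bookkeeping sketch of the rolling scheme above, where premium_fn is a hypothetical stand-in for a real option-pricing model and the exit / re-entry rules are simplified versions of the bullet points:

```python
def protected_position(entry_price, path, premium_fn):
    """Sketch: long the index, roll a put struck at entry plus cumulative
    premiums, step aside if price falls below entry, re-enter near entry.
    `premium_fn(spot, strike)` is a placeholder for a real option model."""
    premiums = 0.0
    invested = True
    for spot in path:
        if not invested:
            if spot >= entry_price:       # re-enter near the entry point
                invested = True
            else:
                continue
        strike = entry_price + premiums   # roll strike up by hedging costs so far
        premiums += premium_fn(spot, strike)
        if spot < entry_price:            # floor breached: sell index and option
            invested = False
    return {'invested': invested, 'premiums': premiums}
```

This ignores option payoffs on exit and the timing gaps noted above; it is only meant to show how the rolled strike absorbs the accumulated premium cost.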

Wednesday, October 31, 2007

Tracking Liquidity

Encountered a problem in trading where we did not properly adjust the liquidity (depth) levels coming back from the exchange. Basically, when we push down an order we decrement available liquidity. The next tick from the exchange would then reflect the new available liquidity (or so we naively thought).

The reality is that one or more ticks can be received before an order (which was filled) is reflected in the depth. For those ticks, the effect of the order will not be represented in the depth.

To fix this, two different approaches could be taken:
  1. Estimate the latency to hit the depth, times 2
    Hold the order's impact across any ticks within this period

  2. Hold the order's impact across ticks until the trade confirm comes back
    This is the most conservative approach. On some venues trade confirms are sent back with lower priority, so it can be an overly conservative method of measurement.
The net effect of not handling this properly was that we saw more depth than was actually there. For aggressive algos trying to pull some part of the advertised depth, this meant partial fills based on a flawed liquidity-tracking model.

Have taken the conservative approach at this point.
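A minimal sketch of the conservative approach (class and method names are my own; a real implementation would track this per price level and per order):

```python
class DepthTracker:
    """Track displayed depth, holding our own order's impact against
    incoming ticks until the trade confirm arrives."""

    def __init__(self):
        self.displayed = 0   # latest depth reported by the exchange
        self.pending = 0     # our in-flight quantity, not yet confirmed

    def on_tick(self, depth):
        self.displayed = depth

    def on_order_sent(self, qty):
        self.pending += qty

    def on_trade_confirm(self, qty):
        # from here on, the fill is assumed to be reflected in exchange ticks
        self.pending -= min(qty, self.pending)

    def available(self):
        # ticks received while an order is in flight still overstate depth,
        # so subtract our pending impact
        return max(0, self.displayed - self.pending)
```

The key point is that a stale tick arriving between order send and confirm no longer inflates the available depth.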

Monday, October 29, 2007

Path Dependent Problem

Saw a fun problem posed on one of the Joel On Software forums:

On an empty chessboard, a knight starts from a point (say location x,y) and starts moving randomly, but once it moves off the board it can't come back. What is the probability that it stays within the board after N steps?

This problem is similar to a digital knockout barrier option, but where the path is the random movement of a chess piece across the board rather than the movement of a security price over time. The solution I wrote up was as follows:


This can be computed as an expectation function across all possible paths (including paths outside of the 8x8 grid). Think of an 8-way tree where the root represents the starting position and each node has 8 children, one for each of the 8 possible knight moves from the current position.

The joint probability of:

- making the move (1/8)
- staying within the grid (0 or 1)

should be the value of any given node (basically 1/8 or 0). For any path of length N, the joint probability of having taken the path and stayed fully within the grid is either (1/8)^N or 0. The expectation function simply sums across all possible N-length paths.

The expectation function and paths can be expressed as a recursive sum (with board coordinates i, j running from 1 to 8):

E(i,j,N) =
  • 0 if i or j is outside [1,8] (the knight has left the board)
  • 1 if N = 0
  • 1/8[E(i-2,j-1,N-1) + E(i-2,j+1,N-1) + E(i+2,j-1,N-1) + ...] otherwise

Note the order of the cases matters: the off-board check must come first, since a path that leaves the board is absorbed regardless of how many steps remain.
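The recurrence translates directly into a memoized function; a quick sketch in Python:

```python
from functools import lru_cache

# The 8 legal knight moves.
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

@lru_cache(maxsize=None)
def survive(i, j, n):
    """Probability the knight is still on the board after n random moves,
    starting from square (i, j) with 1 <= i, j <= 8."""
    if not (1 <= i <= 8 and 1 <= j <= 8):
        return 0.0                      # off the board: path is absorbed
    if n == 0:
        return 1.0                      # no moves left, still on the board
    return sum(survive(i + di, j + dj, n - 1) for di, dj in MOVES) / 8.0
```

From a corner only 2 of the 8 first moves stay on the board, so survive(1, 1, 1) comes out to 0.25. Memoization collapses the exponential path count into at most 64 x N distinct states.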



I would be curious to see other approaches to this problem. The recurrence relation with a conditional function is interesting because one can introduce a wide variety of conditions and payoffs. It would be easy to change the "barriers" and/or skew the probability of moving in one direction versus another.

Sunday, October 28, 2007

Complex Variation

Victor Niederhoffer mentions something interesting in his blog: looking at the weekly differentials in price movement on the complex plane, with the angles and magnitudes as a predictor. I can see that a more obtuse angle indicates a bigger swing between the lows and highs. At the least, an interesting way to visualize these relationships.

Saturday, October 27, 2007

Outliers (August Blow Up)

Interesting article in the MIT Technology Review (here), pointing out that most quantitative models on Wall Street make assumptions about the relationships between instruments and are subject to the "black swan" problem, not properly recognizing "unexpected" outliers in their strategies.

In the context of automated strategies my view is that it is fine to work within the assumptions of "normal" market behavior, provided that one has a risk management strategy to contain losses from outlier events to an amount that will not significantly erode accrued profits. To not do so is an opportunity lost.

Applying Genetic Algorithms

I have done some experimentation with genetic algorithms in the past, but am now looking to incorporate them as an optimization tool in my rules-based strategy framework.

The behavior of our strategies is often parameterized. There can be so many combinations of parameters that arriving at a successful or best-performing set can be a matter of guesswork, testing, and retesting.

Enter genetic algorithms (GA). A GA allows us to efficiently arrive at a (nearly) best-fit set of parameters based on a fitness function. For those not familiar, a GA is a biology-inspired, generational means of evolving a solution against fitness criteria: a population of genes is produced each generation, the fittest of which are chosen and recombined with other genes. Successive generations tend to be more and more fit, arriving at a fitness maximum.

In truth GA, as exotic as it sounds, is just one of many optimization techniques for minimizing, maximizing, or finding a zero of a complex function. GA is easy to set up and can work with black-box functions, so it is a good candidate.
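To make the idea concrete, here is a bare-bones GA over real-valued parameters. The selection, crossover, and mutation choices are arbitrary illustrations, not the framework described above:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=40, seed=42):
    """Bare-bones GA: real-valued genes, elitist selection, blend
    crossover, Gaussian mutation. `bounds` is a list of (lo, hi) pairs,
    one per parameter; `fitness` is a black-box function to maximize."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]                   # keep the fittest quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]   # blend crossover
            k = rng.randrange(len(child))                   # mutate one gene
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

Plugging in a black-box fitness, e.g. genetic_optimize(lambda g: -(g[0] - 3.0) ** 2, [(0.0, 10.0)]), converges near the maximum at 3. In the real framework the fitness function would be a full strategy backtest over tick data.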

I need to set up an environment to run the optimization in parallel, given the computation required. For each day we evaluate a strategy, we will be looking at 200K - 500K ticks or more, so evaluating over, say, 3 months of data means roughly 30 million ticks. Multiply this by 1000 or more fitness evaluations, and one is looking at billions of tick and rule evaluations.

Given that we have DataSynapse available on hundreds of blades, I may adapt JGAP and the strategy engine for this infrastructure.

Friday, October 26, 2007

Fooled by Randomness

I'm on a trip, in Japan right now. Bought "Fooled By Randomness" by Nassim Taleb at the airport and have begun to read. Highly recommend this book for anyone trading or otherwise involved with the markets.

The book is a discourse on objectivity, particularly in relation to trading decisions and performance. One of his assertions is that many traders build a career around what has worked for them empirically rather than on statistically sound judgement (I agree), and that the portion of these traders who succeed owes more to the short sample period (a trading career is generally short) than to skill.

I have seen this time and again on Wall Street: a trader does well for some years and then blows up. Wall Street firms present a free option to traders. The trader locks in his profits on an annual basis with his bonus, whereas the firm absorbs the downside of:
  • trader losses
  • longer term performance of their portfolio after trader leaves
He cites two traders: one who has taken a conservative approach through his career, valuing all possible outcomes in his trading, and another, more typical trader who made 10x more than the former, but whose approach was more subject to randomness in the market and who in the end lost his job / career.

One could say that the conservative approach requires two things:
  • rock-solid risk management, anticipating even low probability events
  • better evaluation of the "expectation function" by summing all possibilities with their associated probability

Setting the Stage

As this is my first post to this blog, figured would "set the stage" by describing what I intend to explore.

I come from a comp sci / physics background, getting my first exposure to computers back in 1979 as a kid when I took a course at a college with a program for gifted youth. I was hooked from that point on. I went on to specialize in parallel processing and algorithms, which landed me my first job doing applied parallel processing research on Wall Street in the early 90's.

For the last 17 years have worked on Wall Street as a researcher, quantitative developer, architect, etc. in NY, Tokyo, and London. In the 80s and early 90s, technology was just becoming indispensable to the trading desks, and so there was plenty to do, plenty of technology to invent as we went along.

I can say that 5-10 years of that part of my career was involved in developing new technologies (parallel processing, visualization), quantitative libraries (for pricing derivatives), etc. That was all fun until it became a commodity and the business of IT on Wall Street started changing, bringing in a wider variety of programmers (basically the full bell curve rather than the tail) and associated management structure.

Along the way, one goal that had always intrigued me was the notion of automated trading. The concept resonated with me on a number of levels:
  • incredibly hard problem (a holy grail involving state of the art computer science, math, and ingenuity)
  • formalize the thought process involved in deciding to trade (entry and exit)
  • remove the emotional component that can cripple trader judgement
  • going to have a lot of fun along the way
I started investigating this in the mid 90's, but there was nothing in the way of electronic execution platforms for anything but the equities market (and I was focused on Fixed Income and FX).

I left Wall Street for a few years to pursue entrepreneurial goals, to come back in 2004 to focus on automated trading strategies. And so begins my blog ...