Wednesday, October 31, 2007

Tracking Liquidity

Encountered a problem in trading where we did not properly adjust the liquidity (depth) level coming back from the exchange. Basically, when we push down an order, we would decrement available liquidity. The next tick from the exchange would then reflect the new available liquidity (or so we naively thought).

The reality is that one or more ticks can be received before an order (which was filled) is reflected in the depth level. Given this, those ticks will not represent the effect of the order in the depth.

To fix this, two different approaches could be taken:
  1. Estimate the round-trip latency of hitting the depth (roughly 2x the one-way latency)
    Hold the order impact across any ticks received within this window

  2. Hold the order impact across ticks until trade confirm comes back
    This is the most conservative approach. On some venues trade confirms are sent back with lower priority, so this can be an overly conservative measure.

The net effect of not handling this properly was that we saw more depth than was actually there. This meant that aggressive algos trying to pull some part of the advertised depth ended up with partial fills, based on a flawed liquidity tracking model.

Have taken the conservative approach at this point.
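
As a rough sketch of what this looks like (the names and structure here are illustrative, not our actual feed-handler API): advertised depth from each tick is reduced by any quantity we have pushed down but not yet seen confirmed.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Conservative depth tracking: our own unconfirmed order quantity is held
// against the exchange-advertised depth until the trade confirm arrives.
public class DepthTracker {
    // price level -> exchange-advertised quantity (latest tick)
    private final Map<Double, Long> advertised = new ConcurrentHashMap<Double, Long>();
    // price level -> quantity we have sent but not yet seen confirmed
    private final Map<Double, Long> pending = new ConcurrentHashMap<Double, Long>();

    public void onTick(double price, long qty) {
        advertised.put(price, qty); // the tick may not yet reflect our own orders
    }

    public void onOrderSent(double price, long qty) {
        pending.merge(price, qty, Long::sum); // hold the impact from now on
    }

    public void onTradeConfirm(double price, long qty) {
        // Once confirmed, assume subsequent ticks reflect the fill.
        pending.merge(price, -qty, Long::sum);
    }

    // Depth the strategy can actually rely on at this level.
    public long availableDepth(double price) {
        long adv = advertised.getOrDefault(price, 0L);
        long held = pending.getOrDefault(price, 0L);
        return Math.max(0L, adv - held);
    }
}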

Monday, October 29, 2007

Path Dependent Problem

Saw a fun problem posed on one of the Joel On Software forums:

On an empty chessboard, a knight starts from a point (say location x,y) and begins moving randomly, but once it moves off the board, it can't come back. What is the probability that it stays within the board after N steps?

This problem is similar to a digital knockout barrier option, but where the path is the random movement of a chess piece across the board rather than the movement of a security price over time. The solution I wrote up was as follows:


This can be computed as an expectation function across all possible paths (including paths outside of the 8x8 grid). Think of an 8-way tree where the root represents the starting position and each node has 8 children, one for each of the 8 possible moves from the current position.

The value of any given node is the joint probability of:

- making the move (1/8)
- remaining within the grid (0 or 1)

so each node is worth either 1/8 or 0. For any given path of length N, the joint probability of having taken the path and stayed fully within the grid is either (1/8)^N or 0. The expectation function simply sums across all possible paths of length N.

The expectation function and paths can be expressed as a recursive sum (taking the chessboard coordinates i,j to run over [1,8]):

E(i,j,N) =
  • 0 if i or j falls outside [1,8] (the knight has left the board)
  • 1 if N <= 0 (the knight has survived all of its steps)
  • 1/8[E(i-2,j-1,N-1) + E(i-2,j+1,N-1) + E(i+2,j-1,N-1) + ...] otherwise, summing over all 8 knight moves
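
The recurrence translates directly into code. A memoized sketch (coordinates and move table follow the formula above; memoization matters since there are only 8 x 8 x N distinct states, versus 8^N raw paths):

import java.util.HashMap;
import java.util.Map;

public class KnightWalk {
    // The 8 possible knight moves.
    private static final int[][] MOVES = {
        {-2, -1}, {-2, 1}, {-1, -2}, {-1, 2}, {1, -2}, {1, 2}, {2, -1}, {2, 1}
    };
    private static final Map<String, Double> MEMO = new HashMap<String, Double>();

    // E(i,j,N): probability the knight is still on the board after N random moves.
    static double e(int i, int j, int n) {
        if (i < 1 || i > 8 || j < 1 || j > 8) return 0.0; // left the board
        if (n <= 0) return 1.0;                           // survived all steps
        String key = i + "," + j + "," + n;
        Double cached = MEMO.get(key);
        if (cached != null) return cached;
        double sum = 0.0;
        for (int[] m : MOVES) sum += e(i + m[0], j + m[1], n - 1);
        double p = sum / 8.0; // each move carries weight 1/8
        MEMO.put(key, p);
        return p;
    }

    public static void main(String[] args) {
        System.out.println(e(4, 4, 10)); // start near the center, 10 steps
    }
}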



I would be curious to see other approaches to this problem. The recurrence relation with a conditional function is interesting because one can introduce a wide variety of conditions and payoffs. It would be easy to change the "barriers" and/or skew the probability of moving in one direction versus another.

Sunday, October 28, 2007

Complex Variation

Victor Niederhoffer mentions something interesting in his blog: looking at the weekly differentials in price movement on the complex plane, with angle and magnitude as a predictor. I can see that a more obtuse angle indicates a bigger swing in the lows and highs. At the least, it is an interesting way to visualize these relationships.
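
A minimal sketch of the construction as I read it (this is my guess at the encoding; his exact formulation may differ): take each week's change in the high as the real part and the change in the low as the imaginary part, then look at the angle and magnitude of the resulting complex number.

public class ComplexDiff {
    public static void main(String[] args) {
        double dHigh = 12.5, dLow = -8.0; // hypothetical weekly changes in high/low
        double magnitude = Math.hypot(dHigh, dLow);             // |z|
        double angle = Math.toDegrees(Math.atan2(dLow, dHigh)); // arg(z) in degrees
        System.out.printf("magnitude=%.2f angle=%.1f deg%n", magnitude, angle);
    }
}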

Saturday, October 27, 2007

Outliers (August Blow Up)

Interesting article in the MIT Technology Review (here), pointing out that most quantitative models on Wall Street make assumptions about the relationships between instruments and are subject to the "black swan" problem: failing to properly recognize "unexpected" outliers in their strategies.

In the context of automated strategies my view is that it is fine to work within the assumptions of "normal" market behavior, provided that one has a risk management strategy to contain losses from outlier events to an amount that will not significantly erode accrued profits. To not do so is an opportunity lost.

Applying Genetic Algorithms

I have done some experimentation with genetic algorithms in the past, but am now looking to incorporate them as an optimization tool in my rules-based strategy framework.

The behavior of our strategies is often parameterized. There can be so many combinations of parameters that arriving at a successful or best-performing set can be a matter of guesswork, testing, and retesting.

Enter genetic algorithms (GA). GA allows us to efficiently arrive at a (nearly) best-fit set of parameters based on a fitness function. For those not familiar, GA is a biology-inspired, generational means of evolving a solution against fitness criteria. A population of genes is produced each generation, the fittest of which are selected and recombined with other genes. Successive generations tend to be more and more fit, converging on a fitness maximum.

In truth GA, as exotic as it sounds, is just one of many optimization techniques where one is trying to minimize, maximize, or find a zero of a complex function. GA is easy to set up and can work with black-box functions, so it is a good candidate.
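
To make the mechanics concrete, here is a minimal, self-contained sketch (not JGAP; the fitness function is a toy stand-in for a strategy backtest):

import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

public class GaSketch {
    static final Random RND = new Random();
    static final int POP = 100, GENERATIONS = 50, DIM = 4;
    static final double MUTATION_RATE = 0.1, MUTATION_SCALE = 0.2;

    // Stand-in for the real fitness function, e.g. backtested P&L of the
    // strategy run with this parameter set.
    static double fitness(double[] params) {
        double score = 0;
        for (double p : params) score -= (p - 0.5) * (p - 0.5); // toy: peak at 0.5
        return score;
    }

    public static void main(String[] args) {
        // Random initial population of parameter vectors in [0,1).
        double[][] pop = new double[POP][DIM];
        for (double[] genome : pop)
            for (int d = 0; d < DIM; d++) genome[d] = RND.nextDouble();

        Comparator<double[]> fittestFirst =
            Comparator.comparingDouble(GaSketch::fitness).reversed();

        for (int gen = 0; gen < GENERATIONS; gen++) {
            Arrays.sort(pop, fittestFirst);
            // Keep the top half; rebuild the bottom half by recombining
            // two fit parents and mutating the result.
            for (int i = POP / 2; i < POP; i++) {
                double[] a = pop[RND.nextInt(POP / 2)];
                double[] b = pop[RND.nextInt(POP / 2)];
                for (int d = 0; d < DIM; d++) {
                    pop[i][d] = RND.nextBoolean() ? a[d] : b[d];      // crossover
                    if (RND.nextDouble() < MUTATION_RATE)             // mutation
                        pop[i][d] += (RND.nextDouble() - 0.5) * MUTATION_SCALE;
                }
            }
        }
        Arrays.sort(pop, fittestFirst);
        System.out.println("best " + Arrays.toString(pop[0])
            + " fitness=" + fitness(pop[0]));
    }
}

In practice the fitness call is the expensive part (a full backtest per candidate), which is what motivates running the evaluations in parallel.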

I need to set up an environment to run the optimization in parallel given the computation required. For each day we evaluate a strategy, we will be looking at 200K - 500K ticks or more, so evaluating over, say, 3 months of data means roughly 30 million ticks. Multiply this by 1000 or more fitness evaluations, and one is looking at evaluating tens of billions of ticks against the rules.

Given that we have DataSynapse available on hundreds of blades, may adapt JGAP and the strategy engine for this infrastructure.
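
A minimal local stand-in for that setup, using a thread pool in place of the grid (illustrative only; on DataSynapse each backtest would instead be shipped to a blade as a unit of work):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFitness {
    // Placeholder for replaying months of ticks through the strategy engine.
    static double backtest(double[] params) {
        double score = 0;
        for (double p : params) score -= (p - 0.5) * (p - 0.5);
        return score;
    }

    public static void main(String[] args) throws Exception {
        Random rnd = new Random();
        ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());

        // Submit each candidate parameter set as an independent work unit.
        List<Future<Double>> results = new ArrayList<Future<Double>>();
        for (int i = 0; i < 1000; i++) {
            final double[] candidate = {rnd.nextDouble(), rnd.nextDouble()};
            results.add(pool.submit(() -> backtest(candidate)));
        }

        double best = Double.NEGATIVE_INFINITY;
        for (Future<Double> f : results) best = Math.max(best, f.get());
        System.out.println("best fitness=" + best);
        pool.shutdown();
    }
}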

Friday, October 26, 2007

Fooled by Randomness

I'm on a trip in Japan right now. Bought "Fooled By Randomness" by Nassim Taleb at the airport and have begun to read it. Highly recommend this book for anyone trading or otherwise involved with the markets.

The book is a discourse on objectivity, particularly in relation to trading decisions and performance. One of his assertions is that many traders build a career around what has worked for them empirically rather than on statistically sound judgement (I agree). That a portion of these traders succeed has more to do with the short sample period than with skill (a trading career is generally short).

I have seen this time and again on Wall Street. A trader does well for some years and then blows up. Wall Street firms present a free option to traders: the trader locks in his profits on an annual basis with a bonus, whereas the firm absorbs the downside of:
  • trader losses
  • the longer-term performance of the portfolio after the trader leaves
He cites two traders: one who took a conservative approach throughout his career, valuing all outcomes in his trading, and another, more typical trader who made 10x more than the former, but whose approach was more subject to randomness in the market and who in the end lost his job / career.

One could say that the conservative approach requires two things:
  • rock-solid risk management, anticipating even low-probability events
  • better evaluation of the "expectation function" by summing all outcomes weighted by their probabilities (a hypothetical example below)
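
A hypothetical example of the second point: a trade that makes $1 with probability 0.95 and loses $30 with probability 0.05 wins 19 times out of 20, yet its expectation is 0.95 x $1 - 0.05 x $30 = -$0.55 per trade. The typical trader in Taleb's story is, in effect, pricing only the first term.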

Setting the Stage

As this is my first post to this blog, figured I would "set the stage" by describing what I intend to explore.

I come from a comp sci / physics background, getting my first exposure to computers back in 1979 as a kid when I took a course at a college with a program for gifted youth. I was hooked from that point on. I went on to specialize in parallel processing and algorithms, which landed me my first job doing applied parallel processing research on Wall Street in the early 90's.

For the last 17 years have worked on Wall Street as a researcher, quantitative developer, architect, etc. in NY, Tokyo, and London. In the 80s and early 90s, technology was just becoming indispensable to the trading desks, and so there was plenty to do, plenty of technology to invent as we went along.

I can say that 5-10 years of that part of my career was involved in developing new technologies (parallel processing, visualization), quantitative libraries (for pricing derivatives), etc. That was all fun until it became a commodity and the business of IT on Wall Street started changing, bringing in a wider variety of programmers (basically the full bell curve rather than the tail) and associated management structure.

Along the way, one goal that had always intrigued me was the notion of automated trading. The concept resonated with me on a number of levels:
  • incredibly hard problem (a holy grail involving state of the art computer science, math, and ingenuity)
  • formalize the thought process involved in deciding to trade (entry and exit)
  • remove the emotional component that can cripple trader judgement
  • going to have a lot of fun along the way
I started investigating this in the mid 90's but there was nothing in the way of electronic execution platforms for anything but the equities market (and I was focused on Fixed Income and FX).

I left Wall Street for a few years to pursue entrepreneurial goals, to come back in 2004 to focus on automated trading strategies. And so begins my blog ...