ESMA acknowledges, however, that at present it may not be feasible to expect trading venues to synchronise their clocks or time stamp events at a granularity finer than a nanosecond. As a result, ESMA has proposed capping the granularity and accuracy requirements at the nanosecond level.
ESMA is the European Securities and Markets Authority. Time accuracy below a nanosecond is beyond current technology; it is a tough mark even in a lab that is doing nothing but measuring time. Even nanosecond accuracy is not that practical. But clearly the regulators are pushing a major improvement in the standard, which is pretty weak right now. I'm going to go over what is in the proposal to see how much of it can work in practice and what needs to be worked out.
Granularity and accuracy are very different things. Measuring time at nanosecond granularity is not a big deal, but synchronizing clocks to nanosecond accuracy is at the edge of the possible for modern computing devices. Perhaps hardware trading engines built from FPGAs might be capable in theory (although not in practice), but consider what happens on a server computer executing this code (sketched here as C on Linux):
n = recv(sock, pkt, sizeof pkt, 0);     /* read packet          */
clock_gettime(CLOCK_REALTIME, &ts);     /* read time            */
record.stamp = ts;                      /* stamp time on packet */
On current generation Intel Xeons a level 3 cache miss costs about 120 cycles, which for a 5GHz processor is, at best, 24 nanoseconds. Read the time, take a cache miss, write the time: even if the time is perfectly accurate when read, it is 24 nanoseconds off by the time it is written. And we didn't do any computation between reading the time and stamping the packet in this example; real work in between would add further error. For a 5GHz processor running at full speed, every 5 cycles take a nanosecond. That's not a lot of cycles to execute a transaction or even to copy the timestamp into some record. An interrupt would introduce microseconds of error. A page fault might take hundreds of microseconds or even milliseconds, and if the operating system schedules the process out, delays might run to tens of milliseconds. On the other hand, the stamp will still be valid as the time (up to clock accuracy) at which the process was about to do something with the packet, and that's probably what the regulators really want.
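To put a number on just the "read time" step, here is a minimal sketch (my own example, nothing from the proposal) that times back-to-back clock_gettime() calls on Linux and keeps the smallest gap. On typical current hardware the best case comes out on the order of tens of nanoseconds, and any interrupt landing between the two reads shows up as a large outlier.

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec a, b;
    long min_ns = 1000000000L;

    /* Sample many back-to-back clock reads; the minimum gap
       approximates the raw cost of reading the clock, with
       interrupt and scheduling noise filtered out. */
    for (int i = 0; i < 1000000; i++) {
        clock_gettime(CLOCK_REALTIME, &a);
        clock_gettime(CLOCK_REALTIME, &b);
        long ns = (b.tv_sec - a.tv_sec) * 1000000000L
                + (b.tv_nsec - a.tv_nsec);
        if (ns < min_ns)
            min_ns = ns;
    }
    printf("best-case clock read gap: %ld ns\n", min_ns);
    return 0;
}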
The proposal also has a table specifying how accurate timestamps have to be in terms of the "gateway-to-gateway" time of the fastest trading venue a participant connects to. In other words, ESMA wants market participants to synchronize clocks to a tolerance set by their fastest connected trading partner.
To clarify, the gateway-to-gateway latency time is the time it takes for the trading venue to acknowledge an order. This is the time from when the order message is received by the trading venue until the time that the order acknowledgment leaves the trading venue, which will include any processing of the order message that the venue must conduct and the creation of the order acknowledgement message. Trading venues may list multiple gateway-to-gateway latency times for different percentiles. For the purposes of clock synchronisation, ESMA considers that trading venues should use the gateway-to-gateway latency time at the 99th percentile.
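The proposal doesn't say how that 99th percentile is to be estimated. Assuming the common nearest-rank definition (my assumption, not ESMA's wording), the computation over a sample of measured acknowledgment times would look roughly like this, with the latency values invented for illustration:

#include <stdio.h>
#include <stdlib.h>

static int cmp_long(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

/* Nearest-rank percentile: the smallest sample such that at least
   p percent of the samples are at or below it. */
long percentile_us(long *samples, size_t n, int p)
{
    qsort(samples, n, sizeof *samples, cmp_long);
    size_t rank = (n * p + 99) / 100;   /* ceil(n * p / 100), 1-based */
    return samples[rank - 1];
}

int main(void)
{
    /* Hypothetical gateway-to-gateway times, in microseconds. */
    long acks[] = { 410, 520, 480, 950, 620, 999, 700, 450, 530, 610 };
    size_t n = sizeof acks / sizeof acks[0];
    printf("p99 = %ld us\n", percentile_us(acks, n, 99));
    return 0;
}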
Here’s the table – and it’s also pretty ambitious.
If an exchange acknowledges 99% of all orders within 999 microseconds – not all that unusual – the market participants connecting to that exchange will have to synchronize their clocks down to 1 microsecond. Notice that this time runs from "received" until "leaves", which means transit times over connections to the trading venue do not count.
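Spelled out as code, the rule this example implies might look like the sketch below. Only the sub-millisecond case is taken from the example above; the fallback value is a placeholder, not a figure from ESMA's table.

/* Illustrative only: the 1 microsecond requirement for venues that
   acknowledge 99% of orders in under a millisecond comes from the
   example above; the fallback is a placeholder, not ESMA's table. */
long required_sync_accuracy_us(long gateway_p99_us)
{
    if (gateway_p99_us < 1000)  /* sub-millisecond venue */
        return 1;               /* clocks must agree to 1 microsecond */
    return 1000;                /* placeholder for slower venues */
}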
There are some missing pieces in the proposal. What is the requirement for fault tolerance or fault detection? How is accuracy measured: worst case, standard deviation, average? How reliable do the latency numbers published by the venues have to be? The comments on the original draft show the expected pushback against numbers seen as unrealistic. But the regulator is standing by what is going to be a much needed major upgrade of EU time synchronization. We are ready to help.