In the post-crash world we’ve been living in for the last few years, market manipulation has become even more of a focus than it was pre-crash. Regulators, pressurised by governments and popular opinion, first cracked down on the perceived practitioners (hedge funds, HFT firms, prop trading desks) and then on the practices themselves. What used to be an opaque, shadowy world was gradually brought into the light, and terms like spoofing, layering and momentum ignition entered the everyday trading vernacular. Regulations such as MiFID II and MAR have been drafted to curb these practices. MAR, which takes effect on 3rd July 2016, focuses on eradicating abuse by algorithms and also seeks to punish naive and reckless algorithms that threaten market integrity. Under MiFID II, non-live algorithm testing will be compulsory from 3rd January 2018.
Now that participants are being forced to put in place monitoring and reporting systems to catch any trading that might be considered abuse under MAR, all would appear to be well. Dig a little deeper, however, and it’s clear that most participants are only scratching the surface. Many firms may regard deliberate abuse as, for the most part, a non-issue because they have adequate checks in place to prevent it, but not all abuse is deliberate. What happens when an algorithm reacts in an unexpected way to market conditions? What happens when it reacts unexpectedly to other algorithms?
Not all algorithms are created equal
To understand how many firms are affected by MAR’s focus on abuse, one need only look at its stated definitions of market abuse and of algorithmic trading. MAR’s definition of abuse covers a wide range of behaviours of varying degrees of specificity, including conduct which “is likely to create unfair trading conditions”. The word “likely” appears liberally throughout the definitions of market manipulation, rendering a mandatory suspicious order report tantamount to an admission of guilt. From 3rd July 2016, everyone involved in “professionally arranging or executing transactions” will be required to make these reports immediately, before conducting a full investigation. Algorithms are also defined broadly: effectively, anything where any parameter (other than venue alone) is set non-manually counts. This is a far cry from the sophisticated, autonomous algorithms one would imagine deliberately creating unfair market conditions.
Whilst many trading firms might consider themselves above reproach when it comes to intentional market abuse, unintentional abuse can happen at any time. Firms will now need to assess their trading strategies for the potential to operate in naïve or reckless ways that risk unintentionally contributing to disorderly or unfair trading. This covers both generic behaviours that could destabilise the market or overload an order book, and very specific abuse such as spoofing and layering. Algorithms are hugely influenced by the behaviour of other market participants, and with more and more trading done automatically, those other participants are often themselves algorithms. If they interact in an unexpected way – for example, by exacerbating and feeding a pre-existing trend that may include false or misleading signals – then, by MAR’s definition, abuse has occurred. The firms and individuals operating an algorithm implicated in such an event are thus liable to prosecution and, across most of the EU, even jail time.
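The trend-feeding interaction described above can be sketched in a few lines of code. This is a toy illustration only – every name and parameter here is invented, and it does not represent any real trading system – but it shows how two independent momentum-following algorithms, each harmless in isolation, can amplify a tiny external drift into a runaway trend simply by reacting to each other’s price impact:

```python
# Toy illustration (all names and numbers invented) of two momentum
# algorithms unintentionally feeding each other's signals.

def run_market(steps=20, impact=0.5):
    prices = [100.0]
    for _ in range(steps):
        price = prices[-1]
        trend = price - prices[-2] if len(prices) > 1 else 0.0
        orders = 0
        for _algo in range(2):  # two independent momentum followers
            if trend > 0:
                orders += 1     # each buys into a rising market
            elif trend < 0:
                orders -= 1     # and sells into a falling one
        # Their own orders move the price, creating the next "trend"
        # signal; 0.1 is a small external drift that seeds the loop.
        prices.append(price + impact * orders + 0.1)
    return prices

prices = run_market()
# With impact=0 the drift alone moves the price by ~2; with the two
# algorithms reacting to each other it moves by ~21 over the same run.
```

Neither algorithm is spoofing or layering; the disorder emerges purely from their interaction, which is precisely the kind of unintentional contribution to unfair trading conditions that MAR still treats as abuse.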
Trading has reached a point where you are as likely to be trading against a computer as against a human being, and situations such as those described above threaten to occur more regularly. The worrying thing is that, despite current non-live testing procedures, many firms will not know how an algorithm behaves in the real world until the damage is already done and the event has been reported to the regulators under MAR (from 3rd July 2016).
Is testing fit for purpose?
Although algorithms are tested against historic market data and put through exacting stress tests, the nature of such testing environments means that current methods are next to useless for preventing violations of fair and orderly trading. Because the data being fed into these algorithms is historic, the test is effectively a canned replay: the historic “market” cannot react to what the algorithm does, and individual participants in that historic market cannot respond to conditions that the algorithm’s own actions have changed.
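The difference can be made concrete with a minimal sketch. The class and parameter names below are invented for illustration, and the “market” is deliberately trivial, but the contrast holds: against a replay feed the algorithm’s own orders never alter the prices it sees next, whereas even a crude reactive simulation exposes its market impact:

```python
# Minimal sketch (invented names/parameters) contrasting a canned
# historic replay with a reactive simulated market.

class ReplayFeed:
    """Plays back fixed historic prices; submitted orders are ignored."""
    def __init__(self, history):
        self.history = list(history)
        self.i = 0
    def next_price(self):
        p = self.history[self.i]
        self.i += 1
        return p
    def submit(self, qty):
        pass  # historic data cannot react to order flow

class ReactiveMarket:
    """Toy simulated market whose price moves with order flow."""
    def __init__(self, start, impact=0.2):
        self.price = start
        self.impact = impact
    def next_price(self):
        return self.price
    def submit(self, qty):
        self.price += self.impact * qty  # orders move the market

def trade(market, steps=5, qty=10):
    seen = []
    for _ in range(steps):
        seen.append(market.next_price())
        market.submit(qty)  # aggressive buying every step
    return seen

replay = trade(ReplayFeed([100, 100, 100, 100, 100]))
reactive = trade(ReactiveMarket(100))
# replay   -> [100, 100, 100, 100, 100]: buying never moves the feed
# reactive -> steadily rising prices driven by the algorithm's own orders
```

An algorithm that looks benign against the replay feed may behave very differently once the market it trades in responds to it – which is exactly the behaviour historic-data testing cannot surface.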
Such algorithm testing therefore falls significantly short of the testing requirements mandated for all trading firms under MiFID II and leaves firms exposed to releasing algorithms with unintended risks under MAR. Without the ability to simulate and test for disorder provocation against responsive emulations of actual instruments – markets that react in a realistic way – trading firms will be unable to certify their algorithms under MiFID II and, from 3rd July 2016, face an existential threat under MAR. Yet many firms are so far simply ignoring it.
Under MAR the penalties for abuse are severe: fines of EUR 15m or, if higher, 15% of turnover for firms, and EUR 5m for individuals. Across all of Europe (except the UK and Denmark, who may be even more severe), a four-year custodial sentence is legislated for individuals convicted of manipulation.
Eddie Thorn is a Director at SQS, which, together with TraderServe, provides the AlgoGuard algorithmic stability testing platform, available on the Colt PrizmNet financial extranet.