Friday, 24 July 2015

Ashley Madison hack

First I thought, "Nice hack of an unsavoury service." Kind of an amusing comeuppance at first blush. The crypto nerd in me felt some glee in the whole thing until I meandered.

Then I thought better.

It's not just the invasion of privacy of those untroubled by their activities. Consider the effect on those who would be troubled. Perhaps most would be troubled, but that is just my prejudice. I think it fair to say, some AM clients may feel embarrassed or ashamed when well lit.

Would the hack be worth the life of one depressive or newly depressed individual?

I think not.

So, shame on the hack for targeting the citizenry and tempting the likely fate of awful collateral damage.

Bombing Dresden may have seemed like a good idea in the planning, but the consequences were not well thought out either. Yeah, not a great analogy in terms of magnitudes of morality as I skirt close to Godwin's Law.

Nevertheless, how would you balance the equation of:
      serving N assholes = M lives?

M = 0 is the only correct answer for me.


Tuesday, 14 July 2015

Top500 - is that a supercomputer in your pocket?

The new Supercomputer Top500 list is out today. Not much change at the top, with one new entry at #7, a Cray XC40 in the Kingdom of Saudi Arabia. Apparently it's the lowest turnover in the 500-strong list in a couple of decades. Stasis or Moore's law barriers?

There are now 68 systems running at greater than 1 PetaFlop. The entry point was raised to 153.6 TFlops from 133.7 TFlops. You might want to plan for around 200 TFlops in your basement if you want to crack the next list in six months' time.
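
As a sanity check on that 200 TFlops guess, here is a quick geometric extrapolation of the entry point. A sketch only, assuming the next list grows at the same rate as the last one did:

```python
# Naive geometric extrapolation of the Top500 entry point, using the
# figures above: previous entry 133.7 TFlops, current entry 153.6 TFlops.
prev_entry = 133.7  # TFlops, previous list's #500
curr_entry = 153.6  # TFlops, current list's #500

growth = curr_entry / prev_entry      # list-to-list growth factor (~1.15)
next_entry = curr_entry * growth      # projection for the next list

print(f"growth factor: {growth:.3f}")
print(f"projected next entry point: {next_entry:.1f} TFlops")
```

That projects roughly 176 TFlops, so planning for around 200 leaves a sensible margin.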

Though it is not necessarily representative of many modern workloads, such as graph work, the list remains captivating.

I'm getting old. A piece of meandering I find most interesting and, frankly, a little challenging, is where the list was when it all began in 1993.

I remember working in prop trading at an investment bank in 1994 and getting a new beaut dual 200MHz Pentium Pro IBM Micro Channel beast for my desk. It could deal with time series of 800,000 intraday Bund points quite nicely, which I found fun. Time ticks on, and my $35 Raspberry Pi 2 can run rings around that old expensive workstation. Comparing supercomputers in 1993 to modern phones is even more mind-boggling, to me at least.

Your pocket supercomputer

Let's look at the 1993 starting point. To get into that initial Top500 list you needed to do High Performance Linpack (HPL) at around 0.4 GFlops. The fastest supercomputer at HPL on planet Earth, and perhaps anywhere else in the solar system, was 59.7 GFlops.

Let's look at a slightly older phone, the LG Nexus 4. It uses an ARM Cortex A9 quad-core processor and an Adreno 320 GPU. Here is a 2014 paper, "A Case Study of OpenCL on an Android Mobile GPU" by Ross et al. It doesn't run HPL for a direct comparison, but we can see the ARM grinds out about 1.09 GFlops and the GPU around 15.2 GFlops (with 8,912 particles) on a single precision n-body simulation. Pretty good going. It's not quite the 89.8 GFlops of a dual Xeon X5650 (12 x 2.67GHz cores), nor near the 1.362 TFlops of an AMD Radeon HD 6970 GPU (Cayman) on the same problem, but it is comfortably within range of a plausible comparison with the first Top500 list of supercomputers in 1993.

1993 initial supercomputer list - double precision HPL
 #500 = 0.4 GFlops          #1 = 59.7 GFlops

LG Nexus 4 with ARM Cortex A9 & Adreno 320 - single precision n-body
CPUs = 1.09 GFlops              GPU = 15.2 GFlops
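
Putting those numbers side by side, and leaving aside the single versus double precision caveat, the ratios are fun to compute:

```python
# Rough ratios between the 1993 Top500 endpoints and the Nexus 4 figures.
# Apples-to-oranges caveat: the 1993 numbers are double precision HPL and
# the phone numbers are a single precision n-body kernel.
top500_1993 = {"#500": 0.4, "#1": 59.7}   # GFlops, double precision HPL
nexus4 = {"CPU": 1.09, "GPU": 15.2}       # GFlops, single precision n-body

for rank, flops in top500_1993.items():
    for part, phone_flops in nexus4.items():
        print(f"Nexus 4 {part} vs 1993 {rank}: {phone_flops / flops:.2f}x")
```

The phone's GPU alone is about 38 times the 1993 entry point, and roughly a quarter of the 1993 #1.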

Awesome. You do have a supercomputer in your pocket. 

Now the Adreno 320 is no longer state of the art. Whilst I don't have HPL numbers, we can look at the peak rates of a few more modern phone SoCs to meander about the space. It's fun, to me at least.

From this list last updated in March 2015:

  • Qualcomm Adreno 320
      • Peak 57.6 GFlops
  • Qualcomm Adreno 430 - Snapdragon 810
      • Peak 324 to 388.8 GFlops
  • Imagination PowerVR SGX554 MP4 - Apple A6X
      • Peak 76.8 GFlops
  • Imagination PowerVR GX6850 - Apple A8X
      • Peak 272.9 GFlops
  • Imagination PowerVR GT7900
      • Peak 819.2 GFlops
  • Nvidia Tegra 4
      • Peak 96.8 GFlops
  • Nvidia Tegra X1
      • Peak 512 GFlops

Whilst software may be able to use only part of this peak power, or perhaps none at all, the future is clearly bright for running fluid dynamics in your pocket.
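
Peak figures like these are normally just FLOPs per cycle multiplied by clock rate. As a sketch, the 324 to 388.8 GFlops spread quoted for the Adreno 430 is consistent with a fixed 648 single precision FLOPs per cycle across a 500 to 600 MHz clock range. Note the 648 figure is backed out of the quoted range, an assumption on my part rather than a vendor specification:

```python
# Peak GFlops = FLOPs per cycle x clock (GHz).
# The 648 FLOPs/cycle is backed out of the quoted 324-388.8 GFlops range
# (e.g. 324 FMA lanes counting 2 FLOPs each) - an assumption, not a spec.
flops_per_cycle = 648

for clock_ghz in (0.5, 0.6):
    peak = flops_per_cycle * clock_ghz
    print(f"{clock_ghz * 1000:.0f} MHz -> {peak:.1f} GFlops")
```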

It wasn't that far back, in June 2005, that the entry point for the Top500 exceeded 1 TFlop. I wonder how long before truly usable cell phone Flops exceed 1 TFlop? Perhaps not too long at all.



Sunday, 12 July 2015

IEX - not walking the talk

The IEX Discretionary Peg (DPEG) order type should have no place in a transparent market place.

Let me meander through my reasoning...

I have no real problem with IEX other than that I didn't really see the need for it, a view I continue to hold. The people involved all seem pretty well intentioned, the ownership model is different without necessarily being better, the initial order types were simple and nice, and, in the beginning, IEX was transparent about what it did. It is no longer transparent.

The initial thought I had about IEX was that it was unlikely to be wildly successful, as its "shoebox" delay line simply turned the platform into a slow matching system with a virtual co-location space the size of New Jersey. That is not a recipe for success. Slow exchanges lose to faster exchanges as natural liquidity hubs and risk management centres, all other things being equal.

Jumping the shark

IEX introduced the DPEG order type a while ago and in December 2014 it was 11% of IEX's volume.

What is it?

You'll find the description of DPEG from IEX here, which I reproduce in full:

Discretionary Peg Order

Upon entry, a Discretionary Peg Order is priced by the System to be equal to the Midpoint Price. Unexecuted shares are posted to the Order Book priced equal to the primary quote and automatically adjusted by the System in response to changes in the NBB or NBO. Discretionary Peg Orders can exercise price discretion to the Midpoint Price and respond to quote stability signals from the System. Discretionary Peg Orders are not eligible for routing and must have a TIF of FOK, IOC, DAY, or GTT.

Price Discretion: Discretionary Peg Orders will exercise the least amount of price discretion necessary, from their resting price to the less aggressive of the Midpoint Price or the Discretionary Peg Order’s limit price, to meet the limit price of orders entering the Order Book. When exercising discretion, Discretionary Peg Orders maintain time priority at their resting price and are prioritized behind any resting orders at the discretionary price. Discretionary Peg Orders are eligible to Recheck the Order Book to the Midpoint Price.

Quote Stability: During periods of quote instability, Discretionary Peg Orders are not eligible for Book Recheck and will not exercise price discretion. Quote stability is determined by the System based on an IEX proprietary assessment of relative quoting activity of Protected Quotations over a given period of time.

The bit that has always struck me as disconcerting is in the final paragraph above: "based on an IEX proprietary assessment of relative quoting activity." You don't know what your DPEG order is doing. You can never argue that IEX has done the right thing or the wrong thing. If your DPEG order wanders in after a hard night out and claims it did its best for you, you can't really question it. All you can do is trust blindly and not verify. This is not how a transparent market place should operate.
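
To make the mechanics concrete, here is a toy model of the discretion step for a resting DPEG buy order. This is only my reading of the published description, not IEX's actual logic, and the quote stability input is exactly the proprietary, unverifiable part:

```python
# Toy model of discretionary peg pricing for a resting buy order, per my
# reading of IEX's published description. The quote_stable flag stands in
# for IEX's proprietary quote stability signal - the opaque bit.
def dpeg_buy_exec_price(resting_price, midpoint, limit_price,
                        incoming_offer, quote_stable):
    """Price a resting DPEG buy would pay against an incoming offer, or None."""
    if incoming_offer <= resting_price:
        return resting_price                     # trades at rest, no discretion
    if not quote_stable:
        return None                              # unstable quote: no discretion
    discretion_cap = min(midpoint, limit_price)  # less aggressive of the two
    if incoming_offer <= discretion_cap:
        return incoming_offer                    # least discretion necessary
    return None

# Stable quote: steps up to the offer, capped at the midpoint.
print(dpeg_buy_exec_price(10.00, 10.05, 10.10, 10.03, True))   # 10.03
# Unstable quote: discretion withheld, no trade.
print(dpeg_buy_exec_price(10.00, 10.05, 10.10, 10.03, False))  # None
```

The point of the sketch is the third argument to fate: whether `quote_stable` was true at the moment you needed it is something only IEX can ever know.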

Now, I believe DPEG probably provides a useful service to clients and is well intentioned. However, I also believe DPEG has no place in the life of an exchange. IEX is not an exchange. Does it have a place in an ATS, which has a lower threshold of oversight? I'm not sure; I think it OK if the users want it in an ATS, though I'm sitting on the fence as generally unsupportive. I am strongly of the view that such unverifiable, opaque orders have no role in a properly regulated, fully licensed exchange. I hope IEX will remove it before becoming an exchange.

This is nowhere near the same category of dubiousness as Pipeline screwing over the customers in its pool. Nor is it in the same dodgy territory as ITG misleading its customers about being agency only while making millions of dollars by trading proprietarily against its clients' orders.

It goes to the role of order types in an exchange. I think this remains one of the bigger regulatory problems all around the world, but in particular with Reg NMS in the US. Too many order types and too much weirdness. The issue has not really been addressed since "Flash Boys", that crap work of fiction, highlighted it; it was one of the few valid criticisms Lewis made. I'm kinda happy about this as it gives me room to exploit order types, but it shouldn't be the case. The regulators should be depriving me of this opportunity.

I'd perhaps support an exchange offering more complex order types, like IEX's DPEG, if such orders were built out of atomic orders and behaviours with no special advantage. Effectively, offering layered services: not within the ring of the matching engine, but in the same co-location space that clients also have access to, so there is no disadvantage to clients relying on the standard order types.

The SEC should force exchanges to stick to simple order types that are completely transparent, to the point where a client could simulate the behaviour of an exchange from external market data. It's in the interest of any regulator to be able to properly review and "regulate" the orders of an exchange, after all. If you want a complex order type with weird behaviour, use a broker that you can trust.

IEX, please go back to your principles and make a transparent market place.

Happy trading,


Thursday, 9 July 2015

NYSE failure - hand-wavy generalisations

I guess we'll learn later why the NYSE went down for a few hours yesterday, but when these things happen I feel somewhat surprised they don't happen more often.

In the old days, with Stratus FT machines, VAX VMS clusters, and lock-stepped Tandem NonStops, the core systems of some exchanges were remarkably resilient. Today, not only is the resilience less well designed, but the network complexities and "other" infrastructure pieces involved are orders of magnitude more complicated. It's no longer biggish iron with relatively simple serial I/O. I really am surprised such failures don't happen more often.

It doesn't have to be this way though. Matching engines are inherently simple.

At the old Zeptonics, we wrote the ZeptoMatch prototype, a matching engine that never quite made it to product stage. It operated on the equivalent of a day's Nasdaq data with a median wire-to-wire latency of 1.97 microseconds using Mellanox ConnectX-EN 10G cards. It could do over a million orders per second quite comfortably. There was a fair bit of work around the core matching engine for housekeeping, monitoring, et cetera, but the core matching was only a few hundred lines of C++ using straightforward data structures from the Standard Template Library. No great magic there. The "core" was about a month's work by one guy, albeit a clever one. Again, not really a big deal. The speed was just a result of carefully tuning the processor, Linux, and the Mellanox network stack.
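
For flavour, the core really is small. Here is a minimal sketch of a price-time priority matcher, in Python rather than C++ and with none of the housekeeping, so illustrative only:

```python
# Minimal price-time priority matching core: two books of resting limit
# orders; an incoming order matches against the best opposite price first,
# fills in time order within a level, and rests any remainder.
from collections import deque
import heapq

class Matcher:
    def __init__(self):
        self.bids = []    # max-heap of prices (stored negated)
        self.asks = []    # min-heap of prices
        self.queues = {}  # (side, price) -> deque of (order_id, qty)

    def _book(self, side):
        return self.bids if side == "buy" else self.asks

    def submit(self, oid, side, price, qty):
        """Match an incoming order; return a list of (maker, taker, price, qty)."""
        trades = []
        opp = "sell" if side == "buy" else "buy"
        opp_heap = self._book(opp)
        while qty and opp_heap:
            best = opp_heap[0]
            best_price = -best if opp == "buy" else best
            crosses = price >= best_price if side == "buy" else price <= best_price
            if not crosses:
                break
            q = self.queues[(opp, best_price)]
            rid, rqty = q[0]                 # oldest order at the level
            fill = min(qty, rqty)
            trades.append((rid, oid, best_price, fill))
            qty -= fill
            if fill == rqty:
                q.popleft()
            else:
                q[0] = (rid, rqty - fill)
            if not q:                        # level exhausted
                heapq.heappop(opp_heap)
                del self.queues[(opp, best_price)]
        if qty:                              # rest the remainder at its limit
            key = (side, price)
            if key not in self.queues:
                self.queues[key] = deque()
                heapq.heappush(self._book(side), -price if side == "buy" else price)
            self.queues[key].append((oid, qty))
        return trades
```

A few standard containers, a heap per side, a queue per level; the hard parts of an exchange live elsewhere.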

For a matching engine, being simple should help with building a resilient exchange, as it is then easier to layer on further techniques. For example, you could do it twice, or N times, with some N-versioning for better safety. You could also take the meritorious approach that Prof Gernot Heiser's clever team took with the seL4 microkernel: that is, write a proof. seL4 is over 9000 lines of C code with some assembler thrown in. There was an initial functional correctness proof in Isabelle/HOL that was extended to include:

  • proofs for a high performance IPC fastpath; 
  • proofs for correct access-control enforcement; 
  • proofs for information-flow noninterference; 
  • a proof for user-level system initialisation; 
  • a proof of refinement between the semantics of the kernel binary after compilation/linking and the C source code semantics used in the functional correctness proof; and,
  • an automated static analysis of the seL4 binary to provide worst-case execution time for all system calls.

seL4 is much more complicated than a matching engine...

There are no guarantees with N-versioning / voting systems, nor with formal proofs, but you'd likely be in a better place. I'm not sure why an exchange doesn't bite the bullet, spend a handful of millions and about a year, and do just this. You'd think the market might reward it, given the publicity around such issues.
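
A minimal sketch of the voting idea, with toy stand-ins for the N independently written matchers (the independent implementations are the hard and expensive part, of course):

```python
# N-version voting: run the same input through N independently written
# implementations and only publish a result a strict majority agree on.
from collections import Counter

def vote(implementations, order):
    """Return the majority result, or raise to halt rather than publish."""
    results = [impl(order) for impl in implementations]  # must be hashable
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(implementations) // 2:
        raise RuntimeError("no majority: halt the market and investigate")
    return winner

# Three toy "matchers": two agree, one is buggy.
impls = [lambda o: ("fill", o), lambda o: ("fill", o), lambda o: ("reject", o)]
print(vote(impls, 42))  # ('fill', 42)
```

Disagreement becomes a loud, investigable event instead of a silently wrong print.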

There is much more to an exchange than the double auction or call auction process. Despite the occasionally stuffed-up IPO, errors are more likely in the infrastructure or network. The thing I find most troubling in many exchange architectures is the homogeneity of the network systems used. Often one vendor, or just one vendor at a particular architectural point. Usually with redundancy, but homogeneous redundancy. I'd like to see redundancy in the design and/or vendor to improve resilience. For example, you could use a fast layer one replication market data broadcast for the ultimate in speed, along with an additional traditional UDP multicast set of infrastructure as a nicely independent model. There would not be a lot of cost in doing it twice with modern equipment but much to gain in terms of reliability. I can't imagine there would be too many networking vendors willing to share when submitting an RFQ, which means it is up to the exchanges and their architecture groups to puzzle this one out without necessarily relying on the vendors.

Still, rolling out technical reconfigurations without robust procedures will always kill you. Humans aren't great at pushing buttons.



[Update: Information release from NYSE via BI on the incident]

Thursday, 2 July 2015

"If something's not impossible, there must be a way to do it"

Sir Nicholas Winton: a real-life super-hero passed away 1st July 2015.

A stockbroker who quietly rescued 669 children. After the war, Winton did not discuss his efforts with anyone; his wife found out what he had done only after she discovered a scrapbook in their attic in 1988, detailing the children's parents and the families that took them in.

New York Times
Wikipedia: Sir Nicholas Winton
Power of Good: story
Washington Post on Sir Nicholas Winton

Sadly, during a BBC interview in October 2014 he said, "I don't think we've learned anything ... the world today is in a more dangerous situation than it has ever been."