Advanced Computing

Quantum Computing – qubits and all.

D-Wave, based in Canada, has released its latest “quantum computer”. The D-Wave 2X contains 1000 qubits – double that of its predecessor, the D-Wave Two – and operates below 15 mK. Not quite a true quantum computer (or is it?). Put through a series of benchmark optimisation problems, it came out 15 times quicker than traditional optimisation software running on “ordinary” computers.

The D-Wave Two – a 512-qubit system, one of which was bought by NASA’s Ames Research Center – uses quantum annealing to find global minima of search functions. Conventional computers often get caught in a local minimum, whereas the quantum computer uses quantum tunnelling to cross the barriers between local minima, a more efficient approach than the conventional one. The controversy over D-Wave stems from the fact that a true quantum computer is kept in a fragile quantum state throughout the calculation; D-Wave’s approach involves transitions between quantum and classical states when performing these calculations.
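To make the local-versus-global minimum point concrete, here is a minimal classical simulated-annealing sketch in Python. It is not D-Wave’s quantum annealing – the thermal “accept an uphill move” rule merely plays a role loosely analogous to tunnelling through barriers – and the toy objective function and all parameters are invented purely for illustration.

    import math
    import random

    def objective(x):
        # Toy 1-D landscape with many local minima (invented for illustration).
        return x * x + 10 * math.sin(3 * x)

    def simulated_annealing(start, temp=10.0, cooling=0.995, steps=10000):
        # Accept uphill moves with probability exp(-delta/temp): the classical
        # analogue of escaping a local minimum rather than getting stuck in it.
        x = best = start
        for _ in range(steps):
            candidate = x + random.uniform(-0.5, 0.5)
            delta = objective(candidate) - objective(x)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = candidate
                if objective(x) < objective(best):
                    best = x
            temp *= cooling  # gradually "cool" towards a low-energy state
        return best

    if __name__ == "__main__":
        print("approximate global minimum near x =", round(simulated_annealing(8.0), 3))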

The first D-Wave sale was to Lockheed Martin in 2011.

The CAS-Alibaba Quantum Computing Laboratory in China has the goal of developing a general-purpose prototype quantum computer by 2030. The venture is supported by Aliyun, the cloud-computing subsidiary of Chinese online retail giant Alibaba, which is injecting $5M every year until 2030. An interim goal is the coherent manipulation of 30 qubits, with calculation speeds equivalent to today’s supercomputers.

A Brief History of HPC

Look at this timeline from my old friends at EPCC.

Big Data

Big data seems to be one of those buzzwords knocking around these days. Such data can be manipulated with machine learning techniques – noise reduction via information theory, for example – much of which is available in the Weka Java package or in R. Once the noise is reduced, supervised (decision boundary) or unsupervised (clustering) learning can be used to uncover the patterns and insights hidden in all that data, for example in the heat maps produced from DNA microarray data (see Technical Consulting).
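As a concrete illustration – using Python and scikit-learn here rather than the Weka or R toolkits mentioned above, purely for brevity – the sketch below drops near-constant “noise” features (a crude stand-in for information-theoretic noise reduction) and then applies unsupervised k-means clustering to a synthetic expression-style matrix. All names and numbers are invented for the example.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_selection import VarianceThreshold

    # Synthetic "expression matrix": 20 samples x 50 features, two hidden groups.
    rng = np.random.default_rng(0)
    group_a = rng.normal(size=(10, 50))
    group_b = rng.normal(size=(10, 50))
    group_b[:, :5] += 6.0                            # only the first 5 features carry signal
    X = np.vstack([group_a, group_b])
    X[:, 40:] = 0.01 * rng.normal(size=(20, 10))     # near-constant "noise" features

    # Crude noise reduction: drop near-constant features before clustering.
    X_filtered = VarianceThreshold(threshold=0.5).fit_transform(X)

    # Unsupervised learning: k-means clustering to recover the two sample groups.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_filtered)
    print(labels)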

Working with big data sets – again from EPCC – consider the three V’s:

Volume (size of individual items and the complete set of data)

Variety (if using different sources of data)

Velocity (speed of change of the data and the time available to analyse it – e.g. the daily Pershing output of financial activity on the FTSE, versus genomic data which can be analysed at “leisure”)

All three V’s have a say in which computational techniques are used. For example, can all the data be analysed at once, or can it be split into parallel or serial tasks depending on the compute power available? Can initial filtering by information theory reduce the data set? Can obvious outliers (false positives and negatives) be spotted by eye, intuition and/or experience before the button is pressed and the computers kick into gear without any further thought?
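As one sketch of the “split into parallel tasks” option, the Python example below divides a synthetic data set into chunks and farms each chunk out to a pool of worker processes. The per-chunk summary function and all sizes are placeholders for whatever the real analysis would be.

    import numpy as np
    from multiprocessing import Pool

    def summarise(chunk):
        # Analyse one chunk independently; a mean/variance summary stands in
        # for whatever the real per-chunk analysis would be.
        return chunk.mean(), chunk.var()

    if __name__ == "__main__":
        # Hypothetical "big" data set, split into chunks sized to the compute available.
        data = np.random.default_rng(0).normal(size=1000000)
        chunks = np.array_split(data, 8)

        # Farm the chunks out to worker processes; a serial loop over the same
        # chunks would give identical results, just more slowly.
        with Pool(processes=4) as pool:
            results = pool.map(summarise, chunks)

        for i, (mean, var) in enumerate(results):
            print(f"chunk {i}: mean={mean:.4f} var={var:.4f}")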