New processing chip makes computations up to 18x faster

Posted Jul 4, 2016 by James Walker
MIT has unveiled a new processing chip design that could allow future devices to operate as much as 18 times faster than today's PCs. The chip relies heavily on multi-core technology and parallel processing to dramatically speed up programs.
MIT's Swarm processor, capable of making parallelised applications up to 18x faster
Christine Daniloff/MIT
The chip, announced late last month, is called Swarm. It was developed by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) with the aim of making parallel programs more efficient, faster and easier to write.
Swarm improves processing performance, allows more operations to run at once and reduces the amount of code developers need to write by up to 90 percent. Programs developed for Swarm are typically three to 18 times faster than their standard versions.
The researchers ran six common algorithms through Swarm and compared the outcomes with the best existing results for running each algorithm using parallel processing. They found that Swarm can be as much as 18 times faster in some operations while requiring as little as one-tenth the code.
An integral component of Swarm is its ability to reduce the complexity of written code. Traditionally, highly parallelised applications have been very difficult for developers, even experienced ones, to create. Because multiple processes can be running at any given time, synchronising the execution of each part of a program can be very difficult.
In an app that lets you check the news, the content of each article and the accompanying images may be retrieved sequentially, with the images loading after the article. To speed up the loading time, the developer could choose to render the images at the same time as the content. This presents an issue though: how does the program synchronise the two operations, so the user only sees the article when both sections have loaded?
Even in this simple example, the code required to wait for the two processes to finish and only then continue onward can quickly become messy and complex. Swarm provides mechanisms that make it much easier for developers to write parallelised code, letting them create applications that are able to take better advantage of multiple cores.
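In conventional software, the news-app scenario above is typically handled with explicit synchronisation primitives. As a minimal sketch (not Swarm's hardware mechanism, and using hypothetical loader functions), here is how Python's `asyncio` lets a developer run both fetches concurrently and continue only once both have finished:

```python
import asyncio

# Hypothetical loaders; in a real app these would perform network requests.
async def load_article_text(article_id: int) -> str:
    await asyncio.sleep(0.1)  # simulate network latency
    return f"Article {article_id} text"

async def load_article_images(article_id: int) -> list:
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"image-{article_id}-1.jpg", f"image-{article_id}-2.jpg"]

async def load_article(article_id: int):
    # Run both loads concurrently and wait until *both* are done,
    # so the user never sees the article text without its images.
    text, images = await asyncio.gather(
        load_article_text(article_id),
        load_article_images(article_id),
    )
    return text, images

text, images = asyncio.run(load_article(42))
```

Even in this toy case the developer must explicitly structure the program around the synchronisation point; Swarm's goal is to take that burden off the programmer.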
"Multicore systems are really hard to program. You have to explicitly divide the work that you're doing into tasks, and then you need to enforce some synchronisation between tasks accessing shared data," said Daniel Sanchez, an assistant professor at MIT's Department of Electrical Engineering and Computer Science and leader of the Swarm project. "What this architecture does, essentially, is to remove all sorts of explicit synchronisation, to make parallel programming much easier. There's an essentially hard set of applications that have resisted parallelization for many, many years, and those are the kinds of applications we've focused on in this paper."
Swarm has the potential to improve the efficiency of several of the most complex algorithms run on computers. With increased efficiency comes better performance, as well as greater utilisation of the capabilities of modern processors.
One of Swarm's most important applications is graph traversal. This involves a collection of nodes, illustrated as circles, connected by edges, illustrated as lines. The edges may carry weightings to represent the strength of correlations between data points, the probability of an event occurring or other relevant information about the dataset.
Because of the huge number of possible orders in which a graph's nodes can be visited, parallelising these applications is very difficult. They provide solutions to complex problems, though, and are used in applications ranging from modelling geographic relationships to calculating routes in satnav software. Swarm could finally allow developers to parallelise and speed up these applications, paving the way for even more complex algorithms in the future.
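To make the satnav example concrete, a classic sequential baseline for weighted graph traversal is Dijkstra's shortest-path algorithm, sketched below on a hypothetical toy road network (Swarm's contribution is accelerating this kind of workload in hardware, not this particular code):

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from `start` in a weighted graph.

    `graph` maps each node to a list of (neighbour, weight) pairs.
    """
    dist = {start: 0}
    queue = [(0, start)]  # priority queue of (distance, node)
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

# Toy road network: edge weights could represent travel times.
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 1)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}
distances = dijkstra(roads, "A")
```

The priority queue forces a largely sequential order of node visits, which is exactly why algorithms like this have resisted parallelisation for so long.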