ARM has built a new set of processors from the ground up to power an artificially intelligent world.
Ahead of Mobile World Congress, the ubiquitous chip maker is pulling back the curtain on Project Trillium—its new suite of products designed to enable machine learning (ML) anywhere and everywhere.
Project Trillium consists of three components: a new ML processor rolling out to device makers and partners in mid-2018, a new object detection processor launching at the end of the quarter, and a set of neural network software libraries available to developers today.
ARM isn’t releasing full architectural details of the new ML and object detection chips, but it claims to have developed processors beyond the capabilities of current CPUs and GPUs. Built specifically for machine learning workloads, the ML processor sports a new intelligent memory system that the company says maintains processing performance without draining power. The object detection processor can process video feeds in real time at up to 60 frames per second and detect objects in the frame as small as 50×60 pixels.
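To put those real-time figures in perspective, here is a back-of-the-envelope sketch. The 60 fps rate comes from ARM's announcement; reading the minimum object size as a 50×60-pixel bounding box is an assumption, and the rest is purely illustrative arithmetic.

```python
# Back-of-the-envelope: the per-frame time budget implied by 60 fps.
FPS = 60                       # ARM's claimed real-time frame rate
frame_budget_ms = 1000 / FPS   # milliseconds available per frame

# Assumed reading of ARM's minimum detectable object size: a 50x60-pixel box.
MIN_OBJ_W, MIN_OBJ_H = 50, 60
min_obj_pixels = MIN_OBJ_W * MIN_OBJ_H

print(f"Per-frame budget: {frame_budget_ms:.2f} ms")    # ~16.67 ms per frame
print(f"Minimum object area: {min_obj_pixels} pixels")  # 3000 pixels
```

In other words, detection, tracking, and any pose estimation must all complete in under about 17 ms per frame to sustain the claimed rate.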
The bigger picture is the wide array of AI applications ARM envisions its new processors will enable. The ML chip is targeting the mobile market, meaning smartphones, self-driving cars, and Internet of Things (IoT) devices at the edge. Perhaps scarier is what the ML chip can do when combined with the object detection processor. ARM sees embedded possibilities in security cameras and smart cities, where “a completely new class of smart cameras” will support everything from facial identification and gesture recognition to ML-driven predictive analytics and mood analysis.
“We can do this processing in real time at HD resolution running at 60 frames per second,” says Jem Davies, VP, Fellow, and GM of ARM’s Machine Learning Group. “We’re able to detect objects further away very easily within frames including the trajectory, which way they’re facing, which way they’re going, and select part of the body for gesture and pose recognition. This is a development on our first-gen object detection processor, which is already released in consumer devices like the Hive security camera.”
However, ARM insists it’s not all Big Brother scenarios. Davies talked about employing AI object detection in smart cities to identify traffic congestion and pedestrian safety incidents, locate lost children, or even monitor public wastebaskets to proactively detect when an area needs a garbage pickup.
ARM also hopes its AI processors can enable smartphone-connected AR/VR experiences. Davies gave the example of an augmented scuba diving mask that would identify flora and fauna in real time as a diver swam around 30 meters below the surface.
What’s in an AI Processor?
ARM built new machine learning processors because current cloud infrastructure and chip technology won’t cut it for the next generation of intelligent applications and devices. Self-driving cars, Davies says, cannot momentarily stop recognizing signs or pedestrians because a mobile connection is lost or because there’s too much data-processing latency with servers.
“The level of sophistication taking place in edge devices has moved much faster than anyone anticipated,” says Rene Haas, President of ARM’s IP Products Group. “Look at innovations like Google Assistant and Amazon Alexa to get a sense of the explosion of unique edge devices running on a simple power supply. This will only accelerate…and the applications are quite broad. From a simple application of keyword detection through image and voice recognition, up to autonomous driving and into the data center. Machine learning will be used in all these spaces.”
The ML processor is built on a completely new architecture that delivers up to 4.6 TOPs/W, meaning trillions of operations per second per watt. The new memory system, Davies explains, avoids intermediate memory storage to run convolutional neural networks (CNNs) more efficiently. The ML processor works hand-in-hand with ARM’s new open-source neural network software, which integrates with a range of ML frameworks, including popular options like TensorFlow and Caffe.
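Because TOPs/W is an efficiency figure rather than an absolute throughput, the sustained compute depends on the power budget. A quick illustrative sketch, using the 4.6 TOPs/W figure from the announcement with hypothetical power budgets of my own choosing:

```python
# Throughput implied by ARM's claimed efficiency at various power budgets.
TOPS_PER_WATT = 4.6  # ARM's figure: trillion operations per second per watt

# Hypothetical power budgets (illustrative only, not ARM's numbers).
for watts in (0.5, 1.0, 2.0):
    tops = TOPS_PER_WATT * watts
    print(f"{watts:>4} W -> {tops:.1f} TOPs sustained")
```

At a phone-class budget of a watt or two, that works out to several trillion operations per second, which is the scale at which on-device CNN inference becomes practical.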
ARM’s new ML processor will launch to partners in mid-2018. Haas estimates we’ll begin to see consumer devices running the new processors roughly nine months later, meaning sometime in 2019. Project Trillium is not just about machine learning or object detection processors, but a broad framework to usher in a new era of AI applications and vision-based devices.
“We believe machine learning is one of the most significant changes hitting our computing landscape, and we’ve been investing a lot of time and energy into looking at everything from the I/O standpoint to the software,” says Haas. “Machine learning is not just a suite of products. It’s the way compute will happen across all products in the future. Years from now, people will be looking at AI not as a category of how computers learn, but made into everything computers do.”