A Methodology for Full-System Energy-Accuracy Tradeoffs
A full-system approach to approximate computing delivers substantial energy savings (7.5x on average in reported experiments) with minimal loss in application quality for compute-intensive tasks such as image processing.
Approximate computing is a valuable tool in areas such as machine learning and image processing, where applications carry heavy computational workloads but tolerate small errors. By relaxing numerical precision, it enables faster, lower-energy computation while still producing results of acceptable quality. Although approximate computing is inherently energy efficient, further savings are possible at the system level. Prior work has focused on approximating individual subsystems (computation, memory, sensor, or display) rather than the system as a whole. Approximating a single subsystem ignores inter-component interactions, so the additional energy-saving potential available from a full-system perspective goes untapped.
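As a concrete illustration of the error tolerance that approximate computing exploits (a minimal Python sketch, not part of the original work), the example below blurs a row of 8-bit pixels after truncating the low-order bits of each input; the approximate result differs from the exact one by only a few gray levels out of 255.

```python
# Illustrative only: error tolerance in a simple image-processing kernel.
# Truncating low-order bits of the inputs barely changes the blurred output.

def box_blur(pixels, drop_bits=0):
    """3-tap horizontal box blur; the drop_bits low-order bits of each input are discarded first."""
    approx = [(p >> drop_bits) << drop_bits for p in pixels]
    return [
        (approx[max(i - 1, 0)] + approx[i] + approx[min(i + 1, len(approx) - 1)]) // 3
        for i in range(len(approx))
    ]

row = [12, 200, 197, 40, 90, 255, 3, 128]   # one row of 8-bit pixel values
exact = box_blur(row, drop_bits=0)
cheap = box_blur(row, drop_bits=3)          # truncate the 3 low-order bits of each input
max_err = max(abs(a - b) for a, b in zip(exact, cheap))
print(exact, cheap, f"max per-pixel error: {max_err}")  # error stays within a few gray levels
```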
Researchers at Purdue University have developed an approximate-computing method that optimizes energy in computing systems by controlling the degree of approximation in exchange for substantial energy savings. Instead of approximating subsystems in isolation, approximations are applied jointly across the different subsystems. In experiments on image-processing and computer-vision workloads running on an imaging device, the system achieved an average energy reduction of 7.5x with less than a 1% loss in application quality. Compared with approximating a single subsystem, the full-system approximation saved an additional 3.5x – 5.5x in energy, again with minimal quality loss.
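The published system-level details are not reproduced here, but the hypothetical Python sketch below illustrates the kind of joint, quality-constrained tuning such a method implies. The knob names, energy savings, and quality costs are illustrative assumptions, not numbers or algorithms from the published work: each subsystem exposes approximation levels, and a greedy loop raises whichever knob buys the most energy per unit of quality lost while keeping total quality loss under 1%.

```python
# Hypothetical sketch of quality-constrained, cross-subsystem approximation tuning.
# All knob settings, energy savings, and quality costs below are assumed for illustration.

QUALITY_BUDGET = 1.0  # allow at most 1% application-quality loss

# Each subsystem exposes ordered approximation levels: (energy saved %, quality lost %).
KNOBS = {
    "sensor":  [(0, 0), (15, 0.1), (30, 0.4)],
    "memory":  [(0, 0), (20, 0.2), (35, 0.6)],
    "compute": [(0, 0), (25, 0.2), (45, 0.7)],
    "display": [(0, 0), (10, 0.1), (20, 0.3)],
}

def tune(knobs, budget):
    """Greedily raise whichever knob buys the most energy per unit of quality lost."""
    level = {name: 0 for name in knobs}
    while True:
        best = None
        for name, steps in knobs.items():
            if level[name] + 1 >= len(steps):
                continue  # this knob is already at its most aggressive setting
            cur_e, cur_q = steps[level[name]]
            nxt_e, nxt_q = steps[level[name] + 1]
            d_e, d_q = nxt_e - cur_e, nxt_q - cur_q
            # total quality loss if this knob alone is raised one level
            total_q = sum(knobs[n][level[n]][1] for n in knobs) - cur_q + nxt_q
            if total_q > budget:
                continue
            ratio = d_e / d_q if d_q > 0 else float("inf")
            if best is None or ratio > best[0]:
                best = (ratio, name)
        if best is None:
            return level  # no further step fits within the quality budget
        level[best[1]] += 1

settings = tune(KNOBS, QUALITY_BUDGET)
print(settings)  # -> {'sensor': 2, 'memory': 1, 'compute': 1, 'display': 1} (~0.9% total quality loss)
```

The point of the sketch is the joint search: because every step is evaluated against the shared quality budget, aggressive settings in one subsystem automatically constrain the others, which is the interaction a single-subsystem approach cannot capture.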
Advantages:
- More Energy Efficient
- Minimal Loss in Quality
Potential Applications:
- Image Processing
- Computer Vision
TRL: 5
Intellectual Property:
Provisional Patent, 2018-06-18, United States | Utility Patent, 2019-06-18, United States
Keywords: Approximate computing, machine learning, image processing, energy efficiency, system optimization, full-system approximation, error-tolerant computation, joint operations, computer vision, computing workload