How “green” is your Artificial Intelligence?

September 17, 2020 By Jeff Shepard

Artificial intelligence (AI) systems face a set of conflicting goals: being accurate, which demands large amounts of computational and electrical power, and being accessible, which demands lower cost, less computation, and less power. Unfortunately, many of today’s AI implementations are environmentally unsustainable. Improvements in AI energy efficiency will be driven by several factors, including more efficient algorithms, more efficient computing architectures, and more efficient components.

It’s necessary to measure and track the energy consumption of AI systems to identify any improvements in energy efficiency. One example of the increasing awareness of the importance of energy consumption in AI systems is that EEMBC is adding ML inference to its ULPMark (ultra-low power) benchmark line and developing a new benchmark, the ULPMark-ML. The effort to standardize what is known as “tinyML,” or low-power ML, is underway, with a dozen companies participating in EEMBC’s effort. The goal of this benchmark is to create a standardized suite of tasks that measures a device’s inference energy efficiency as a single figure of merit, which is expected to be useful from the cloud to the edge.
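
As a rough illustration of what such a single figure of merit could look like, the sketch below converts a measured energy total into inferences per millijoule. It is not the ULPMark-ML methodology, which is defined by EEMBC; the function name and the example numbers are hypothetical.

```python
# Illustrative sketch of an energy-per-inference figure of merit.
# This is NOT the ULPMark-ML methodology (that is defined by EEMBC);
# the function name and example numbers are hypothetical.

def inferences_per_millijoule(total_energy_j: float, num_inferences: int) -> float:
    """Convert a measured energy total for a run into inferences per millijoule."""
    if total_energy_j <= 0 or num_inferences <= 0:
        raise ValueError("energy and inference count must be positive")
    energy_per_inference_mj = (total_energy_j * 1000.0) / num_inferences
    return 1.0 / energy_per_inference_mj

# Example: 10,000 keyword-spotting inferences measured at 4.2 J total
print(f"{inferences_per_millijoule(4.2, 10_000):.1f} inferences/mJ")   # ~2.4
```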

More efficient algorithms

Dr. Max Welling, Vice President of Technology at Qualcomm, believes that the benchmark for AI processing could soon change, with AI algorithms measured by the amount of intelligence they provide per joule. He cites two key reasons for this:

  • First, broad economic viability requires energy-efficient AI since the value created by AI must exceed the cost of running the service. To put this in perspective, applying AI per transaction may only be economically feasible at a cost as low as a micro-dollar (1/10,000th of a cent); using AI for personalized advertisements and recommendations is one example. A rough energy budget for that figure is sketched after this list.
  • Second, on-device AI processing in sleek, ultra-light mobile form factors requires power efficiency. Processing always-on, compute-intensive workloads in power- and thermal-constrained form factors that require all-day battery life is part of driving broader consumer adoption of AI. The same power-efficiency attributes are needed for other classes of devices, such as autonomous cars, drones, and robots.
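
To put the micro-dollar figure in energy terms, here is a back-of-the-envelope calculation; the $0.10/kWh electricity price is an assumption chosen for illustration, not a figure from Qualcomm.

```python
# Back-of-the-envelope: how much energy does one micro-dollar buy?
# The $0.10/kWh electricity price is an assumption for illustration only.

PRICE_PER_KWH_USD = 0.10      # assumed electricity price
MICRO_DOLLAR_USD = 1e-6       # one micro-dollar = 1/10,000th of a cent

kwh_per_micro_dollar = MICRO_DOLLAR_USD / PRICE_PER_KWH_USD   # 1e-5 kWh
joules_per_micro_dollar = kwh_per_micro_dollar * 3.6e6        # 1 kWh = 3.6 MJ

print(f"One micro-dollar buys roughly {joules_per_micro_dollar:.0f} J")   # ~36 J
```

At that assumed price, electricity alone leaves well under 36 joules per transaction once cooling, hardware amortization, and networking also have to fit inside the same micro-dollar.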

To date, much AI research has worked on improving algorithms, software, and hardware independently. In the future, it will be necessary to optimize all three dimensions simultaneously, which will require new AI computing architectures as well as improved algorithms.

Lower power AI computing architectures and memories

“In-memory computing” or “computational memory” is an emerging concept that uses the physical properties of memory devices for both storing and processing information. This is counter to current von Neumann systems and devices, such as standard desktop computers, laptops, and even cellphones, which shuttle data back and forth between memory and the computing unit, thus making them slower and less energy efficient.
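
One way to picture in-memory computing is an analog memory crossbar performing a matrix-vector multiply in place: weights are stored as device conductances, input voltages drive the rows, and the column currents sum the products via Ohm’s and Kirchhoff’s laws. The NumPy sketch below is only a mathematical analogue of that idea; the array sizes and values are made up for illustration.

```python
# Conceptual sketch of a crossbar-style in-memory matrix-vector multiply.
# Weights stay "inside" the memory array as conductances; the result is read
# out as column currents, so no weights are shuttled to a separate compute unit.
# This is a mathematical analogue only; sizes and values are made up.
import numpy as np

rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 8))   # conductances stored in the array (4 outputs x 8 inputs)
v = rng.uniform(0.0, 0.5, size=8)        # input voltages applied to the rows

# Each output current is sum_j G[i, j] * v[j] (Ohm's law plus Kirchhoff's
# current law), i.e. the multiply-accumulate happens where the data is stored.
i_out = G @ v
print(i_out)
```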

Figure 1: IBM researchers used phase-change memory (PCM) devices made from a germanium antimony telluride alloy, which is stacked and sandwiched between two electrodes. (Image: IBM Research)

Scientists at IBM Research have demonstrated that an unsupervised machine-learning algorithm, running on one million phase change memory (PCM) devices, successfully found temporal correlations in unknown data streams. When compared to state-of-the-art classical computers, this prototype technology is expected to yield 200x improvements in both speed and energy efficiency, making it highly suitable for enabling ultra-dense, low-power, and massively-parallel computing systems for applications in AI.

Figure 2: A schematic illustration of the in-memory computing algorithm. (Image: IBM Research)

The PCM devices were made from a germanium antimony telluride alloy stacked and sandwiched between two electrodes. When the scientists apply a tiny electric current to the material, it heats up and its state changes from amorphous (a disordered atomic arrangement) to crystalline (an ordered atomic configuration). The IBM researchers used these crystallization dynamics to perform computation in place.
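
A highly simplified software analogue of that idea is sketched below; it is not IBM’s algorithm or a model of the device physics. Each “device” is just a floating-point accumulator standing in for crystallization state, and it grows whenever its input stream fires together with high aggregate activity, so temporally correlated streams accumulate fastest.

```python
# Toy software analogue of in-memory temporal-correlation detection.
# A floating-point "state" stands in for each device's crystallization level;
# this is illustrative only and is not IBM's algorithm or device physics.
import numpy as np

rng = np.random.default_rng(1)
n_streams, n_steps = 8, 2000

# Streams 0-3 fire together with a hidden event; streams 4-7 are noise.
event = rng.random(n_steps) < 0.1
streams = rng.random((n_streams, n_steps)) < 0.05
streams[:4] |= event

state = np.zeros(n_streams)               # per-"device" accumulated state
for t in range(n_steps):
    fired = streams[:, t]
    if fired.sum() >= n_streams // 2:     # high aggregate activity this step
        state[fired] += 1.0               # "heat" the devices that fired

print("accumulated state:", state.round())
print("flagged as correlated:", np.where(state > state.mean())[0])
```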

AI and ML systems consume large quantities of computer memory. Low-power magnetoresistive random access memory (MRAM) is a relatively new technology that can support the memory needs of AI and ML systems, especially at the edge. Since it’s currently slower than DRAM or SRAM, MRAM may not find immediate application in the cloud, but for edge devices it could provide a boost for AI implementations. MRAM combines low latency, low power consumption, high endurance, and memory persistence, attributes that are well suited to edge AI systems.

Internet of Things (IoT) nodes that include ML could be enabled by MRAM’s persistence (data retention capability is greater than 20 years at 85°C). With MRAM, ML algorithms do not have to be reloaded every time the device comes out of sleep mode, saving both time and energy. This could enable IoT nodes to analyze sensor data and make quick high-level decisions locally in real-time. More in-depth analysis and refined decisions would rely on a connection to the cloud.
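
The reload saving can be pictured as a wake-up path that only fetches the model when it is not already sitting in persistent memory. The sketch below is hypothetical: a memory-mapped file stands in for MRAM, and the file path and function name are invented for the example.

```python
# Hypothetical wake-up path for an ML-enabled IoT node with persistent weights.
# A memory-mapped .npy file stands in for MRAM here; the path and function
# names are invented for this sketch.
import numpy as np

WEIGHTS_PATH = "model_weights.npy"        # persistent store standing in for MRAM

def weights_after_wakeup() -> np.ndarray:
    """Reuse weights that survived sleep; re-fetch and store them only if absent."""
    try:
        # With true persistence, the weights are simply still there after
        # wake-up, so no time or energy is spent reloading them.
        return np.load(WEIGHTS_PATH, mmap_mode="r")
    except FileNotFoundError:
        weights = np.zeros((64, 64), dtype=np.float32)   # placeholder "download"
        np.save(WEIGHTS_PATH, weights)
        return weights

print("weights ready, shape:", weights_after_wakeup().shape)
```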

The speed of MRAM is beneficial for implementing machine learning in edge devices for factory automation, automotive, and other systems. In these systems, data is analyzed, and intermediate patterns are identified and shared with adjacent domains. This edge architecture requires both processing speed and persistent memory, and MRAM can support both.

That wraps up this three-part series on artificial intelligence and machine learning. You might also enjoy reading part one, “Artificial Intelligence, Machine Learning, Deep Learning, and Cognitive Computing” and part two, “Benchmarking AI from the Edge to the Cloud.”

References:

Green AI, Allen Institute for AI, Carnegie Mellon University and University of Washington
How algorithmic advances make power-efficient AI possible, Qualcomm
IBM Scientists Demonstrate In-memory Computing with 1 Million Devices for Applications in AI, IBM
