EnCharge AI Raises $21.7 Million to Produce AI Hardware for Edge Computing

Funds will go to research and development and new client engagements.

Stealth startup EnCharge AI announced Wednesday, Dec. 14, that it has raised $21.7 million in a Series A round led by Anzu Partners, with participation from AlleyCorp, Scout Ventures, Silicon Catalyst Angels, Schams Ventures, E14 Fund, and Alumni Ventures.

In an email to TechCrunch, co-founder and CEO Naveen Verma said the funds will go toward research and development (R&D) of both hardware and software, as well as toward new client engagements.

Justifying the timing of the round, Verma said the technology has been thoroughly validated through earlier R&D all the way up the computing stack.

The Concept

Verma, Echere Iroaga, and Kailash Gopalakrishnan conceived the concept for EnCharge AI. Gopalakrishnan spent more than 18 years at IBM, where he was a fellow, while Verma heads Princeton University's Keller Center for Innovation in Engineering Education.

Iroaga, for his part, was a vice president and general manager at Macom, a semiconductor manufacturer, before joining the company.

EnCharge began with government funding that Verma and colleagues at the University of Illinois at Urbana-Champaign obtained in 2017. Verma headed an $8.3-million initiative to research novel non-volatile memory technologies as part of DARPA's Electronics Resurgence Initiative.

Non-volatile memory has the potential to be more energy efficient than the "volatile" memory used in conventional computers because it retains stored information even when the power is turned off.

Typical forms of non-volatile memory include flash memory and magnetic storage systems like hard drives and floppy disks.

DARPA also supported Verma's work on in-memory processing for machine learning computations. In this context, "in-memory" refers to performing computations directly within the memory itself, avoiding the delay and energy cost of shuttling data back and forth between memory and a separate processor.

To bring Verma's findings to market, EnCharge was founded to deliver hardware in the standard PCIe form factor.

Verma said EnCharge's plug-in technology may speed up artificial intelligence (AI) applications on servers and "network edge" equipment through in-memory computing, while using less power than conventional processors.

Problem and Solution

The EnCharge team faced several technical obstacles across its hardware iterations. Computing that occurs entirely in memory is highly vulnerable to power surges and other environmental stresses.

EnCharge therefore built its chips around capacitors rather than transistors: capacitors, which hold an electrical charge, can be manufactured more precisely and are less sensitive to variations in voltage.
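To make the idea concrete, here is a minimal conceptual sketch (not EnCharge's actual circuit design; all names and parameters are illustrative) of the charge-domain multiply-accumulate operation at the heart of capacitor-based in-memory computing: weights are encoded as capacitor values, binary inputs gate charge onto those capacitors, and summation happens by sharing charge on a common line. A noise term shows why precise capacitor matching matters for accuracy.

```python
import numpy as np

def charge_domain_mac(inputs, weights, noise_std=0.0):
    """Simulate a multiply-accumulate via charge sharing.

    inputs    : binary activations (0/1), shape (n,)
    weights   : capacitor values encoding weights, shape (n,)
    noise_std : std-dev of simulated analog noise, illustrating
                sensitivity to device variation.
    """
    charges = inputs * weights                # Q = V * C per memory cell
    charges = charges + np.random.normal(0.0, noise_std, charges.shape)
    return charges.sum()                      # charge-share summation

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8)                # binary input vector
w = rng.normal(0.0, 1.0, size=8)              # weights as capacitance values

ideal = charge_domain_mac(x, w)               # noiseless analog MAC
digital = float(x @ w)                        # digital reference result
assert abs(ideal - digital) < 1e-9            # they agree without noise
```

With `noise_std > 0`, the analog result drifts from the digital reference, which is the kind of environmental sensitivity the capacitor-based design is meant to minimize.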

EnCharge also had to develop software that lets clients adapt their own AI systems to the specialized hardware. Once complete, the software will enable EnCharge's hardware to work with a wide range of neural networks, the collections of algorithms that underpin modern AI, as Verma put it.

When it comes to energy efficiency and performance, Verma claims EnCharge's products offer orders-of-magnitude advantages.

As Verma put it, "This is enabled by a highly robust and scalable next-generation technology, which has been demonstrated in generations of test chips, scaled to advanced nodes and scaled-up in architectures."

Market Competition

The market for AI accelerator hardware is already rather competitive, and EnCharge must compete with well-funded rivals.

According to reports, accelerating AI workloads in memory is a focus for both Axelera and GigaSpaces.

In October of last year, NeuroBlade received $83 million to develop its in-memory inference technology for use in cloud infrastructure and edge devices.

Moreover, Syntiant offers AI-powered edge processors that can store and interpret large amounts of speech data in memory.

Trisha Andrada
Tech Times
ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.