Flex Logix raises $55M to design AI chips for edge enterprise applications

Flex Logix, a startup designing reconfigurable AI accelerator chips, today announced that it closed a $55 million funding round led by Mithril Capital Management. CEO Geoff Tate says the funding will enable the company to build out its software, engineering, and customer support teams to accelerate the availability of its hardware and software for edge enterprise applications.

AI accelerators are specialized hardware designed to speed up AI applications, particularly neural networks, deep learning, and various forms of machine learning. They typically rely on techniques such as low-precision arithmetic or in-memory computing, which can boost the performance of large AI algorithms and lead to state-of-the-art results in natural language processing, computer vision, and other domains. That’s perhaps why they’re forecast to claim a growing share of edge computing processing power, making up a projected 70% of it by 2025, according to a recent survey by Statista.

Mountain View, California-based Flex Logix, which was founded in 2014, claims its AI inference chip — the InferX X1 — is among the fastest and most efficient. The InferX X1 outperforms Nvidia’s Xavier NX on the popular computer vision benchmark YOLOv3 and on “real customer models,” according to Flex Logix, and the company says it’s targeting a price-to-performance ratio 10 to 100 times better than existing edge inference solutions.

“Flex Logix set out to be for FPGA what Arm is for processors,” Tate told VentureBeat via email. “We believe this original eFPGA business can grow to be as big as Arm’s over time, while our second line of business is driving edge AI Inference capabilities into high volume applications, thus growing the market to the billions of dollars that market forecasters predict.”

The InferX X1 also features what Flex Logix calls a reconfigurable tensor processor, nnMax, containing 64 processors coupled with SRAM that can be reprogrammed in 4 millionths of a second. In machine learning, a tensor is a generalization of vectors and matrices — representations of the data inputs, outputs, and transformations within neural networks. Flex Logix asserts that the nnMax is 3 to 18 times more efficient in terms of throughput per square millimeter than the average Nvidia GPU.
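For readers unfamiliar with the term, the idea of a tensor as a generalization of vectors and matrices can be illustrated in a few lines of NumPy (a generic sketch, unrelated to Flex Logix’s software):

```python
import numpy as np

# A tensor generalizes scalars, vectors, and matrices to any number of axes.
scalar = np.array(5.0)              # rank 0: a single number
vector = np.array([1.0, 2.0, 3.0])  # rank 1: a vector
matrix = np.eye(2)                  # rank 2: a matrix
# A batch of RGB images is a rank-4 tensor: (batch, height, width, channels)
images = np.zeros((8, 224, 224, 3))

print(scalar.ndim, vector.ndim, matrix.ndim, images.ndim)  # 0 1 2 4
```

Accelerators like the InferX X1 are built to stream operations over exactly these kinds of multi-axis arrays, layer by layer.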

“[The nnMax] reconfigures the 64 [processors] and RAM resources to efficiently implement a layer with a full bandwidth, dedicated data path, like an ASIC, then repeats this layer by layer,” Flex Logix explains on its website. “[We use] a new breakthrough interconnect architecture with less than half the silicon area of traditional mesh interconnect, fewer metal layers, higher utilization, and higher performance … We can easily scale up our architectures to deliver compute capacity of any size … using a patented tiling architecture with interconnects at the edge of the tiles that automatically form a larger array of any size.”

On the software side, Flex Logix’s compiler takes models from machine learning frameworks, including Google’s TensorFlow, as well as models in the ONNX interchange format, and optimizes them for its nnMax and InferX X1 architectures. A performance modeler is available now and in use by “dozens” of customers, and Flex Logix eventually plans to release software drivers for operating systems commonly used in server and real-time scenarios.

Flex Logix’s products have yet to come to market, but when they do, the company says they’ll be available in PCIe card and M.2 formats for edge servers and gateways. A PCIe board containing the InferX X1, the X1P1, is expected to enter production in Q2 2021, priced between $399 and $499 depending on processor speed. A less powerful variant of the chip, the InferX1 1KU, will cost between $99 and $199, with volume pricing reaching as low as $34 to $69.

Flex Logix has competition in a market that’s anticipated to reach $91.18 billion by 2025. In March 2020, Hailo, a startup developing hardware designed to speed up AI inferencing at the edge, nabbed $60 million in venture capital. California-based Mythic has raised $85.2 million to develop custom in-memory compute architecture. Graphcore, a Bristol, U.K.-based startup creating chips and systems to accelerate AI workloads, has a war chest in the hundreds of millions of dollars. And Baidu’s growing AI chip unit was recently valued at $2 billion after funding.

But Flex Logix investor Ajay Royan points to Tate’s pedigree as one reason for his continued confidence. Tate previously managed AMD’s microprocessor and logic group, and he took his first startup, chip licensing firm Rambus, from four people and $2 million in equity to a Nasdaq IPO and a multibillion-dollar market cap. Flex Logix says its 2020 revenue was in the double-digit millions and is expected to grow 50% to 100% this year.

“We are impressed with the … architecture that Flex Logix has developed based on unique intellectual property that gives it a sustainable competitive advantage in a very high growth market,” Royan said in a press release. “This technology advantage positions Flex Logix for rapid growth in edge enterprise inference in applications such as medical, retail, industrial, robotics and more. It is even more impressive that they have done this with so little capital and at the same time built a cash-flow positive … business with large growth potential as system-on-chip designers look to incorporate reconfigurability into their communications and data centers.”

Lux Capital, Eclipse Ventures, and the Tate Family Trust also participated in Flex Logix’s latest fundraising round, a series D. It brings the company’s total raised to date to $82 million.
