MOUNTAIN VIEW, Calif., April 15, 2022 /PRNewswire/ — Flex Logix® Technologies, Inc., supplier of fast and efficient edge AI inference accelerators and the leading provider of eFPGA IP, announced today that it will be speaking at two key industry events in April: the Linley Spring Processor Conference on April 20-21 and the Computer Vision Summit on April 27. The talks will focus on the company's InferX™ AI inference accelerator, production boards and software solutions, which deliver the most efficient AI inference acceleration for advanced edge AI workloads such as Yolov5.
Linley Spring Processor Conference Presentation 1:
- Presentation title: Meeting the Real Challenges of AI
- Track: Session 1 - Edge-AI Design
- Speaker: Randy Allen, Vice President of Software for Flex Logix
- Abstract: Machine Learning was first described in its current form in 1952. Its recent re-emergence is not the result of technical breakthroughs, but instead of available computation power. The ubiquity of ML, however, will be determined by the number of computational cycles we can productively apply subject to the constraints of latency, power, area, and cost. That has proven to be a difficult problem. This talk will explore approaches to building parallel heterogeneous processing systems that can meet the challenge.
- When: Wednesday, April 20th
- Location: Hyatt Regency Hotel, Santa Clara
- Time: 10:20am-12:20pm
Linley Spring Processor Conference Presentation 2:
- Presentation title: High-Efficiency Edge Vision Processing Using Dynamically Reconfigurable TPU Technology
- Track: Session 5 - Edge AI Silicon
- Speaker: Cheng Wang, CTO and Co-Founder of Flex Logix
- Abstract: To achieve high accuracy, edge computer vision requires teraops of processing to be executed in fractions of a second. Additionally, edge systems are constrained in terms of power and cost. This talk will present and demonstrate the novel dynamic TPU array architecture of Flex Logix's InferX X1 accelerators and contrast it with current GPU, TPU and other approaches to delivering the teraops performance required by edge vision inferencing. We will examine latency, throughput, memory utilization, power dissipation and overall solution cost. We will also show how existing trained models can be easily ported to run on the InferX X1 accelerator.
- When: Thursday, April 21st
- Location: Hyatt Regency Hotel, Santa Clara
- Time: 1:05pm-2:45pm
Computer Vision Summit Presentation 1:
- Presentation title: The Evolving Silicon Foundation for Edge AI Processing
- Speaker: Sam Fuller, Head of AI Inference Product Management for Flex Logix
- Abstract: To achieve high accuracy, edge AI requires teraops of processing to be executed in fractions of a second. Additionally, edge systems are constrained in terms of power and cost. This talk will present and demonstrate the novel dynamic TPU array architecture of Flex Logix's InferX X1 accelerators and contrast it with current GPU, TPU and other approaches to delivering the teraops computing required by edge vision inferencing. We will examine latency, throughput, memory utilization, power dissipation and overall solution cost. We will also show how existing trained models can be easily ported to run on the InferX X1 accelerator.
- When: Wednesday, April 27th
- Location: San Jose Marriott
- Time: 10:00am
Computer Vision Summit Presentation 2:
- Panel Discussion: Developing Scalable AI Solutions
- Speaker: Sam Fuller, Head of AI Inference Product Management for Flex Logix
- Abstract: In this session, panelists will discuss the challenge of rolling out CV systems to have real impact.
- When: Wednesday, April 27th
- Location: San Jose Marriott
- Time: 12:00pm
About Flex Logix
Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry's most efficient AI edge inference accelerator, bringing AI to the masses in high-volume applications by delivering much higher inference throughput per dollar and per watt. Flex Logix's eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs, and to implement reconfigurable accelerators that speed key workloads 30-100x compared to general-purpose processors. Flex Logix is headquartered in Mountain View, California and has offices in Austin, Texas. For more information, visit https://flex-logix.com.
MEDIA CONTACTS
Kelly Karr
Tanis Communications
[email protected]
+408-718-9350
Copyright 2022. All rights reserved. Flex Logix is a registered trademark and InferX is a trademark of Flex Logix, Inc.
SOURCE Flex Logix Technologies, Inc.