ADLINK Technology Inc. today introduced the industry’s first embedded MXM graphics modules built on NVIDIA’s Turing architecture, designed to accelerate edge AI inference in SWaP-constrained applications. GPUs are increasingly used for AI inferencing at the edge, where size, weight and power (SWaP) are key considerations. The embedded MXM graphics modules deliver the high compute power needed to transform data at the edge into actionable intelligence, and come in a standard form factor that gives systems integrators, ISVs and OEMs greater choice in both power and performance.
ADLINK’s embedded MXM graphics modules accelerate edge computing and edge AI in a wide range of compute-intensive applications, particularly in harsh or environmentally challenging settings such as those with limited or no ventilation or with exposure to corrosive agents. Examples include medical imaging, industrial automation, biometric access control, autonomous mobile robots, transportation, and aerospace and defense. The need for high-performance, low-power GPU modules becomes increasingly critical as AI at the edge grows more prevalent.
The ADLINK embedded MXM graphics modules:
● Provide acceleration with NVIDIA CUDA, Tensor and RT Cores (see the brief device-query sketch after this list)
● Are one-fifth the size of full-height, full-length PCI Express graphics cards
● Offer more than three times the product lifecycle of non-embedded graphics cards
● Consume as little as 50 watts of power
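For developers evaluating one of these Turing-based MXM modules, a minimal CUDA runtime query such as the sketch below (not part of ADLINK’s announcement; the file name query_gpu.cu is illustrative) can confirm the GPU is visible to the host system and report the compute capability, multiprocessor count and memory that an edge-inference workload typically depends on.

```cuda
// query_gpu.cu - minimal, hypothetical CUDA runtime query for an embedded MXM GPU.
// Build with: nvcc query_gpu.cu -o query_gpu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Check that at least one CUDA-capable device is present.
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Report the properties most relevant to sizing an inference workload.
        std::printf("Device %d: %s\n", i, prop.name);
        std::printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        std::printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
        std::printf("  Global memory:      %.1f GiB\n",
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```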