MX3-2280-M-4-C

Mfr.: MemryX
Mouser #: 89-MX3-2280-M-4-C
Mfr. #: MX3-2280-M-4-C

Description: System Memory Accelerators. The M.2 module enables high-performance yet power-efficient AI inference for edge devices and edge servers.

Lifecycle: New At Mouser

Availability

Stock: 0 (this product can still be purchased for backorder)
On Order: 2 (expected 2/19/2026)
Factory Lead Time: 15 weeks (estimated factory production time for quantities greater than shown)
Minimum: 1   Multiples: 1
This Product Ships FREE

Pricing (RON)

Qty.   Unit Price      Ext. Price
1      1.067,92 RON    1.067,92 RON
10     982,76 RON      9.827,60 RON
25     949,97 RON      23.749,25 RON
50     925,97 RON      46.298,50 RON
100    Quote
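The extended price in each tier is simply quantity times the tier's unit price, with the unit price dropping at each quantity break. A minimal sketch of that price-break logic (tier values transcribed from the pricing table; plain floats in RON, helper names are illustrative only):

```python
# Price-break tiers from the pricing table (RON).
# Each entry: (minimum quantity, unit price at that break).
TIERS = [(1, 1067.92), (10, 982.76), (25, 949.97), (50, 925.97)]

def unit_price(qty: int) -> float:
    """Return the unit price of the deepest tier the quantity reaches."""
    price = TIERS[0][1]
    for min_qty, tier_price in TIERS:
        if qty >= min_qty:
            price = tier_price
    return price

def ext_price(qty: int) -> float:
    """Extended price = quantity x applicable unit price, rounded to cents."""
    return round(qty * unit_price(qty), 2)

# Reproduces the extended prices shown in the table:
assert ext_price(10) == 9827.60
assert ext_price(25) == 23749.25
assert ext_price(50) == 46298.50
```

Quantities of 100 and above fall outside the published tiers and are quoted individually, so they are not modeled here.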

Product Attributes

Manufacturer: MemryX
Product Category: System Memory Accelerators
Product: M.2 AI Acceleration Modules
Form Factor: M.2 2280-M
Operating Temperature: 0 °C to +70 °C
Length: 80 mm
Width: 22 mm
Supply Voltage: 3.3 V
Brand: MemryX
Dimensions: 80 mm x 22 mm
Moisture Sensitive: Yes
Product Type: System Memory Accelerators
Factory Pack Quantity: 1
Subcategory: Memory & Data Storage
Unit Weight: 720 g

TARIC: 8471800000
CAHTS: 8471809900
USHTS: 8471809000
JPHTS: 847180000
BRHTS: 84718000
ECCN: EAR99

M.2 AI Acceleration Module

The MemryX M.2 AI Acceleration Module is a cutting-edge solution that brings advanced Artificial Intelligence (AI) processing to edge devices and systems. Powered by four MX-3™ "digital at-memory compute" AI ASICs, the module processes AI workloads directly within memory, minimizing data movement and latency. This architecture enables real-time inference for complex deep learning models at low power consumption, making it well suited to industrial automation, smart surveillance, healthcare, and automotive systems. The compact M.2 form factor allows straightforward integration into space-constrained devices, providing scalable AI acceleration for a range of edge computing needs.