NVIDIA MCX555A-ECAT 100Gb/s Single-Port QSFP28 InfiniBand Adapter PCIe 3.0 x16 ConnectX-5 Network Card

Product Details:

Brand: Mellanox
Model Number: MCX555A-ECAT
Document: CONNECTX-5 infiniband.pdf

Payment:

Minimum Order Quantity: 1 piece
Price: Negotiable
Packaging Details: Outer carton
Delivery Time: Subject to stock availability
Payment Terms: T/T
Supply Ability: Supplied per project/batch
Contact us for the best price

Additional Information

Product Status: In stock
Application: Server
Condition: New and original
Type: Wired
Maximum Speed: EDR / 100GbE
Connector: QSFP28
Highlights:

  • NVIDIA ConnectX-5 InfiniBand adapter
  • 100Gb/s QSFP28 network card
  • PCIe 3.0 x16 Mellanox card

Product Description

NVIDIA ConnectX-5 MCX555A-ECAT InfiniBand Adapter Card
Single-Port QSFP28 | 100Gb/s InfiniBand & Ethernet | PCIe 3.0 x16 | RDMA Enabled

A high-performance, low-latency 100Gb/s network adapter designed for HPC, AI, and cloud data centers. Advanced offloads, including NVMe over Fabrics, GPUDirect RDMA, and tag matching for MPI workloads, deliver industry-leading throughput and CPU efficiency.

Product Overview

The NVIDIA ConnectX-5 MCX555A-ECAT is a single-port 100Gb/s InfiniBand adapter card in a low-profile PCIe form factor. Leveraging the proven ConnectX-5 architecture, it delivers up to 100Gb/s throughput with sub-microsecond latency and a high message rate. The card supports both InfiniBand (up to EDR) and 100GbE, providing versatile connectivity for high-performance computing, storage, and virtualized environments.

Built with an embedded PCIe switch and advanced RDMA capabilities, the MCX555A-ECAT offloads critical communication tasks from the CPU — enabling higher application performance, lower power consumption, and reduced total cost of ownership. It is fully compatible with PCIe 3.0 x16 slots and supports a wide range of operating systems and acceleration frameworks.
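As a quick sanity check on the PCIe requirement, the short Python sketch below estimates usable PCIe 3.0 bandwidth for several lane widths and compares it with the 100Gb/s port rate. The roughly 18% protocol-overhead factor is an illustrative assumption, not an NVIDIA figure.

    # pcie_bandwidth_check.py – rough PCIe 3.0 bandwidth estimate vs. the 100Gb/s port speed.
    # Assumption: ~18% combined TLP/DLLP protocol overhead; real figures vary with payload size.

    GT_PER_LANE = 8.0            # PCIe 3.0 raw signaling rate, GT/s per lane
    ENCODING = 128.0 / 130.0     # 128b/130b line-encoding efficiency
    PROTOCOL_EFFICIENCY = 0.82   # assumed protocol-overhead factor (illustrative only)
    PORT_SPEED_GBPS = 100.0      # EDR InfiniBand / 100GbE line rate

    for lanes in (16, 8, 4):
        raw = GT_PER_LANE * ENCODING * lanes    # usable line rate after encoding, Gb/s
        effective = raw * PROTOCOL_EFFICIENCY   # after assumed protocol overhead
        verdict = "OK for 100Gb/s" if effective >= PORT_SPEED_GBPS else "bottleneck"
        print(f"x{lanes:<2}: raw {raw:6.1f} Gb/s, ~{effective:6.1f} Gb/s effective -> {verdict}")

Only the full x16 link leaves headroom above the 100Gb/s line rate, which is why narrower slots cap achievable throughput (see the FAQ below).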

Key Features
  • Up to 100Gb/s connectivity per port (InfiniBand EDR / 100GbE)
  • Single QSFP28 connector for optical or copper cables
  • PCIe 3.0 x16 host interface (auto-negotiates to x8, x4, x2, x1)
  • RDMA, send/receive semantics with hardware-based reliable transport
  • Tag matching and rendezvous offloads for MPI and SHMEM
  • NVMe over Fabrics (NVMe-oF) target offloads for efficient storage
  • GPUDirect RDMA (PeerDirect) acceleration for GPU communication
  • Hardware-based congestion control & adaptive routing support
  • SR-IOV virtualization: up to 512 virtual functions
  • RoHS compliant, low-profile form factor (tall bracket pre-installed; short bracket included)
Advanced Technology & Offloads

The ConnectX-5 architecture integrates a range of hardware acceleration engines that reduce CPU intervention and improve application scalability:

  • MPI Tag Matching & Rendezvous Offload: Offloads message matching and rendezvous protocol processing, dramatically improving MPI performance for HPC clusters.
  • Out-of-Order RDMA with Adaptive Routing: Enables efficient use of multiple network paths while maintaining ordered completion semantics, maximizing fabric utilization.
  • NVMe-oF Target Offloads: Allows NVMe storage systems to serve remote access with near-zero CPU overhead, ideal for disaggregated storage architectures.
  • Dynamically Connected Transport (DCT): Provides extreme scalability for large compute and storage systems by eliminating connection setup overhead.
  • ASAP2 Accelerated Switching & Packet Processing: Hardware offload for Open vSwitch (OVS) and overlay network tunneling (VXLAN, NVGRE, GENEVE).
  • On-Demand Paging (ODP): Supports virtual memory paging for RDMA operations, simplifying application development.
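On a Linux host, the presence of these capabilities can be inspected once the adapter and an RDMA stack are installed. The minimal Python sketch below simply shells out to ibv_devinfo from rdma-core; the tool and the mlx5 driver are assumed to be present, and running ibv_devinfo with -v additionally prints the device capability flags.

    # list_rdma_devices.py – enumerate RDMA devices via ibv_devinfo (rdma-core).
    # Assumes rdma-core / OFED user-space tools are installed; adjust for your stack.
    import subprocess

    def rdma_device_summary(verbose: bool = False) -> str:
        """Return ibv_devinfo output: HCA names, firmware version, and port state."""
        cmd = ["ibv_devinfo", "-v"] if verbose else ["ibv_devinfo"]
        try:
            return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        except FileNotFoundError:
            return "ibv_devinfo not found – install the rdma-core or OFED user-space tools."
        except subprocess.CalledProcessError as err:
            return f"ibv_devinfo failed: {err.stderr}"

    if __name__ == "__main__":
        print(rdma_device_summary())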
Typical Deployments
  • High-Performance Computing (HPC): Ideal for supercomputing clusters, MPI-based simulations, and scientific research workloads requiring low latency and high message rates.
  • AI & Deep Learning Training: Combined with GPUDirect RDMA, enables fast GPU-to-GPU communication across nodes, accelerating training times.
  • NVMe-oF Storage Systems: Deploy as storage targets or initiators in NVMe over Fabrics environments for high-throughput, low-latency block storage access.
  • Cloud & Virtualized Data Centers: SR-IOV and virtualization offloads support multi-tenant environments with guaranteed QoS and secure isolation.
  • High-Frequency Trading (HFT): Ultra-low latency and hardware timestamping (IEEE 1588v2) meet the demands of financial services applications.
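For Ethernet deployments that depend on hardware timestamping, support can be confirmed with ethtool. The sketch below is illustrative only: the interface name ens1f0 is a placeholder, and it applies when the ConnectX-5 port is configured for Ethernet rather than InfiniBand.

    # ptp_capability_check.py – show hardware time-stamping support (IEEE 1588) for an interface.
    # The default interface name is a placeholder; pass your actual port name as an argument.
    import subprocess
    import sys

    iface = sys.argv[1] if len(sys.argv) > 1 else "ens1f0"  # hypothetical interface name
    try:
        out = subprocess.run(["ethtool", "-T", iface], capture_output=True, text=True, check=True)
        print(out.stdout)  # capabilities plus the PTP hardware clock index
    except FileNotFoundError:
        print("ethtool not found – install it from your distribution's packages.")
    except subprocess.CalledProcessError as err:
        print(f"ethtool -T failed for {iface}: {err.stderr}")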
Compatibility & Interoperability

The MCX555A-ECAT is designed for broad interoperability: it pairs with NVIDIA Quantum InfiniBand switches in InfiniBand mode and with NVIDIA Spectrum or third-party 100GbE switches in Ethernet mode. The QSFP28 port accepts passive copper DACs, active optical cables, and optical transceivers.

Operating Systems & Software Stacks:

  • RHEL / CentOS, Ubuntu, Windows Server, FreeBSD, VMware ESXi
  • OpenFabrics Enterprise Distribution (OFED) / WinOF-2
  • NVIDIA HPC-X, OpenMPI, MVAPICH2, Intel MPI, Platform MPI
  • Data Plane Development Kit (DPDK) for kernel bypass
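A quick way to confirm that the driver stack sees the card is to read the sysfs entries exposed by the mlx5 driver. The sketch below assumes a Linux host; exact attribute contents vary by kernel and OFED version.

    # hca_inventory.py – list installed HCAs with firmware version, board ID, port state and rate.
    # Uses standard sysfs paths exposed by the mlx5 driver after OFED or inbox-driver installation.
    from pathlib import Path

    def safe_read(path: Path) -> str:
        try:
            return path.read_text().strip()
        except OSError:
            return "n/a"

    ib_root = Path("/sys/class/infiniband")
    if not ib_root.is_dir():
        print("No RDMA devices found – is the mlx5 driver (OFED or inbox) loaded?")
    else:
        for dev in sorted(ib_root.iterdir()):
            fw = safe_read(dev / "fw_ver")
            board = safe_read(dev / "board_id")
            print(f"{dev.name}: firmware {fw}, board {board}")
            for port in sorted((dev / "ports").iterdir()):
                state = safe_read(port / "state")   # e.g. "4: ACTIVE"
                rate = safe_read(port / "rate")     # e.g. "100 Gb/sec (4X EDR)"
                print(f"  port {port.name}: {state}, {rate}")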
Technical Specifications
Model: MCX555A-ECAT
Form Factor: PCIe low-profile (14.2 cm x 6.9 cm without bracket); tall bracket pre-installed, short bracket included
Port Speed & Type: 1x QSFP28, up to 100Gb/s InfiniBand (EDR) and 100GbE
Host Interface: PCI Express 3.0 x16 (compatible with x8, x4, x2, x1)
InfiniBand Support: IBTA 1.3 compliant; 100Gb/s EDR, FDR, QDR, DDR, SDR; 8 virtual lanes + VL15; 16 million I/O channels
Ethernet Support: 100GbE, 50GbE, 40GbE, 25GbE, 10GbE, 1GbE; IEEE 802.3cd, 802.3bj, 802.3by, 802.3ba, 802.3ae
RDMA Capabilities: RDMA over Converged Ethernet (RoCE), hardware reliable transport, out-of-order RDMA, atomic operations
Storage Offloads: NVMe over Fabrics target offload, iSER, SRP, NFS over RDMA, SMB Direct, T10-DIF signature handover
Virtualization: SR-IOV (up to 512 virtual functions), VMware NetQueue, NPAR, PCIe Access Control Services (ACS)
CPU Offloads: TCP/UDP/IP stateless offload, LSO/LRO, checksum offload, RSS/TSS, VLAN/MPLS tag insertion/stripping
Overlay Networks: Hardware offload for VXLAN, NVGRE, GENEVE encapsulation/decapsulation
Management: NC-SI over MCTP, PLDM for monitoring/control and firmware update, I2C, SPI, JTAG
Remote Boot: Remote boot over InfiniBand, Ethernet, and iSCSI; UEFI and PXE support
Power Consumption: Not publicly specified; typically under 20 W – please confirm for your system
Operating Temperature: 0°C to 55°C (typical environment)
Compliance: RoHS, REACH, FCC, CE, VCCI, ICES, RCM

Note: Specifications derived from NVIDIA ConnectX-5 product documentation. For the latest details and firmware support, refer to official NVIDIA release notes.

Selection Guide – ConnectX-5 Family
  • MCX555A-ECAT – 1x QSFP28, 100Gb/s | PCIe 3.0 x16 | Low-profile PCIe | Standard single-port, EDR InfiniBand / 100GbE
  • MCX556A-ECAT – 2x QSFP28, 100Gb/s | PCIe 3.0 x16 | Low-profile PCIe | Dual-port, EDR / 100GbE
  • MCX556A-EDAT – 2x QSFP28, 100Gb/s | PCIe 4.0 x16 | Low-profile PCIe | ConnectX-5 Ex, enhanced PCIe Gen4
  • MCX556M-ECAT-S25 – 2x QSFP28, 100Gb/s | 2x PCIe 3.0 x8 | Socket Direct | Dual-socket server connection via harness
  • MCX545B-ECAN – 1x QSFP28, 100Gb/s | PCIe 3.0 x16 | OCP 2.0 Type 1 | Open Compute Project form factor

For OCP or Multi-Host variants, please contact sales. All cards are backward compatible with lower link speeds.

Why Choose ConnectX-5 MCX555A-ECAT
  • Superior Application Performance: Hardware offloads for MPI, NVMe-oF, and overlays free CPU cores for business logic.
  • Scalable RDMA Fabric: DCT, XRC, and out-of-order RDMA deliver linear scalability for thousands of nodes.
  • GPU Acceleration Ready: GPUDirect RDMA enables direct memory access between GPUs and network adapters, eliminating CPU bottlenecks in AI clusters.
  • Flexible Deployment: Single QSFP28 port simplifies cabling and is ideal for 100Gb/s leaf-spine architectures.
  • Investment Protection: Support for both InfiniBand and Ethernet allows seamless transition between protocols as needs evolve.
Service & Support

Hong Kong Starsurge Group provides complete lifecycle support for NVIDIA ConnectX-5 adapters, including pre-sales configuration assistance, firmware update guidance, and warranty service. Our technical team can help with:

  • Compatibility verification with your server and switch infrastructure
  • Performance tuning for HPC or storage workloads
  • Custom bracket options and bulk packaging requirements
  • RMA processing and advanced replacement services

Contact our sales engineers for volume pricing and lead time information.

Frequently Asked Questions
Q: What is the difference between MCX555A-ECAT and MCX556A-ECAT?
A: MCX555A-ECAT has a single QSFP28 port, while MCX556A-ECAT has dual ports. Both support 100Gb/s per port and PCIe 3.0 x16. Choose single-port for simpler cabling or dual-port for higher density.
Q: Can this card be used in a PCIe 3.0 x8 slot?
A: Yes, the card auto-negotiates to x8, x4, x2, or x1 lane widths, though maximum throughput may be limited by available bandwidth.
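To confirm the negotiated link on a Linux host, the hedged sketch below scans sysfs for devices with the Mellanox PCI vendor ID (0x15b3) and prints the current and maximum link speed and width; these are standard kernel sysfs attributes.

    # pcie_link_check.py – report negotiated PCIe link speed/width for Mellanox-family adapters.
    # A minimal sketch for a Linux host; vendor ID 0x15b3 covers Mellanox/NVIDIA networking devices.
    from pathlib import Path

    MELLANOX_VENDOR_ID = "0x15b3"

    def read_attr(dev: Path, name: str) -> str:
        """Read one sysfs attribute, returning 'n/a' if the kernel does not expose it."""
        f = dev / name
        return f.read_text().strip() if f.exists() else "n/a"

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        if read_attr(dev, "vendor") != MELLANOX_VENDOR_ID:
            continue
        print(f"{dev.name}: negotiated {read_attr(dev, 'current_link_speed')} "
              f"x{read_attr(dev, 'current_link_width')} "
              f"(max {read_attr(dev, 'max_link_speed')} x{read_attr(dev, 'max_link_width')})")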
Q: Does it support RoCE (RDMA over Converged Ethernet)?
A: Yes, ConnectX-5 supports RoCE for Ethernet fabrics, providing low-latency RDMA services on standard Ethernet networks.
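On Linux, the RoCE version configured for each GID table entry can also be read directly from sysfs. The sketch below is illustrative and assumes the mlx5 driver is loaded; unpopulated GID slots are skipped.

    # roce_gid_check.py – list configured GID types (IB/RoCE v1, RoCE v2) per port of each device.
    from pathlib import Path

    for dev in sorted(Path("/sys/class/infiniband").glob("*")):
        for port in sorted((dev / "ports").iterdir()):
            types_dir = port / "gid_attrs" / "types"
            if not types_dir.is_dir():
                continue
            for entry in sorted(types_dir.iterdir(), key=lambda p: int(p.name)):
                try:
                    gid_type = entry.read_text().strip()  # e.g. "IB/RoCE v1" or "RoCE v2"
                except OSError:
                    continue  # empty GID table slot
                print(f"{dev.name} port {port.name} gid[{entry.name}]: {gid_type}")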
Q: What cables are compatible?
A: The card works with QSFP28 passive copper DACs (up to 5m) and active optical cables (AOC) for longer reaches, as well as optical transceivers for fiber connectivity.
Q: Is this card supported in VMware ESXi?
A: Yes, VMware ESXi is supported with native drivers and SR-IOV capabilities. Please check the VMware compatibility guide for specific versions.
Handling & Installation Precautions

  • Electrostatic Discharge (ESD): Always use ESD-safe practices when handling the adapter, and store it in anti-static packaging until installation.
  • Cooling Requirements: Ensure adequate airflow in the server chassis to keep the card within its specified operating temperature range.
  • Firmware Updates: Use the official NVIDIA firmware tools (MFT) and verify compatibility with your OS and driver version before updating.
  • Cable Bending: Follow QSFP28 cable bend-radius guidelines to avoid signal degradation.
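As an illustration of the firmware-update precaution above, the hedged sketch below queries the currently installed firmware with mlxfwmanager from the MFT package before any update is attempted. Tool availability and sufficient privileges are assumptions about your environment.

    # firmware_query.py – query installed ConnectX firmware via the MFT tool mlxfwmanager.
    # A sketch only: assumes the MFT package is installed and the script runs with root privileges.
    import shutil
    import subprocess

    def query_firmware() -> str:
        if shutil.which("mlxfwmanager") is None:
            return "mlxfwmanager not found – install the NVIDIA/Mellanox Firmware Tools (MFT)."
        result = subprocess.run(["mlxfwmanager", "--query"], capture_output=True, text=True)
        return result.stdout or result.stderr

    if __name__ == "__main__":
        print(query_firmware())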

This is a Class A product. In a residential environment it may cause radio interference. Ensure proper shielding and grounding per local regulations.

About Hong Kong Starsurge Group Co., Limited

Founded in 2008, Hong Kong Starsurge Group Co., Limited is a technology-driven provider of network hardware, IT services, and system integration solutions. Serving customers worldwide with products including network switches, NICs, wireless access points, controllers, and high-speed cabling, Starsurge combines deep technical expertise with a customer-first approach. The company supports industries such as government, healthcare, manufacturing, education, finance, and enterprise, offering IoT solutions, network management systems, custom software development, and multilingual global delivery. With a focus on reliable quality and responsive service, Starsurge helps clients build efficient, scalable, and dependable network infrastructure.

Key Facts – NVIDIA MCX555A-ECAT
  • 100Gb/s – Port speed (InfiniBand & Ethernet)
  • PCIe 3.0 x16 – Host interface
  • 512 VFs – Maximum SR-IOV virtual functions
  • NVMe-oF – Hardware-offloaded storage target
Compatibility Matrix
  • NVIDIA Quantum InfiniBand switches – Certified. EDR and HDR fabric compatibility when using appropriate firmware.
  • NVIDIA Spectrum Ethernet switches – Certified. 100GbE, 50GbE, and 25GbE modes supported.
  • Third-party 100GbE switches – Compatible. Requires IEEE standards compliance; tested with major vendors.
  • GPU servers (NVIDIA DGX, HGX) – Certified. GPUDirect RDMA acceleration for multi-GPU communication.
  • Storage arrays with NVMe-oF – Supported. Target offload enables efficient NVMe fabric access.
Buyer Checklist – 100Gb/s InfiniBand Adapter
  • ☑ Confirm server has an available PCIe 3.0 x16 (or x8) slot with adequate clearance.
  • ☑ Determine port count: single-port (MCX555A-ECAT) vs dual-port (MCX556A-ECAT).
  • ☑ Choose cable type: passive copper DAC for short distances (≤5m) or optical for longer runs.
  • ☑ Verify operating system and driver support (OFED, Windows, VMware).
  • ☑ For GPU clusters, ensure GPUDirect RDMA compatibility with your GPU model and driver version.
  • ☑ Check if tall or short bracket is required for your server chassis.
Related Products
  • NVIDIA MCX556A-ECAT – Dual-port 100Gb/s ConnectX-5 adapter
  • NVIDIA MCX556A-EDAT – ConnectX-5 Ex with PCIe 4.0 support
  • NVIDIA Quantum-2 QM9700 64-port NDR 400Gb/s InfiniBand Switch
  • Mellanox QSFP28 passive DAC cables (1m, 2m, 3m)
  • NVIDIA Spectrum-4 SN5600 100GbE/400GbE Ethernet switches
Related Guides & Resources
  • NVIDIA ConnectX-5 InfiniBand Adapter Card User Manual
  • RDMA over Converged Ethernet (RoCE) Deployment Guide
  • GPUDirect RDMA Best Practices for AI Clusters
  • NVMe over Fabrics with ConnectX-5 – Configuration Guide
  • OFED Installation and Tuning Guide

