NVIDIA ConnectX-6 MCX653106A-HDAT 200Gb/s Dual-Port InfiniBand Smart Adapter

Product Details:

Brand Name: Mellanox
Model Number: MCX653106A-HDAT
Datasheet: connectx-6-infiniband.pdf

Ordering & Payment:

Minimum Order Quantity: 1 piece
Price: Negotiable
Packaging Details: outer carton
Delivery Time: subject to stock availability
Payment Terms: T/T
Supply Ability: supplied per project/batch
Contact us for the best price

Additional Information

Product Status: In Stock
Application: Server
Condition: New and original
Type: Wired
Max Speed: up to 200Gb/s per port
Connector: QSFP56
Model: MCX653106A-HDAT

توضیحات محصول

Flagship smart adapter with In-Network Computing for the NVIDIA Quantum InfiniBand platform
NVIDIA ConnectX-6 InfiniBand Adapter
MCX653106A-HDAT – Dual-Port 200Gb/s
Ultra-low latency • RDMA, NVMe-oF offload • Block-level encryption • PCIe Gen 4.0

Engineered for demanding HPC, AI, and hyperscale cloud infrastructures, the NVIDIA® ConnectX®-6 MCX653106A-HDAT smart adapter card delivers up to 200Gb/s bandwidth per port with In-Network Computing acceleration. Offloading computation from the CPU, it dramatically improves efficiency, scalability, and security — from deep neural network training to real-time data analytics.

Product Overview

As a core component of the NVIDIA Quantum InfiniBand platform, ConnectX-6 enables end-to-end RDMA, hardware-based reliable transport, and advanced congestion control. The MCX653106A-HDAT model features dual-port QSFP56, supporting both InfiniBand and Ethernet (up to 200Gb/s). It integrates block-level XTS-AES encryption, NVMe over Fabrics (NVMe-oF) offloads, and GPUDirect RDMA acceleration — making it the ideal choice for GPU-accelerated clusters, software-defined storage, and virtualized networks.

Key Features
⬩ Maximum bandwidth: 200Gb/s per port, up to 215M messages/sec
⬩ Ultra-low latency: sub-microsecond RDMA and send/receive semantics
⬩ In-Network Computing: collective operations offload, tag matching, rendezvous protocol
⬩ Hardware security: block-level XTS-AES 256/512-bit encryption, FIPS-capable options
⬩ NVMe-oF offloads: target and initiator offload for efficient NVMe storage
⬩ Host interface: PCIe Gen 4.0/3.0 x16, with support for 32 lanes (2x16)
⬩ ASAP² & Open vSwitch offload: flexible pipeline and encapsulation offload (VXLAN, NVGRE, Geneve)
⬩ GPUDirect RDMA & PeerDirect: accelerated GPU communication without CPU overhead
Technology: In-Network Computing & RDMA Fabric

NVIDIA ConnectX-6 extends Remote Direct Memory Access (RDMA) beyond conventional limits. By implementing hardware offloads for MPI tag matching, out-of-order RDMA supporting Adaptive Routing, and Dynamically Connected Transport (DCT), it ensures efficient scaling across thousands of nodes. The adapter’s In-Network Memory capability enables registration-free RDMA memory access, reducing software overhead. Combined with PCIe Gen 4.0, data moves directly between memory and network, freeing CPU cycles for application logic.
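
To make the RDMA model concrete, here is a minimal sketch using libibverbs (rdma-core), the standard user-space verbs API through which ConnectX adapters are programmed on Linux. The device names and printed attributes are generic examples, not figures specific to this card:

```c
/* Minimal sketch: enumerate RDMA devices and query capabilities with
 * libibverbs (rdma-core). Compile with: gcc verbs_query.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (!ibv_query_device(ctx, &attr)) {
            /* max_qp / max_mr hint at the scale the HCA supports */
            printf("%s: max_qp=%d max_mr=%d max_mr_size=%llu\n",
                   ibv_get_device_name(devs[i]),
                   attr.max_qp, attr.max_mr,
                   (unsigned long long)attr.max_mr_size);
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```

On a host running MLNX_OFED or the inbox rdma-core stack, a ConnectX-6 typically appears as an mlx5 device (for example, mlx5_0).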

With support for RoCE (RDMA over Converged Ethernet) and overlay network tunneling offloads, ConnectX-6 provides a unified smart fabric for both InfiniBand and Ethernet environments.
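
Because the same silicon serves both fabrics, software can detect at runtime which link layer a port is running. A small sketch, assuming a `ctx` opened with ibv_open_device() as in the previous example:

```c
/* Sketch: check whether port 1 of an opened verbs context runs
 * InfiniBand or Ethernet (RoCE). */
#include <stdio.h>
#include <infiniband/verbs.h>

static void print_link_layer(struct ibv_context *ctx)
{
    struct ibv_port_attr port;
    if (ibv_query_port(ctx, 1, &port))   /* port numbers start at 1 */
        return;

    if (port.link_layer == IBV_LINK_LAYER_INFINIBAND)
        printf("port 1: InfiniBand, state=%d\n", port.state);
    else if (port.link_layer == IBV_LINK_LAYER_ETHERNET)
        printf("port 1: Ethernet (RoCE), state=%d\n", port.state);
}
```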

Typical Deployments
  • High Performance Computing (HPC): Large-scale simulations, weather modeling, and research clusters requiring low latency and high message rate.
  • AI & Machine Learning: Accelerate distributed training of deep neural networks with GPUDirect RDMA and high-throughput 200Gb/s links.
  • NVMe-oF Storage Arrays: Build high-performance NVMe/TCP or NVMe/RDMA storage targets with hardware offloads, reducing CPU load.
  • Hyperscale Cloud & NFV: Efficient service chaining, OVS offload (ASAP²), and SR-IOV for up to 1K virtual functions per adapter (see the SR-IOV sketch after this list).
  • Big Data Analytics: In-network computing acceleration for streaming engines and distributed databases.
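
As referenced in the Hyperscale Cloud & NFV item above, SR-IOV virtual functions are created through the Linux kernel's generic sysfs interface rather than a vendor API. A minimal sketch; the interface name ens1f0 is a placeholder, and the program must run as root:

```c
/* Sketch: enable 8 SR-IOV virtual functions on a PCIe NIC via the
 * generic Linux sysfs interface. "ens1f0" is a placeholder name. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/sys/class/net/ens1f0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    /* Writing N creates N VFs; write 0 first if VFs already exist. */
    fprintf(f, "8\n");
    fclose(f);
    return EXIT_SUCCESS;
}
```
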
Compatibility

ConnectX-6 MCX653106A-HDAT is compatible with a wide range of servers, switches, and OS environments. It supports InfiniBand switches up to 200Gb/s (HDR) and Ethernet switches up to 200Gb/s with auto-negotiation. The adapter works across x86, Power, Arm, GPU, and FPGA-based platforms.

Operating Systems: RHEL, SLES, Ubuntu, other major Linux distributions, Windows Server, FreeBSD, VMware vSphere
InfiniBand Spec: IBTA 1.3 compliant; 200/100/50/25/10 Gb/s; 8 virtual lanes + VL15
Ethernet Standards: 200/100/50/40/25/10/1GbE; IEEE 802.3bj, 802.3by; PFC, ETS, DCB; IEEE 1588v2
CPU Offloads & Virtualization: SR-IOV (1K VFs), NPAR, DPDK, ASAP² OVS offload, tunneling (VXLAN, NVGRE, Geneve)
Management & Boot: NC-SI; MCTP over SMBus/PCIe; PLDM (DSP0248/DSP0267); UEFI, PXE, and iSCSI remote boot
Technical Specifications
Product Model: MCX653106A-HDAT
Form Factor: PCIe stand-up card; tall bracket mounted, short (low-profile) bracket included as an accessory
Network Ports: 2x QSFP56 (dual-port)
Supported Speeds: InfiniBand 200/100/50/25/10 Gb/s; Ethernet 200/100/50/40/25/10/1 Gb/s
Host Interface: PCIe Gen 4.0/3.0 x16 (also operates at x8, x4, x2, x1)
Maximum Bandwidth: 200Gb/s per port
Message Rate: up to 215 million messages per second
Latency: sub-microsecond (RDMA)
Hardware Encryption: block-level XTS-AES 256/512-bit, FIPS capable
Storage Offloads: NVMe-oF target/initiator, T10-DIF, SRP, iSER, NFS over RDMA, SMB Direct
Virtualization: SR-IOV (up to 1K VFs), VMware NetQueue, per-VM QoS
Remote Boot: InfiniBand, Ethernet, iSCSI, UEFI, PXE
Dimensions (without bracket): 167.65mm x 68.90mm
Compliance: RoHS compliant, ODCC compatible
Selection Guide – ConnectX-6 Family
MCX653106A-HDAT: 2x QSFP56, up to 200Gb/s; PCIe 4.0/3.0 x16; dual-port, crypto enabled, standard bracket
MCX653105A-HDAT: 1x QSFP56, 200Gb/s; PCIe 4.0/3.0 x16; single-port, crypto enabled
MCX653106A-ECAT: 2x QSFP56, 100Gb/s; PCIe 4.0/3.0 x16; 100Gb/s variant, no crypto
MCX653436A-HDAT: 2x QSFP56, 200Gb/s; PCIe x16; OCP 3.0 small form factor
MCX654106A-HCAT: 2x QSFP56, Socket Direct; 2x PCIe 3.0 x16; dedicated per-CPU PCIe access

Note: For variants with cold plate for liquid-cooled Intel Server System D50TNP, please contact Starsurge for customized ordering.

Why Choose Starsurge for ConnectX-6?
✔ Genuine & Certified Stock
100% authentic NVIDIA ConnectX-6 adapters, batch traceable.
✔ Global Logistics & Fast Delivery
Warehouses & partner hubs serving Americas, EMEA, APAC.
✔ Technical Pre-Sales & Integration
Firmware configuration, RDMA tuning, NVMe-oF validation.
✔ Competitive OEM Pricing
Long-term relationships with NVIDIA partners keep pricing cost-effective.
✔ 3-Year Warranty + RMA Support
Hassle-free replacement and advanced cross-ship available.
✔ Multi-Language & Tailored Solutions
English, Chinese, and custom integration support.
Service & Support

Hong Kong Starsurge Group provides end-to-end support, from compatibility checks and firmware customization to on-site deployment guidance. We offer dedicated technical account managers for data center upgrades and proof-of-concept (PoC) testing. All adapters ship with anti-static packaging and optional installation kits.
✔ 24h engineering support ticketing system
✔ Advance replacement for business-critical environments
✔ Driver & software stack assistance (OFED, WinOF-2, DPDK)

Frequently Asked Questions
What is the difference between MCX653106A-HDAT and standard ConnectX-6 cards?

This model offers dual-port 200Gb/s capability, full crypto offload (XTS-AES), and supports both InfiniBand and Ethernet on the same adapter. It is optimized for high-density servers requiring maximum throughput.

Does this adapter support GPUDirect RDMA?

Yes. It fully supports NVIDIA GPUDirect RDMA (PeerDirect), enabling direct GPU-to-network communication that eliminates unnecessary memory copies and reduces latency for AI training.
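
As a rough illustration of what this looks like in code, the sketch below allocates a CUDA device buffer and registers it directly with the HCA via ibv_reg_mr. It assumes the nvidia-peermem kernel module is loaded and that a protection domain `pd` already exists; the helper name register_gpu_buffer is illustrative, not part of any library:

```c
/* Sketch: register CUDA device memory for RDMA (GPUDirect RDMA).
 * Assumes the nvidia-peermem kernel module is loaded and `pd` comes
 * from ibv_alloc_pd(). Link with: -libverbs -lcudart */
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t len)
{
    void *gpu_buf = NULL;
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess)
        return NULL;

    /* With GPUDirect RDMA, the HCA can DMA straight to/from this
     * device pointer; no staging copy through host memory is needed. */
    return ibv_reg_mr(pd, gpu_buf, len,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}
```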

Is the card compatible with PCIe Gen 3 slots?

Yes. The card is backward compatible with PCIe Gen 3.0, Gen 2.0, and Gen 1.1, although maximum throughput will be limited compared with a Gen 4.0 host.
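
For a rough sense of why a Gen 4.0 slot matters, a back-of-envelope calculation of usable x16 bandwidth per direction (nominal line rate with 128b/130b encoding, ignoring other protocol overhead):

```latex
\begin{aligned}
\text{Gen 3.0 x16: } & 8~\text{GT/s} \times 16 \times \tfrac{128}{130} \approx 126~\text{Gb/s} \\
\text{Gen 4.0 x16: } & 16~\text{GT/s} \times 16 \times \tfrac{128}{130} \approx 252~\text{Gb/s}
\end{aligned}
```

So a Gen 3.0 x16 slot cannot feed even one 200Gb/s port at line rate, while Gen 4.0 x16 sustains a single port with headroom.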

What cable types are supported for 200Gb/s?

Passive copper cables (with ESD protection), active optical cables, and active copper cables via powered connectors. For InfiniBand, HDR-compliant breakout cables are supported.

Can this be used for NVMe-oF target offload?

Yes, the ConnectX-6 features NVMe over Fabrics offloads for both target and initiator, drastically reducing CPU overhead and improving IOPS scalability.
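
For context, an NVMe-oF/RDMA target can be stood up through the Linux kernel's generic nvmet configfs interface, with the adapter's offloads accelerating the RDMA transport underneath. A minimal sketch with placeholder NQN, block device, and IP address (requires the nvmet and nvmet-rdma modules; run as root). Enabling ConnectX-specific target offload is driver/firmware dependent and not shown:

```c
/* Sketch: configure a kernel NVMe-oF/RDMA target via nvmet configfs.
 * NQN, device path, and IP address below are placeholders. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define NVMET "/sys/kernel/config/nvmet"
#define SUB   NVMET "/subsystems/nqn.2024-01.io.example:sub1"
#define NS    SUB "/namespaces/1"
#define PORT  NVMET "/ports/1"

static void put(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (f) { fputs(val, f); fclose(f); }
}

int main(void)
{
    /* 1. Create the subsystem and expose a block device as namespace 1. */
    mkdir(SUB, 0755);
    put(SUB "/attr_allow_any_host", "1");
    mkdir(NS, 0755);
    put(NS "/device_path", "/dev/nvme0n1");   /* placeholder device */
    put(NS "/enable", "1");

    /* 2. Create an RDMA port listening on the adapter's IP. */
    mkdir(PORT, 0755);
    put(PORT "/addr_trtype", "rdma");
    put(PORT "/addr_adrfam", "ipv4");
    put(PORT "/addr_traddr", "192.0.2.10");   /* placeholder address */
    put(PORT "/addr_trsvcid", "4420");

    /* 3. Bind the subsystem to the port. */
    symlink(SUB, PORT "/subsystems/nqn.2024-01.io.example:sub1");
    return 0;
}
```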

Precautions & Ordering Notes
  • Confirm server mechanical clearance: standard height PCIe bracket included; low-profile bracket also provided as accessory.
  • For liquid-cooled platforms (Intel D50TNP), verify cold plate option availability before ordering.
  • Please confirm driver compatibility with your Linux distribution version — NVIDIA OFED recommended.
  • Exact power consumption per port at full 200Gb/s load is not publicly specified; refer to the NVIDIA user manual or contact Starsurge for typical values (approximately 15-18W total).
  • FIPS certification is hardware-capable but may require specific firmware — notify sales team if FIPS compliance is mandatory.
About Hong Kong Starsurge Group

Founded in 2008, Hong Kong Starsurge Group Co., Limited is a technology-driven provider of network hardware, IT services, and system integration solutions. With a global customer base spanning government, healthcare, manufacturing, education, finance, and enterprise sectors, Starsurge delivers high-performance networking equipment including switches, NICs, wireless solutions, and tailored software. The company combines experienced sales and technical teams to support complex infrastructure projects, IoT deployments, and network management systems. Customer-first approach, reliable quality, and responsive global delivery make Starsurge a trusted partner for next-generation data centers.

Contact Starsurge Team →

Key Facts – NVIDIA ConnectX-6 MCX653106A-HDAT
Max Throughput: 200Gb/s per port (aggregate 400Gb/s, theoretical)
On-chip Acceleration: tag matching, rendezvous offload, collective offloads, burst buffer
Virtual Functions: up to 1024 VFs per adapter
Encryption Standard: XTS-AES 256/512-bit, offloaded from the CPU
Adaptive Routing: out-of-order RDMA support
Compatibility Matrix (Pre-validated Platforms)
Dell PowerEdge R750 (Intel Xeon Scalable): RHEL 8.6, Ubuntu 22.04
HPE ProLiant DL380 Gen10 Plus (Intel Xeon): SLES 15 SP4, VMware ESXi 7.0
Supermicro GPU SuperServer (AMD EPYC 7003): Ubuntu 20.04, NVIDIA HPC SDK
Lenovo ThinkSystem SR650 (Intel Xeon): Windows Server 2022
NVIDIA DGX / HGX platforms (NVIDIA Arm / x86): Ubuntu with MLNX_OFED
Buyer Checklist – Before Ordering ConnectX-6
  • ☑ PCIe slot type: x16 mechanical (electrical x16/x8/x4 supported)
  • ☑ Required port speed: 200Gb/s or lower; cable type (QSFP56 passive/active)
  • ☑ OS driver availability: Check MLNX_OFED or WinOF-2 version
  • ☑ Encryption requirement: FIPS mode or standard AES-XTS
  • ☑ Cooling and bracket: standard or cold plate option needed?
  • ☑ Quantity and lead time: Stock confirmation with Starsurge sales
Related Products (NVIDIA Ecosystem)
NVIDIA Quantum InfiniBand Switches
QM8700 series (40-port HDR 200Gb/s) and QM9700 series (NDR 400Gb/s)
ConnectX-6 Lx / Dx Smart NICs
Optimized for Ethernet and RoCE (Lx up to 50GbE, Dx up to 200GbE)
NVIDIA BlueField-3 DPU
InfiniBand/Ethernet with programmable data path
LinkX Cables & Transceivers
DAC, AOC, and active fiber cables for 200G
Related Guides & Resources
  • ▸ NVIDIA ConnectX-6 User Manual (Firmware & Configuration)
  • ▸ RDMA over Converged Ethernet (RoCE) Deployment Guide
  • ▸ NVMe-oF with ConnectX-6: Best Practices
  • ▸ Performance Tuning for MPI and GPUDirect
  • ▸ Block-level Encryption Setup for FIPS environments

* Specifications and features are based on published NVIDIA datasheet and may be updated. For exact technical details, please refer to official NVIDIA documentation or contact Starsurge pre-sales engineering.
