NVIDIA ConnectX-7 MCX755106AS-HEAT 400G InfiniBand Adapter – Single-Port NDR, PCIe 5.0, Hardware-Accelerated Security & Storage for Hyperscale Workloads

Product Details:

Brand: NVIDIA (Mellanox)
Model Number: MCX755106AS-HEAT (900-9X7AH-0078-DTZ)
Datasheet: ConnectX-7 InfiniBand.pdf

Payment & Ordering:

Minimum Order Quantity: 1 unit
Price: Negotiable
Packaging Details: Outer carton
Delivery Time: Subject to stock availability
Payment Terms: T/T
Supply Ability: Sourced per project/batch
Contact us for the best price

Additional Information

Model Number: MCX755106AS-HEAT (900-9X7AH-0078-DTZ)
Ports: Single-port
Technology: InfiniBand
Interface Type: QSFP112
Dimensions: 16.7cm x 6.9cm
Origin: India / Israel / China
Data Rate: 400Gb/s
Host Interface: PCIe 5.0 x16
Highlights:

  • NVIDIA ConnectX-7 InfiniBand adapter
  • 400G InfiniBand network card
  • PCIe 5.0 Mellanox adapter

Product Description

NVIDIA ConnectX-7 MCX755106AS-HEAT 400Gb/s Single-Port InfiniBand & Ethernet Smart Adapter

High-performance single-port 400Gb/s adapter for InfiniBand NDR and 400GbE networks—featuring PCIe 5.0 x16, inline hardware security (IPsec/TLS/MACsec), NVIDIA In-Network Computing engines, and NVMe-oF offloads for AI, HPC, and enterprise data centers.

  • Single QSFP112 port supporting 400Gb/s InfiniBand (NDR) and 400/200/100/50/25/10GbE
  • PCIe Gen 5.0 x16 (backward compatible with Gen 4.0/3.0) | Ultra-low latency and 215+ million messages/sec
  • Hardware offloads: NVMe-oF target/initiator, XTS-AES 256/512-bit encryption, MPI tag matching
  • Inline security engines: IPsec, TLS 1.3, MACsec with AES-GCM 128/256-bit
  • PCIe half-height half-length (HHHL) form factor, RoHS compliant, advanced timing (PTP/SyncE)
Characteristics
  • 400Gb/s Throughput: Single port operating at up to 400Gb/s InfiniBand (NDR) or 400GbE with full bidirectional bandwidth.
  • In-Network Computing: Offloads collective operations (MPI, NCCL, SHMEM) using NVIDIA SHARP technology.
  • Inline Security: Hardware encryption/decryption for IPsec, TLS 1.3, and MACsec at line rate; secure boot with root-of-trust.
  • NVMe-oF Offloads: Target and initiator offloads for NVMe over Fabrics (including NVMe/TCP), reducing CPU utilization.
  • Precision Timing: IEEE 1588v2 PTP with 12ns accuracy, SyncE, and configurable PPS in/out.
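As a rough sanity check on the headline figures above (400Gb/s and 215+ million messages/sec are the vendor numbers quoted on this page; the arithmetic below is purely illustrative):

```python
# Back-of-envelope check of the headline figures (illustrative only).
line_rate_bps = 400e9   # 400 Gb/s per direction
msg_rate = 215e6        # 215 million messages/sec (vendor figure)

ns_per_msg = 1e9 / msg_rate                       # average inter-message gap
bytes_per_msg = (line_rate_bps / 8) / msg_rate    # payload size that saturates the wire
print(f"{ns_per_msg:.2f} ns between messages")                   # ~4.65 ns
print(f"{bytes_per_msg:.0f} bytes/message saturates 400 Gb/s")   # ~233 bytes
```

In other words, at the quoted message rate the adapter must turn around a message roughly every 4.65 ns, and messages of only ~233 bytes are enough to fill a 400Gb/s direction.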
Technology & Standards

The MCX755106AS-HEAT integrates NVIDIA In-Network Computing engines (SHARP), RDMA (IBTA 1.5), RoCE, and NVMe-oF. It supports PCIe Gen 5.0 (x16), PAM4 (100G) and NRZ (10G/25G) SerDes, and advanced features like Dynamically Connected Transport (DCT), On-Demand Paging (ODP), and Adaptive Routing. Overlay offloads for VXLAN, GENEVE, NVGRE are hardware-accelerated. Compliant with IEEE 802.3ck, 802.3bj, and InfiniBand Trade Association specifications.
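The PAM4 lane arithmetic behind the 400G port rate can be sketched as follows (the 53.125 GBd per-lane rate is an assumption based on 802.3ck-class 100G electrical signaling; exact coding and FEC overhead vary by standard):

```python
# Illustrative PAM4 lane arithmetic for a 400G port (assumed 802.3ck-class
# signaling; exact line coding/FEC overhead varies by standard).
baud_per_lane = 53.125e9   # symbols/sec per lane (assumption)
bits_per_symbol = 2        # PAM4: 4 amplitude levels -> 2 bits/symbol
lanes = 4                  # QSFP112 cage: 4 x 100G electrical lanes

raw_gbps = baud_per_lane * bits_per_symbol * lanes / 1e9
print(f"raw lane capacity: {raw_gbps:.0f} Gb/s")  # 425 Gb/s raw, ~400 Gb/s usable after FEC
```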

Working Principle: Smart Offload & Inline Security

ConnectX-7 offloads communication, storage, and security tasks from the host CPU to the adapter hardware. For MPI collectives, the adapter processes data in transit using SHARP, reducing endpoint traffic. For storage, NVMe-oF commands are processed directly on the adapter, freeing CPU cores. Inline encryption engines (IPsec/TLS/MACsec) encrypt/decrypt packets at wire speed without CPU involvement. The result is lower latency, higher message rate, and improved application scalability—critical for 400G environments.

Applications & Deployment
  • AI Training Nodes: GPU-to-GPU communication with GPUDirect RDMA and NCCL collectives.
  • HPC Compute Nodes: MPI-based simulations requiring ultra-low latency and high message rate.
  • NVMe-oF Storage: Target/initiator offload for high-performance NVMe storage access.
  • Secure Cloud Servers: Inline IPsec/TLS for multi-tenant security without CPU overhead.
  • Financial Trading: Precision PTP timing for high-frequency trading and timestamping.
Technical Specifications & Ordering Options
  • ConnectX-7 — 1x QSFP112 (400Gb/s NDR/400GbE), PCIe 5.0 x16, PCIe HHHL; security offloads: IPsec, TLS 1.3, MACsec, AES-XTS; protocols: InfiniBand, Ethernet, NVMe-oF; OPN: MCX755106AS-HEAT
  • ConnectX-7 — 2x QSFP112 (400Gb/s), PCIe 5.0 x16, PCIe HHHL; security offloads: IPsec/TLS/MACsec; protocols: IB/Eth; OPN: MCX75310AAS-NEAT
  • ConnectX-7 — 1x QSFP112 (200Gb/s), PCIe 5.0 x16, OCP 3.0; security offloads: IPsec/TLS/MACsec; protocols: IB/Eth; OPN: MCX755106AS-HEAT (OCP)

Note: MCX755106AS-HEAT supports 400Gb/s InfiniBand (NDR) and 400/200/100/50/25/10GbE. Dimensions: 167.65mm x 68.90mm (HHHL). Includes tall and low-profile brackets. Power consumption < 20W typical.

Advantages & Differentiators
  • vs. ConnectX-6: Double the bandwidth (400Gb/s vs. 200Gb/s), PCIe 5.0, inline IPsec/TLS/MACsec, and advanced PTP with 12ns accuracy.
  • vs. Competitor NICs: True hardware offload for NVMe-oF, MPI collectives, and full security suite—all at line rate.
  • Single-Port Efficiency: Ideal for leaf nodes where dual-port is not required, reducing cost and power.
  • Integrated Security: Eliminates need for external encryption appliances; FIPS compliance ready.
Service & Support

We offer 24/7 technical consultation, RMA services, and integration support for ConnectX-7 adapters. Each card is backed by a 1-year warranty (extendable). Our team provides driver validation for major Linux distributions (RHEL, Ubuntu), Windows Server, and VMware. Pre-sales configuration assistance for NDR InfiniBand fabric design is available. All cards are shipped from our US$10M+ inventory with same-day dispatch.

Frequently Asked Questions (FAQ)
Q: Is the MCX755106AS-HEAT compatible with Quantum-2 400Gb/s switches?

A: Yes, it is fully interoperable with NVIDIA Quantum-2 QM9700/QM9790 switches using NDR mode at 400Gb/s.

Q: Can this adapter be used for Ethernet as well as InfiniBand?

A: Yes, it supports both InfiniBand and Ethernet protocols. The firmware auto-detects the switch type and configures the appropriate mode.

Q: Does it support RoCE (RDMA over Converged Ethernet)?

A: Yes, ConnectX-7 fully supports RoCE, providing low-latency RDMA in Ethernet environments.

Q: What security features are included?

A: Inline hardware engines for IPsec (AES-GCM 128/256), TLS 1.3, MACsec, and block-level XTS-AES 256/512-bit encryption. Also features secure boot with hardware root-of-trust.

Q: Is the card compatible with PCIe Gen 4.0 slots?

A: Yes, it is backward compatible with PCIe Gen 4.0 and Gen 3.0 slots, though bandwidth will be limited to the slot's capability (approx. 200Gb/s in Gen 4.0).
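The slot-bandwidth claim above follows from simple arithmetic. The sketch below computes the raw x16 link rate per PCIe generation; effective application throughput is lower after protocol overhead, which is consistent with the ~200Gb/s practical figure cited for Gen 4.0:

```python
# Raw x16 link bandwidth per PCIe generation (128b/130b encoding, Gen3+).
# Effective throughput is lower after TLP/DLLP protocol overhead, which is
# why a Gen 4.0 slot limits a 400G adapter to roughly 200 Gb/s in practice.
def x16_link_gbps(gt_per_lane):
    return gt_per_lane * 16 * 128 / 130  # 16 lanes x 128b/130b efficiency

for gen, gt in [("Gen 3.0", 8), ("Gen 4.0", 16), ("Gen 5.0", 32)]:
    print(f"{gen}: ~{x16_link_gbps(gt):.0f} Gb/s raw x16 link bandwidth")
```

Only a Gen 5.0 x16 slot (~504 Gb/s raw) leaves headroom for a full 400Gb/s port.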

Precautions & Compatibility Notes
  • PCIe Slot Requirement: For full 400Gb/s performance, install in a PCIe Gen 5.0 x16 slot. Gen 4.0 slots will limit throughput to ~200Gb/s.
  • Cooling: Ensure adequate airflow in server chassis; passive cooling requires minimum 300 LFM at 400G operation.
  • Cabling: Use QSFP112 passive/active copper or optical modules rated for 400Gb/s (NDR).
  • Driver Support: Use latest NVIDIA MLNX_OFED for Linux or WinOF-2 for Windows.
  • Operating Temperature: 0°C to 70°C; store between -40°C and 85°C.
Company Introduction

With over a decade of experience, we operate a large-scale factory backed by a strong technical team. Our extensive customer base and domain expertise enable us to offer competitive pricing without compromising on quality. As authorized distributors for Mellanox, Ruckus, Aruba, and Extreme, we stock original network switches, network card (nic card) solutions, wireless Access Points, controllers, and cabling. We maintain a 10 million USD inventory to ensure rapid fulfillment across diverse product lines. Every shipment is verified for accuracy, and we provide 24/7 consultation and technical support. Our professional sales and technical teams have earned a high reputation in global markets—partner with us for reliable infrastructure solutions.
