NVIDIA ConnectX-7 MFP7E10-N005 400Gb/s Dual-Port QSFP InfiniBand & Ethernet Adapter NDR, PCIe Gen5

Details:

Brand: Mellanox
Model number: MFP7E10-N005 (980-9I73V-000005)
Document: MFP7E10-Nxxx.pdf

Payment & shipping terms:

Minimum order quantity: 1 pc
Price: Negotiable
Packaging details: Outer carton
Delivery time: Based on inventory
Supply capability: Per project/batch
Payment terms: T/T
Contact us for the best price

Detailed information

Part number: MFP7E10-N005 (980-9I73V-000005)
Cable type: Multimode fiber cable
Fiber type: OM4, 50/125 µm
Length: 5 meters
Connectors: MPO-12/APC (female)
Data rate: Up to 400 Gbps
Highlights:

  • NVIDIA ConnectX-7 400Gb/s adapter
  • Dual-port QSFP InfiniBand adapter
  • PCIe Gen5 Ethernet adapter

Product description


NVIDIA ConnectX‑7 MFP7E10-N005

400Gb/s NDR InfiniBand & 400GbE Adapter · PCIe Gen5 x16 · Dual-Port QSFP · In-line Security · GPUDirect® · NVMe‑oF · Advanced PTP Timing

Max data rate: 400Gb/s
Ports / form factor: 2 x QSFP · PCIe HHHL
Host interface: PCIe Gen5 x16
Security offload: IPsec / TLS / MACsec
Hong Kong Starsurge Group Co., Limited

Hong Kong Starsurge Group Co., Limited is a technology-driven provider of network hardware, IT services, and system integration solutions. Founded in 2008, the company serves customers worldwide with products including network switches, NICs, wireless access points, controllers, cables, and related networking equipment. Backed by an experienced sales and technical team, Starsurge supports industries such as government, healthcare, manufacturing, education, finance, and enterprise. The company also offers IoT solutions, network management systems, custom software development, multilingual support, and global delivery. With a customer-first approach, Starsurge focuses on reliable quality, responsive service, and tailored solutions that help clients build efficient, scalable, and dependable network infrastructure.

Product overview

The NVIDIA ConnectX‑7 MFP7E10-N005 is a high‑performance dual‑port 400Gb/s adapter supporting both InfiniBand (NDR, HDR, EDR) and Ethernet (400GbE, 200GbE, 100GbE, 50GbE, 25GbE, 10GbE). It uses a PCIe Gen5 x16 host interface and includes hardware acceleration for security (inline IPsec/TLS/MACsec), storage (NVMe‑oF, GPUDirect Storage), and networking (ASAP2 SDN, RoCE). Designed for the most demanding AI, HPC, and cloud environments, it delivers ultra‑low latency and high throughput while reducing CPU overhead.

Dual‑port 400Gb/s flexibility

Two independent QSFP ports, each capable of 400Gb/s NDR InfiniBand or 400GbE. Split configurations and mixed-protocol operation (one port InfiniBand, one port Ethernet) are supported.

ASAP² software‑defined networking

NVIDIA ASAP2 technology offloads overlay networks (VXLAN, GENEVE, NVGRE), connection tracking, flow mirroring, and packet rewriting to the NIC, delivering line-rate performance while freeing the host CPU from packet processing.
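To illustrate the overlay encapsulation that ASAP2 offloads, here is a minimal sketch that builds and parses a VXLAN header per RFC 7348. The helper names are ours for illustration, not part of any NVIDIA API; in practice the NIC performs this encapsulation/decapsulation in hardware.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348) for a 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags (I bit set); bytes 1-3 reserved;
    # bytes 4-6: VNI; byte 7 reserved.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    (word,) = struct.unpack("!I", header[4:8])
    return word >> 8

hdr = build_vxlan_header(5001)
assert parse_vni(hdr) == 5001
```

The 24-bit VNI gives up to ~16 million isolated tenant networks, which is why VXLAN (and its offload) matters for multi-tenant cloud data centers.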

Precision timing & SyncE

IEEE 1588v2 PTP with 12 ns accuracy, G.8273.2 Class C, SyncE (G.8262.1), programmable PPS, and time‑triggered scheduling. Ideal for financial and 5G infrastructures.
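The PTP accuracy figure above comes from hardware timestamping of the standard IEEE 1588 delay request-response exchange. The clock-offset arithmetic behind that exchange can be sketched as follows (a minimal illustration of the standard formulas, not NVIDIA's implementation):

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """IEEE 1588 delay request-response math (symmetric path assumed).

    t1: Sync sent by master      t2: Sync received by slave
    t3: Delay_Req sent by slave  t4: Delay_Req received by master
    All times in nanoseconds; returns (slave clock offset, one-way delay).
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock 40 ns ahead of master, one-way path delay 500 ns
offset, delay = ptp_offset_and_delay(t1=0, t2=540, t3=1000, t4=1460)
# offset -> 40.0, delay -> 500.0
```

Hardware timestamping matters because taking t1..t4 in software adds microseconds of jitter, which would swamp a nanosecond-class accuracy budget.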

Typical deployments

  • Large‑scale AI training clusters (LLM, deep learning)
  • High‑performance computing (HPC) with InfiniBand fabrics
  • Cloud data centers requiring 400GbE and RoCE
  • GPU‑accelerated storage (NVMe‑oF, GPUDirect Storage)
  • Financial trading with ultra‑low latency and PTP timing

Compatibility

  • NVIDIA Quantum / Quantum‑2 InfiniBand switches
  • PCIe Gen5/Gen4/Gen3 servers (Intel/AMD)
  • Major OS: RHEL, Ubuntu, Windows, VMware ESXi, Kubernetes
  • Industry‑standard QSFP112 transceivers and AOC/DAC cables

Technical specifications

Model number: MFP7E10-N005
Supported protocols: InfiniBand, Ethernet
InfiniBand speeds: NDR 400Gb/s, HDR 200Gb/s, EDR 100Gb/s, FDR, QDR
Ethernet speeds: 400GbE, 200GbE, 100GbE, 50GbE, 25GbE, 10GbE
Number of ports: 2 x QSFP (QSFP112 compatible)
Host interface: PCIe Gen5 x16 (also compatible with Gen4/Gen3)
Form factor: PCIe HHHL (half-height, half-length), bracket included
Interface technologies: NRZ (10G, 25G per lane), PAM4 (50G, 100G per lane)
InfiniBand networking: RDMA, XRC, DCT, GPUDirect RDMA/Storage, adaptive routing, enhanced atomic ops, ODP, UMR, burst buffer offload, SHARP support
Ethernet offloads: RoCE, ASAP2 overlay offload (VXLAN, GENEVE, NVGRE), connection tracking, flow mirroring, header rewrite, hierarchical QoS
Security acceleration: Inline IPsec/TLS/MACsec (AES-GCM 128/256), secure boot, flash encryption, device attestation, T10-DIF offload
Storage protocols: NVMe-oF, NVMe/TCP, GPUDirect Storage, SRP, iSER, NFS over RDMA, SMB Direct
Timing & synchronization: IEEE 1588v2 (12 ns accuracy), SyncE (G.8262.1), PPS in/out, time-triggered scheduling, PTP packet pacing
Management: NC-SI, MCTP over SMBus/PCIe, PLDM (monitoring, firmware, FRU, Redfish), SPDM, SPI, JTAG
Remote boot: InfiniBand remote boot, iSCSI, UEFI, PXE
Operating systems: Linux (RHEL, Ubuntu), Windows, VMware ESXi (SR-IOV), Kubernetes
Warranty: 1 year (extendable; please confirm)
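The per-lane signaling schemes in the table combine into the port speeds listed above: a QSFP cage typically bundles four electrical lanes, so the aggregate port rate is just lane rate times lane count. A small sketch of that arithmetic (our helper, for illustration only):

```python
# Per-lane data rates (Gb/s) for the signaling schemes in the spec table.
# NRZ carries 1 bit per symbol; PAM4 carries 2 bits per symbol, doubling
# the data rate at the same symbol rate.
LANE_RATES = {"NRZ-10G": 10, "NRZ-25G": 25, "PAM4-50G": 50, "PAM4-100G": 100}

def port_rate(scheme: str, lanes: int = 4) -> int:
    """Aggregate port rate: a QSFP cage typically bundles 4 lanes."""
    return LANE_RATES[scheme] * lanes

assert port_rate("PAM4-100G") == 400  # NDR InfiniBand / 400GbE
assert port_rate("PAM4-50G") == 200   # HDR / 200GbE
assert port_rate("NRZ-25G") == 100    # EDR / 100GbE
```

This is why 400G requires 100G-per-lane PAM4 signaling, and why the adapter needs QSFP112-class cages and cables rather than older 25G-per-lane QSFP28 optics.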

Key facts

  • 2 x 400Gb/s NDR / 400GbE ports
  • PCIe Gen5 x16 host interface
  • Inline IPsec, TLS, MACsec acceleration
  • GPUDirect RDMA & Storage
  • NVMe‑oF / NVMe/TCP offload
  • Advanced PTP / SyncE (12 ns)
  • ASAP2 SDN acceleration
  • SHARP in‑network computing ready
  • HHHL form factor
  • RoCE & overlay offload

Compatibility matrix

  • NVIDIA Quantum‑2 QM9700 / QM9790 switches: ✅ Full NDR 400Gb/s support
  • NVIDIA Quantum QM8700 (HDR) switches: ✅ 200Gb/s HDR compatible
  • PCIe Gen5 servers (Intel Eagle Stream / AMD Genoa): ✅ Full Gen5 speed
  • PCIe Gen4 / Gen3 servers: ✅ Backward compatible (reduced speed)
  • GPUDirect & CUDA environments: ✅ Native support with NVIDIA GPUs
  • Major Linux distributions (RHEL 9.x, Ubuntu 22.04+): ✅ In‑box drivers available

Selection guide

MFP7E10-N005 is a dual‑port 400Gb/s PCIe Gen5 x16 adapter in HHHL form factor. For other port counts or OCP form factors, refer to the ConnectX‑7 family:

  • Single‑port PCIe (MCX75310AAS)
  • Dual‑port OCP 3.0 (MFP7E10‑N005 OCP variant)
  • Quad‑port 100Gb/s configurations

Buyer checklist

  • ✔ Confirm PCIe slot availability: x16 mechanical, Gen5 capable recommended.
  • ✔ Check airflow and cooling: high‑power adapters may need active cooling.
  • ✔ Select correct transceivers: 400G SR4/DR4/FR4 or AOC cables.
  • ✔ Verify OS/driver support (in‑box drivers for most distributions).
  • ✔ For security offloads, ensure application support for IPsec/TLS.

Why choose ConnectX‑7

Highest performance 400Gb/s with PCIe Gen5. Integrated in‑line security saves CPU and accelerates encrypted traffic. GPUDirect and NVMe‑oF offloads maximize data throughput for AI and storage. Advanced timing for 5G and financial services.

Service & support

1‑year limited hardware warranty (extendable). Technical support from Hong Kong Starsurge Group. Firmware and driver updates available. Please contact our sales team for volume pricing and extended support options.

Frequently asked questions

What is the maximum bandwidth of MFP7E10-N005?
Up to 400Gb/s per port, full duplex, using NDR InfiniBand or 400GbE.

Does it support both InfiniBand and Ethernet simultaneously?
Yes, each port can be independently configured for InfiniBand or Ethernet.

Which cables are recommended for 400G operation?
OSFP or QSFP112 DAC, AOC, or optical transceivers compliant with 400G SR4/DR4/FR4 standards.

Can I use this adapter in a PCIe Gen4 slot?
Yes, it will operate at Gen4 speeds (approx. 200Gb/s effective bandwidth).

Is SHARP in‑network computing supported?
Yes, the adapter supports NVIDIA SHARP for collective operations offload.
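The Gen4 answer above follows from PCIe link arithmetic: Gen3 and later use 128b/130b line coding, so raw per-direction bandwidth is transfer rate times lane count times 128/130. A quick sketch (raw link rate only; TLP headers and flow control reduce delivered throughput further, which is why the FAQ quotes a conservative figure):

```python
def pcie_bandwidth_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Raw PCIe bandwidth per direction in Gb/s after 128b/130b
    line-code overhead (PCIe Gen3 and later)."""
    return gt_per_s * lanes * 128 / 130

gen5 = pcie_bandwidth_gbps(32.0)  # ~504 Gb/s: headroom for a 400Gb/s port
gen4 = pcie_bandwidth_gbps(16.0)  # ~252 Gb/s: caps the adapter below 400Gb/s
```

So a Gen5 x16 slot is needed to feed a single port at the full 400Gb/s; in a Gen4 x16 slot the host link, not the network port, becomes the bottleneck.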

Important notes & precautions

  • Ensure adequate cooling: high‑speed adapters generate more heat; server airflow must meet requirements.
  • Use only qualified optics/cables to avoid link instability.
  • PCIe Gen5 requires compatible motherboard and BIOS settings.
  • Security features may require specific firmware versions; confirm with technical support.
  • Specifications are typical and subject to change; confirm with order.

Related products

  • NVIDIA Quantum‑2 MQM9700 Switch
  • NVIDIA ConnectX‑7 MCX75310AAS (single‑port)
  • NVIDIA BlueField‑3 DPU
  • MCP1600 OSFP/AOC cables (400G)

Related guides / comparisons

  • ConnectX‑7 vs. ConnectX‑6: performance comparison
  • 400G NDR InfiniBand deployment guide
  • Inline IPsec/TLS configuration white paper
  • GPUDirect Storage best practices
