MCX516A-CCAT Dual-Port 100GbE Ethernet Adapter by NVIDIA
Details:
| Brand | Mellanox |
|---|---|
| Model number | MCX516A-CCAT |
| Document | connectx-5-en-card.pdf |
Payment and shipping terms:
| Minimum order quantity | 1 pc |
|---|---|
| Price | Negotiable |
| Special packaging | Outer box |
| Lead time | Based on inventory |
| Payment terms | T/T |
| Supply capacity | Per-project/batch supply |
Detailed information
| Product status | In stock | Application | Server |
|---|---|---|---|
| Interface type | Network | Ports | Dual |
| Maximum speed | 100GbE | Connector type | QSFP28 |
| Type | Wired | Condition | New and original |
| Warranty period | 1 year | Model | MCX516A-CCAT |
| Name | Mellanox CX516A ConnectX-5 100GbE MCX516A-CCAT dual-port QSFP28 PCIe adapter network card | Keyword | Mellanox network card |
Product description
Dual-port QSFP28 100GbE Ethernet adapter card — delivering up to 100Gb/s per port, 750ns latency, 200 million messages per second, and advanced application offloads. Ideal for Web 2.0, cloud, storage, AI, and telecommunications platforms requiring the highest bandwidth and lowest latency.
The NVIDIA ConnectX-5 EN MCX516A-CCAT is a dual-port 100GbE Ethernet adapter card designed for the most demanding data center workloads. Built on the ConnectX-5 architecture, this adapter supports multiple speeds including 100GbE, 50GbE, 40GbE, 25GbE, 10GbE, and 1GbE, providing seamless migration paths and infrastructure flexibility. With 750ns latency, up to 200 million messages per second (Mpps), and PCIe 3.0 x16 host interface, the MCX516A-CCAT delivers industry-leading throughput and CPU efficiency. Key capabilities include RoCE (RDMA over Converged Ethernet), SR-IOV virtualization with up to 512 Virtual Functions, ASAP2 accelerated switching and packet processing for vSwitch/vRouter offloads, NVMe over Fabric target offloads, T10-DIF Signature Handover, and comprehensive overlay network offloads (VXLAN, NVGRE, GENEVE). This adapter is available in a low-profile PCIe form factor with enhanced host management features.
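As a small illustration of the SR-IOV capability just mentioned, the sketch below prepares the standard Linux sysfs write that enables Virtual Functions on a port. This is a hedged example, not vendor documentation: the interface name `enp4s0f0` is a placeholder, and the 512-VF ceiling is taken from this adapter's specification.

```python
# Hypothetical SR-IOV helper for a ConnectX-5 port, using the standard
# Linux sysfs interface. Interface name "enp4s0f0" is a placeholder;
# the 512-VF limit comes from the adapter's datasheet.
from pathlib import Path

MAX_VFS = 512  # ConnectX-5 EN supports up to 512 Virtual Functions


def sriov_sysfs_path(ifname: str) -> Path:
    """Standard sysfs knob for the VF count on a PCI network device."""
    return Path(f"/sys/class/net/{ifname}/device/sriov_numvfs")


def request_vfs(ifname: str, count: int) -> str:
    """Validate a VF count and return the value to write to sysfs."""
    if not 0 <= count <= MAX_VFS:
        raise ValueError(f"VF count must be 0..{MAX_VFS}, got {count}")
    return str(count)


if __name__ == "__main__":
    path = sriov_sysfs_path("enp4s0f0")
    value = request_vfs("enp4s0f0", 8)
    # On a real host you would run the equivalent write as root:
    print(f"echo {value} > {path}")
```

Writing `0` first, then the desired count, is the usual sequence when changing an already-nonzero VF count.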
- Two QSFP28 ports supporting 100/50/40/25/10/1GbE speeds, backward compatible with lower-speed infrastructure.
- 750ns latency, up to 200 Mpps message rate, and 197 Mpps with DPDK for kernel-bypass applications.
- Low-latency RoCE (RDMA) services over Layer 2 and Layer 3 networks for storage and compute workloads.
- ASAP2 hardware offload of the Open vSwitch (OvS) and vRouter data plane, preserving control-plane flexibility while achieving wire-speed performance.
- Hardware-accelerated NVMe-oF target offloads enabling efficient NVMe storage access with near-zero CPU intervention.
- SR-IOV with up to 512 Virtual Functions (VFs) and 8 Physical Functions per port, with guaranteed QoS and VM isolation.
- Hardware encapsulation and de-encapsulation for VXLAN, NVGRE, GENEVE, MPLS, and NSH tunnels.
- Flexible parser and match-action tables enabling hardware offloads for current and future protocols.
- Host management: NC-SI over MCTP, BMC interface, PLDM for monitoring and firmware update, PXE and UEFI remote boot.
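The quoted message rates can be put in context with simple line-rate arithmetic. The 84-byte minimum wire footprint (64 B frame + 8 B preamble + 12 B inter-frame gap) is standard Ethernet, not a figure from this page:

```python
# Line-rate packet math for a 100GbE port at minimum frame size.
# 64 B frame + 8 B preamble + 12 B inter-frame gap = 84 B on the wire.
LINE_RATE_BPS = 100e9
WIRE_BYTES_MIN = 64 + 8 + 12  # 84 bytes per minimum-size frame

pps_per_port = LINE_RATE_BPS / (WIRE_BYTES_MIN * 8)
print(f"Max 64B packets/s per 100GbE port: {pps_per_port / 1e6:.2f} Mpps")
print(f"Two ports at line rate:            {2 * pps_per_port / 1e6:.2f} Mpps")
```

One port at 64-byte line rate is about 148.8 Mpps, so the adapter's 200 Mpps figure is its packet-processing ceiling across both ports, not the theoretical dual-port wire maximum of roughly 297.6 Mpps.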
The ConnectX-5 EN ASIC delivers record-setting performance with advanced acceleration engines. Key technological innovations include:
- PeerDirect (GPUDirect) – Eliminates unnecessary PCIe data copies between GPU and CPU, accelerating HPC, AI, and machine learning workloads.
- Adaptive Routing on Reliable Transport – Enables out-of-order RDMA and adaptive routing for optimized fabric utilization.
- Tag Matching and Rendezvous Offloads – Hardware offload of MPI tag matching and rendezvous protocol, reducing CPU overhead in HPC clusters.
- Burst Buffer Offloads – Hardware acceleration for background checkpointing in large-scale simulations and ML training.
- Embedded PCIe Switch – Supports up to 8 bifurcations, enabling host chaining and elimination of backend switches in storage racks.
- On-Demand Paging (ODP) – Registration-free RDMA memory access, simplifying application development.
- Extended Reliable Connected (XRC) and Dynamically Connected Transport (DCT) – Scales RDMA to tens of thousands of nodes.
- T10-DIF Signature Handover – Hardware-based data integrity protection for storage workloads at wire speed.
- High-density virtualization, overlay networks, and vSwitch offloads reduce CPU utilization while maintaining wire-speed performance.
- NVMe-oF target offloads, T10-DIF, and RoCE enable high-performance block storage with sub-microsecond latency.
- PeerDirect (GPUDirect), adaptive routing, and burst buffer offloads accelerate distributed training workloads.
- ASAP2 vSwitch offloads, service chaining, and hardware hairpin capability enable efficient Network Function Virtualization.
- Ultra-low 750ns latency and a 200 Mpps message rate meet the most demanding financial trading applications.
- The embedded PCIe switch lets servers interconnect without top-of-rack switches, reducing TCO.
The MCX516A-CCAT is compatible with a wide range of operating systems: RHEL/CentOS, Ubuntu, Windows Server, FreeBSD, VMware ESXi, and Citrix XenServer. It supports standard 100GbE QSFP28 optics, passive DAC cables, active optical cables (AOC), and breakout cables (100GbE to 4x25GbE or 2x50GbE). The adapter integrates seamlessly with NVIDIA Spectrum switches and any standards-based 25GbE/40GbE/50GbE/100GbE infrastructure. Software support includes OFED (OpenFabrics Enterprise Distribution), DPDK, and WinOF-2 for Windows.
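Before installing drivers, it is worth confirming the card enumerates on the PCIe bus at all. The sketch below filters `lspci -nn` output by Mellanox's registered PCI vendor ID `15b3`; the sample device ID `1017` shown for ConnectX-5 is an assumption to verify against your own system's output:

```python
# Hedged sketch: confirm a Mellanox adapter is visible on the PCIe bus
# by filtering `lspci -nn` output on vendor ID 15b3 (Mellanox).
import re
import subprocess

MELLANOX_VENDOR = "15b3"


def find_mellanox_devices(lspci_output: str) -> list[str]:
    """Return the lspci -nn lines belonging to Mellanox PCI devices."""
    pattern = re.compile(rf"\[{MELLANOX_VENDOR}:[0-9a-f]{{4}}\]")
    return [line for line in lspci_output.splitlines() if pattern.search(line)]


if __name__ == "__main__":
    try:
        out = subprocess.run(["lspci", "-nn"], capture_output=True,
                             text=True, check=True).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        # Sample line for illustration when lspci is unavailable;
        # the [15b3:1017] device ID is an assumption for ConnectX-5.
        out = ("04:00.0 Ethernet controller [0200]: Mellanox Technologies "
               "MT27800 Family [ConnectX-5] [15b3:1017]")
    for line in find_mellanox_devices(out):
        print(line)
```

If no line appears, reseat the card and check BIOS slot settings before debugging drivers.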
| Category | Specification |
|---|---|
| Model | MCX516A-CCAT |
| Form Factor | Low-profile PCIe add-in card. Ships with tall bracket mounted, short bracket included. |
| Ports | 2x QSFP28 (100/50/40/25/10/1GbE) |
| Supported Speeds | 100GbE, 50GbE, 40GbE, 25GbE, 10GbE, 1GbE |
| Host Interface | PCIe 3.0 x16 (compatible with x8, x4, x2, x1; auto-negotiated) |
| Message Rate | Up to 200 million messages per second (Mpps); 197 Mpps with DPDK |
| Latency | 750ns (typical cut-through) |
| Virtualization | SR-IOV: up to 512 Virtual Functions, 8 Physical Functions per port |
| RoCE Support | Yes – RDMA over Converged Ethernet (RoCE) |
| Overlay Offloads | VXLAN, NVGRE, GENEVE, MPLS, NSH hardware encapsulation/de-encapsulation |
| vSwitch/vRouter Offloads | ASAP2 – Open vSwitch (OvS) and vRouter data plane offload with flexible match-action tables |
| Storage Offloads | NVMe-oF target offloads, T10-DIF Signature Handover, SRP, iSER, NFS RDMA, SMB Direct |
| Enhanced Features | Tag matching, rendezvous offload, adaptive routing, burst buffer offload, embedded PCIe switch, ODP, XRC, DCT |
| CPU Offloads | TCP/UDP stateless offloads, LSO/LRO, checksum offload, RSS/TSS, HDS, VLAN/MPLS tag insertion/stripping |
| Management Interfaces | NC-SI over MCTP (SMBus/PCIe), BMC interface, PLDM (monitoring and firmware update), SDN eSwitch management, SPI, JTAG |
| Remote Boot | PXE, UEFI, iSCSI remote boot |
| Power Consumption | Not publicly specified – please confirm before ordering |
| Operating Temperature | 0°C to 55°C (typical) |
| Standards | IEEE 802.3bj/3bm (100GbE), 802.3by (25/50GbE), 802.3ba (40GbE), 802.3ae (10GbE), 802.1Qbb PFC, 802.1Qaz ETS, 802.1Qau QCN, 1588v2, PCIe Gen 3.0 |
| RoHS | Compliant |
| OPN (Ordering Part Number) | Ports | Max Speed | Interface | Host Interface | Key Feature |
|---|---|---|---|---|---|
| MCX516A-CCAT | 2 | 100GbE | QSFP28 | PCIe 3.0 x16 | Dual-port 100GbE, enhanced host management |
| MCX516A-CDAT | 2 | 100GbE | QSFP28 | PCIe 4.0 x16 | ConnectX-5 Ex enhanced performance, PCIe Gen 4.0 |
| MCX512A-ACAT | 2 | 25GbE | SFP28 | PCIe 3.0 x8 | Dual-port 25GbE, UEFI enabled |
| MCX516A-GCAT | 2 | 50GbE | QSFP28 | PCIe 3.0 x16 | Dual-port 50GbE, enhanced host management |
| MCX516B-CCAT | 2 | 100GbE | QSFP28 | PCIe 3.0 x16 | Dual-port 100GbE variant |
Future-proof your data center with 100GbE connectivity while maintaining backward compatibility to 50/40/25/10/1GbE.
200 Mpps enables the highest packet processing density for telco NFV, vSwitch, and high-frequency trading.
NVMe-oF, T10-DIF, ASAP2, and RoCE offloads dramatically reduce CPU utilization and improve application performance.
Hong Kong Starsurge offers competitive pricing, warranty support, and fast worldwide delivery.
Hong Kong Starsurge provides end-to-end support for NVIDIA/Mellanox adapters, including compatibility verification, firmware updates, and technical troubleshooting. Standard warranty aligns with NVIDIA's limited hardware warranty (1 year return-and-repair). Extended support options are available upon request. Our team can assist with driver installation, performance tuning, RoCE configuration, and integration into existing server, storage, and network environments.
| Category | Supported Options |
|---|---|
| Operating Systems | RHEL/CentOS 7/8/9, Ubuntu 18.04+, Windows Server 2016/2019/2022, FreeBSD 12+, VMware ESXi 6.7/7.0/8.0, Citrix XenServer |
| Switches | NVIDIA Spectrum SN3000/SN3700 series, Cisco Nexus 3000/9000, Arista 7000 series, Juniper QFX series, any standards-based 25/40/50/100GbE switch |
| Cables and Optics (100GbE) | QSFP28 passive DAC (up to 5m), QSFP28 AOC, 100GBASE-SR4 (MPO, 100m), 100GBASE-LR4 (LC, 10km), 100GBASE-ER4 (LC, 40km) |
| Cables and Optics (Lower Speeds) | QSFP28 to SFP28 breakout cables (100G to 4x25G), QSFP+ (40G), SFP28 (25G), SFP+ (10G) with appropriate adapters |
| Management Protocols | NC-SI, MCTP over PCIe/SMBus, PLDM for monitoring and firmware update, SDN eSwitch management |
- Confirm the server has an available PCIe 3.0 (or newer) x16 slot. Note that PCIe 3.0 x16 tops out near 126 Gb/s usable, enough to drive one 100GbE port at line rate but not both simultaneously; the PCIe 4.0 MCX516A-CDAT variant removes this bottleneck.
- Determine required cable type: passive DAC (short distance), active optical (medium distance), or optical transceivers (long distance) for 100GbE operation.
- Verify operating system driver availability from NVIDIA/Mellanox official site (latest OFED or inbox drivers).
- Ensure your switch supports 100GbE QSFP28 ports (most modern spine switches do).
- For RoCE deployments, confirm switch support for DCB (PFC, ETS, ECN) and congestion notification.
- For NVMe-oF target offloads, verify your storage software stack compatibility.
- For BMC integration, verify your motherboard supports NC-SI over SMBus or PCIe.
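The PCIe headroom question in the first checklist item can be settled with a few lines of arithmetic. The per-lane rate and 128b/130b encoding figures come from the PCIe 3.0 specification, not from this page:

```python
# Usable PCIe 3.0 x16 bandwidth vs. dual-port 100GbE demand.
# Per-lane raw rate and 128b/130b encoding are from the PCIe 3.0 spec.
GT_PER_LANE = 8e9      # 8 GT/s raw per lane
ENCODING = 128 / 130   # 128b/130b line encoding overhead
LANES = 16

usable_gbps = GT_PER_LANE * ENCODING * LANES / 1e9
print(f"PCIe 3.0 x16 usable:     ~{usable_gbps:.0f} Gb/s")
print("Dual-port 100GbE demand:  200 Gb/s")
```

Roughly 126 Gb/s usable against a 200 Gb/s dual-port demand is why the PCIe 4.0 MCX516A-CDAT variant exists for workloads that must saturate both ports at once.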
- ConnectX-5 Ex dual-port 100GbE adapter with PCIe 4.0 x16 for enhanced performance.
- 32x 200GbE spine switch for high-density 100GbE/200GbE aggregation.
- Passive copper direct-attach cables for 100GbE connections up to 5 meters.
- 48x 25GbE + 12x 100GbE top-of-rack switch for leaf/spine fabrics.
- RoCE Deployment Guide for ConnectX-5 Series
- ASAP2 Open vSwitch Offload Configuration Guide
- NVMe over Fabric with ConnectX-5 Best Practices
- SR-IOV Configuration on VMware ESXi with Mellanox Adapters
- 100GbE Migration: Planning and Implementation
Hong Kong Starsurge Group Co., Limited has been a technology-driven provider of network hardware, IT services, and system integration since 2008, serving government, healthcare, manufacturing, finance, education, and enterprise clients worldwide. We deliver switches, NICs, wireless solutions, IoT systems, and custom software with multilingual support and global delivery. With a customer-first approach, Starsurge ensures reliable quality, responsive service, and tailored network infrastructure solutions.