AOC-CIBF-m1
Compact and Powerful InfiniBand FDR Adapter
The AOC-CIBF-m1 is the most compact, yet powerful, InfiniBand adapter on the market. Based on the Mellanox® ConnectX-3 with Virtual Protocol Interconnect (VPI), it provides the highest-performing and most flexible interconnect solution for servers used in Enterprise Data Centers and High-Performance Computing. The AOC-CIBF-m1 simplifies system development by serving both InfiniBand and Ethernet fabrics in one hardware design. It is built in a compact MicroLP form factor to fit Supermicro Twin server systems.
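Because VPI lets the same controller run either link protocol, the single QSFP port can be set to InfiniBand or Ethernet in software. The snippet below is a minimal sketch, assuming a Linux host with the mlx4 driver and its per-port sysfs node; the PCI address is a placeholder and the exact interface may differ by driver and firmware version.

    # Hedged sketch: select the link protocol of port 1 on a ConnectX-3 VPI adapter
    # through the mlx4 driver's sysfs node (assumption; requires root privileges).
    from pathlib import Path

    PCI_ADDR = "0000:81:00.0"   # placeholder PCI address of the adapter
    port_node = Path(f"/sys/bus/pci/devices/{PCI_ADDR}/mlx4_port1")

    def set_port_mode(mode: str) -> None:
        """Write 'ib', 'eth', or 'auto' to choose the port's link protocol."""
        if mode not in ("ib", "eth", "auto"):
            raise ValueError(f"unsupported mode: {mode}")
        port_node.write_text(mode + "\n")

    if __name__ == "__main__":
        print("current mode:", port_node.read_text().strip())
        set_port_mode("eth")    # run the single QSFP port as 40GbE instead of FDR InfiniBand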
Key Features
• Single QSFP Connector
• MicroLP Form Factor
• PCI Express 3.0 (up to 8GT/s)
• Virtual Protocol Interconnect (VPI)
• 1µs MPI ping latency (see the measurement sketch after this list)
• Up to 56Gbps InfiniBand or 40Gbps Ethernet
• CPU offload of transport operations
• Application offload
• GPU communication acceleration
• End-to-end QoS and congestion control
• Hardware-based I/O virtualization
• Ethernet encapsulation (EoIB)
• RoHS compliant 6/6
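The 1µs MPI ping latency above refers to half the round-trip time of a small-message ping-pong between two ranks on the fabric. A minimal measurement sketch follows, assuming Python with mpi4py and a launcher such as mpirun -np 2 across two hosts; it is an illustration, not the vendor's benchmark.

    # Hedged sketch: small-message MPI ping-pong latency with mpi4py (assumption).
    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    buf = bytearray(8)          # 8-byte message, so latency dominates over bandwidth
    iters = 10000

    comm.Barrier()
    start = time.perf_counter()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        else:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    elapsed = time.perf_counter() - start

    if rank == 0:
        # One iteration is two messages, so one-way latency = elapsed / (2 * iters).
        print(f"one-way latency: {elapsed / (2 * iters) * 1e6:.2f} us")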
Specifications
General
-- Mellanox® ConnectX-3 FDR controller
-- Compact-size MicroLP form factor
-- Single QSFP port and dual USB 2.0 ports
-- PCI-E 3.0 x8 (8GT/s) interface
Connectivity
-- Interoperable with InfiniBand or 10/40GbE switches
-- Passive copper cable with ESD protection
-- Powered connectors for optical and active cable support
InfiniBand
-- IBTA Specification 1.2.1 compliant
-- Hardware-based congestion control
-- 16 million I/O channels
-- 256 to 4Kbyte MTU, 1Gbyte messages
Enhanced InfiniBand
-- Hardware-based reliable transport
-- Collective operations offloads
-- GPU communication acceleration
-- Hardware-based reliable multicast
-- Extended Reliable Connected transport
-- Enhanced Atomic operations
Ethernet
-- IEEE Std 802.3ae 10 Gigabit Ethernet
-- IEEE Std 802.3ba 40 Gigabit Ethernet
-- IEEE Std 802.3ad Link Aggregation and Failover
-- IEEE Std 802.3az Energy Efficient Ethernet
-- IEEE Std 802.1Q, .1p VLAN tags and priority
-- IEEE Std 802.1Qau Congestion Notification
-- IEEE P802.1Qaz D0.2 ETS
-- IEEE P802.1Qbb D1.0 Priority-based Flow Control
-- Jumbo frame support (9.6KB)
Hardware-based I/O Virtualization
-- Single Root IOV (SR-IOV; see the sketch after this list)
-- Address translation and protection
-- Dedicated adapter resources
-- Multiple queues per virtual machine
-- Enhanced QoS for vNICs
-- VMware NetQueue support
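Single Root I/O Virtualization presents the adapter as multiple virtual functions that can be assigned directly to virtual machines. The sketch below assumes the generic Linux PCI sysfs interface (sriov_totalvfs / sriov_numvfs) and a placeholder netdev name; SR-IOV must also be enabled in the adapter firmware and system BIOS.

    # Hedged sketch: create SR-IOV virtual functions via the standard Linux
    # sysfs attributes (assumption; names and counts are placeholders, root required).
    from pathlib import Path

    IFACE = "ib0"                                   # placeholder netdev backed by the adapter
    dev = Path(f"/sys/class/net/{IFACE}/device")

    def enable_vfs(num_vfs: int) -> None:
        """Create num_vfs virtual functions (0 disables them)."""
        total = int((dev / "sriov_totalvfs").read_text())
        if num_vfs > total:
            raise ValueError(f"device supports at most {total} VFs")
        (dev / "sriov_numvfs").write_text("0\n")    # reset before changing the VF count
        (dev / "sriov_numvfs").write_text(f"{num_vfs}\n")

    if __name__ == "__main__":
        enable_vfs(4)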
Additional CPU Offloads
-- RDMA over Converged Ethernet
-- TCP/UDP/IP stateless offload
-- Intelligent interrupt coalescence
Flexboot™ Technology
-- Remote boot over InfiniBand
-- Remote boot over Ethernet
-- Remote boot over iSCSI
Protocol Support
-- Open MPI, OSU MVAPICH, Intel MPI, MS MPI, Platform MPI
-- TCP/UDP, EoIB, IPoIB, SDP, RDS
-- SRP, iSER, NFS RDMA
-- uDAPL
Operating Systems/Distributions
-- Novell SLES, Red Hat Enterprise Linux (RHEL), and other Linux distributions
-- Microsoft Windows Server 2008/CCS 2003, HPC Server 2008
-- OpenFabrics Enterprise Distribution (OFED)
-- OpenFabrics Windows Distribution (WinOF)
-- VMware ESX Server 3.5, vSphere 4.0/4.1
Physical Dimensions
-- Card PCB dimensions (without end brackets): 12.32cm (4.85in) x 3.90cm (1.54in) (L x W)
Operating Condition
-- Operating Temperature: 0°C to 55°C
Optional Accessories
-- CBL-0490L: 39.4" (100cm) QSFP to QSFP InfiniBand FDR 56Gb/s passive copper cable
-- CBL-0496L: 118.1" (300cm) QSFP to QSFP InfiniBand FDR 56Gb/s passive copper cable
Compliance/Environmental
• RoHS Compliant 6/6, Pb Free
Supported Platforms
-- Supermicro Twin Server Systems with MicroLP expansion slot
Please note that this product is only available as an integrated solution with Supermicro server systems.
For the most current product information, visit:
www.supermicro.com