AOC-UIBQ-m2
Dual-port InfiniBand QDR UIO Adapter Card with PCI-E 2.0 and Virtual Protocol Interconnect™ (VPI)
The AOC-UIBQ-m2 InfiniBand card with Virtual Protocol Interconnect™ (VPI) provides the highest-performing and most flexible interconnect solution for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. The AOC-UIBQ-m2 simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and by providing enhanced performance in virtualized server environments.
Key Features
• Virtual Protocol Interconnect™ (VPI)
• 1µs MPI ping latency
• Selectable 40Gb/s InfiniBand or 10GbE per port (see the port-query sketch after this list)
• PCI Express 2.0 (up to 5GT/s)
• CPU offload of transport operations
• End-to-end QoS and congestion control
• Hardware-based I/O virtualization
• TCP/UDP/IP stateless offload
• Fibre Channel encapsulation (FCoIB or FCoE)
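Because each port can operate as either InfiniBand or 10Gb Ethernet under VPI, host software typically checks the active link layer and speed at run time. The following is a minimal sketch using the standard libibverbs API shipped with the OFED stack listed under Operating Systems below; it is illustrative only, and device names and port numbering vary by system.

```c
/* port_query.c - minimal sketch: list RDMA devices and report, for each port,
 * whether it is currently running as InfiniBand or Ethernet, plus its state,
 * active width, and active speed. Requires libibverbs (link with -libverbs).
 */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);

    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        struct ibv_device_attr dev_attr;

        if (!ctx)
            continue;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* Ports are numbered starting at 1 in the verbs API. */
            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr pattr;

                if (ibv_query_port(ctx, port, &pattr) != 0)
                    continue;
                printf("%s port %u: link_layer=%s state=%d width=%u speed=%u\n",
                       ibv_get_device_name(devs[i]), port,
                       pattr.link_layer == IBV_LINK_LAYER_ETHERNET ?
                           "Ethernet" : "InfiniBand",
                       pattr.state, pattr.active_width, pattr.active_speed);
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```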
Specifications
• InfiniBand:
- Mellanox® ConnectX-2 IB QDR
- Dual InfiniBand QSFP ports
- 40Gb/s per port
- RDMA, Send/Receive semantics
- Hardware-based congestion control
- Atomic operations
• Interface:
- PCI Express 2.0 x8 (5GT/s); see the bandwidth note following these specifications
- UIO low-profile, half-length form factor
• Connectivity:
- Interoperable with InfiniBand or 10GbE switches
- QSFP connector
- 7m+ (40Gb/s) of passive copper cable
- External optical media adapter and active cable support
• Hardware-based I/O Virtualization:
- Single Root IOV
- Address translation and protection
- Multiple queues per virtual machine
- VMware NetQueue support
- PCI-SIG IOV compliant
• CPU Offloads:
- TCP/UDP/IP stateless offload
- Intelligent interrupt coalescence
- Compliant with Microsoft RSS and NetDMA
• Storage Support:
- T10-compliant Data Integrity Field support
- Fibre Channel over InfiniBand or Ethernet
• Operating Systems/Distributions:
- Novell SLES, Red Hat, Fedora, and others
- Microsoft® Windows Server 2003/2008/CCS 2003
- OpenFabrics Enterprise Distribution (OFED)
- OpenFabrics Windows Distribution (WinOF)
- VMware ESX Server 3.5, Citrix XenServer 4.1
• Operating Conditions:
- Operating temperature: 0°C to 55°C
• Physical Dimensions:
- Card PCB dimensions: 14.29cm (5.63in) x 6.35cm (2.50in) (L x W)
- Height of end brackets: standard 12cm (4.725in), low-profile 7.94cm (3.13in)
• Compliance/Environmental:
- RoHS Compliant 6/6, Pb Free
• Optional Accessories:
- CBL-0417L: 39.37” (100cm) QSFP to QSFP InfiniBand QDR PBF
- CBL-0325L: 78.74” (200cm) QSFP to QSFP InfiniBand QDR
- CBL-0446L: 118.11” (300cm) QSFP to QSFP InfiniBand QDR
- CBL-0422L: 196.85” (500cm) QSFP to QSFP InfiniBand QDR
- CBL-0467L: 7m QSFP to QSFP InfiniBand QDR passive copper cable
- CBL-0468L: 15m QSFP to QSFP InfiniBand QDR fiber active optical cable
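A quick back-of-the-envelope check on the host interface (these figures are derived, not quoted from this datasheet): PCI Express 2.0 x8 runs 8 lanes at 5GT/s with 8b/10b encoding, giving roughly 8 × 5Gb/s × 0.8 = 32Gb/s (about 4GB/s) of payload bandwidth per direction. An InfiniBand QDR port signals at 40Gb/s but, with the same 8b/10b encoding, carries about 32Gb/s of data, so a single active QDR port is well matched to the PCIe 2.0 x8 interface, while driving both ports at full rate simultaneously is bounded by the host interface rather than the links.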
Supported Platforms
- Supported Motherboards: All Supermicro UIO Motherboards
- Supported Servers: All Supermicro UIO Servers
For the most current product information, visit:
www.supermicro.com