AOC-UIBQ-m1
Single-port InfiniBand QDR UIO Adapter Card with PCI-E 2.0 and
Virtual Protocol Interconnect™ (VPI)
The AOC-UIBQ-m1 InfiniBand card with Virtual Protocol Interconnect (VPI) provides a high-performing, flexible interconnect solution for performance-driven server and storage clustering applications in Enterprise Data Center, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, reducing processing time and cost per operation. The AOC-UIBQ-m1 simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and provides enhanced performance in virtualized server environments.
Key Features
⢠Virtual Protocol Interconnect⢠(VPI)
⢠1µs MPI ping latency
⢠Selectable 40Gb/s InfiniBand or 10GbE port
⢠PCI Express 2.0 (up to 5GT/s)
⢠CPU offload of transport operations
⢠End-to-end QoS and congestion control
⢠Hardware-based I/O virtualization
⢠TCP/UDP/IP stateless offload
⢠Fibre Channel encapsulation (FCoIB or FCoE)
Specifications
⢠InfiniBand:
ââ Mellanox® ConnectX®-2 IB QDR (MT25408B0-FCC-QIS)
ââ Single InfiniBand QSFP port
ââ 40Gb/s per port
ââ RDMA, Send/Receive semantics
ââ Hardware-based congestion control
ââ Atomic operations
⢠Interface:
ââ PCI Express 2.0 x8 (5GT/s)
ââ UIO low-profile half-length form factor
⢠Connectivity:
ââ Interoperable with InfiniBand or 10GbE switches
ââ QSFP connector
ââ 7m (40Gb/s) maximum copper cable length
ââ External optical media adapter and active cable support
⢠Hardware-based I/O Virtualization:
ââ Single Root I/O
ââ Address translation and protection
ââ Multiple queues per virtual machine
ââ VMware NetQueue support
ââ PCISIG IOV compliant
⢠CPU Offloads:
ââ TCP/UDP/IP stateless offload
ââ Intelligent interrupt coalescence
ââ Microsoft RSS and NetDMA compliant
• Compliance/Environmental:
– RoHS compliant 6/6, Pb-free
⢠Storage Support
ââ T10-compliant Data Integrity Field support
ââ Fibre Channel over InfiniBand or Ethernet (FCoIB or FCoE)
⢠Operating Systems/Distributions:
ââ Novell SLES, RedHat, Fedora and others
ââ Microsoft® Windows Server 2003/2008/CCS 2003
ââ OpenFabrics Enterprise Distribution (OFED)
ââ OpenFabrics Windows Distribution (WinOF)
ââ VMware ESX Server 3.5, Citrix XenServer 4.1
⢠Operating Conditions:
ââ Operating temperature: 0°C to 55°C
⢠Physical Dimensions:
ââ Card PCB dimensions:
14.29cm (5.63â) x 6.35cm (2.50â) (LxW)
ââ Height of end brackets:
standard â 12cm (4.725in), low-profile â 7.94cm (3.13in)
⢠Optional Accessories:
ââ CBL-0417L: 39.37â (100cm) QSFP to QSFP InfiniBand QDR PBF
ââ CBL-0325L: 78.74â (200cm) QSFP to QSFP InfiniBand QDR
ââ CBL-0446L: 118.11â (300cm) QSFP to QSFP InfiniBand QDR
ââ CBL-0422L: 196.85â (500cm) QSFP to QSFP InfiniBand QDR
ââ CBL-0467L: 7M QSFP to QSFP InfiniBand QDR Passive Copper Cable
ââ CBL-0468L: 15M QSFP to QSFP InfiniBand QDR Fiber Active Optical
Cable
Supported Platforms
– Supported Motherboards: All Supermicro UIO Motherboards
– Supported Servers: All Supermicro UIO Servers
For the most current product information, visit:
www.supermicro.com