AOC-UIBQ-m1
Single-port InfiniBand QDR UIO Adapter Card with PCI-E 2.0 and Virtual Protocol Interconnect™ (VPI)
The AOC-UIBQ-m1 InfiniBand card with Virtual Protocol Interconnect™ (VPI) provides the highest-performing and most flexible interconnect solution for performance-driven server and storage clustering applications in Enterprise Data Center, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced processing time and lower cost per operation. The AOC-UIBQ-m1 simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and by providing enhanced performance in virtualized server environments.
Key Features
• Virtual Protocol Interconnect™ (VPI)
• 1µs MPI ping latency
• Selectable 40Gb/s InfiniBand or 10GbE port
• PCI Express 2.0 (up to 5GT/s)
• CPU offload of transport operations
• End-to-end QoS and congestion control
• Hardware-based I/O virtualization
• TCP/UDP/IP stateless offload
• Fibre Channel encapsulation (FCoIB or FCoE)
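
On Linux, the RDMA and offload features above are exposed to applications through the OFED verbs library (libibverbs, part of the OpenFabrics Enterprise Distribution listed under Operating Systems/Distributions below). The following is a minimal sketch, assuming an OFED installation with libibverbs, that enumerates RDMA devices and queries the state and active width/speed of port 1; it can be used to verify that the single QSFP port is up after the card is installed.

/* port_query.c (example filename): list RDMA devices and report port 1 status. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);   /* all RDMA-capable devices */
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;
        struct ibv_port_attr attr;
        if (ibv_query_port(ctx, 1, &attr) == 0)              /* single-port card: port 1 */
            printf("%s: port 1 %s, active_width=%u, active_speed=%u\n",
                   ibv_get_device_name(devs[i]),
                   ibv_port_state_str(attr.state),
                   (unsigned)attr.active_width, (unsigned)attr.active_speed);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}

Compile with gcc and link against libibverbs (for example: gcc port_query.c -libverbs). The same information is also reported by the OFED ibstat and ibv_devinfo command-line utilities.
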
Specifications
• InfiniBand:
–– Mellanox® ConnectX®-2 IB QDR (MT25408B0-FCC-QIS)
–– Single InfiniBand QSFP port
–– 40Gb/s per port
–– RDMA, Send/Receive semantics
–– Hardware-based congestion control
–– Atomic operations
• Interface:
–– PCI Express 2.0 x8 (5GT/s)
–– UIO low-profile half-length form factor
• Connectivity:
–– Interoperable with InfiniBand or 10GbE switches
–– QSFP connector
–– 7m (40Gb/s) maximum copper cable length
–– External optical media adapter and active cable support
• Hardware-based I/O Virtualization:
–– Single Root I/O Virtualization (SR-IOV)
–– Address translation and protection
–– Multiple queues per virtual machine
–– VMware NetQueue support
–– PCI-SIG IOV compliant
• CPU Offloads:
–– TCP/UDP/IP stateless offload
–– Intelligent interrupt coalescence
–– Microsoft RSS and NetDMA compliant
• Compliance/Environmental:
–– RoHS Compliant 6/6, Pb-free
• Storage Support:
–– T10-compliant Data Integrity Field support
–– Fibre Channel over InfiniBand or Ethernet (FCoIB or FCoE)
• Operating Systems/Distributions:
–– Novell SLES, Red Hat, Fedora, and others
–– Microsoft® Windows Server 2003/2008/CCS 2003
–– OpenFabrics Enterprise Distribution (OFED)
–– OpenFabrics Windows Distribution (WinOF)
–– VMware ESX Server 3.5, Citrix XenServer 4.1
• Operating Conditions:
–– Operating temperature: 0°C to 55°C
• Physical Dimensions:
–– Card PCB dimensions: 14.29cm (5.63”) x 6.35cm (2.50”) (L x W)
–– Height of end brackets: standard – 12cm (4.725in), low-profile – 7.94cm (3.13in)
• Optional Accessories:
–– CBL-0417L: 39.37” (100cm) QSFP to QSFP InfiniBand QDR PBF
–– CBL-0325L: 78.74” (200cm) QSFP to QSFP InfiniBand QDR
–– CBL-0446L: 118.11” (300cm) QSFP to QSFP InfiniBand QDR
–– CBL-0422L: 196.85” (500cm) QSFP to QSFP InfiniBand QDR
–– CBL-0467L: 7m QSFP to QSFP InfiniBand QDR Passive Copper Cable
–– CBL-0468L: 15m QSFP to QSFP InfiniBand QDR Fiber Active Optical Cable
Supported Platforms
–– Supported Motherboards: All Supermicro UIO Motherboards
–– Supported Servers: All Supermicro UIO Servers
For the most current product information, visit:
www.supermicro.com