Solution Overview

Cisco HPC Network Solutions for Microsoft Windows Compute Cluster Server 2003

Cisco® InfiniBand and Ethernet Network Fabric Solutions for Microsoft Windows Compute Cluster Server (CCS) 2003
CHALLENGE
High-performance computing (HPC) clusters of industry-standard servers have emerged in recent years as the preferred method for implementing a supercomputer for computationally intensive tasks. Enterprises, research institutions, and governments use HPC clusters for purposes ranging from financial risk analysis to computational fluid dynamics, weather and climate modeling, and analysis of underground oil and gas reservoirs. Because the applications and uses of HPC clusters vary widely, so do the requirements of the underlying network. Customers therefore face a number of questions, among them:

• What network interconnect offers the best performance or best value for my Microsoft Windows CCS application?
• What network solutions, products, and topologies are validated for Microsoft Windows CCS?
SOLUTION
Cisco Systems® uniquely manufactures and supports multiple interconnects for HPC clusters, giving enterprises a superior level of flexibility in price, latency, and performance. A Gigabit Ethernet interconnect using a Cisco Catalyst® switch is well suited to loosely coupled, highly parallelized, and parametric applications (Figure 1). Tightly coupled applications with high inter-node traffic rates are typically bandwidth- and latency-sensitive; these applications benefit from the low latency, high bandwidth, and native Remote Direct Memory Access (RDMA) capabilities of InfiniBand using the Cisco SFS 7000 Series InfiniBand Server Switches. Customers can therefore be assured of an appropriate interconnect solution for their HPC cluster application requirements.
Figure 1. Example HPC Cluster Based on Microsoft CCS and End-to-End Cisco Ethernet and InfiniBand Interconnects
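A common first step in answering the interconnect question is to measure fabric latency and bandwidth directly with an MPI ping-pong test between two compute nodes. The C sketch below is illustrative only and is not part of the original solution overview; the message-size sweep and iteration count are arbitrary assumptions. The half round-trip time of small messages approximates fabric latency, and large-message throughput approximates usable bandwidth.

/*
 * Minimal MPI ping-pong sketch for comparing interconnect latency and
 * bandwidth between two cluster nodes. Illustrative only; message sizes
 * and iteration counts are arbitrary assumptions, not vendor guidance.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int iters = 1000;
    const int max_bytes = 1 << 20;   /* sweep 1 byte .. 1 MB */
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = malloc(max_bytes);

    for (int bytes = 1; bytes <= max_bytes; bytes *= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                /* rank 0 sends, then waits for the echo */
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                /* rank 1 echoes each message back */
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t = MPI_Wtime() - t0;
        if (rank == 0) {
            double half_rtt_us = t / iters / 2.0 * 1e6;
            double mbytes_per_s = (double)bytes * iters * 2 / t / 1e6;
            printf("%8d bytes  %10.2f us  %10.2f MB/s\n",
                   bytes, half_rtt_us, mbytes_per_s);
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

On a Windows CCS 2003 cluster, such a binary would typically be built against MS-MPI and launched on two nodes through the CCS job scheduler with mpiexec; over InfiniBand, MS-MPI can use the Winsock Direct path exposed by the host channel adapter driver rather than TCP, which is what makes the RDMA latency advantage visible in the small-message results.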

Content Summary

Page 1 - Solution Overview
Page 2 - NETWORK INTER…
Page 3 - Topology Example 1: Dedicated Private Network; Public Network to Head Node Only. Figure 2 shows a sample topology. This is a simple, high-performance…
Page 4 - Topology 3: Dedicated MPI Network; Public Network Connection to Head Node. A dedicated MPI network offers the best performance of any network scenario…
Page 5 - Topology 4: Dedicated MPI Network with All Nodes Connected to Public Network. Topology 4 also uses a dedicated MPI network, so offers similar performa…
Page 6 - Topology 5: Single Public Network Connection to All Nodes. Topology 5 (Figure 8) is the simplest of all supported Windows CCS topologies, with all node…
Page 7 - Figure 9. Example Large Ethernet HPC Cluster Network Design Using Cisco Catalyst Switches. Figure 10 shows a larger multi-rack InfiniBand MPI netw…
Page 8 - Choosing a To…
Page 9 - Printed in USA. C11-354223-00 06/06. All contents are Copyright © 1992–2006 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
