HP Serviceguard Extended Distance Cluster for Linux A.01.00 Deployment Guide
Manufacturing Part Number: T2808-90006
May 2008, Second Edition
Legal Notices
© Copyright 2006-2008 Hewlett-Packard Development Company, L.P. Publication Date: 2008. Valid license from HP required for possession, use, or copying.
Contents
1. Disaster Tolerance and Recovery in a Serviceguard Cluster
   Evaluating the Need for Disaster Tolerance
   What is a Disaster Tolerant Architecture?
   …
3. Configuring your Environment for Software RAID
   Creating a Multiple Disk Device
   To Create and Assemble an MD Device
   Creating Volume Groups and Configuring VG Exclusive Activation on the MD Mirror
   …
Printing History
The printing date and part number indicate the current edition. The printing date changes when a new edition is printed. (Minor corrections and updates which are incorporated at reprint do not cause the date to change.) The part number changes when extensive technical changes are incorporated.
HP Printing Division: Business Critical Computing Business Unit, Hewlett-Packard Co., 19111 Pruneridge Ave., Cupertino, CA 95014
Preface
This guide introduces the concept of Extended Distance Clusters (XDC). It describes how to configure and manage HP Serviceguard Extended Distance Clusters for Linux and the associated Software RAID functionality.
Related Publications
The following documents contain additional useful information:
• Clusters for High Availability: a Primer of HP Solutions, Second Edition
1 Disaster Tolerance and Recovery in a Serviceguard Cluster
This chapter introduces a variety of Hewlett-Packard high availability cluster technologies that provide disaster tolerance for your mission-critical applications.
Evaluating the Need for Disaster Tolerance
Disaster tolerance is the ability to restore applications and data within a reasonable period of time after a disaster.
… line inoperable as well as the computers. In this case disaster recovery would be moot, and local failover is probably the more appropriate level of protection.
What is a Disaster Tolerant Architecture?
In a Serviceguard cluster configuration, high availability is achieved by using redundant hardware to eliminate single points of failure.
… impact. For these types of installations, and many more like them, it is …
Understanding Types of Disaster Tolerant Clusters
To protect against …
… architecture are followed.
Figure 1-3 Extended Distance Cluster
In the above configuration, the network and FC links between the data centers are combined and sent over common DWDM links.
Figure 1-4 Two Data Center Setup
Figure 1-4 shows a configuration that is supported with separate network and FC links between the data centers.
Also note that the networking in the configuration shown is the minimum. Added network connections for additional heartbeats are recommended.
Cluster Extension (CLX) Cluster
A Linux CLX cluster is similar to an HP-UX metropolitan cluster and is a cluster that has alternate nodes located in different parts of a city or in nearby cities.
Figure 1-5 shows a CLX for a Linux Serviceguard cluster architecture.
Benefits of CLX
• CLX offers a more resilient solution than Extended Distance Cluster …
• Disk resynchronization is independent of CPU failure (that is, if the hosts at the primary site fail but the disk remains up, the disk knows it does not have to be resynchronized).
… "objective" can be set for the recovery point such that if data is updated for a period less than the objective, automated failover can occur and a package will start.
Figure 1-6 Continental Cluster
Continentalclusters provides the flexibility to work with any data replication mechanism.
Benefits of Continentalclusters
• You can build data centers virtually anywhere and still have the data centers provide disaster tolerance for each other.
… replicate the data between two data centers. HP provides a supported integration toolkit for Oracle 8i Standby DB in the Enterprise Cluster Management Toolkit (ECMT).
Table 1-1 Comparison of Disaster Tolerant Cluster Solutions

Attribute: Extended Distance Cluster / CLX / Continentalclusters (HP-UX only)
Key Benefit: Excellent in "normal" operations and partial failure … / … / …
Key Limitation: No ability to check the state of the data before starting up the application (if the volume group (vg) can be activated, the application will be started) / … / …
Maximum Distance: 100 kilometers / Shortest of the distances between: cluster network latency (not to exceed 200 ms), … / …
Application Failover Type: Automatic (no manual intervention required) / Automatic (no manual intervention required) / Semi-automatic (user must "push the button" to initiate recovery)
Data Replication Link: Dark fiber / Dark fiber / Continuous Access over IP
DTS Software/Licenses Required: SGLX + XDC / SGLX + CLX XP or CLX EVA / …
Disaster Tolerant Architecture Guidelines
Disaster tolerant architectures represent a shift away from the massive central data centers and towards more distributed data processing facilities.
Protecting Data through Replication
The most significant losses during a disaster are the loss of access to data, and the loss of data itself.
… depending on the volume of data. Some applications, depending on the role they play in the business, may need to have a faster recovery time, within hours or even minutes.
Figure 1-7 Physical Data Replication
MD Software RAID is an example of physical replication.
• The logical order of data writes is not always maintained in synchronous replication.
• Because there are multiple read devices (that is, the node has access to both copies of data), there may be improvements in read performance.
Figure 1-8 Logical Data Replication
Advantages of using logical replication are:
• The distance between nodes is limited only by the networking technology.
• If the primary database fails and is corrupt, which results in the replica taking over, then the process for restoring the primary database so that it can be used as the replica is complex.
Figure 1-9 Alternative Power Sources
Housing remote nodes in another building …
Disaster Tolerant Local Area Networking
Ethernet networks can also be used.
Disaster Tolerant Cluster Limitations
Disaster tolerant clusters have limitations, some of which can be mitigated by good planning.
Managing a Disaster Tolerant Environment
In addition to the changes in hardware and software to create a disaster tolerant architecture, there are also changes in the way you manage the environment.
Even if recovery is automated, you may choose to, or need to, recover from some types of disasters with manual recovery.
Additional Disaster Tolerant Solutions Information
Online versions of HA documentation are available at http://docs. …
2 Building an Extended Distance Cluster Using Serviceguard and Software RAID
Simple Serviceguard clusters are …
Types of Data Link for Storage and Networking
Fibre Channel …
Two Data Center and Quorum Service Location Architectures
…
• Fibre Channel Direct Fabric Attach (DFA) is …
Figure 2-1 Two Data Centers and Third Location with …
There are no requirements for the distance between …
Rules for Separate Network and Data Links
• There must be less than 200 milliseconds of latency in the network between the data centers.
Guidelines on DWDM Links for Network and Data
• There must be less than 200 milliseconds of latency in the network between the data centers.
• Fibre Channel switches must be used in a DWDM configuration; Fibre Channel hubs are not supported. Direct Fabric Attach mode must be used for the ports connected to the DWDM link.
3 Configuring your Environment for Software RAID
The previous chapters discussed conceptual information on disaster tolerant architectures and procedural information on creating an extended distance cluster.
Understanding Software RAID
Redundant Array of Independent Disks (RAID) is a mechanism that provides storage fault tolerance and, occasionally, better performance.
Installing the Extended Distance Cluster Software
This section discusses the supported operating systems, prerequisites, and the procedures for installing the Extended Distance Cluster software.
Complete the following procedure to install XDC:
1. Insert the product CD into the drive and mount the CD.
2. Open the command line interface.
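The remaining installation and verification commands fall on a page that is cut off in this copy. As a minimal sketch, assuming the software ships as an RPM package, and noting that the mount point and package file name below are placeholders rather than paths from the guide:

# mount /dev/cdrom /mnt/cdrom
# rpm -ivh /mnt/cdrom/xdc-A.01.00-0.rpm
# rpm -qa | grep xdc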
In the output, the product name xdc-A.01.00-0 will be listed. The presence of this package verifies that the installation was successful.
Configuring the Environment
After setting up the hardware as described in the Extended Distance Cluster architecture …
… that are of identical sizes. Differences in disk set size result in a mirror being created of a size equal to the smaller of the two disks. Be sure to create the mirror using the persistent device names of the component devices.
• Ensure that the Quorum Server link is close to the Ethernet links in your setup. In cases of failure of all Ethernet and Fibre Channel links, the nodes can easily access the Quorum Server for arbitration.
Configuring Multiple Paths to Storage
HP requires that you configure multiple paths to the storage device using the QLogic HBA driver, as it has built-in multipath capabilities.
The QLogic cards are configured to hold up any disk access (essentially hanging) for a time period greater than the cluster reformation time when access to a disk is lost.
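How this hold-up time is set is not shown on this page. Purely as an illustration, QLogic drivers of this era expose a module parameter for the port-down retry window; the parameter name and value below are assumptions, not settings recovered from this guide, so consult the QLogic driver documentation before using them:

# In /etc/modprobe.conf: hold I/O to a downed port long enough to
# outlast cluster reformation (illustrative value, not from the guide)
options qla2xxx qlport_down_retry=64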
Using Persistent Device Names
When there is a disk-related failure and subsequent reboot, there is a possibility that the devices are renamed. Linux names disks in the order they are found.
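Persistent names such as /dev/hpdev/mylink-sde, used in the examples later in this guide, are typically created with a udev rule keyed to the disk's SCSI ID. The rule below is a sketch under that assumption; the rule file name, the scsi_id invocation, and the ID value are all illustrative and vary by distribution and udev version:

# /etc/udev/rules.d/63-hpdev.rules (illustrative)
# Match a SCSI disk by its unique ID and add a persistent alias under /dev/hpdev
KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -s /block/%k", RESULT=="360060e80_example_id", SYMLINK+="hpdev/mylink-%k"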
Creating a Multiple Disk Device
As mentioned earlier, the first step for enabling Software RAID in your environment is to create the Multiple Disk (MD) device using two underlying component disks.
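The creation step itself (step 1) is cut off in this copy. A minimal sketch of creating a RAID-1 MD device with mdadm, using the same persistent component names as the assemble command in step 2 below; the guide's exact options (for example, a write-intent bitmap) may differ:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hpdev/sde1 /dev/hpdev/sdf1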
2. Assemble the MD device on the other node by running the following command:
# mdadm -A -R /dev/md0 /dev/hpdev/sde1 /dev/hpdev/sdf1
3. …
Creating Volume Groups and Configuring VG Exclusive Activation on the MD Mirror
Once you create the MD mirror device, you need to create volume groups and logical volumes on it.
Found duplicate PV 9w3TIxKZ6lFRqWUmQm9tlV5nsdUkTi4i: using /dev/sde not /dev/sdf
With this error, you cannot create a new volume group on /dev/md0.
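The page describing the fix is cut off here. A common approach, stated as an assumption rather than recovered text, is to filter the mirror's component disks out of LVM's device scan in /etc/lvm/lvm.conf so that only the MD device is seen as a physical volume, and then create the volume group on /dev/md0. The filter expression, volume group name, and logical volume size below are placeholders:

# In /etc/lvm/lvm.conf: accept md devices, reject the mirror components
filter = [ "a|^/dev/md|", "r|^/dev/sd[ef]|" ]

# pvcreate /dev/md0
# vgcreate vgpkg /dev/md0
# lvcreate -L 1G -n lvol1 vgpkg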
Configuring the Package Control Script and RAID Configuration File
# Specify the method of activation and deactivation for md …
To Edit the XDC_CONFIG_FILE parameter
In addition to modifying the DATA_REP variable, you must also set XDC_CONFIG_FILE to specify the raid …
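A minimal sketch of the two assignments in the package control script that this section describes; the DATA_REP value and the configuration file path are assumptions for illustration, not values recovered from the guide:

# In the package control script:
DATA_REP="md"                                         # replication method (assumed value)
XDC_CONFIG_FILE="/usr/local/cmcluster/pkg1/raid.conf" # path is a placeholder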
… more time elapses than what is specified for RPO_TARGET, the package is prevented from starting on the remote node (assuming that the node still has access only to its own half of the mirror).
For example, let us assume that the data storage links in Figure 1-4 fail before the heartbeat links fail.
Now consider an XDC configuration such as that shown in Figure 1-3 (DWDM links between data centers).
Again, if the network is set up in such a way that when the links …
• RAID_MONITOR_INTERVAL
This parameter defines the time interval …
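Putting the two parameters together, a sketch of a raid.conf fragment: the parameter names appear in this guide, while the values shown (and any device entries the real file requires) are illustrative assumptions:

# raid.conf (illustrative fragment)
RPO_TARGET=60              # seconds; matches the scenarios in Chapter 4
RAID_MONITOR_INTERVAL=30   # seconds between RAID monitor checks (assumed value)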
4 Disaster Scenarios and Their Handling
The previous chapters provided information on deploying Software RAID in your environment. In this chapter, you will find information on how Software RAID addresses various disaster scenarios.
The following table lists all the disaster scenarios that are handled by the Extended Distance Cluster software. All the scenarios assume that the setup is the same as the one described in "Extended Distance Clusters" on page 18 of this document.
A package (P1) is running on a node (Node 1). The package uses a mirror (md0) that consists of two storage components: S1 (local to Node 1, /dev/hpdev/mylink-sde) and S2 (local to Node 2).
A package (P1) is running on a node (Node 1). The package uses a mirror (md0) that consists of two storage components: S1 (local to Node 1, /dev/hpdev/mylink-sde) and S2 (local to Node 2). Data center 1, which consists of Node 1 and P1, experiences a failure.
This is a multiple failure scenario where the failures occur in a particular sequence, in the configuration that corresponds to Figure 2 where Ethernet and FC links do not go over DWDM. The package (P1) is running on a node (N1).
This is a multiple failure scenario where the failures occur in a particular sequence, in the configuration that corresponds to Figure 2 where Ethernet and FC links do not go over DWDM.
This failure is the same as the previous failure, except that the package (P1) is configured with RPO_TARGET set to 60 seconds. In this case, initially the package (P1) is running on N1.
In this case, the package (P1) runs with RPO_TARGET set to 60 seconds. Package P1 is running on node N1. P1 uses a mirror md0 consisting of S1 (local to node N1, for example /dev/hpdev/mylink-sde) and S2 (local to node N2).
This scenario is an extension of the previous failure scenario. In the previous scenario, when the package fails over to N2, it does not start, as the value of RPO_TARGET would have been exceeded.
In this case, the package (P1) runs with RPO_TARGET set to 60 seconds. Initially, the package (P1) is running on node N1. P1 uses a mirror md0 consisting of S1 (local to node N1, for example /dev/hpdev/mylink-sde) and S2 (local to node N2).
In this case, initially the package (P1) is running on node N1. P1 uses a mirror md0 consisting of S1 (local to node N1, for example /dev/hpdev/mylink-sde) and S2 (local to node N2). The first failure occurs with all Ethernet links between the two data centers failing.
A Managing an MD Device
This appendix includes additional information on how to manage the MD device. For the latest information on how to manage an MD device, see The Software-RAID HOWTO manual available at: http://www. …
Viewing the Status of the MD Device
After creating an MD device, you can view its status. By doing so, you can remain informed of whether the device is clean, up and running, or if there are any errors.
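The commands on the rest of this page are cut off. The standard ways to check MD status, which is what this section describes, are the kernel's /proc/mdstat file and mdadm's detail query:

# cat /proc/mdstat
# mdadm --detail /dev/md0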
Stopping the MD Device
After you create an MD device, it begins to run. You need to stop the device and add the configuration into the raid …
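The stop command itself is cut off here; stopping an MD array is done with mdadm's stop option:

# mdadm --stop /dev/md0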
Starting the MD Device
After you create an MD device, you need to stop and start it to ensure that it is active. You do not need to start the MD device in any other scenario, as this is handled by the XDC software.
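Starting the device again is an assemble operation; the command below repeats the form shown in Chapter 3, using the same persistent component names:

# mdadm -A -R /dev/md0 /dev/hpdev/sde1 /dev/hpdev/sdf1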
Removing and Adding an MD Mirror Component Disk
There are certain failure scenarios where you would need to manually remove the mirror component of an MD device and add it again later.
Example A-3 Removing a failed MD component disk from the /dev/md0 array
To remove a failed MD component disk …
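The example body is cut off in this copy. A sketch of the standard mdadm sequence for marking a component as failed, removing it, and later adding it back, using a component name consistent with the earlier examples (whether the guide uses exactly these steps is an assumption):

# mdadm /dev/md0 --fail /dev/hpdev/sdf1
# mdadm /dev/md0 --remove /dev/hpdev/sdf1
# mdadm /dev/md0 --add /dev/hpdev/sdf1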