Hello everybody,
after a failed disk was recovered with a replacement, I got these alerts:
Code:
Component --Identifier-- -------------------------------------------------------------------------------Description-------------------------------------------------------------------------------
Alert sw_ld:5:log1.0 Log LD 5 (log1.0) has a failed raid set: 3. Reason pd 44 ch 370 is stale (media valid, disk missing, pderr 1) pd 92 ch 370 is stale (media valid, disk missing, pderr 1)
Alert sw_ld:4:log0.0 Log LD 4 (log0.0) has a failed raid set: 3. Reason pd 45 ch 370 is stale (media valid, disk missing, pderr 1) pd 93 ch 370 is stale (media valid, disk missing, pderr 1)
I tried to fix it with checkld. Maybe I did something wrong with this command, but now the LD has been in this state since yesterday:
Code:
5 log1.0 1 checking 1/- 20480 0 log 0 --- Y N
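While waiting, the state can be re-queried from the CLI (a sketch assuming the standard HPE 3PAR CLI; log1.0 is the LD name from the alerts above):
Code:
showld -d log1.0
showalert
showld -d shows the detailed state of the LD (the "checking" shown above), and showalert lists the current alerts so you can see whether the failed-raid-set alert has cleared.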
and these additional alerts appeared:
Code:
LD ld:log1.0 Detailed State: Checking
LD -- Number of logging LDs does not match number of nodes in the cluster
Does the check really need that much time?
Should I be patient?
Or is there a way to fix this?
Regards
Stephan