Hi Everyone.
I am new to 3par and I just received a used 7450 4node 3par system and I have no support.
Currently, the system has no drives in it. I connected a console to the system through the MFG port using PuTTY and the special adapter. The system is cabled correctly, but all the interconnect links are amber, so I figured there must be a communication problem between the nodes. I then logged into each node one at a time (via the CLI, using 3paradm / 3pardata where I could) and checked the node identifier and netshow information. This is what I had:
16xxxx5 is the serial number.
Node in slot 0: 16xxxx5-0, IP address 130.175.93.202
Node in slot 1: 16xxxx5-2, IP address 192.168.1.206
Node in slot 2: 16xxxx5-0, IP address 192.168.1.206
Node in slot 3: 16xxxx5-1, IP address 192.168.1.206
Based on the above, I moved the node in slot 2 (16xxxx5-0) to slot 0, the node in slot 3 (16xxxx5-1) to slot 1, the node in slot 1 (16xxxx5-2) to slot 2, and the node in slot 0 (16xxxx5-0) to slot 3 (because it had a different IP address than the rest). When I rebooted the system, all the interconnect links went green except the ones from node 3 (the node with the different IP address).
Issues I can identify: there is no cluster; two nodes think they are node 0; and one of those node-0 controllers has a different IP address from the rest.
Currently, I can log into nodes 0, 1, and 2 with 3paradm / 3pardata, but I cannot log into node 3 with those credentials, even though I could before.
How do I get the node currently in slot 3 (16xxxx5-0, IP address 130.175.93.202) to take the same IP address as the others and become 16xxxx5-3?
I would try a node rescue, but the node in slot 3 thinks it is node 0 and has a different IP address. That's why I want to change the IP address and renumber it to 16xxxx5-3 first.
Will a cluster form without drives in the system?
Pretty much all the console commands (console cmp43pd) hang or do nothing on node 3, so I can't change the IP address there.
Thank you for any help and please let me know if I need to clarify anything.
3par 7450 issues
Re: 3par 7450 issues
I've never had a system without drives before, but if you remove nodes 2 and 3 it should be a supported config (apart from the missing drives), so what you see there is what you should expect.
One thing I would try is to get the system up with three nodes (remove "node 3") and then rescue that node once the three-node cluster is up. Not sure if it will work, but the system will not behave normally with two node 0s, so it might.
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: 3par 7450 issues
Is there a way to get into the whack menu during bootup? I want to try 'prom edit' and change the node ID to 3, as described in another thread on this forum.
Re: 3par 7450 issues
In case someone else runs into this problem, here is roughly the procedure I followed. I take no responsibility if it doesn't work for you or if you lose data.
1. I moved the three nodes with matching IP addresses to the slots matching their node IDs.
2. I put the odd-IP node 0 into slot 3, went into whack (Ctrl-W during boot), and changed its node ID and IP address.
3. I powered up the three good nodes and did a deinstallation (setsysmgr wipe 16****5), then restarted the system.
4. I issued a startnoderescue -node 3 command and immediately plugged node 3 back in so it would power up and start the rescue. With the console plugged into node 3, I could watch the rescue as it progressed. Then I rebooted.
5. I ran setsysmgr wipe again, and all the nodes joined the cluster with the correct serial number and node IDs.
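For reference, the rescue portion of the procedure above boils down to roughly this console session. This is a sketch from memory, not a verbatim transcript: the serial number is elided just as in my post, the prompts and output on your system will differ, and you should check the exact command syntax on your own 3PAR OS version before running anything destructive.

```
# On a working node's console, with the three good nodes powered up
# and node 3 still unplugged:
setsysmgr wipe 16****5      # deinstall / wipe the old cluster state
# ...restart the system...

startnoderescue -node 3     # queue the rescue for node 3
# Immediately plug node 3 back in; it powers up and the rescue starts.
# Watch progress on node 3's console, then reboot when it finishes.

setsysmgr wipe 16****5      # run the wipe again; all four nodes join
```

After the second wipe, every node showed the correct serial number and node ID.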