Due to Hurricane Matthew, our business shut down all servers for two days.

One of the servers was an ESXi host with an attached HP StorageWorks MSA60.

When we logged into the vSphere client, we noticed that none of our guest VMs were available (they're all listed as "inaccessible"). When we look at the hardware status in vSphere, the array controller and all attached drives appear as "Normal", but the drives all show up as "unconfigured disk".

We rebooted the server and attempted to go into the RAID config utility to see what things look like from there, but we received the following message:

"An invalid drive movement was reported during POST. Modifications to the array configuration after an invalid drive movement can result in loss of old configuration information and contents of the original logical drives."

Needless to say, we're very confused by this, because absolutely nothing was "moved"; nothing changed. We simply powered up the MSA and the server, and have been having this issue ever since.

I have two main questions/concerns:

1. Since we did nothing more than power the devices off and back on, what could have caused this to happen? We of course have the option to rebuild the array and start over, but I'm leery about the possibility of this happening again, especially since I have no idea what caused it.

2. Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

  1. Since we did nothing more than power the devices off and back on, what could have caused this to happen?

A variety of things. Do you schedule reboots on all of your gear? If you don't, you really should, for just this reason. On the one server we have, XS decided the array wasn't ready in time and didn't mount the main storage on boot. Always nice to learn these things in advance, right?

  2. Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

Maybe, but I've never seen that particular error, and we're talking very limited experience here. Depending on which RAID controller the MSA is connected to, you might be able to read the array information from the drives on Linux using the md utilities, but at that point it's probably faster to just restore from backups.
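
In case it's useful, a minimal sketch of what that check might look like, assuming the drives can be presented to a Linux system and that the array metadata is something the md tools can actually read (device names below are placeholders):

    # See whether any md/RAID metadata is visible on the drives
    mdadm --examine /dev/sd[b-e]

    # If metadata turns up, try a read-only assemble so nothing is written to the disks
    mdadm --assemble --scan --readonly

    # Check what, if anything, got assembled
    cat /proc/mdstat

If the array was built in hardware on a Smart Array controller rather than in software, mdadm will most likely just report that it can't find a superblock, which is exactly why it's worth keeping everything read-only while you poke around.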

I actually rebooted this host multiple times about a month ago when I installed updates on it. The reboots went fine. We also completely powered that server down at around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array information was all intact.

Does your normal reboot schedule for the host include a reboot of the MSA? Could it be that they were powered back on in the incorrect order? MSAs are notoriously flaky; it's likely that's where the issue is.

I'd call HPE support. The MSA is a flaky unit, but HPE support is very good.

We unfortunately don't have a "normal reboot schedule" for any of our servers :-/.

I'm not sure what the correct order is :-S. I would assume that the MSA would get powered on first, then the ESXi host. If that's correct, we've already tried doing that since we first discovered this issue today, and the issue remains :(.

We don't have a support contract on this server or the attached MSA, and they're most likely out of warranty (ProLiant DL360 G8 and a StorageWorks MSA60), so I'm not sure how much we'd have to spend to get HP to "help" us :-S.
