Hi everyone, sorry for the delay; we have been sorting this out. As you know, we host our hardware with GIPNetworks, and the following is what happened according to our provider. This is not something that happens every day, and we will send out updates as we learn more. All VMs came back up automatically the moment power was restored.
At approximately 5:00am CST today, we received a notification from our UPS cluster that all units had switched to battery. After investigating the root cause, we found that the main high-voltage cables feeding the service B Automatic Transfer Switch (ATS) had been cut. Because the cut cable was exposed to the rain, a short circuit may have been detected at ATS B and propagated to the UPS units. When the UPS units detected the abnormal input power from ATS B, they ran on battery to protect themselves and your critical equipment from an unknown condition. They lasted about 30 minutes until the batteries were depleted at approximately 5:30am CST, which caused the downtime for the data center.

Our electrician was able to isolate the issue, and power was restored at approximately 7:00am CST. Everyone should be up and running at this point, but please check your equipment to confirm. All our staff are standing by and ready to help. We apologize for what happened.

This act of vandalism appears to have been planned in advance and carried out by professionals, as they evaded our security cameras and acted very quickly. Two police officers are currently investigating the incident, and we hope the perpetrators will be caught and brought to justice. We will keep you posted as we receive more information. Thank you for your patience and understanding.