(08:54:07) BURSTNET: It seems we are currently having an issue with power in parts of our NOC. We are looking into the problem now. We apologize for any delays in ticket, IM, and phone support responses.
That radar image is odd. I live in Northeast Ohio, about 4 hours from Scranton, and we had a pretty large storm roll through around 6 AM. I slept through most of it, so I don't know how long it lasted. With the way weather moves, it would've hit them too, so the storm theory makes sense.
We still had ten servers down 30 minutes ago.
We still have four down due to hardware failures after the power came back on.
We had one server still not working properly following last week's incident until yesterday.
Still have heard nothing about my SLA claim for last week.
I paid up front for a new server this week but I am just going to cancel it.
Can't use them any more.
Clearly they have no functioning UPS system.
Clearly they have inadequate weekend staffing.
I expect several of our servers are fried now, and they are not being communicative.
Last week we had some servers down for 24 hours.
We experienced a power issue with a core facility UPS.
This is the same core facility UPS affected last week.
Unfortunately, equipment can fail from time to time, and we have been working our hardest to restore service as quickly as humanly possible.
Technicians are currently going through and verifying all servers in the affected facility. We are working on the issue, and any servers still down should be up shortly; most have likely been back up for some time now.
This is not a network-related issue, and the overall network is currently running fine.
Due to this second, similar issue, we will not have to manually test the facility UPS in question to confirm the source of the problem. We had planned to manually test the system in the next few days to locate the exact fault, once an ample stock of spare parts arrived; these were on order to replace the stock depleted by the initial occurrence. We can now pinpoint the issue and have it repaired this week as soon as possible, without needing to manually test the system. We will send notification via email once maintenance is scheduled, and again once the system is ready to be tested after the repair.
We thank you for your patience and understanding, as well as your continued business.
I understand that equipment can fail ... that's a given anywhere at any time. My concern is twofold.
1. Identifying and fixing the problem - before it multiplies.
The same issue happened last week and, according to the posts, left many replacement parts out of stock (since re-ordered/restocked). What exactly is failing?
2. Response and connectivity for customers in times of outages. During the initial outage this morning, your website, helpdesk, and ticket system were all offline. I called the PA number listed; no answer in support. I tried a fax; no fax answer. I tried AIM support; no human response. Cell phones were also going to voicemail.
Have you considered colo'ing your support servers in another datacenter?
My biggest concern, I guess, is that I toured your datacenter about 2 weeks ago and was shown generators, backups, and cooling units that were supposedly all up and online. If this was a core power failure, why wasn't the battery backup immediately enabled in either instance? Bad switch?
As a new customer to Burst/Nocster, my first two weeks haven't been exactly "positive".
The exact issue is that something is causing one of our core facility UPS units not to revert to normal mode after it switches to battery mode during a power flicker/outage. We believe it could be a faulty transfer switch within the UPS, but we are not sure; it could also be a faulty control panel. The UPS will be serviced ASAP, hopefully tomorrow, now that we have confirmed the issue.
We apologize for the delay in responding to reboot/support tickets about the issue; all available staff have been on hand assisting with physically restoring service.