  1. #1

    Honest RamNode Review

    I've been a customer of RamNode for a little while now. While they have a cheaper line of servers alongside their more expensive KVMs, I think it's fair to place them in the same bin as Linode, DigitalOcean, and other higher-tier VPS providers in the industry.

    Network/IO

    Absolutely phenomenal, no exceptions. I get consistent speed and IO regardless of peak hours, and no downtime (thus far) as a result of network outage. Bear in mind I reside in their Seattle location; had I been in their NL location, I would have been affected by their recent maintenance.

    Resources

    Again, no complaints here. RamNode meets or exceeds their competitors with regard to CPU, RAM, disk space, and bandwidth.

    Stability

    This is where the review becomes a bit more interesting. I feel like RamNode's long-running stability issues go unmentioned, and as a result they're never resolved to the point where they stop recurring.

    Kernel panics and CPU lockups have become somewhat regular with RamNode. These recurring issues go all the way back to July of 2013 and remain a problem today. Their Twitter (https://twitter.com/NodeStatus) tells all.

    Whenever a server locks up, it's rebooted without a second thought. No post-mortem posted, no resolution announced. This is made evident via their Twitter account.

    Reboots per month (derived from their Twitter):
    April 2014: 22 reboots
    March 2014: 18 reboots
    February 2014: 10 reboots
    January 2014: 17 reboots
    (etc)

    I'm NOT counting planned maintenance; these are all unannounced reboots resulting from panics or lockups that have gone unresolved. I must note that Nick was more than willing to migrate us to another node after a reboot occurred, but two days later that node was rebooted as well.
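    (For anyone who wants to check my tally: a quick sketch of how such a count can be reproduced, assuming the @NodeStatus timeline has been saved to a text file with one tweet per line, each prefixed with an ISO date. The filename, line format, and keywords here are all my own assumptions, not anything RamNode provides:)

        from collections import Counter

        counts = Counter()
        with open("nodestatus.txt") as f:  # hypothetical saved timeline
            for line in f:
                text = line.lower()
                if "maintenance" in text or "scheduled" in text:
                    continue  # skip planned work, per the tally above
                if "reboot" in text or "panic" in text or "lockup" in text:
                    counts[line[:7]] += 1  # bucket by "YYYY-MM" date prefix
        for month, n in sorted(counts.items()):
            print(month, n)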

    Conclusion

    They're a good company; Nick and his team provide great hosting. On the flip side, there are some serious stability issues that haven't been addressed. Once those are resolved, they'll go nowhere but up.

    -Linus

    PS: If I need to report a domain/IP, I'll be glad to; I don't see an area to do so via this interface.

  2. #2
    Thanks for the review. I'd be interested to see a comparison between us and another popular host in terms of public notices regarding issues like that. It's my understanding we are significantly more open and forthright about causes of downtime than most.

    Our @NodeStatus Twitter account is meant to be a quick reference for clients to check for current problems. It's the easiest way to notify clients of issues without waiting for a mass email to go through. If you or any other client have any specific problems with your service that you need addressed or if you require additional detail, we are happy to handle that over a support ticket. I'm not going to go through exactly what caused each reboot from the past 4 months, but I will quickly note that more than a handful of reboots over the last month in particular have been from 2 KVM nodes in the Netherlands (one of which appears completely fixed), and then the normal OpenVZ issues you'd find just about anywhere. I think that last part needs to be stressed - OpenVZ is prone to problems. That's the nature of the beast. You'll find it's much more rare for KVM to show up on the "reboot" tweets by virtue of it being KVM more than anything else.

    Also, saying that "whenever a server locks up, it's rebooted without a second thought" makes it sound like rebooting is our only strategy and that we don't do any sort of investigation when a problem occurs. Neither is the case, and neither should be inferred from our tweets. We only reboot if there is no other option.

    We have over 155 active servers, a fact which I mention in order to balance out the implication that things are constantly going down or something. We would not be thought of so highly here and at other communities if our clients were plagued with widespread stability problems. Yes there are two servers right now (one of which I, again, believe is fixed) which have been chronically rebooted about once a week for 1-2 months. (The process which led to fixing the first one is already in place for the second one if it goes down again.) Otherwise there are no stability issues to be spoken of for the vast majority of our clients. Any issues either are being or have been addressed, but you shouldn't expect all of the details to be displayed over Twitter.

    Please keep in mind we are under no obligation to make that information on Twitter public and could minimize our communication if we wanted to make sure only directly impacted clients knew about any given problem. If we went with that more common strategy, I think this review would have a somewhat different angle. That Twitter page is supposed to be a benefit to clients. I realize it's fair game as material for a review like this, but perhaps I need to rethink what exactly we mention on there if I have to spend time defending our communication.

    Lastly, I'll note that just because we don't detail a resolution on Twitter does not mean the problem hasn't been resolved. That's not the point of that account from our perspective. Anything significant and/or chronic results in an email to every impacted client with much more information than what goes on Twitter. We are also happy to answer specific questions via ticket.

    Don't get me wrong - you being rebooted on two nodes is painful. No one hates downtime more than me. However, your individual experience is more a matter of bad luck than of general stability issues. Regardless, we are here working hard to fix any and all problems that may arise.
    RamNode - High Performance Cloud VPS
    SSD Cloud and Shared Hosting
    NYC - LA - ATL - SEA - NL - DDoS Protection - AS3842
    Deploy on our SSD cloud today! - www.ramnode.com

  3. #3
    I appreciate the response, Nick. I'd like to address a couple points you made.

    Also, saying that "whenever a server locks up, it's rebooted without a second thought" makes it sound like rebooting is our only strategy and that we don't do any sort of investigation when a problem occurs. Neither is the case, and neither should be inferred from our tweets. We only reboot if there is no other option.
    My intention wasn't to convey the notion that you didn't investigate issues, but that customers on the node were rebooted on a whim. I don't feel the seriousness of kernel panics/CPU lockups is/was being adequately addressed, as it would be with companies I've noted before. As a customer I would at least expect an email, or some sort of "we fixed it" notice, versus having to sort through Twitter to see if my node was rebooted after the fact.

    Please keep in mind we are under no obligation to make that information on Twitter public and could minimize our communication if we wanted to make sure only directly impacted clients knew about any given problem. If we went with that more common strategy, I think this review would have a somewhat different angle. That Twitter page is supposed to be a benefit to clients. I realize it's fair game as material for a review like this, but perhaps I need to rethink what exactly we mention on there if I have to spend time defending our communication.
    I completely understand, though I feel it's fair to compare you guys to Linode and DigitalOcean in terms of the frequency of issues relative to number of servers. Both have status pages and address issues publicly, but neither suffers from the same panic/lockup issues you guys do. Linode is somewhat exempt from this comparison given that they provide Xen-based servers, but their stability should be noted regardless.

    I was trying to avoid a direct comparison of communication, as you never really know who communicates what, but in sampling many providers I haven't come across kernel panics and CPU lockups to this extent, under either KVM or OVZ.

    I'd like to reiterate: you guys are great at what you do (as seen everywhere on this forum and LEB), but this is definitely something to improve upon. Blaming the virtualization only goes so far when there are hundreds of other providers out there.

  4. #4
    normal OpenVZ issues you'd find just about anywhere
    I think that last part needs to be stressed - OpenVZ is prone to problems. That's the nature of the beast.
    Interesting, especially given this is from probably the most respected and popular (low-end, granted) OpenVZ provider. 57 reboots in 4 months is indeed shocking. It's odd, though, that when the clichéd topic of Xen vs KVM vs OpenVZ/Virtuozzo comes up (as it does every week), no OpenVZ providers come forward and mention that it's basically an unreliable beast (although they're quite happy to tout the marginal benefits of container virtualization vs full/para, making the odd, and practically never true, assumption that nodes are loaded with the same number of guests).

    last part needs to be stressed
    FAQ? Big site-wide disclaimer? Knowledgebase? https://clientarea.ramnode.com/knowl...yarticle&id=52

    Blaming the virtualization only goes so far when there are hundreds of other providers out there.
    Indeed, how were you to know? It's not like it's mentioned in an FAQ, a sales pitch, or much of anywhere else.

    Whether you do it on Twitter or not, or do selective follow-ups privately, I think it's a reasonable expectation to have a full post-mortem in the event of 57 reboots. It's difficult to infer whether you mean that 57 reboots in 4 months is par for the course with OpenVZ, or that reboots in general are.
    Last edited by MattF; 05-01-2014 at 06:07 AM.
    MattF - Since the start..

  5. #5
    Quote Originally Posted by Linus_C View Post
    My intention wasn't to convey the notion that you didn't investigate issues, but that customers on the node were rebooted on a whim.
    Which again, isn't the case. There is nothing whimsical about us resorting to a reboot. You seem to be judging that based on our quick tweets from an account that isn't there for full disclosure of every single issue that may arise. It's there to prevent a ticket flood in the event something goes wrong, letting clients know we are actively working on any given issue.
    I don't feel the seriousness of kernel panics/CPU lockups is/was being adequately addressed, as it would be with companies I've noted before. As a customer I would at least expect an email, or some sort of "we fixed it" notice, versus having to sort through Twitter to see if my node was rebooted after the fact.
    If there is any lingering problem, that is exactly what you would have received. We tweet and/or update News posts until the problem is fixed (which is usually just a matter of a few minutes), and don't always explicitly announce when every single container on a given VZ node is back online. If there aren't follow-up tweets about a given problem, it's safe to assume it's resolved on our end. Also, you sorted through 4 months of tweets, counting specific ones for the purpose of this post. Perhaps you don't prefer news being posted to Twitter, but your own actions underscore the point that it's an easy and quick reference. I don't think there's much difficulty in searching through tweets on a designated status account to find the kind of information we put on there surrounding a given event. I will, however, take your expectations as defined here into consideration for future events.

    Let me reiterate that everyone who experiences chronic or significant issues (beyond a single reboot of a node without a history of problems, let's say), receives more information via email. You seem to be assuming that that never happens because it didn't happen for you. I don't have your account in front of me and can't remember your ticket ID from the other day, but I'm assuming you're not on anything that is impacted by frequent reboots. If that's not the case, please open a ticket. I would appreciate it if you'd send me that ticket ID from the other day, and if you'd send feedback through direct channels if your goal is to assist us in improving our services. That's why I list my direct email on our website. You can email me or talk to me directly on IRC, Skype, etc. if you wish to provide constructive feedback in an effort to help (which seems to be at least partially your goal?). Surely posting something like this on WHT is going to be less productive than sending me something directly. It just invites the kind of cherry picking that you see from MattF here, which means I now have to further defend our brand against misunderstandings.

    I completely understand, though I feel it's fair to compare you guys to Linode and DigitalOcean in terms of the frequency of issues relative to number of servers. Both have status pages and address issues publicly, but neither suffers from the same panic/lockup issues you guys do.
    Have you actually sat down and compared apples to apples there? KVM to KVM, etc (since neither offers OpenVZ)? Do they promise to address every single issue (not just network status) publicly on those pages as is our practice? Where do they list every server they have in production?

    I was trying to avoid a direct comparison of communication, as you never really know who communicates what, but in sampling many providers I haven't come across kernel panics and CPU lockups to this extent, under either KVM or OVZ.
    That's my point: how do you know who simply doesn't communicate to the degree we do? How do you know whether it's an absence of problems or just an omission? If you're going to assert that there are "serious stability problems" based on our communication history, you should have at least a few direct comparisons lined up. Have you found another host who communicates those events the exact same way? If so, please list them and their similar communication media which explicitly mention CPU lockups, kernel panics, reboots, etc., so I can see exactly what you're seeing and compare frequency of events (and thus improve our services if we're the only ones out there who experience such things).

    Quote Originally Posted by MattF
    FAQ? Big site-wide disclaimer? Knowledgebase? https://clientarea.ramnode.com/knowl...yarticle&id=52
    Sure, I've updated that exact link just for you.

    Quote Originally Posted by MattF
    Whether you do it on Twitter or not, or do selective follow-ups privately, I think it's a reasonable expectation to have a full post-mortem in the event of 57 reboots. It's difficult to infer whether you mean that 57 reboots in 4 months is par for the course with OpenVZ, or that reboots in general are.
    The only thing you should infer is that no one gets an accurate picture by digging through our status tweets. Linus didn't suffer 57 reboots and go without a post-mortem. No client has ever experienced that.

    This thread has certainly given me new perspective on our communication protocols. We will definitely be evaluating what all we use our status tweets to cover. Perhaps we should simply cover network news as it appears Linode and DigitalOcean do.
    RamNode - High Performance Cloud VPS
    SSD Cloud and Shared Hosting
    NYC - LA - ATL - SEA - NL - DDoS Protection - AS3842
    Deploy on our SSD cloud today! - www.ramnode.com

  6. #6
    This thread has certainly given me new perspective on our communication protocols. We will definitely be evaluating what all we use our status tweets to cover. Perhaps we should simply cover network news as it appears Linode and DigitalOcean do.
    Just to add, if you decide to follow Linode: whilst Linode won't tweet or update the status page about individual host nodes experiencing problems, they do communicate thoroughly about EVERY planned or unplanned reboot of a host. For the former you get several days' notice (it may have been a bit shorter with the Xen security issue 12 months ago), and for the latter they automatically open a support ticket with all affected clients, with a post-mortem (sometimes more than one). Linode don't do reboots (lightly). Customer since 1st Jan 2009. Can't speak for DO procedures; haven't experienced an unplanned reboot yet though.
    MattF - Since the start..

  7. #7
    Quote Originally Posted by MattF View Post
    Just to add, if you decide to follow Linode: whilst Linode won't tweet or update the status page about individual host nodes experiencing problems, they do communicate thoroughly about EVERY planned or unplanned reboot of a host. For the former you get several days' notice (it may have been a bit shorter with the Xen security issue 12 months ago), and for the latter they automatically open a support ticket with all affected clients, with a post-mortem (sometimes more than one). Linode don't do reboots (lightly). Customer since 1st Jan 2009. Can't speak for DO procedures; haven't experienced an unplanned reboot yet though.
    That's exactly what Nick is trying to say, though. Linode doesn't update their status page or post on Twitter about a single node having issues. I remember some time ago there was a problem with my Linode host and they did open up a ticket and message me about it, but it was nowhere to be seen on any public channel. We have no idea how many single-host issues they might have in a given month. Also, I'd be interested to see how many different host machines were actually behind that figure of 57 reboots. If it's only 4 different hosts, that has an entirely different meaning than if it were 57 different machines.
    I think it's quite nice and refreshing that RamNode posts about all of this kind of thing. It's not something necessary, but it does show that they are trying to be open with their customers about what they're doing.

  8. #8
    Quote Originally Posted by Ghan_04 View Post
    That's exactly what Nick is trying to say, though. Linode doesn't update their status page or post on Twitter about a single node having issues. I remember some time ago there was a problem with my Linode host and they did open up a ticket and message me about it, but it was nowhere to be seen on any public channel. We have no idea how many single-host issues they might have in a given month. Also, I'd be interested to see how many different host machines were actually behind that figure of 57 reboots. If it's only 4 different hosts, that has an entirely different meaning than if it were 57 different machines.
    I think it's quite nice and refreshing that RamNode posts about all of this kind of thing. It's not something necessary, but it does show that they are trying to be open with their customers about what they're doing.
    I think the OP is more concerned about communication than about the quantity of downtime/reboots (although that is alarming) or the channel used (whether a 140-char public tweet or an email after each reboot). From what the OP is saying ("Whenever a server locks up, it's rebooted without a second thought. No post-mortem posted, no resolution announced"), it sounds like he hasn't received 57 follow-ups, which he would have with Linode. It sounds from Nick A's post like they do investigate, but for some reason the number of follow-ups doesn't match the quantity of reboots. Reboots are a real problem for many customers (those doing something other than serving web requests).
    MattF - Since the start..

  9. #9
    Quote Originally Posted by Nick A View Post
    This thread has certainly given me new perspective on our communication protocols. We will definitely be evaluating what all we use our status tweets to cover. Perhaps we should simply cover network news as it appears Linode and DigitalOcean do.
    Please don't change. I appreciate the updates.

    Separately, I've been with RamNode for nearly a year now and have a few OVZ machines. One in particular has suffered 2-3 reboots in the last 6 weeks, and it's frustrating. But I don't think it's representative of RamNode's overall service. My other machines have gone for months without issue (in fact, I don't think they've ever had an issue except for scheduled maintenance to replace a network card, which was well communicated (over-communicated, even)).

    I've been with other hosts whose communication and actual response to issues were appalling. Once, with RamNode, I noticed some latency which I identified via traceroute as an issue with an upstream provider. I didn't bother submitting a ticket because "it's not RamNode's issue." Very shortly thereafter, @NodeStatus tweeted that they were in touch with their upstream provider to fix the latency. Contrast that with another provider I used in the past, where I literally had to tell them the steps to fix their issue (which they didn't even know existed).
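    For the curious, the check is nothing exotic. A minimal sketch of the idea in Python (the hostname, latency threshold, and output parsing are all my assumptions; real traceroute output varies by platform):

        import re
        import subprocess

        THRESHOLD_MS = 100.0  # arbitrary cutoff for "suspiciously slow"

        # -n skips reverse DNS lookups; parsing assumes the common Linux
        # "traceroute -n" line format: hop number, IP, then three "N ms" times.
        out = subprocess.run(["traceroute", "-n", "example.com"],
                             capture_output=True, text=True).stdout

        for line in out.splitlines()[1:]:  # first line is the header
            times = [float(t) for t in re.findall(r"([\d.]+) ms", line)]
            if times and sum(times) / len(times) > THRESHOLD_MS:
                print("slow hop:", line.strip())

    A sustained jump in latency starting at the first hop outside the provider's network is what points the finger upstream rather than at the host.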

    I'm convinced Nick never sleeps, and he and his team always seem to be on top of issues. I think the transparency of their communication skews the perception of the number of issues they have versus other providers.

    On a positive note, the few reboots I've had on that one node have forced me to clean up an init script so a daemon I run properly restarts after an unclean shutdown. I was being lazy until now.
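    A common culprit in that situation is a stale PID file left over from the unclean shutdown, which blocks the daemon from starting. A rough sketch of that kind of check in Python (the paths and start command here are hypothetical):

        import os
        import subprocess

        PIDFILE = "/var/run/mydaemon.pid"          # hypothetical PID file path
        START = ["/etc/init.d/mydaemon", "start"]  # hypothetical start command

        def pid_alive(pid):
            """Return True if a process with this PID currently exists."""
            try:
                os.kill(pid, 0)  # signal 0 checks existence without sending anything
                return True
            except ProcessLookupError:
                return False
            except PermissionError:
                return True  # process exists but is owned by another user

        if os.path.exists(PIDFILE):
            with open(PIDFILE) as f:
                pid = int(f.read().strip())
            if not pid_alive(pid):
                os.remove(PIDFILE)  # stale PID file from the unclean shutdown

        if not os.path.exists(PIDFILE):
            subprocess.run(START)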

  10. #10
    I've been using RamNode for almost 2 years without any problems.

    Have had a couple of server reboots (seems like they were ages ago). My website now starts automatically on boot in case it ever happens again.

  11. #11
    Quote Originally Posted by MattF View Post
    it sounds like he hasn't received 57 follow-ups, which he would have with Linode.
    The OP wasn't rebooted 57 times. I want to make sure no one is confused on that point. He was rebooted twice: once on each of two different nodes. A large part of his complaint here is how we handle reboots that don't directly impact his service. That is why I said, "[We] could minimize our communication if we wanted to make sure only directly impacted clients knew about any given problem. If we went with that more common strategy, I think this review would have a somewhat different angle."

    No one has been rebooted 57 times at RamNode, so no one has lacked 57 follow-ups. I hope that's clearer now?
    RamNode - High Performance Cloud VPS
    SSD Cloud and Shared Hosting
    NYC - LA - ATL - SEA - NL - DDoS Protection - AS3842
    Deploy on our SSD cloud today! - www.ramnode.com

  12. #12
    Quote Originally Posted by Nick A View Post
    The OP wasn't rebooted 57 times. I want to make sure no one is confused on that point. He was rebooted twice: once on each of two different nodes. A large part of his complaint here is how we handle reboots that don't directly impact his service. That is why I said, "[We] could minimize our communication if we wanted to make sure only directly impacted clients knew about any given problem. If we went with that more common strategy, I think this review would have a somewhat different angle."

    No one has been rebooted 57 times at RamNode, so no one has lacked 57 follow-ups. I hope that's clearer now?
    This is correct. We were rebooted (in total) three times over the course of two months; each of the three services Jordan and I ordered (joint account) was rebooted once during that time span.

    The solution has nothing to do with what you communicate, but with why you're communicating it. Closing the doors won't make you better as a company, nor will it provide any reassurance that services won't randomly topple over due to an issue that's out of my hands.

    The issue I'm having is with how lightly reboots are taken, and that my only source of information is your Twitter, unless I want to submit a ticket to inquire further, which I shouldn't ever have to do. If you don't announce via Twitter that the problem is solved, how am I to know you fixed it? Or whether you're investigating it? I haven't received any status-related email from you guys, despite three separate reboots.

    I really didn't think I would have to repeat myself this much to get my point across. I seem to have been misinterpreted as suggesting that the solution is to stop posting information about issues, when I'm trying to get at the exact opposite.

  13. #13
    As a long-term RamNode customer I'll offer my 2 cents...

    I have 3 RamNodes and they're all amazingly stable. Reboots are as rare as hen's teeth; 200 days uptime is common and unremarkable. Network uptime is equally solid, with month after month of near-perfect connectivity.

    If every VPS provider could deliver the quality of service that RamNode does then the world would be a better place.

  14. #14
    I have to disagree with the stability factor as well.
    I've had 3 servers with them, just over 2 months now. I have yet to see any issues, and the only reboots that have been done have been by me.

    Maybe you're just on a really bad node?

    Given what I pay for these (about $5/month), I'd say this is right on the money, and the service is quite extraordinary.
    I think they may have had one issue on one of their NL nodes earlier this year, but other than that, I've yet to encounter anything drastic.
    Attached Thumbnails: Screen Shot 05-01-14 at 04.42 PM 001.PNG, Screen Shot 05-01-14 at 04.42 PM 002.PNG, Screen Shot 05-01-14 at 04.42 PM.PNG
    Last edited by whmcsguru; 05-01-2014 at 05:46 PM.

  15. #15
    Quote Originally Posted by twhiting9275 View Post
    I have to disagree with the stability factor as well.
    I've had 3 servers with them, just over 2 months now. I have yet to see any issues, and the only reboots that have been done have been by me.

    Maybe you're just on a really bad node?

    Given what I pay for these (about $5/month), I'd say this is right on the money, and the service is quite extraordinary.
    I think they may have had one issue on one of their NL nodes earlier this year, but other than that, I've yet to encounter anything drastic.
    This is simply my experience with RamNode. Many people have had nothing but good things to say, as do I, but their Twitter does speak for itself. The consistent panics and lockups need to be fixed.

    All things considered, they're still high on my list comparatively. I have no intention of leaving any time soon.

    Edit: Typo.

  16. #16
    Quote Originally Posted by Linus_C View Post
    their Twitter does speak for itself.
    What their Twitter status shows is a snapshot of a global picture. Judging their overall usability by a few isolated incidents is like judging a country by a handful of its citizens.

  17. #17
    Quote Originally Posted by twhiting9275 View Post
    What their Twitter status shows is a snapshot of a global picture. Judging their overall usability by a few isolated incidents is like judging a country by a handful of its citizens.
    I'm judging them based on my experience, and on the 57 reboots (considerably more than a few) that took place.

    If you feel there's room for a company to improve, there's nothing wrong with telling them via a review or any other medium. Denying that issues exist because you haven't had any is ignorant at best; I'm questioning whether or not you've read the thread.

    Keep in mind this is the only negative side of RamNode I've found. They're a great company, and provide great service. There's nothing wrong with constructive criticism to aid growth.

  18. #18
    Linus - would you please send me your most recent ticket ID or account email? I'd like to review our interaction from the other day but can't find your account.

    Also please provide that list of other hosts you are directly comparing us to in terms of public announcements of reboots (from CPU lock ups, kernel panics, etc.) so that we can do our due diligence. You made it sound like you've found a bunch of them who handle all issues in the same public manner we do, so I'd like to compare frequency of problems as you've apparently already done.
    RamNode - High Performance Cloud VPS
    SSD Cloud and Shared Hosting
    NYC - LA - ATL - SEA - NL - DDoS Protection - AS3842
    Deploy on our SSD cloud today! - www.ramnode.com

  19. #19
    @Nick- sent.

    I've requested the thread to be closed, as it doesn't seem productive to continue.

  20. #20
    You made it sound like you've found a bunch of them who handle all issues in the same public manner we do, so I'd like to compare frequency of problems as you've apparently already done.
    I think you're missing the point again (granted, this was clouded by the aggregate figure of 57 reboots, which didn't all apply to the OP). It's not about public notices, or frequency, or which hosts do or don't do individual tweets. He experienced a number of reboots (2 or 3), and there was both no pre-notice (granted, bad things happen) and no direct (i.e. email) post-mortem follow-up with the affected clients, e.g. "Hey there, valued client. A node you're on, VZ1234, suffered a kernel panic and unfortunately we had to reboot it at 2:13am PST. Let me assure you that HostZYX hates reboots, and we'll be doing everything we can to determine the cause (hardware verification, core dump analysis, etc.). We'll follow up within 24hrs with a full RFO via email", and then, most importantly, the full RFO.

    I know if you judge yourself and your practices against fellow low-end hosts, things such as reboots with sub-minute downtimes are the norm and completely accepted (heck, someone even said they run yum install every day and then reboot, yikes). However, given the stellar reputation you guys appear to have, wouldn't it be more in your interest to compete against DO/Linode (again, this has no bearing on public notices; just sort out the private follow-ups)?
    MattF - Since the start..

  21. #21
    Quote Originally Posted by Linus_C View Post

    I've requested the thread to be closed, as it doesn't seem productive to continue.

    Too soon, IMO. Let's give the host time to respond.
    Sneaky Little Hobbitses

  22. #22
    Quote Originally Posted by CD Burnt View Post
    too soon, IMO. let's give the host time to respond.
    Agree.
    Would like to follow this thread a little longer, since the OP's constructive and friendly criticism has not been answered yet.

    Quote Originally Posted by MattF View Post
    I think you're missing the point again (granted, this was clouded by the aggregate figure of 57 reboots, which didn't all apply to the OP). It's not about public notices, or frequency, or which hosts do or don't do individual tweets. He experienced a number of reboots (2 or 3), and there was both no pre-notice (granted, bad things happen) and no direct (i.e. email) post-mortem follow-up with the affected clients, e.g. "Hey there, valued client. A node you're on, VZ1234, suffered a kernel panic and unfortunately we had to reboot it at 2:13am PST. Let me assure you that HostZYX hates reboots, and we'll be doing everything we can to determine the cause (hardware verification, core dump analysis, etc.). We'll follow up within 24hrs with a full RFO via email", and then, most importantly, the full RFO.

    I know if you judge yourself and your practices against fellow low-end hosts, things such as reboots with sub-minute downtimes are the norm and completely accepted (heck, someone even said they run yum install every day and then reboot, yikes). However, given the stellar reputation you guys appear to have, wouldn't it be more in your interest to compete against DO/Linode (again, this has no bearing on public notices; just sort out the private follow-ups)?
    Agree.
    Seems like the OP and Nick are stressing two completely different points.

  23. #23
    I have 2 VPSs with RamNode (I run nameservers on them), and since I started monitoring there has been about 10 minutes of downtime a month. I can only assume it's associated with reboots, since that sounds like about the right timeframe.

    Been pretty solid though. Can't say I have any complaints. 57 reboots amongst all the servers does seem like a heck of a lot though! That's a hell of a lot of kernel panics.
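    (10 minutes a month works out to roughly 99.98% uptime: 10 out of about 43,200 minutes is 0.023% downtime.) The monitoring itself is nothing fancy; a minimal sketch of the kind of probe I mean, in Python, where the host and log path are hypothetical:

        import subprocess
        import time
        from datetime import datetime

        HOST = "ns1.example.com"    # hypothetical nameserver to watch
        LOG = "/var/log/probe.log"  # hypothetical log path

        while True:
            # One ping with a 2-second timeout; a nonzero return code means failure.
            ok = subprocess.run(["ping", "-c", "1", "-W", "2", HOST],
                                stdout=subprocess.DEVNULL).returncode == 0
            if not ok:
                with open(LOG, "a") as log:
                    log.write(datetime.now().isoformat() + " DOWN\n")
            time.sleep(60)  # roughly one sample per minute

    Summing the runs of consecutive DOWN entries gives the rough downtime-per-month figure quoted above.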

  24. #24
    Quote Originally Posted by wswd View Post
    57 reboots amongst all the servers does seem like a heck of a lot though! That's a hell of a lot of kernel panics.
    Pretty much exactly what I was trying to get across. As far as I know, they haven't stated why the panics are occurring, despite their frequency. Their only medium for communicating issues is Twitter, unless it's pre-announced maintenance. Having to submit a ticket to ask why panics occur shouldn't be necessary.

    This was really all I was trying to get out of the previous exchange. Instead we've descended into a semantics debate over which provider communicates what, while the actual problem goes unaddressed.

    Modifying what you tell the public isn't going to solve the issue, nor would it have changed the review; I simply wouldn't have had knowledge of the 57 reboots. My personal experience remains the same. I'm trying hard not to sound like I'm attacking them. I believe constructive criticism should be valued and appreciated; I would want to hear these things if I were in RamNode's shoes.



