  1. #1

    Virtualization -- 48GB @ 1333MHz or 72GB @ 800MHz

    So when buying Nehalem servers, memory speed depends on how many RDIMMs you populate: with 6 RDIMMs you can run at 1333MHz, with 12 RDIMMs you drop to 1066MHz, and with 18 RDIMMs you drop to 800MHz.
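    To make the trade-off concrete, here is a rough Python sketch of that population rule and the theoretical peak bandwidth of each layout. It assumes a dual-socket board (2 sockets x 3 channels = 6 channels), the usual 1/2/3 DIMMs-per-channel -> 1333/1066/800 rule, and 8 bytes per channel per transfer; check your vendor's memory configuration guide, since exact limits vary by DIMM type and rank.

    Code:
    # Nehalem memory population sketch -- illustrative numbers only.
    CHANNELS = 6  # assumption: dual-socket board, 3 channels per socket
    SPEED_BY_DIMMS_PER_CHANNEL = {1: 1333, 2: 1066, 3: 800}  # MT/s

    def config(total_dimms, dimm_gb):
        """Capacity, speed, and theoretical peak bandwidth for a layout."""
        speed = SPEED_BY_DIMMS_PER_CHANNEL[total_dimms // CHANNELS]
        capacity_gb = total_dimms * dimm_gb
        peak_gb_s = CHANNELS * 8 * speed / 1000  # 8 bytes/channel/transfer
        return capacity_gb, speed, peak_gb_s

    for dimms, size in ((6, 8), (12, 4), (18, 4)):
        cap, speed, bw = config(dimms, size)
        print(f"{dimms} x {size}GB = {cap}GB @ {speed}MHz, ~{bw:.0f} GB/s peak")

    With those assumptions, the 72GB @ 800MHz layout has roughly 40% less peak bandwidth than 48GB @ 1333MHz (~38 vs ~64 GB/s), which is where the 40% figure discussed below comes from.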

    So what do you do for virtualization? The reality is that most of the time 48GB is going to be enough (that would run 12 4GB servers or 24 2GB servers), and I think at 64GB or more the CPU starts to become the limiting factor. But the price of 48GB @ 1333MHz is slightly more than 72GB @ 800MHz. So what are you guys doing out there? Do you go for the performance, or do you go with the overkill of memory (which might sometimes come in handy, but you probably won't use it all -- more likely you'll just give your VMs extra memory for the fun of it)? The problem is the 1333MHz vs 800MHz performance.
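    As a quick sanity check on those VM counts, a rough sketch that ignores hypervisor overhead and any memory overcommit (both of which change the real numbers):

    Code:
    # VMs per host by simple division -- no overcommit, no overhead.
    for host_gb in (48, 72):
        for vm_gb in (4, 2):
            print(f"{host_gb}GB host: {host_gb // vm_gb} x {vm_gb}GB VMs")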

    Our current environment is a bunch of 16GB servers (50xx chips), and the limitation on virtualizing more per host is the memory. So I know 48GB is going to be very much welcomed, and I think we could use more. 64GB would be ideal, but we would be happy with 48GB @ 1333MHz. Still, 72GB sounds interesting too; there might be some scenarios where we could use that 72GB, although I think the bottleneck would shift from memory to CPU.

    Do you think that if we buy 72GB of 800MHz memory we will ever look back and think, "wish we had bought the 48GB of 1333MHz RDIMMs and not these 800MHz ones"?

  2. #2
    Personally I use consumer-grade hardware, except for RAID cards where needed. After graphing data on a few variables (price, throughput...), I came to the conclusion that high-end hardware is overpriced and largely unnecessary, due to a few key issues: redundancy, migration, and so on. It's simply much more cost-effective to use well-tested consumer-grade hardware, and much easier to migrate if the box in question breaks down.

  3. #3
    I appreciate the response, but a key metric is performance. I normally wouldn't care about 1333MHz vs 800MHz on a desktop or a single server, but if you virtualize 15 or 20 servers and one or two of them is a big database, is 1333MHz vs 800MHz memory going to be noticeable?

    Mathematically it says it would: 800MHz is a 40% drop in memory transfer rate compared to 1333MHz. But how important is fetching from the RDIMMs in practice? I would suspect most memory accesses are served from the CPU's internal cache, and only every now and then does the CPU go out to the memory banks to grab stuff, but I really don't know.
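    One way to reason about that is a back-of-envelope average memory access time (AMAT) calculation. The sketch below uses assumed, unmeasured numbers: ~4ns for a cache hit, ~60ns for a miss served by DDR3-1333, and a worst-case +40% for DDR3-800 (in practice the latency gap in nanoseconds is usually much smaller than the clock ratio suggests, since slower DIMMs often run tighter timings).

    Code:
    # Illustrative AMAT: hit_rate * hit_ns + (1 - hit_rate) * miss_ns.
    def amat(hit_rate, hit_ns=4.0, miss_ns=60.0):
        return hit_rate * hit_ns + (1.0 - hit_rate) * miss_ns

    for hit_rate in (0.90, 0.95, 0.99):
        fast = amat(hit_rate, miss_ns=60.0)  # DDR3-1333 (assumed)
        slow = amat(hit_rate, miss_ns=84.0)  # DDR3-800, +40% worst case
        print(f"hit rate {hit_rate:.0%}: {fast:.1f}ns vs {slow:.1f}ns "
              f"({(slow / fast - 1) * 100:.0f}% slower overall)")

    Under these assumptions, a 99% cache hit rate shrinks the worst-case 40% penalty to roughly 5% overall, while a 90% hit rate leaves about 25%. That suggests cache-friendly workloads may barely notice the slower DIMMs, whereas a big database that blows through the cache would feel much more of it.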

    I wondered if anyone has actually deployed 18 RDIMMs and noticed a performance impact. The key here is that today we would be happy with 48GB, but if the performance impact is negligible we would do 72GB for the future. If we will notice the difference, then 48GB is fine.

    There is little input out there other than some metrics and stats saying 1333MHz is superior to 800MHz, but I am wondering about real-world applications: if the CPU is at 80-90%, does memory speed really even matter?
