  1. #1

    HW vs SW RAID - ease of management

    I've read that many inexperienced admins have trouble rebuilding a RAID array when one drive fails. Is hardware raid any better than software raid in that respect? And would there be any difference in site usability between the two when a drive fails (continuing production on one drive only, possibly for an extended time while the RAID issues remain unresolved)?

  2. #2
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,290
    It really depends on the raid card. Some raid cards' management utilities are complex and difficult for inexperienced admins to use... yet inexperienced admins also often butcher mdadm raid when replacing drives or rebuilding.
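    For what it's worth, the basic mdadm drive-replacement workflow is only a handful of commands. A rough sketch (md0, sda and sdb are just placeholders for your own array and disks):
    Code:
    # Mark the failed disk as faulty and pull it from the array
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1

    # After physically swapping the disk, copy the partition layout
    # from a surviving member (MBR disks; GPT disks need sgdisk instead)
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # Add the new partition back and watch the rebuild
    mdadm --manage /dev/md0 --add /dev/sdb1
    cat /proc/mdstat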
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  3. #3
    Join Date
    May 2003
    Location
    California, USA, Earth
    Posts
    1,049
    Quote Originally Posted by Steven View Post
    It really depends on the raid card. Some raid cards' management utilities are complex and difficult for inexperienced admins to use... yet inexperienced admins also often butcher mdadm raid when replacing drives or rebuilding.
    I agree completely, but for the inexperienced I'd still suggest hardware over software. We primarily use Dell PERC controllers now and have abandoned software raid and mdadm entirely. I never had a positive experience with a drive failure under software raid.

    On the other hand, I have never had an issue replacing failed drives with hardware raid. I would recommend spending the extra cash and going with hardware. Make sure the card has a battery backup unit (BBU) in case of power failure.

    In terms of operating with failed drives, I think it depends more on the RAID level than on whether it's hardware or software, though I would venture to say hardware is better since the CPU isn't in charge of it. RAID 5, for example, will not perform well with a failed drive; the performance loss with a failed drive in RAID 1 or 10 is far less noticeable.
    Blesta - Professional Billing Software
    Innovation that benefits the user experience
    Trial - Demo | 866.478.7567 | Twitter @blesta

  4. #4
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,290
    The one thing I like about software raid over hardware raid is that it's extremely forgiving.
    You can wipe the partition table on the drives and the mdadm metadata and still rebuild the raid and recover it. With hardware raid that's not always so easy.
    There have been instances where datacenters butchered an mdadm raid by installing LVM over it (thinking it already had LVM) and we were still able to recover it, or multiple drive failures where we dd_rescue'd the disks, forced the raid to assemble and recovered the data.
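    To give an idea of what that kind of recovery looks like - just a sketch, assuming the members (or dd_rescue'd copies of them) show up as /dev/sd[a-d]1 and the mount point is a placeholder:
    Code:
    # See what md metadata survives on each member
    mdadm --examine /dev/sd[a-d]1

    # Try a normal assemble first; force it only if members are
    # marked out of sync (e.g. after a multi-drive failure)
    mdadm --assemble /dev/md0 /dev/sd[a-d]1
    mdadm --assemble --force /dev/md0 /dev/sd[a-d]1

    # Mount read-only and verify the data before writing anything
    mount -o ro /dev/md0 /mnt/recovery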

    I prefer mdadm over hardware simply because it's more flexible. I like using very large chunk/stripe sizes on mdadm that can't be done on many raid cards. Almost all of our file servers are 4-drive mdadm raid 10, with some pushing over 3gbit with nearly zero iowait.
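    Creating that kind of array is roughly the following (device names are placeholders, and the 2048K chunk is just one choice, not a universal recommendation):
    Code:
    # 4-drive near-layout RAID 10 with a large chunk size (in KiB)
    mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=2048 \
          --raid-devices=4 /dev/sd[abcd]1

    # Raise the readahead on the md device (value is in 512-byte sectors)
    blockdev --setra 8192 /dev/md0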

    For inexperienced admins, both styles of raid can be a challenge. MegaCLI vs arcconf is a huge jump, and the same goes for tw_cli - they are all so different. Use whatever you are most comfortable with.
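    Just to show how different the tooling is, "show me the array status" is spelled completely differently on each stack (controller and adapter numbers below are only examples):
    Code:
    MegaCli64 -LDInfo -Lall -aAll     # LSI MegaRAID
    arcconf getconfig 1 ld            # Adaptec
    tw_cli /c0 show                   # 3ware
    cat /proc/mdstat                  # Linux mdadm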
    Last edited by Steven; 10-11-2013 at 12:38 PM.
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  5. #5
    Now we're more educated, but the choice hasn't become any easier!

    So problems are likely to arise when a disk fails. Let's try this approach:

    - in a simple twin-disk mdadm RAID 1 configuration on CentOS / Debian, what are the chances that the site will continue to be usable when one of the disks dies? If not, are the chances any better with hardware RAID (Hetzner, yeah I know the drill...)?

  6. #6
    Join Date
    Apr 2007
    Location
    US, UK, Europe, ME
    Posts
    258
    Quote Originally Posted by Eliphaz View Post
    Now we're more educated, but the choice hasn't become any easier!

    So problems are likely to arise when a disk fails. Let's try this approach:

    - in a simple twin-disk mdadm RAID 1 configuration on CentOS / Debian, what are the chances that the site will continue to be usable when one of the disks dies? If not, are the chances any better with hardware RAID (Hetzner, yeah I know the drill...)?
    Yes, the site will continue to be usable (if you configure the RAID 1 properly and mirror all partitions). You can also monitor a software raid using mdadm's --monitor mode.
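    For example, something along these lines will email you when an array degrades (the address is a placeholder, and most distros ship an mdmonitor service that does the same thing):
    Code:
    # Daemonized monitoring of every array listed in /etc/mdadm.conf
    mdadm --monitor --scan --daemonise --delay=300 --mail=admin@example.com

    # Or just set it in the config and let the mdmonitor service handle it:
    #   MAILADDR admin@example.com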

    Flexibility & ease of use = Software RAID
    High Performance & Hot swapping/Hot spare = Hardware RAID

  7. #7
    Thanks, that solves it!

  8. #8
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,290
    Quote Originally Posted by Sys Admin View Post
    Yes, the site will continue to be usable (if you configure the RAID 1 properly and mirror all partitions). You can also monitor a software raid using mdadm's --monitor mode.

    Flexibility & ease of use = Software RAID
    High Performance & Hot swapping/Hot spare = Hardware RAID
    I disagree. You can still hotswap with software raid and the performance can exceed hardware raid in some use cases.
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  9. #9
    Join Date
    Jun 2001
    Location
    Princeton
    Posts
    836
    When buying hardware raid, make sure you buy one with a BBU. Without it, software raid will be faster (software raid is faster than hardware raid simply because the CPU in a server is faster than the CPU on a raid card).
    Yet with a BBU you can enable the write cache, which can give a huge boost to performance. Write cache + software raid will corrupt data on power outages / hard reboots.
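    As an aside, on LSI controllers the cache policy is typically checked and set roughly like this (the adapter/LD selectors are examples, and other vendors have their own tools):
    Code:
    # Show the current cache policy on all logical drives
    MegaCli64 -LDGetProp -Cache -LAll -aAll

    # Use write-back while the BBU is healthy, and drop back to
    # write-through automatically if the battery fails
    MegaCli64 -LDSetProp WB -LAll -aAll
    MegaCli64 -LDSetProp NoCachedBadBBU -LAll -aAll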
    Igor Seletskiy
    CEO @ Cloud Linux Inc
    http://www.cloudlinux.com
    CloudLinux -- The OS that can make your Shared Hosting stable

  10. #10
    Join Date
    Apr 2007
    Location
    US, UK, Europe, ME
    Posts
    258
    Quote Originally Posted by Steven View Post
    I disagree. You can still hotswap with software raid and the performance can exceed hardware raid in some use cases.
    Well, personally I prefer software raid. However, a hardware raid with a BBU offers plenty of caching (write cache, WriteBack, ReadAhead, etc.), which has a great impact on performance.

    I've tested this on both soft raid and hard raid using hdparm and dd, and the read/write speed is much faster on a hardware raid.

    However, there are a lot of fake and useless raid cards out there. Just pick a good, reputable brand/model and you should be fine.

    Regarding hot swapping: yes, it can be achieved with raidhotadd/raidhotremove, but it depends on the drive type and it can't really be compared to hardware raid's functionality in that regard.

    Quote Originally Posted by iseletsk View Post
    When buying hardware raid, make sure you buy one with a BBU. Without it, software raid will be faster (software raid is faster than hardware raid simply because the CPU in a server is faster than the CPU on a raid card).
    Yet with a BBU you can enable the write cache, which can give a huge boost to performance. Write cache + software raid will corrupt data on power outages / hard reboots.
    +1

  11. #11
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,290
    Quote Originally Posted by Sys Admin View Post
    Well, personally I prefer software raid. However, a hardware raid with a BBU offers plenty of caching (write cache, WriteBack, ReadAhead, etc.), which has a great impact on performance.

    I've tested this on both soft raid and hard raid using hdparm and dd, and the read/write speed is much faster on a hardware raid.

    However, there are a lot of fake and useless raid cards out there. Just pick a good, reputable brand/model and you should be fine.

    Regarding hot swapping: yes, it can be achieved with raidhotadd/raidhotremove, but it depends on the drive type and it can't really be compared to hardware raid's functionality in that regard.



    +1
    You can't base your decision on simple sequential testing. That is not real-world load, and if you base your choice on it you are not going to get the best performance.
    Furthermore, unless you set up mdadm correctly and ignore the out-of-the-box settings, you can't expect the best performance.
    I have a large number of customers who used to use hardware raid for their file servers... various raid levels, numbers of disks, stripe sizes, etc., all with crazy iowait and performance issues. We moved them to mdadm software raid 10 with a higher-than-normal chunk size and readahead, and they purr along pushing more traffic than they ever did. There are so many different variables here that it's pretty foolish to say hardware raid is always the best option for performance.
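    If you want a benchmark that looks more like real-world load than hdparm/dd, a mixed random I/O run with fio is a reasonable starting point - the parameters below are only an example and should be tuned to the actual workload:
    Code:
    # 70/30 random read/write, 4K blocks, direct I/O, 60 second run
    # against a scratch directory (never point it at live data)
    fio --name=randrw --directory=/mnt/scratch --size=4G \
        --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio \
        --iodepth=32 --numjobs=4 --direct=1 --runtime=60 \
        --time_based --group_reporting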

    Regarding hot swapping: yes, it can be achieved with raidhotadd/raidhotremove, but it depends on the drive type and it can't really be compared to hardware raid's functionality in that regard.
    Huh?
    Last edited by Steven; 10-12-2013 at 07:17 PM.
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  12. #12
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,290
    Example:
    Here is an E3-1230v1 with 4 x 2TB drives in mdadm raid 10 and a 3TB active dataset.

    md127 : active raid10 sdb4[1] sda4[0] sdc4[2] sdd4[3]
    3799654400 blocks super 1.2 2048K chunks 2 near-copies [4/4] [UUUU]
    [[email protected] ~]# blockdev --getra /dev/sd[a-d]
    1024
    1024
    1024
    1024

    [[email protected] ~]#
    Code:
    12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
    12:10:01 AM     all     13.81      0.00      0.97      1.06      0.00     84.16
    12:20:01 AM     all     13.78      0.00      0.84      0.80      0.00     84.58
    12:30:01 AM     all     14.00      0.00      0.96      0.92      0.00     84.13
    12:40:01 AM     all     13.89      0.00      0.95      0.82      0.00     84.34
    12:50:02 AM     all     13.95      0.00      0.85      0.70      0.00     84.50
    01:00:01 AM     all     13.50      0.00      0.97      0.88      0.00     84.66
    01:10:01 AM     all     13.39      0.00      0.87      1.04      0.00     84.70
    01:20:01 AM     all     13.72      0.00      0.76      0.74      0.00     84.77
    01:30:01 AM     all     13.79      0.00      0.87      0.72      0.00     84.62
    01:40:01 AM     all     13.84      0.00      0.89      0.72      0.00     84.55
    01:50:01 AM     all     14.44      0.00      1.05      0.83      0.00     83.68
    02:00:01 AM     all     14.99      0.00      1.39      0.93      0.00     82.69
    02:10:01 AM     all     14.30      0.00      1.20      0.86      0.00     83.65
    02:20:01 AM     all     15.57      0.00      1.25      0.72      0.00     82.46
    02:30:02 AM     all     14.04      0.00      0.89      0.75      0.00     84.32
    02:40:01 AM     all     14.06      0.00      0.99      0.65      0.00     84.30
    02:50:01 AM     all     13.83      0.00      1.08      0.75      0.00     84.35
    03:00:01 AM     all     13.65      0.00      1.06      0.74      0.00     84.54
    03:10:02 AM     all     14.02      0.00      1.16      0.96      0.00     83.86
    03:20:02 AM     all     14.25      0.00      0.99      0.92      0.00     83.85
    03:30:01 AM     all     14.12      0.00      0.99      0.83      0.00     84.06
    03:40:01 AM     all     13.81      0.00      1.05      0.87      0.00     84.27
    03:50:01 AM     all     13.83      0.14      1.03      0.91      0.00     84.09
    04:00:01 AM     all     13.86      0.00      0.78      0.76      0.00     84.60
    04:10:01 AM     all     13.73      0.00      0.96      0.81      0.00     84.51
    04:20:01 AM     all     13.56      0.00      0.89      0.68      0.00     84.87
    04:30:01 AM     all     13.27      0.00      1.04      0.72      0.00     84.97
    04:40:01 AM     all     13.33      0.00      0.91      0.74      0.00     85.02
    04:50:01 AM     all     13.38      0.00      0.93      0.82      0.00     84.87
    05:00:01 AM     all     13.28      0.00      0.98      0.84      0.00     84.90
    05:10:01 AM     all     13.58      0.00      0.85      0.76      0.00     84.81
    05:20:01 AM     all     13.58      0.00      1.06      0.81      0.00     84.55
    05:30:01 AM     all     13.76      0.00      0.80      0.74      0.00     84.70
    05:40:01 AM     all     13.74      0.00      0.77      0.67      0.00     84.82
    05:50:01 AM     all     13.45      0.00      1.00      0.70      0.00     84.86
    06:00:01 AM     all     13.79      0.00      0.81      0.58      0.00     84.82
    06:10:01 AM     all     13.82      0.00      0.81      0.67      0.00     84.70
    06:20:01 AM     all     13.42      0.00      0.84      0.77      0.00     84.97
    06:30:01 AM     all     13.34      0.00      0.94      1.05      0.00     84.67
    06:40:01 AM     all     13.65      0.00      0.91      1.24      0.00     84.20
    06:50:01 AM     all     13.44      0.00      1.04      1.35      0.00     84.17
    07:00:01 AM     all     13.88      0.00      1.15      1.41      0.00     83.55
    07:10:01 AM     all     13.41      0.00      1.37      1.42      0.00     83.81
    07:20:01 AM     all     13.92      0.00      1.15      1.40      0.00     83.53
    07:30:01 AM     all     13.88      0.00      1.19      1.35      0.00     83.58
    07:40:01 AM     all     13.97      0.00      1.27      1.39      0.00     83.37
    07:50:02 AM     all     14.17      0.00      1.31      1.50      0.00     83.02
    08:00:02 AM     all     14.61      0.00      1.32      1.30      0.00     82.78
    08:10:01 AM     all     14.41      0.00      1.28      1.05      0.00     83.25
    08:20:01 AM     all     14.35      0.00      1.54      1.22      0.00     82.89
    08:30:01 AM     all     14.47      0.00      1.29      1.21      0.00     83.03
    08:40:01 AM     all     14.25      0.00      1.28      1.32      0.00     83.14
    08:50:02 AM     all     14.04      0.00      1.64      1.72      0.00     82.61
    09:00:01 AM     all     13.93      0.00      1.79      1.74      0.00     82.54
    09:10:01 AM     all     14.85      0.00      1.59      1.68      0.00     81.89
    09:20:01 AM     all     14.54      0.00      1.72      1.82      0.00     81.92
    09:30:01 AM     all     14.41      0.00      1.59      1.53      0.00     82.47
    09:40:01 AM     all     14.13      0.00      1.42      1.71      0.00     82.74
    09:50:01 AM     all     13.85      0.00      1.61      1.97      0.00     82.56
    10:00:01 AM     all     14.09      0.00      1.57      2.06      0.00     82.28
    10:10:02 AM     all     14.07      0.00      1.36      1.64      0.00     82.93
    10:20:01 AM     all     14.32      0.00      1.51      2.00      0.00     82.16
    
    10:20:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
    10:30:01 AM     all     13.95      0.00      1.47      2.20      0.00     82.37
    10:40:02 AM     all     13.67      0.00      1.18      2.03      0.00     83.12
    10:50:01 AM     all     13.74      0.00      1.25      2.19      0.00     82.82
    11:00:02 AM     all     13.65      0.00      1.34      2.22      0.00     82.79
    11:10:01 AM     all     13.44      0.00      1.44      2.05      0.00     83.07
    11:20:01 AM     all     13.60      0.00      1.21      1.61      0.00     83.58
    11:30:01 AM     all     13.86      0.00      1.26      2.11      0.00     82.77
    11:40:01 AM     all     13.62      0.00      1.25      2.13      0.00     82.99
    11:50:02 AM     all     13.57      0.00      1.56      2.66      0.00     82.20
    12:00:01 PM     all     13.64      0.00      1.61      2.63      0.00     82.12
    12:10:01 PM     all     13.72      0.00      1.62      3.79      0.00     80.87
    12:20:02 PM     all     13.58      0.00      1.54      3.29      0.00     81.59
    12:30:01 PM     all     13.66      0.00      1.48      3.50      0.00     81.36
    12:40:02 PM     all     13.76      0.00      1.62      4.21      0.00     80.41
    12:50:01 PM     all     13.27      0.00      1.70      4.42      0.00     80.62
    01:00:01 PM     all     13.73      0.00      1.51      4.66      0.00     80.10
    01:10:01 PM     all     13.85      0.00      1.66      4.57      0.00     79.91
    01:20:01 PM     all     13.71      0.00      1.51      4.41      0.00     80.38
    01:30:02 PM     all     13.71      0.00      1.66      5.29      0.00     79.34
    01:40:01 PM     all     13.65      0.00      1.49      4.80      0.00     80.06
    01:50:01 PM     all     13.71      0.00      1.55      4.43      0.00     80.30
    02:00:02 PM     all     13.62      0.00      1.62      5.14      0.00     79.62
    02:10:02 PM     all     13.68      0.00      1.73      5.34      0.00     79.25
    02:20:01 PM     all     13.56      0.00      1.92      4.71      0.00     79.80
    02:30:01 PM     all     13.81      0.00      1.47      3.85      0.00     80.87
    02:40:01 PM     all     13.31      0.00      1.64      3.54      0.00     81.51
    02:50:01 PM     all     13.72      0.00      1.39      3.63      0.00     81.26
    03:00:01 PM     all     13.78      0.00      1.63      4.60      0.00     79.99
    03:10:01 PM     all     13.70      0.00      1.23      2.90      0.00     82.17
    03:20:01 PM     all     13.37      0.00      0.83      1.28      0.00     84.52
    03:30:01 PM     all     13.55      0.00      0.98      1.51      0.00     83.95
    03:40:01 PM     all     13.45      0.00      0.93      1.26      0.00     84.36
    03:50:02 PM     all     13.47      0.00      0.92      1.23      0.00     84.38
    04:00:01 PM     all     13.31      0.00      1.29      1.50      0.00     83.90
    04:10:02 PM     all     13.55      0.00      1.12      1.98      0.00     83.35
    04:20:01 PM     all     13.64      0.00      1.30      2.95      0.00     82.10
    04:30:02 PM     all     13.70      0.00      1.44      3.50      0.00     81.36
    04:40:02 PM     all     13.48      0.00      1.65      3.38      0.00     81.49
    04:50:02 PM     all     13.65      0.00      1.54      3.19      0.00     81.62
    05:00:01 PM     all     13.32      0.00      1.61      3.03      0.00     82.04
    05:10:01 PM     all     13.53      0.00      1.38      2.80      0.00     82.28
    05:20:01 PM     all     13.67      0.00      1.34      2.54      0.00     82.45
    05:30:01 PM     all     13.51      0.00      1.38      2.04      0.00     83.08
    05:40:01 PM     all     13.44      0.00      1.26      1.91      0.00     83.38
    05:50:01 PM     all     13.75      0.00      1.31      2.93      0.00     82.01
    06:00:01 PM     all     13.75      0.00      1.23      1.95      0.00     83.07
    Average:        all     13.80      0.00      1.24      1.92      0.00     83.03
    It pushes on average 2gbit and the load remains under 1.50.

    Under load (1.75gbit):

    [[email protected] ~]# hdparm -Tt /dev/md127

    /dev/md127:
    Timing cached reads: 25370 MB in 2.00 seconds = 12702.58 MB/sec
    Timing buffered disk reads: 436 MB in 3.07 seconds = 141.96 MB/sec
    [[email protected] ~]#
    This same server with an LSI raid card + write-back cache had an iowait of 15% and a load of 20, and it was only pushing 1gbit.
    Last edited by Steven; 10-12-2013 at 07:16 PM.
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  13. #13
    Very nice. RAID 1 would probably benefit a lot from the write cache, but considering budget, limited expertise, usability and all the other angles, I see software RAID as by far the most viable route. With 5x to 10x the budget, HW RAID 10 with a BBU might perhaps become feasible.

  14. #14
    Join Date
    Mar 2005
    Location
    Ten1/0/2
    Posts
    2,509
    Quote Originally Posted by iseletsk View Post
    Yet with a BBU you can enable the write cache, which can give a huge boost to performance. Write cache + software raid will corrupt data on power outages / hard reboots.
    So, have you personally compared a HW raid card with BBU and cache against Flashcache or bcache?

    The reality is that under Linux it almost always makes sense to use soft raid for raid1 or raid10; other usage is certainly a case-by-case decision. Software raid under Windows is nowhere near as good, and VMware only supports a list of HW raid cards, so soft raid is not an option there. So, use the appropriate tool for the job.

    Why I personally choose mdadm over HW raid is the flexibility - you are not tied to specific hardware (the raid card) and can still recover. Part of that flexibility is that you can do things under soft raid that are very difficult on HW raid. For example, I generally use the first 250M of all the physical disks in the server (a minimum of 3 disks) as a RAID 1 boot array, then use the rest of each disk in whatever configuration I need for the task at hand - it might be RAID 1, RAID 10 or just a single disk.
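    A rough sketch of that layout, assuming three disks whose first partitions (sda1/sdb1/sdc1) are the small boot slices and whose second partitions hold everything else:
    Code:
    # Small RAID 1 across the first partition of every disk for /boot;
    # 1.0 metadata sits at the end of the partition, which bootloaders tolerate
    mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=3 /dev/sd[abc]1
    mkfs.ext4 /dev/md0

    # The rest of each disk goes into whatever suits the task at hand,
    # e.g. a RAID 10 here (md raid10 works with an odd drive count)
    mdadm --create /dev/md1 --level=10 --raid-devices=3 /dev/sd[abc]2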

    There is virtually no system overhead in going soft raid, and as has been highlighted, an E3-1230 is far more powerful than any raid card processor.

    Yes, it does take a bit of effort to understand how it all works - and the same can be said for a HW raid setup. Either way, before pushing either into production you should have a good understanding of how to recover from every possible failure scenario. That means having a dev system on the bench where you can do things like pull a random disk from a running array, drop the power to the server, reboot and recover it.
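    A simple version of that drill on a bench box might look like this (md0 and sdb1 are placeholders; the point is to rehearse it before it happens in production):
    Code:
    # Add a write-intent bitmap up front so an interrupted member can be
    # re-added with only the changed blocks resynced
    mdadm --grow /dev/md0 --bitmap=internal

    # After pulling the disk / dropping power and rebooting, check the state
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # If the pulled disk is actually healthy, try re-adding it
    # instead of forcing a full rebuild
    mdadm /dev/md0 --re-add /dev/sdb1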
    CPanel Shared and Reseller Hosting, OpenVZ VPS Hosting. West Coast (LA) Servers and Nodes
    Running Linux since 1.0.8 Kernel!
    Providing Internet Services since 1995 and Hosting Since 2004

  15. #15
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,290
    Quote Originally Posted by RRWH View Post
    Why I personally choose mdadm over HW raid is the flexibility - you are not tied to specific hardware (the raid card) and can still recover. Part of that flexibility is that you can do things under soft raid that are very difficult on HW raid. For example, I generally use the first 250M of all the physical disks in the server (a minimum of 3 disks) as a RAID 1 boot array, then use the rest of each disk in whatever configuration I need for the task at hand - it might be RAID 1, RAID 10 or just a single disk.
    Good point here. Say the raid card fails and the datacenter does not have one on hand... you are screwed. With mdadm raid you just drop the drives onto a new motherboard.
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  16. #16
    Join Date
    Jan 2003
    Posts
    78
    Let's not forget SSDs... If you want to run SSDs in a raid, you are most likely better off using soft raid - unless your hardware raid card can handle the performance.

  17. #17
    Join Date
    Jun 2001
    Location
    Princeton
    Posts
    836
    A year ago I did a slide comparing software and hardware raid:
    http://www.cloudlinux.com/company/sl...onf2012_CL.pdf
    slide #4
    But it basically sums up what was discussed in the thread anyway.
    Igor Seletskiy
    CEO @ Cloud Linux Inc
    http://www.cloudlinux.com
    CloudLinux -- The OS that can make your Shared Hosting stable
