  1. #1

    Getting Best Speed For Webserver

    Hi,

    I understand that we have the following options to accelerate the delivery of our pages to the user.

    1. Content Delivery Networks (Akamai etc..)
    2. Network Load Balancing
    4. SSL Acceleration
    5. HTTP compression (gzip or deflate)
    6. Web caching appliances

    Now, besides #1, which seems to be very expensive: can we combine any number of these solutions? Would it make any difference? For example, can we combine 4, 5 and 6, or is that totally useless?
    Is one better than another?

    Our budget is currently not too high, maybe $250 a month for the accelerator solution. Knowing that HTTP compression is free, it might be all we need at this stage. Not sure.

    We need it for a dating website which generates about 30,000 daily uniques. A lot of complex SQL queries (which will probably need to be optimized), a JavaScript call, and small images.

    Here is our server config

    Dual Intel Xeon D 2.8 GHz, 800 MHz FSB with HyperThreading
    4GB Dual Channel DDR400 ECC RAM
    120 GB hard drive IDE
    1000 GB Bandwidth included (Non-Cogent)
    100Mbit port

    Thank you for your input


    J

  2. #2
    Join Date
    Oct 2002
    Posts
    705
    What are you trying to do? Make pages load faster for the user? If that's the case, any of those will make a difference. The thing that will kill you the most is if your server takes time to generate the page. If the server can send the page out in 0.02 seconds then you usually don't have anything to worry about. You can also use AJAX to have the user load a 30 KB or so JavaScript interface and then make AJAX calls to the server through that. Those AJAX calls can be anywhere from 100 to 500 bytes. This will give you extremely fast load times, similar to Gmail, and reduce your overall bandwidth usage by not sending HTML back and forth to the user.
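    To put rough numbers on that (a hypothetical sketch - the actual sizes depend entirely on your pages), compare a full HTML response with the equivalent AJAX payload:

```python
import gzip
import json

# A full page reload resends the entire HTML document...
full_page = (
    "<html><head><title>Search results</title></head><body>"
    + "".join(
        f"<div class='profile'><img src='/p/{i}.jpg'>User {i}</div>"
        for i in range(25)
    )
    + "</body></html>"
)

# ...while an AJAX call can return just the new data.
ajax_payload = json.dumps({"page": 2, "ids": list(range(25, 50))})

print("full page bytes:", len(full_page))
print("ajax payload bytes:", len(ajax_payload))
print("full page gzipped:", len(gzip.compress(full_page.encode())))
```

    Even before any compression, the data-only payload is several times smaller than the page it replaces.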

    If you could give us a bit more information we could help you further.
    ServerMatingProject.com
    The World's first server mating experiment
    We give new meaning to I/O intensive and hot swap

  3. #3
    Hi Thank you much

    Yes, we're trying to gain speed from as many angles as possible.

    - Make page load faster to the user
    - Make sure he can use his full connection when loading our page

    Regarding server performance, yes, it is another component we are working on: trying to pinpoint any running process that might slow down page generation, MySQL in particular. It seems most of the CPU usage right now goes to MySQL, so we need to optimize our MySQL queries as well. Any suggestions or reading on this would be helpful.

    We have an IDE HD; we need to upgrade to SCSI, I believe.

    J

  4. #4
    Join Date
    Oct 2002
    Posts
    705
    What is your site coded in? My first suggestion would be to get a profiler. This will show you which functions/queries are the most intensive and take the longest. From there you can see your software bottlenecks. Once your server is sending out the data as fast as possible, it's easier to figure out how to improve the user experience.

  5. #5
    What is a profiler? Is it a script, or a person who analyses the files? ;-)

    The rest makes perfect sense

  6. #6
    Join Date
    Oct 2002
    Posts
    705
    A profiler is a program that, as you execute a page of code, shows you a breakdown of the time taken for each function to run. Xdebug would be one such program available for PHP. Fixing the problems is easy; it's finding the bottlenecks that is hard.
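    Xdebug is PHP-specific, but the idea is the same in any language. As a rough illustration (using Python's built-in cProfile, with made-up stand-in functions), a profiler run looks like this:

```python
import cProfile
import io
import pstats

def slow_query():
    # Stand-in for an expensive, unoptimized SQL call.
    return sum(i * i for i in range(200_000))

def render_page():
    # Stand-in for page generation; nearly all of its time
    # is actually spent inside slow_query().
    return [slow_query() for _ in range(5)]

profiler = cProfile.Profile()
profiler.enable()
render_page()
profiler.disable()

# Print the most expensive calls, sorted by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

    The report immediately points the finger at slow_query() - which is exactly the kind of bottleneck hunting Xdebug does for PHP code.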

  7. #7
    Very well. Now, how does load balancing work? We have a server on the West Coast and would like one on the East Coast; most of our customers are now from there. How does this work?

    I understand the DNS version is the most simplistic. What other options are there? If we update the site and DB on one server, do we have to do it on the other one, or is there some kind of system that synchronizes things?

    I assume putting HTTP compression on each server will not affect the load balancing scheme? Please confirm. And what are the prices for such a thing, to see if it is worth it at this time? Software/hardware?

    Thank you ;-)

  8. #8
    Join Date
    Oct 2002
    Posts
    705
    Yes, you would need to sync the two servers. Although for a website, physical location isn't a huge problem. It would be different if you had some users in Moscow and others in Los Angeles, but if all your users are in the US, load balancing across two servers is a waste. So I would say don't do any load balancing; it won't make any major change in what your users experience.

    However, depending on how cacheable your data is, you can set up a proxy server on the East Coast and store frequently requested pages there using Squid. Full synchronization of two clusters requires several highly qualified administrators and a good bit of money, and based on your post I don't think you have either, so I would suggest putting your resources towards improving the current server cluster.

  9. #9
    Join Date
    Oct 2004
    Location
    Southwest UK
    Posts
    1,175
    Performance analysis of MySQL: buy the book, http://dev.mysql.com/books/hpmysql-excerpts/ch06.html

  10. #10
    Thank you for the suggestions. Anybody here available for consulting to get us squared away? ;-)

    Regarding the second server, our principal reason to get it (more so than load balancing) was to make sure we have 100% uptime. When a server goes down, requests are passed on to the other one through DNS. But we get to the same issue of synchronizing the two servers, with heavy admin work initially and ongoing, correct? Or can it be done straightforwardly?

  11. #11
    Join Date
    Oct 2002
    Posts
    705
    You're still going to have the same issues doing proper synchronization. To get true 100% uptime you're going to need lots of money and knowledgeable admins. Your first focus should be on providing the best user experience possible.

  12. #12
    Join Date
    Feb 2004
    Posts
    633
    Numbers 2-6 can often be done with the same appliance, for example those by Redline Networks (recently acquired by Juniper) and NetScaler (recently acquired by Citrix). I've had experience with both, and they are extremely capable multipurpose devices that can do load balancing, TCP offloading, HTTPS/SSL acceleration, HTTP compression, caching, etc. But these are way beyond your means--they start at around $35k each.

    HTTP compression is a free and useful way to improve your end users' load times, especially as page sizes tend to steadily increase due to additional scripting, DHTML elements, etc. I know you've written off a CDN based on your budget, but there are some very cost-effective CDN services like CacheNetworks and Peer1's RED service; these are much, much cheaper than Akamai.
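    The compression point is easy to verify for yourself. A quick sketch (synthetic, repetitive markup, like most generated HTML; real ratios vary, but are still dramatic):

```python
import gzip
import zlib

# Generated HTML is highly repetitive, so it compresses very well.
html = (
    "<tr><td class='name'>user</td><td class='age'>30</td>"
    "<td class='city'>Los Angeles</td></tr>\n"
) * 200

raw = html.encode()
gzipped = gzip.compress(raw)    # what mod_gzip / mod_deflate send as gzip
deflated = zlib.compress(raw)   # the deflate variant

print("raw:", len(raw), "gzip:", len(gzipped), "deflate:", len(deflated))
```

    Images and video gain nothing from this, since they are already compressed; the win is on the HTML, CSS and script text.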

    I'd have to agree with TheVoice--you've got to crawl before you can walk. I'd suggest setting up a local load balanced solution first before you start considering a global solution at multiple data centers. The best way to maintain a positive user experience is to keep your site up and running as much as possible, and it appears you have several single points of failure right now. One well rounded book on the subject is called "Blueprints for High Availability"; I think it was put out by Wiley Publishing.

  13. #13
    Thank you for these great options. It looks like a CDN service like CacheNetworks is the way to go.

    Now, the static content like text, images and video is synced on their network and delivered efficiently to the end user. A few questions:

    - Is this method compatible with HTTP compression installed on the main server?

    - Only some of the content (the static files) is distributed away, so if our main server or the core DB goes down, the CDN is perfect for fast delivery but doesn't provide us with a backup solution in case of a server crash on our main server.

    Now, what would be the most cost-effective method to alleviate as much of the downtime as possible? I know it can never be perfect. But we could get a new, smaller server at our current facility and use their load balancing software technology, which handles least connections, response time, round robin, weighted round robin, URL-based selection, browser-smart, URL hashing, and HTTP header as load balancing methods: http://servers.aplus.net/loadbalance.html

    Is it necessary to get the same server configuration as our original one? Or can we set up the load balancing (round robin) to simply use the new server as a backup (and take a smaller one) when the load is too high or the main server is down for some reason? (It would just need to be located in a different area or cluster in the facility.) Then the traffic is redirected back to the main server once it's back up or the traffic load and CPU activity are back to normal? It won't do us any good if the entire facility goes down, but is that likely?

    Also, would getting a multi-drive RAID solution be effective?

    In short, we would:

    1. Take care of our server page output with methods like AJAX
    2. Check our MySQL processes and optimize queries (critical)
    3. Set up our content with a service like CacheNetworks
    4. Set up a local load balancing solution in our current facility (securing a new lower-end server)
    5. Install deflate on our main server, if it makes any difference (?)
    6. Upgrade our IDE to SCSI with a RAID controller (?)

    I assume 3 and 4 would not be incompatible (?)

    Let me know. Thank you

  14. #14
    Join Date
    Oct 2004
    Location
    Southwest UK
    Posts
    1,175
    I'd say HTTP compression isn't much of a big deal, but that depends on your content - it does not compress images, just the HTML text. If you're running Apache 2, you've probably already got HTTP compression installed (it's called mod_deflate). On Apache 1.3 you'll have to install mod_gzip.
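    For reference, turning mod_deflate on in Apache 2 is only a couple of lines (a minimal sketch; the module path and MIME types shown are the common defaults, so check them against your own httpd.conf):

```apache
# Load the module (often already enabled in the stock build)
LoadModule deflate_module modules/mod_deflate.so

# Compress text responses only; images are already compressed
AddOutputFilterByType DEFLATE text/html text/plain text/css application/x-javascript
```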

    PHP Caching (Zend or eAccelerator) will have an impact on your delivery times, and reduce CPU load.

    For high availability, you really do need to look at using 2 datacentres. My ISP went down recently due to a power failure in the surrounding area, and I was offline until it came back. So local load balancing is a good suggestion - but only for increasing the scalability of your site, not its availability.

    Please note that DNS will not automatically fail you over immediately like you'll want - DNS has a 'time to live' (TTL) during which the IP address is cached locally. Until that times out, the DNS will not refresh.
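    Concretely, the TTL is set per record in the zone file; a failover-friendly zone keeps it short (illustrative fragment with a hypothetical domain):

```
; 300-second TTL: resolvers must re-ask within 5 minutes,
; so a DNS change takes effect reasonably quickly
www.example.com.    300    IN    A    192.0.2.10
```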

  15. #15
    We have Apache 1.3 and already have Zend Optimizer installed with PHP 4.3. Should we look into Apache 2 and PHP 5 later on down the line?

    For availability, I hear you. That was our original idea with the East Coast server, but I've been advised against it given the ratio of work involved in load balancing versus the benefit and cost.

    If we take a server in another facility, can any of the load balancing techniques our current host has be used in the scheme I suggested? Just use the East Coast one for backup, or when the main one goes down? The only purpose would be availability more than real load balancing, so the admin work would be making sure the files are synced up, but nothing too major once it is set up? Or is that wishful thinking?

  16. #16
    Join Date
    Oct 2004
    Location
    Southwest UK
    Posts
    1,175
    The high-availability thread is in the Technical section of WHT: do a search, eg: http://www.webhostingtalk.com/showth...hreadid=430276
    http://www.webhostingtalk.com/showth...light=failover

    You can improve reliability within 1 datacentre by adding RAID and generally getting a good box. However, it's rare enough for a server to fail that I don't think you need to worry about it too much (ie, your ISP is more likely to lose connectivity for various reasons).

    You can improve availability to roughly a 10 minute outage by implementing simple DNS failover (ie, you get 2 servers in different DCs and use one of the DNS load balancing solutions out there like http://www.zoneedit.com/doc/faq.html#faq47 or http://www.autofailover.com/ ). I think that will be fine for you, instead of 100% availability, and will be relatively simple to implement.
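    The decision logic those DNS failover services implement boils down to something like this sketch (hypothetical function and server names; the real services do the health checking over HTTP/ICMP and push the DNS update for you):

```python
def pick_active(primary_up, secondary_up):
    """Return which server the DNS A record should point at.

    Active-passive: prefer the primary whenever it is healthy,
    and fall back to the secondary only while the primary is down.
    """
    if primary_up:
        return "primary"
    if secondary_up:
        return "secondary"
    return None  # both down; nothing DNS can do for you

print(pick_active(True, True))    # primary stays active
print(pick_active(False, True))   # failover to the secondary
print(pick_active(True, False))   # back to primary once it recovers
```

    The catch, as noted above, is that the switch only propagates as fast as the record's TTL allows.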

    Apache: I prefer Apache 2, as the Apache people have said it's better designed and more scalable (it's also one better ). The PHP people say otherwise, but that's because they prefer the idea of preforking instead of threads for concurrent processing. Apache 2 has some nice stuff, especially module chaining, which you cannot do in 1.3. But all in all, it isn't going to solve your problems.

    I still think you'll have to look at your SQL Queries to improve performance.

  17. #17
    Join Date
    Feb 2004
    Posts
    633
    Numbers 3 and 4 are certainly not mutually exclusive, and I'd suggest that anyone who is concerned enough about performance and uptime to be spending the money on a CDN service is generally going to have a local load balanced setup.

    While global server load balancing sounds great in theory, it is much harder (and more expensive) to set up correctly. The biggest issue with global server load balancing is maintaining data coherency, especially with database replication. We have an e-commerce client that requires high uptime and yet their database load is quite low, so for them, doing GSLB is not that difficult. Now if you have a very active database that's heavy on insert statements, that becomes much more difficult, as you're dealing with the inherent latencies and issues of the public Internet rather than your LAN. And MySQL does not have sophisticated mechanisms for this sort of architecture, though they are working on beefing up their replication features in 5.x. There are third-party clustering solutions (EMIC, LifeKeeper) that may work for you, but they cost thousands of dollars. MySQL Cluster may be an option; I say may be because it's generally not recommended to run it over the WAN, though I have seen it work well enough when the load isn't really high.

    Using DNS for load balancing / failover has many wildcards: you can do everything you need to do correctly to minimize downtime on your end, but there are simply too many things beyond your control. ISPs (especially those with a significant dialup customer base) can rewrite low TTLs at their proxy/caching servers, some of these "Internet accelerators" muck around with the hosts file and save old addresses, there's inherent browser caching, etc.

    I don't mean to sound glib here, because there is nothing wrong with trying to do things inexpensively, but every so often these discussions come up about how to get 100% uptime on the cheap. The short answer is there isn't any. Since the cost of downtime is usually proportional to the investment that needs to be made in redundancy (or at least it should be), I've often wondered why people think they need 100% uptime if they don't have the revenue or budget to warrant it. If being down for a few hours only costs you a minimal amount of money (say a thousand dollars or less), I'm not sure I'd bother with an HA setup.

    Simply get a cheap second server that you can use in an active-passive configuration (i.e. it would only be used if the first one fails), make sure you back up your data to it properly in a reasonable manner (using open source tools like mysqlhotcopy & rsnapshot), and live with the downtime if things crash and you need to change the DNS to point to this server. Or maybe go with a DNS provider, like autofailover.com (mentioned here) or UltraDNS (which I've used), to handle the DNS switching for you. You'll save yourself a lot of money and configuration headaches.
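    A minimal active-passive backup schedule along those lines could be as simple as a cron job on each box (hypothetical paths and database name; note that mysqlhotcopy only handles MyISAM tables):

```
# On the primary: nightly hot copy of the database to a staging directory
30 2 * * *  /usr/bin/mysqlhotcopy --allowold datingdb /var/backups/mysql

# On the passive server: pull rotating snapshots of it with rsnapshot
0 3 * * *   /usr/local/bin/rsnapshot daily
```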

  18. #18
    I know. Need the fastest and most reliable site but have no budget ;-) But I got some great pointers here.

    OK, so we would get a second, cheaper server (in another facility) in an active-passive config and use a service like autofailover.com to ensure good reliability. The data and database would be synchronized automatically.

    Now, on the autofailover.com system: all requests are sent to server #1. If server #1 fails, all requests are sent to server #2. When server #1 is back online, all requests are routed to server #1. Which is what we need. It seems straightforward. I assume it is simply an immediate DNS redirect, so the data and database will already be operational on the backup server, correct?

    Do you know the cost of UltraDNS's SiteBacker, which also does exactly what we need? Autofailover's basic package starts at $100/mo. Any idea on the other one? You mentioned you used it.

    Another thing about UltraDNS is that their pricing structure counts by records and queries: 30 resource records, 30,000 queries for the Managed DNS. What would the resource records be? And does counting queries mean that it is not only a server redirect? Any idea?

    Now, the systems above are not incompatible with the CDN service we talked about (like CacheNetworks), correct? Do we have to foresee any difficulties in using both solutions? Something to look out for?

    Our static data (images, graphics, video, text) could still be served by CacheNetworks' global delivery system, and if the main server is down, the MySQL data from the backup server would take over serving the queries.

    As I mentioned, it is for a dating site, and most of our content is user images and the site's graphics. The rest is pure queries, search, indexation etc. Most of the video is provided by a third-party provider who handles bandwidth etc. Is a CDN still a good idea with no streaming content? We do need an ultra-fast site, and we will definitely go with a CDN (CacheNetworks) if it helps our bottom line.

    Please confirm, or am I off? Please advise.

    So our operation would cost us about $500 more a month:
    $100 for the Backup server
    $100 for the Failover service
    $300 for the CDN service

    So we're down to:

    1. Take care of our server page output with methods like AJAX
    2. Check our MySQL processes and optimize queries (critical)
    3. Set up our content with a service like CacheNetworks
    4. Get a low-end backup server + failover service
    5. Upgrade our IDE to SCSI

  19. #19
    Join Date
    Oct 2004
    Location
    Southwest UK
    Posts
    1,175
    Ok..

    1. I assume you'd be happy with a cheaper server just for failover (and to store offsite backups, bonus! ).
    2. You're right about how autofailover works, but it will not be immediate. How long do you think it would be acceptable to have your site offline? This is an important number: the higher it is, the cheaper your HA system will be.

    3. Cost of the autofailover services.. yikes! Bear in mind that each query is a DNS lookup - so when a browser goes to mydating.com, it will ask for the IP address. Then, if you have a 10 minute failover time, it'll ask again 10 minutes later... an hour's surfing on your site = 6 queries. For each visitor...
    A resource record is a DNS entry - like an A record for IP lookups.

    4. I do not know if the CDN network will still work, or whether you'll need it 'twice' - once per IP. Ask before you sign up. That said, are you sure you need the CDN caching service? $300 a month will buy you a much faster box (with loads of RAM) to put stuff on.

    5. Upgrade your IDE to a RAID system. Maybe spend $1000 on an Areca raid controller and stick 8 SATA drives on it in RAID5 - that will quadruple your IO performance (look in WHT Technical section for SATA RAID thread).

    I'd go for the queries first, then the RAID option, then the backup server. Only then would I look at the DNS failover options.

  20. #20
    Something I forgot to mention is that the DB and the content are on the same box right now. I had a suggestion about leaving the DB on our existing server #1:

    Dual Intel Xeon D 2.8 GHz, 800 MHz FSB with HyperThreading
    4GB Dual Channel DDR400 ECC RAM
    120 GB hard drive IDE
    1000 GB Bandwidth included (Non-Cogent)
    100Mbit port

    and putting the content on another server in the same facility, connected by crossover cable with a multi-gigabit data stream. That would tremendously ease the problem of speed and server output, since 95% of the CPU in use now goes to MySQL queries.

    The other server #2 ($179/Mo) being

    Intel Pentium 4 2.8 GHz 800MHz FSB with HyperThreading
    2G Dual Channel DDR400 RAM
    120 GB hard drive IDE

    We will be getting 700k UNIQUES a month and will reach 1 million next month. This might help as well.

    In addition, we would get CacheNetworks content delivery, where content from server #2 will be populated and served throughout their network.

    It won't solve the availability issue; we could look into getting another backup server #3 and failover for this if the host performance is not up to par. Now, server #3 would need to have both the DB and the content, obviously. Is that a doable scenario?

    We can start with this, then upgrade our server #1 with RAID, and server #2 with RAID.

    So

    Step 1 (Now)

    - Optimize our Mysql Queries
    - Keep our current server #1 for DB only at Location 1
    - Get standard Server #2 for content at Location 1
    - Get content of Server #2 delivered through CDN service

    Step 2 (Upon results)

    For Availability:

    - Get back-up server at Location 2 (Database + webserver)
    - Use Failover service

    For performance :

    - Upgrade Server #1 at Location #1 with New Raid Controller (multiple drive) + Faster CPU (Might as well)
    - Upgrade Server #2 at Location #1 with New Raid Controller

    Step 3 :

    - Consider Load balancing with Server #4 eventually or other option


    ?

  21. #21
    Join Date
    Oct 2004
    Location
    Southwest UK
    Posts
    1,175
    Yep. looks good. I think you won't get much of a boost from the CDN service though, especially as you'll have a server dedicated to serving pages once you get the second one. If necessary you can put images on a third server to spread the load (assuming they're saved to disc and not stored in the database...)

    When you get the faster CPU, you might as well get a dual-CPU with dual-cores, unless you go for a straight quad server. (or a dual-cored quad server, lol)

    The other thing you might like to do is choose a different provider who has more experience in the kind of heavy site you're running. Rackspace comes to mind, as do www.voxel.net, www.datapipe.net and www.cybercon.com

    Good luck, let us know what works, what doesn't and your experiences with this higher-end stuff.

  22. #22
    Join Date
    Apr 2004
    Location
    San Jose
    Posts
    902
    If your measurements are correct, and MySQL is taking 95% of your CPU, moving the application and web serving to second server will only help a small amount (5%).

    You really should look into optimizing your SQL queries first. As an example, at my company, a missing index took us from handling 1200 queries/second to 10/second on an 8 way Sun box. We pretty much were out of service until the DBA realized he had dropped the index from the wrong machine.

    You may go from being maxed to just using a few percent of your CPU, unless someone has already done the DB optimization.

    At a 50%/month growth rate, you are going to have a lot of trouble keeping up with the traffic even if your DB is already up to snuff. Of course, traffic will flatten out if you can't serve the pages quickly.

    Good luck.

  23. #23
    Umm, so splitting servers (DB / content) might not even help that much? 5% is not worth the investment at this point, but it might come in handy later.

    So we're back to our DB Analysis which we're already working on. So maybe I should approach it like this.

    Considering:

    - We have 700k uniques a month with an increase of 50%/month.
    - 500 to 1000 new sign-ups daily.
    - It is a dating site whose main content is photos and other small graphics files (the chat and video are streamed from a third-party provider to save bandwidth).
    - The main processing work is on the many search and advanced-search queries, messages, favorites, etc.

    What server configuration do we need from now through at least the next 10 months for ultra-fast delivery and maximum availability?

    At this point we are willing to get a new server at a new location (or two if necessary - web server + DB server) on top of our original one, which we could use for backup, failover, picture delivery or as a development/test server.

    So in addition to the appropriate configuration, I guess we could use some names of hosting companies on the East Coast that are very knowledgeable and can follow our progress if we need to upgrade to a better config in 6 or 10 months.

  24. #24
    Join Date
    Apr 2004
    Location
    San Jose
    Posts
    902
    Good to hear you have someone on the DB.

    Can you assume that the growth rate will continue geometrically at 50%/month? If so, in 10 months, you'll have ~58 times your current traffic.

    I don't know how much of your potential market you've already penetrated, so I can't say whether you'll really reach that.

    In general, for ultimate DB speed, you want your DB to fit into RAM, and do as few updates as possible. If you're storing pictures in the DB, you probably should move them out.

    SQL operations going from least to most expensive are:

    Indexed selects.
    Full table scan selects. (Depending on size of table and whether it fits in RAM. May be more expensive than following operations.)
    Inserts.
    Deletes.
    Updates, especially when the where clause hits an unindexed column.
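    The gap between those first two items is exactly the missing-index story above. The database's query plan tells you which one you are getting; here is the idea using SQLite as a stand-in (MySQL's EXPLAIN output reads differently, but answers the same question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE members (id INTEGER, city TEXT)")
con.executemany(
    "INSERT INTO members VALUES (?, ?)",
    [(i, f"city{i % 50}") for i in range(1000)],
)

query = "SELECT * FROM members WHERE city = 'city7'"

# Without an index, the plan's detail column reports a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before)

# After adding an index, the same select becomes an indexed search.
con.execute("CREATE INDEX idx_city ON members (city)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after)
```

    On MySQL you would run `EXPLAIN SELECT ...` and look at the `key` and `rows` columns instead; the symptom of a missing index is the same - every row examined on every query.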

    In order to make a model of your DB usage, I would need to know your schema and query pattern in relation to the pages you show, and the hit rate for your different pages.

    If you have a good DBA now, he should be able to create a model of your DB usage.

    You probably want to get the biggest multi-CPU Opteron box you can afford for the DB. You may want to consider moving functionality out of the DB and into the web server, such as sorting, since you can horizontally scale the web servers, but it's not so easy to do that for the DB.

    Let us know how things go.

  25. #25
    Here is the config we are looking into getting..

    For the Mysql DB Server

    Dual 2.8GHz Xeon Processors
    RAM 3GB RAM
    3 x 73GB SCSI HDD Hardware RAID 5
    $450/Mo

    For our web server content (the same with less RAM):

    Dual 2.8GHz Xeon Processors
    RAM 2GB RAM
    3 x 73GB SCSI HDD Hardware RAID 5
    $400

    We are getting them from :
    http://www.theplanet.com/control/pro...5_details.html

    I've heard good things about them. Any experience?

    Is it over the top, or a wise investment for where we are headed? We want to cover our bases. Is the RAM enough on both?

    We will use our current server for backup, failover and/or a development platform.

    We are currently getting our MySQL looked at.

    Here is some info on our DB

    It is only about 30 MB, but all the tables are in one DB. It was suggested that we should create different DBs for the most accessed tables, to avoid loading the full DB each time?

