I am on shared hosting. I have a PHP script that gets executed nearly 10,000 times a day. The script involves 2 SQL queries: one at the start which fetches the data, and another at the end which inserts some other data back. To add more features to the service that involves this script, I need to add a few more SQL queries. I just wanted to know, as far as SQL queries are concerned: "How many are too many per PHP script execution?" And is SQL faster than simple text file read-write?
That depends on the server configuration, on what the server is doing at that time, and on how many times the script gets executed per second.
It's not the same if you have 5 queries every second as if you have 6 queries 100 times per second.
If you are executing the SQL queries around 10,000 to 15,000 times per day, the SQL server is bound to consume resources. It all depends on the execution time of the queries, the interval between each execution, and the size of the database the query has to fetch the data from.
The PHP script gets executed nearly 3-4 times a minute in normal times and 6-7 times a minute when there is a rush on the site. The script currently involves 2 SQL queries: one searches for and extracts a user's data (around 1-2 KB) based on his ID, and the other re-inserts it after modification. This happens in a flash, and I don't think the script takes much CPU time. I need to add 2 more simple queries to the script, but only after making sure that I am not putting too much burden on the server.
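The pattern described is a read-modify-write cycle. As a sketch (table and column names here are hypothetical, not from the original post), the two queries might look like:

```sql
-- Point lookup of one user's record by ID:
SELECT data FROM user_store WHERE user_id = 123;

-- After PHP modifies the ~1-2 KB of data, write it back:
UPDATE user_store SET data = '...modified data...' WHERE user_id = 123;
```

Assuming `user_id` is indexed, each of these touches a single row, so two (or four) such queries a few times a minute is a very light load.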
Processor: Dual AMD Opteron 242
Memory: 2GB DDR
Hard Drive: 10,000 RPM SCSI
Ethernet Port: 100Mbps
Operating System: CentOS 3.3 i686
Account Type(s): Shared & Reseller
Adult Content: Yes
Data Center: SAVVIS (locations in major cities throughout the Americas, Europe, Africa, Asia, the Middle East and the Pacific Rim)
Originally posted by akbsol: The PHP script gets executed nearly 3-4 times a minute in normal times and 6-7 times a minute when there is a rush on the site. The script currently involves 2 SQL queries: one searches for and extracts a user's data (around 1-2 KB) based on his ID, and the other re-inserts it after modification. This happens in a flash, and I don't think the script takes much CPU time. I need to add 2 more simple queries to the script, but only after making sure that I am not putting too much burden on the server.
Originally posted by akbsol The database is around 6 MB.
It is not that much of a worrying factor for the moment. But as your data grows, you will find the script difficult to manage. Take for example: you have 40 MB of data and a query like SELECT * FROM table; that is sure to consume a good part of the server's resources.
It will matter, but only slightly... SQL is built to handle SELECTs on far more than a 40 MB database.
As long as you use MySQL or PostgreSQL and the database size is under 2 GB, you won't see much difference
(if it grows bigger, you can split it to maintain performance).
It depends on the type of query as well. A SELECT query that returns one record is not a problem at all. But if you are doing a LEFT JOIN or linking more than one table with many records, the page will slow down. Note also that the server will not keep itself busy with just one page; it will be serving other pages at the same time, so the users of those pages will not feel the difference.
This really depends on the queries, as one query can be a lot more complex than another. Some things to consider: Are you using indexes properly? Are you returning more data than necessary (using *)? Are your queries prepared? How many joins? If you're really concerned about database performance, I suggest you check out one of the many texts on database performance tuning out there right now.
Can you please suggest any online reading on DB performance? What do you mean by using indexes properly? I have indexed the 'id' column by which the query is done. Is there anything more to it? I don't use any joins; it's a simple SELECT statement on a single table for specific columns, not *.
Originally posted by akbsol: Can you please suggest any online reading on DB performance? What do you mean by using indexes properly? I have indexed the 'id' column by which the query is done. Is there anything more to it? I don't use any joins; it's a simple SELECT statement on a single table for specific columns, not *.
Most flavors of SQL will determine the best index to use for each query; however, this sometimes isn't the most efficient one. If it's that simple, you should have no worries. If you are using MySQL (which tends to be the fastest for simple systems like this), here is a good place to start:
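To check which index MySQL actually picks, EXPLAIN is the standard tool. A sketch, using the members table mentioned elsewhere in this thread:

```sql
-- Ask MySQL how it plans to execute the query:
EXPLAIN SELECT name, email, info FROM members WHERE id = 'something';

-- In the output, look at:
--   key  -> which index was chosen (empty means a full table scan)
--   rows -> how many rows MySQL expects to examine; for a lookup on a
--           UNIQUE indexed column this should be 1, whatever the table size
```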
I suggest the use of a reverse caching proxy, such as squid. It's relatively easy to configure, it supports name-based virtual host caching, and you can control the aging of the cache on a page-by-page basis in PHP. On a database-intensive site, I was able to reduce the load average on the MySQL server from a typical 5.0 with peaks of 35.0 down to an average of 2.0 with peaks no higher than 3.0. If your site gets a lot of hits, I'm sure it wouldn't hurt to drop the Zend Optimizer on there too... but try squid first; it's quite a bit cheaper.
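As a rough sketch of the accelerator setup (directive names vary between squid versions, and the hostnames and ports here are assumptions):

```conf
# squid listens on port 80 and forwards cache misses to the web server on 8080
http_port 80 accel vhost
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=web
acl our_sites dstdomain .example.com
http_access allow our_sites
cache_peer_access web allow our_sites
```

The page-by-page aging is then controlled from PHP with standard HTTP headers, e.g. `header('Cache-Control: max-age=300');` on pages squid may serve from cache.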
Originally posted by akbsol: If it grows to 40 MB (which I wish, because it's my business) and I use a query like:
SELECT name, email, info FROM members WHERE id = 'something'
in which 'something' is both UNIQUE and INDEXED, will the database size still matter then?
If you are using conditions of this type, it will take off some of the load, but again it is not completely safe. I am not sure whether you are looking for shared hosting, a VPS, or a dedicated server, but in the case of shared hosting, many hosts do not allow even a small load on the server.
Writing to a text file is orders of magnitude faster than using a database. If you have no need for an index, and you aren't sequentially scanning the file or rewriting the whole file every time (i.e., you always want the last fixed-size value, or only store a single value at a time), then the text file will blow the SQL query away, and that's not even counting the overhead of setting up the connection etc.
SQL is not the solution to every data storage problem. It depends on the data you're storing, but just using a simple file will often be a prudent choice where performance is a consideration.
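As a minimal sketch of the single-value case mentioned above (the file path is illustrative):

```php
<?php
// A flat file holding a single value -- no connection setup, no query
// parsing, just one read/write of a tiny file.
$file = '/tmp/hit_counter.txt';

// Write; LOCK_EX guards against a concurrent writer garbling the file.
file_put_contents($file, "42", LOCK_EX);

// Read it straight back:
$value = (int) file_get_contents($file);
echo $value;    // prints 42
```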
40 meg database? ::longs for the day that he's last seen his DB that small::
error, while writing to a text file may be faster, it can be more CPU-intensive if it's being written to multiple times concurrently. For something that's continually accessed, such as the OP's script, it'd be better to keep it in the DB.
Douglas Hazard - Certifiable Sports Junkie and Sports Community Enthusiast
Please. Any fairly-static data should be kept in some kind of flat file. The database is easier, surely, but even with many concurrent hits, the file will only be locked for a minuscule amount of time, maybe 1/100th of the time required for a database query. Not to mention that hitting the database many times concurrently will drive your load average up and probably cause more problems than waiting for a lock to be released. You must choose where you do this carefully, however. Whenever you don't want to use every piece of data you're storing, or you want to sort or otherwise adjust how you access the data, a database is probably a better idea. When the purpose is clear, and the data can be easily stored in such a way that no processing is required to get it into a usable state (filtering, sorting, etc.), use a file if you care at all about performance.
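The brief-lock pattern described above can be sketched like this (the file path and contents are illustrative): a writer holds an exclusive lock only for the instant of the write, while readers take shared locks so many of them can proceed at once.

```php
<?php
$path = '/tmp/static_data.txt';

// Writer: lock, replace contents, unlock.
$fp = fopen($path, 'c');            // open for writing without truncating
if (flock($fp, LOCK_EX)) {          // exclusive lock for the write
    ftruncate($fp, 0);
    fwrite($fp, "fairly-static data");
    fflush($fp);
    flock($fp, LOCK_UN);            // released after a minuscule hold time
}
fclose($fp);

// Reader: shared lock, so concurrent readers don't block each other.
$fp = fopen($path, 'r');
flock($fp, LOCK_SH);
$data = stream_get_contents($fp);
flock($fp, LOCK_UN);
fclose($fp);
echo $data;    // prints "fairly-static data"
```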