I'm analyzing some performance data that was gathered over a one-week period, and I'm puzzled by some of the results. Here's the context:
Server: IBM xSeries 360, 4x Xeon MP 2.0GHz, 2GB PC1600 SDRAM, IBM 36.4GB SCSI HD
OS: Windows 2000 Server
Running: Oracle 9iAS (Oracle application server)
Average load: 50 concurrent users
Average CPU usage is low at 4%, average available memory is high at 1.2GB, average paging file usage is low at 4%, and average disk usage is low at under 1%. Everything seems normal, except for this: average page faults per second is over 100.
I have run a similar data collection on the database server (approximately the same configuration). The results are pretty much the same in terms of CPU, memory, paging file, and disk usage. The page fault rate, however, is even higher: over 700 per second!
I have looked at the graph and there are page fault spikes every couple of seconds.
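For reference, here is how I computed the averages and spotted the spikes. This is just a sketch: the column name follows the standard perfmon counter path `\Memory\Page Faults/sec`, but the sample rows and file layout are hypothetical stand-ins for my actual week-long CSV export.

```python
import csv
import io

# Hypothetical excerpt of a perfmon CSV export; the real log spans a week.
# The counter name "\Memory\Page Faults/sec" is the standard perfmon path.
sample_log = """timestamp,\\Memory\\Page Faults/sec
"03/01/2003 10:00:00",12.5
"03/01/2003 10:00:15",230.0
"03/01/2003 10:00:30",8.1
"03/01/2003 10:00:45",410.7
"""

THRESHOLD = 20.0  # the commonly quoted page-fault threshold (faults/sec)

def summarize(log_text):
    """Return (average faults/sec, number of samples above THRESHOLD)."""
    reader = csv.DictReader(io.StringIO(log_text))
    values = [float(row["\\Memory\\Page Faults/sec"]) for row in reader]
    avg = sum(values) / len(values)
    spikes = sum(1 for v in values if v > THRESHOLD)
    return avg, spikes

avg, spikes = summarize(sample_log)
print(f"average: {avg:.1f} faults/sec, samples over threshold: {spikes}")
```

On the real data this is what produces the averages above; the spikes show up as the samples exceeding the threshold every couple of intervals.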
I have been told that the accepted threshold for page faults is around 20 per second. These results are far above that. Is this normal? How is this affecting system performance? What could be causing this?