I've got a MySQL database with about 37 million rows in it. Every 15 minutes it receives a file with 40k lines, and it has to compare those 40k lines against the 37 million rows to check whether each one already exists, and insert it if not... the db files are about 1.3 GB in size now too.
Now, as you might have guessed, this is very, very slow, and in its current unoptimised state the SQL server can't handle it. I've got some changes in mind for the table structures that were overlooked in the original design/implementation, but I'm wondering if anyone can suggest any optimisations for the actual server itself.
The machine is a P4 2.8 with half a gig of RAM. (I'm thinking upgrading that would be a good start...)
If it still doesn't work well, consider posting the table definitions including all indexes, the index cardinality counts, and the queries you are using. Somebody with a little time to spare should be able to see whether you are missing any indexes or should rephrase your query.
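One common way to avoid a per-line existence check entirely is to put a unique index on the lookup key and let the database reject duplicates during the insert itself. Here's a minimal sketch of that idea using Python's sqlite3 as a stand-in for MySQL (the table and column names are made up for illustration); in MySQL the equivalent statement would be `INSERT IGNORE`:

```python
import sqlite3

# In-memory database standing in for the real 37M-row MySQL table.
conn = sqlite3.connect(":memory:")
# PRIMARY KEY on "key" gives us a unique index, so duplicate keys
# are rejected by the insert itself -- no separate SELECT needed.
conn.execute("CREATE TABLE lines (key TEXT PRIMARY KEY, payload TEXT)")

# A couple of rows representing data already in the table.
conn.executemany("INSERT INTO lines VALUES (?, ?)",
                 [("a", "old"), ("b", "old")])

# The 15-minute batch of incoming lines; "b" already exists.
batch = [("b", "new"), ("c", "new"), ("d", "new")]

# INSERT OR IGNORE silently skips rows that violate the unique
# constraint (MySQL: INSERT IGNORE INTO lines VALUES ...).
conn.executemany("INSERT OR IGNORE INTO lines VALUES (?, ?)", batch)
conn.commit()

rows = dict(conn.execute("SELECT key, payload FROM lines ORDER BY key"))
print(rows)  # existing "b" keeps its old payload; only "c" and "d" were added
```

Note this only applies if "already exists" means the key matches; if you need to compare whole rows, the unique index has to cover the relevant columns.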
If that doesn't work well, maybe you should consider reading the table once, storing whatever key you're using in a hash table, and doing your lookups there instead of doing them 40k times against the real table.
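The hash-table suggestion above can be sketched in a few lines of Python: read the existing keys once into a set, then filter the incoming batch in memory instead of issuing 40k single-row queries. The data here is invented for illustration; in practice `existing_keys` would come from a `SELECT key FROM ...` over the real table:

```python
# Keys already present in the database (in reality: one SELECT over
# the 37M-row table, done once per batch rather than once per line).
existing_keys = {"a", "b"}

# The incoming 40k-line batch; "b" is a duplicate.
batch = [("b", "new"), ("c", "new"), ("d", "new")]

# Membership tests against a set are O(1), so filtering the whole
# batch is cheap compared with 40k round-trips to the server.
to_insert = [(k, v) for k, v in batch if k not in existing_keys]

# Keep the set current so later lines in the same batch also dedupe.
existing_keys.update(k for k, _ in to_insert)

print(to_insert)  # only the genuinely new rows remain
```

One caveat, given the box described above: 37 million keys held in memory may not fit comfortably in half a gig of RAM, so this approach pairs naturally with the suggested RAM upgrade.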