top | item 6901391


gngeal | 12 years ago

The domains table currently has 84 fields.

Are you sure you've read up on your C. J. Date? I've run into this once before: someone complaining that "queries take too much time" with a paltry single-digit-GB database. When I asked for specifics, the only reply I got was "we can't tell you". You don't mention anything of value, but querying a few million records can't possibly take a few minutes on the aging desktop computer I bought seven years ago, much less on a modern server.
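(For scale: the usual culprit at this size is a missing index, not the data volume. A sketch, with a hypothetical `name` column on the `domains` table:)

```sql
-- With an index on the searched column, a lookup over a few million
-- rows is milliseconds, not minutes. Column name is assumed here.
CREATE INDEX idx_domains_name ON domains (name);

-- A leading-prefix LIKE can use the index (EXPLAIN shows type: range);
-- only a leading-wildcard pattern ('%foo%') forces a full table scan.
EXPLAIN SELECT * FROM domains WHERE name LIKE 'exam%';
```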

webstartupper | 12 years ago

I presume it was slow because the domains table is write-heavy. There are multiple crons running in the background selecting data from the domains table, calling external APIs, and updating individual records.

Selects per hour: 37K
Updates per hour: 170K

While the speed of the background crons' selects and updates was not (and still isn't) important, the speed of selects run by users on the same table was. The easy solution was to cache the data so that users could search domains at a good speed. The MEMORY table just worked brilliantly as a cache.
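(The pattern described above, sketched with assumed table and column names: a MEMORY table holding only the columns users search, rebuilt periodically by a cron so user-facing selects never touch the write-heavy base table.)

```sql
-- Cache table in the MEMORY (HEAP) engine; schema is illustrative.
-- MEMORY tables support VARCHAR but not TEXT/BLOB columns.
CREATE TABLE domains_cache (
    domain_id INT NOT NULL,
    name      VARCHAR(255) NOT NULL,
    status    TINYINT NOT NULL,
    PRIMARY KEY (domain_id),
    KEY idx_name (name)
) ENGINE=MEMORY;

-- Refreshed from a background cron, e.g. every few minutes:
TRUNCATE TABLE domains_cache;
INSERT INTO domains_cache (domain_id, name, status)
SELECT id, name, status FROM domains;
```

The trade-off is that user searches see data that is a refresh-interval stale, and the table is lost on server restart, which is acceptable for a pure read cache.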

(I'm no MySQL guru, and this is the first project where MyISAM didn't work for me, so I know I could definitely do a better job of optimizing the InnoDB table and the InnoDB settings in my.cnf.)
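(For what it's worth, the InnoDB settings that matter most for a write-heavy table are a small handful. A my.cnf sketch; the values below are assumptions to size against your own RAM and durability needs, not recommendations:)

```
[mysqld]
# Keep the working set in memory; the single most important InnoDB knob.
innodb_buffer_pool_size = 2G

# A larger redo log smooths bursts of writes.
innodb_log_file_size = 256M

# Flush the log to the OS on commit but fsync only once per second:
# more write throughput, at the cost of up to ~1s of transactions on a crash.
innodb_flush_log_at_trx_commit = 2
```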

jumby | 12 years ago

You claim you're not a MySQL guru, yet you're shooting down the majority opinion here that something major is wrong with the schema design. That's fine: use your in-memory table, then claim you need a NoSQL solution when your "big data" hits 10 GB.

How's the disk I/O on that Linode VPS?