I have a database totalling about 800MB... Recently I had to make a few configuration changes in the management tool and had to do an initial restart... Not sure what's taking so long, but it's been nearly 8 hours and I still haven't seen any of the tables that used to be there.
When I do du -sh /var/lib/mysql-cluster, it shows that I have 800M /var/lib/mysql-cluster...
Any insight on this would definitely be worthwhile.
First of all, as someone who has worked with MySQL Cluster a fair bit, I'd say it's always a good idea to take an online backup of your data before making any changes to the config.ini file, just in case something catastrophic happens.
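For reference, a minimal sketch of what that online backup looks like, run from the server hosting the management node (the backup id and paths will depend on your install):

```shell
# START BACKUP takes a hot (online) backup across all data nodes while the
# cluster stays up; WAIT COMPLETED blocks until every node has finished.
# The backup files land in each data node's BACKUP directory (BackupDataDir).
ndb_mgm -e "START BACKUP WAIT COMPLETED"
```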
The first place to look is the management node's log file; with the default install paths it will be on the server running the management node:
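The exact path from the original post didn't survive, but assuming the common default of DataDir=/var/lib/mysql-cluster and a management node id of 1, it would be something like:

```shell
# Cluster event log written by ndb_mgmd; the "1" is the management node's
# id from config.ini, so adjust it to match your setup.
tail -n 100 /var/lib/mysql-cluster/ndb_1_cluster.log
```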
Also, as a quick check, start the ndb_mgm management console and post the output of this:
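The command in question was lost from the post; given the mention of startup phases below, it was presumably the status report, which you can also run non-interactively:

```shell
# Reports each node's state; data nodes still coming up show
# "starting (Last completed phase N)", which tells you where they're stuck.
ndb_mgm -e "ALL STATUS"
```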
This will show which phase of the startup process the nodes are getting stuck at (based on past experience, it's probably phase 5).
When you say "I had to make a few configurations in the management tool", what exactly do you mean? Did you change some parameters in the config.ini file and then restart the whole cluster? What were the changes, and is there a reason you didn't do a rolling restart (i.e. restart the management node to pick up the config.ini change, then restart the data nodes one at a time, waiting for each to rejoin the cluster, which eliminates any downtime)? Some changes, such as changing the number of replicas, require a restore from a backup; you can't change that parameter and simply restart the cluster, because the node partitioning is different.
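For the record, a rolling restart along those lines can be sketched like this. The node ids (1 = management node, 2 and 3 = data nodes) and the config.ini path are assumptions; substitute your own from config.ini:

```shell
ndb_mgm -e "1 STOP"                            # stop the management node
ndb_mgmd -f /var/lib/mysql-cluster/config.ini  # start it again so it re-reads config.ini
ndb_mgm -e "2 RESTART"                         # restart data nodes one at a time...
ndb_mgm -e "2 STATUS"                          # ...and wait for "started" before moving on
ndb_mgm -e "3 RESTART"
```

Because at least one data node in each node group stays up throughout, the cluster keeps serving queries the whole time.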
Last edited by lockbull; 03-21-2008 at 03:49 PM.