Well, if you look at the archives, you'll see this is a well-trodden subject; let me give you a summary of my inquiries into it.
1. Bandwidth Monitoring:
The ONE TRUE WAY to monitor bandwidth is to monitor it at the IP level: from the switch if possible, from the ethX interface otherwise. Any other method (especially one based on logfile stats) will only give a general estimate, more or less accurate depending on the habits of the user (CGIs are often not metered correctly, etc.).
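For the ethX case on Linux, the kernel already keeps per-interface byte counters in /proc/net/dev, so interface-level metering is just a parsing job. A minimal sketch (the sample text below stands in for the real file, and the interface names are just illustration):

```python
# Sketch of interface-level byte counting on Linux. The kernel exposes
# cumulative per-interface counters in /proc/net/dev; sample text here
# stands in for the real file so the parsing is self-contained.
SAMPLE_PROC_NET_DEV = """\
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:  123456     789    0    0    0     0          0         0   123456     789    0    0    0     0       0          0
  eth0: 9876543    4321    0    0    0     0          0         0  1234567    2345    0    0    0     0       0          0
"""

def iface_bytes(text, iface):
    """Return (rx_bytes, tx_bytes) for the named interface."""
    for line in text.splitlines():
        if ":" not in line:
            continue  # skip the two header lines
        name, stats = line.split(":", 1)
        if name.strip() == iface:
            fields = stats.split()
            # field 0 is received bytes, field 8 is transmitted bytes
            return int(fields[0]), int(fields[8])
    raise KeyError(iface)

rx, tx = iface_bytes(SAMPLE_PROC_NET_DEV, "eth0")
print(rx, tx)  # 9876543 1234567
```

In practice you would read /proc/net/dev (or poll the switch via SNMP), sample the counters periodically, and bill on the deltas, since the counters are cumulative and reset on reboot.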
Companies that offer private-label reseller plans don't care about the inadequacies of bandwidth monitoring done by logfile processing (like cPanel's), because they assign each reseller an IP and thus get clean monitoring for their direct customers. The resellers, on the other hand, have to make do with general estimates for their own customers, but that's apparently good enough for most people. If you can find a place that will give you piles of IPs for free, go with them. Anonymous FTP is usually justification enough.
I wondered aloud before whether it might be possible to create some sort of "virtual NAT", to pass all requests for a certain domain through a locally assigned IP. From the research I've done, this would require some sort of custom application-level proxying app, and my C programming kung fu is not good. Maybe you could do this via Perl/Tcl/Java... but the performance impact would likely be VERY significant, if not prohibitive.
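The bookkeeping half of that "virtual NAT" idea can at least be sketched: instead of attributing traffic by IP, attribute it to whichever domain each HTTP request names in its Host header. This is only the accounting logic, not a working proxy, and the sample requests are made up:

```python
# Sketch of per-domain accounting: attribute bytes to a domain by the
# Host header of each HTTP request, rather than by destination IP.
# A real version would sit inline as a proxy; this shows only the tally.
from collections import defaultdict

def host_of(raw_request):
    """Pull the Host header out of a raw HTTP/1.1 request string."""
    for line in raw_request.split("\r\n")[1:]:
        if line.lower().startswith("host:"):
            return line.split(":", 1)[1].strip()
    return None

usage = defaultdict(int)  # domain -> bytes seen so far

for req in (
    "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n",
    "GET /img.png HTTP/1.1\r\nHost: example.org\r\n\r\n",
    "GET /a HTTP/1.1\r\nHost: example.com\r\n\r\n",
):
    usage[host_of(req)] += len(req)

print(dict(usage))
```

Even this toy version hints at why the performance cost would be steep: every request body and response would have to flow through userland just to be counted.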
2. Disk Space Monitoring:
This is handled by the kernel in Linux/*BSD based on username or group (disk quotas). You would additionally have to take into account any storage a user could accumulate under another username, e.g. through a daemon (files owned by "nobody", which you should avoid anyway) or the SQL server. I don't think there's a cleaner way to do this either. If you chroot the user account, you could simply measure the amount used in the user's directory structure.
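When the account is chrooted, that measurement really is just a directory walk. A minimal du-style sketch (the jail layout below is a throwaway example):

```python
# Minimal du-style sketch: sum the sizes of all regular files under one
# directory tree, which covers a user who is chrooted into a jail.
# Unlike kernel quotas, this misses files the user owns elsewhere.
import os
import tempfile

def tree_bytes(root):
    """Total size in bytes of all regular files under root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip broken symlinks etc.
                total += os.path.getsize(path)
    return total

# Example: build a tiny throwaway tree and measure it.
with tempfile.TemporaryDirectory() as jail:
    os.makedirs(os.path.join(jail, "public_html"))
    with open(os.path.join(jail, "public_html", "index.html"), "wb") as f:
        f.write(b"x" * 1024)
    print(tree_bytes(jail))  # 1024
```

For non-chrooted setups you're back to kernel quotas, which count by owning uid/gid and so catch files anywhere on the filesystem.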
When I first started learning this biz, I was shocked to learn that the web hosting industry is based on
1. protocols that constantly pass user passwords in clear text, and
2. either rough estimates, rougher estimates, or just plain lies.