For a web service, when a byte of data is sent, an HTTP header is added to it, and TCP/IP headers (among others) are added to construct the packet. So the traffic for sending that particular byte is not just one byte.
Does anyone have figures on this traffic overhead, i.e. what should I expect to be added on top of the data stream?
Depending on your media (Ethernet, ATM, etc.) the overhead varies throughout the route, but if you're looking at just the server and its Ethernet uplink, I believe my figures (below) hold for an HTTP request for 17.16 KB of content (the mean transfer size on my shared servers, which handle several hundred million hits/month). If you disagree with my math, please be specific about where and why.
With a typical Ethernet payload size of 1500 bytes, you get a TCP MSS of 1460 bytes, leaving 40 bytes for the IP and TCP headers. Each 1460 bytes of transmitted payload therefore incurs 40 bytes of IP + TCP overhead, plus 38 bytes of Ethernet overhead (preamble, MAC header, CRC, and inter-frame gap). Setting up and tearing down the TCP session consumes another two packets.
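To make the per-frame cost concrete, here's a quick sketch of the best-case wire efficiency of a single full-size Ethernet frame, using the constants above:

```python
# Best-case efficiency of a full-size Ethernet frame carrying TCP payload.
MSS = 1460                    # max TCP payload: 1500-byte Ethernet payload - 40
IP_TCP = 40                   # IPv4 + TCP headers, no options
ETH = 38                      # preamble(8) + MAC header(14) + CRC(4) + gap(12)

on_wire = MSS + IP_TCP + ETH  # 1538 bytes occupy the wire per full frame
print(MSS / on_wire)          # ~0.9493 -> about 94.9% payload, best case
```

So even before HTTP headers and session packets, roughly 5% of the wire is framing.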
My typical request has about 250 bytes of HTTP headers. The result is 18,502 bytes transmitted on the Ethernet interface to deliver 17,160 bytes of content: an extra 7.82% of overhead, or 92.75% efficiency.
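For anyone who wants to check or adapt the math, here's the whole back-of-envelope calculation in a few lines of Python; it reproduces the 18,502-byte / 7.82% / 92.75% figures above:

```python
# Estimate Ethernet-level bytes on the wire for one HTTP response.
MSS = 1460               # TCP payload per full-size Ethernet frame
PER_FRAME = 40 + 38      # IP + TCP headers plus Ethernet framing, per packet
EXTRA_PACKETS = 2        # TCP session setup/teardown (empty payloads)

content = 17_160         # mean content size from the logs
http_hdr = 250           # typical HTTP response headers

payload = content + http_hdr             # bytes TCP must carry: 17,410
frames = -(-payload // MSS)              # ceiling division -> 12 frames
wire = payload + (frames + EXTRA_PACKETS) * PER_FRAME

overhead = (wire - content) / content    # overhead relative to content
print(wire, f"{overhead:.2%}", f"{content / wire:.2%}")
# -> 18502 7.82% 92.75%
```

Swap in your own mean content and header sizes to get figures for your traffic.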
All that said, my real overhead could be substantially higher, since 17,160 bytes is just a mean (average) figure. If you want to estimate the overhead per request, you could do so from your Apache logs; you could even patch your mod_log.c to record the size of both the content and the headers delivered.
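As an alternative to patching mod_log.c: if your Apache build includes mod_logio, its `%I` and `%O` format codes log the bytes received and sent including headers, alongside `%B` for the body alone. A minimal config sketch (the `combinedio` nickname is the one Apache's own examples use):

```
LogFormat "%h %l %u %t \"%r\" %>s %B %I %O" combinedio
CustomLog logs/access_log combinedio
```

Comparing the `%O` and `%B` columns per request then gives you the HTTP-level overhead directly, though not the TCP/IP and Ethernet framing below it.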