Hi again friends. The WHT community has been extremely helpful with several of my prior threads, and I hope to pick your brains again for what will likely not be the last time.
Firstly, I will reference my previous thread here
and again commend the community for making a great recommendation as to what network hardware would fulfill my needs for a small colocation project. I chose a used Cisco 3550 from eBay. For the sake of anyone reading this without first reading that thread, I will summarize what I am doing.
Some developer friends and I, who all currently have our own dedicated or colocated servers at various facilities, realized that we could save some money if we pooled our resources and got a large amount of space somewhere. We are all fairly seasoned Linux admins, but the part that continues to plague us is the networking aspect of how this will all be set up. Basically, what we want is a setup similar to what you would find at a commercial colocation or dedicated server facility. I can Google my way through the actual implementation, but I want to verify that my plan is correct. In a nutshell, here is what I think I have to do on the switch to achieve the desired results.
Each server has 2 network interfaces, as well as 1 out-of-band management card providing IP KVM, etc. One interface will be used for the public-facing internet, with several IP addresses assigned to it within the OS. These public connections need to be placed into separate VLANs on the switch in order to segregate the machines from each other. Each of these ports will be rate limited to a portion of the bandwidth provided to us by the provider. The second interface on each system will go unused.
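To make that concrete, here is roughly what I picture for one server-facing port, pieced together from Googling, so the syntax (especially the QoS part) may well be wrong. VLAN 10, the port number, and the 10 Mbps figure are all just placeholders:

```
! hypothetical: one server's public port in its own VLAN,
! policed to roughly 10 Mbps inbound
mls qos
!
access-list 101 permit ip any any
class-map match-all ALL-TRAFFIC
 match access-group 101
policy-map LIMIT-SERVER1
 class ALL-TRAFFIC
  police 10000000 16000 exceed-action drop
!
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
 service-policy input LIMIT-SERVER1
```

If I'm barking up the wrong tree with MQC policing on the 3550, please say so.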
The management cards will all be VLANned together, along with some method of VPNing into this VLAN. We will likely dedicate an older P3 system specifically for this purpose.
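Again to illustrate what I mean (VLAN number and port range are made up, and I'm guessing at the syntax), something like:

```
! hypothetical: all management cards plus the VPN box on one VLAN
vlan 100
 name MGMT
!
interface range FastEthernet0/20 - 24
 switchport mode access
 switchport access vlan 100
```

The P3 VPN box would sit on one of those ports with a second interface facing the outside.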
The switch I have has 2 uplink ports for GBIC modules. One of these ports is occupied by a copper RJ45 GBIC.
I need some clarification on the following points:
* Is it best to connect the ethernet drop from my provider to the copper GBIC on the switch? This makes sense to me, but I'm trying to understand the significance of these 2 ports compared to the rest of the interfaces on the switch.
* If I put each machine on its own VLAN, and thus its own port, won't it only be able to communicate with itself? How do I make the port my provider's ethernet drop is connected to a member of multiple VLANs, and is that even best practice? What is the proper way of ensuring that each machine can talk to the internet while remaining isolated from the others?
* What will I need to ask my provider to do on their end to accommodate such a setup? I've read a little about VLAN trunking. Is this what I'm looking for, or am I barking up the wrong tree?
* How do I restrict which IP addresses the machine attached to a given port is allowed to use? For example, if we each have 5 public IP addresses per system, how can I make sure that a friend cannot accidentally assign one of my IP addresses to his machine? Is this even possible? I have read that it is, and it is one of the main reasons I sprang for the 3550 over other models.
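For that last point, what I have in mind (again guessing at syntax, and 203.0.113.8/29 is a made-up block standing in for one server's 5 assigned addresses) is an inbound ACL on each server port that only permits that server's own source IPs:

```
! hypothetical: this port may only source traffic from its own /29
access-list 110 permit ip 203.0.113.8 0.0.0.7 any
access-list 110 deny ip any any
!
interface FastEthernet0/1
 ip access-group 110 in
```

I'm not certain the 3550 accepts an IP ACL inbound on a Layer 2 switchport like this, so corrections are welcome.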
Is there anything I am missing or overlooking in how I am envisioning this setup? Like I said, our goal is to mimic the setup of a (good) dedicated server or colo provider.
To be clear, I'm not looking for specific command-by-command instructions on how to set this up. I'm perfectly capable of Googling around to find my answers; I just need to know what to learn and to verify that I'm looking for the right things.
Any input on any aspect of what I've asked so far is welcome and encouraged. Thank you for reading my essay of a post, and I look forward to the discussion to follow.