For my journey into the IPv6 world I decided to build a lab to get some more insight into the different systems that deal with IPv6 addressing, and to see how easy it could be to switch to IPv6 with at least a small part of a real productive network (I strongly emphasize: I do not mean the whole network at once). My first concept was based only on clients, mainly Windows, and some routers to put in a few configs and play around with IPv6 addresses and concepts. Some more planning and a few thoughts later I realized: of course some servers with different services should also be there, and so the new idea was born: build a lab with a real working AD domain, name it IPv6PoCLab.org, and show IPv4-to-IPv6 migration use cases. And of course, to save some “real hardware” and to reduce some fan noise, the latest release of the Cisco Cloud Services Router (CSR) 1000V (btw. info can be found here) comes in handy. So I built the first working concept for a HQ and some branches with Internet servers. Finally, it looked like this (still missing some DMZ, mail, and corporate WWW servers, but there should also be something left for later implementation…):
Three branches are connected to the HQ via a simulated routing environment. Each location has its own gateway router, which is connected to a simulated Internet represented by four core routers. In the simulated Internet there are two servers hanging around (could be FTP, WWW, …) just to give something to point at. In this first scenario everything is connected via OSPF and the whole IPv4 connectivity is up and running. Right now, firewalls or proxy systems are not in place (this is a “living” lab, so that may change in the future).
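Just as a sketch of what such an OSPF setup looks like on one of the branch gateways: the interface names and the 10.x addressing below are invented for illustration, not taken from the lab:

```
! Hypothetical branch gateway – addressing made up for illustration
hostname BR1-GW
!
interface GigabitEthernet1
 description Branch LAN
 ip address 10.1.1.1 255.255.255.0
!
interface GigabitEthernet2
 description Uplink towards the simulated Internet core
 ip address 10.0.12.1 255.255.255.252
!
router ospf 1
 router-id 1.1.1.1
 network 10.1.1.0 0.0.0.255 area 1
 network 10.0.12.0 0.0.0.3 area 0
```

The branch LAN sits in its own OSPF area while the core links live in area 0, so the branch prefix gets advertised towards HQ and the other branches automatically.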
Just to explain the really exciting part: everything is up and running on one physical server, a Cisco UCS C240 M3 with (OK, that’s quite nice) 212 GB RAM, 2 CPUs with 8 cores each, and some TB of disk space. So all is virtual and runs on ESXi as the hypervisor. The virtual machines, once created and cloned, were given as many NICs as needed. I separated the different branches and routing connections through port groups within my virtual switch. This gives me the possibility to put each NIC into a different port group, so communication between the different subnets is forced to go through my routers once they have to talk to each other (the concept of port groups is similar to VLANs in the real world). The different port groups are shown in the picture below:
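For the curious: such port groups can be created in the vSphere UI or straight from the ESXi shell with esxcli. A minimal sketch (port group names, vSwitch name, and VLAN IDs are made up for illustration):

```
# Create one port group per lab segment on the standard vSwitch
esxcli network vswitch standard portgroup add --portgroup-name "Branch1-LAN" --vswitch-name vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name "Core-Transit-A" --vswitch-name vSwitch0

# Optionally tag a port group with a VLAN ID
# (this becomes useful once an 802.1q trunk to real hardware is attached)
esxcli network vswitch standard portgroup set --portgroup-name "Branch1-LAN" --vlan-id 101
```

Each VM NIC is then simply assigned to the port group of the segment it should live in.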
But this does not only provide the possibility to route within the virtual environment; you could also easily open the whole lab to real hardware. Just connect a switch with an 802.1q trunk to the host, connect a “real” physical router to the switch, assign the interfaces of that box to one of the corresponding VLANs, and that’s it. You are part of the physical world right now:
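The switch side of that idea could look roughly like this (Catalyst-style IOS; VLAN IDs and interface names are invented for illustration):

```
! Trunk towards the ESXi host's physical NIC
interface GigabitEthernet0/1
 description Uplink to ESXi vmnic
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 101,102,103
!
! Access port towards the physical lab router
interface GigabitEthernet0/2
 description Physical router, member of the Branch1 VLAN
 switchport mode access
 switchport access vlan 101
```

As long as the VLAN IDs on the switch match the VLAN tags of the ESXi port groups, the physical router lands in the same broadcast domain as the corresponding virtual segment.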
That’s for future use. After everything was installed and the IPv4 connectivity was granted I went on and put a first idea of some migration scenarios onto a piece of paper. I decided to start with IPv6 in the HQ, servers were dual stacked and some clients are IPv6 only. After that I migrated one Branch as IPv6 only, but there exists a dual stack connection on the routers through the Internet, one Branch as IPv6 only (Provider has only IPv4) and one Branch should stay with IPv4. So one possible outcome could look like this:
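To give an idea where the dual-stack part is heading, a dual-stacked HQ interface could look roughly like this (the 2001:db8::/32 prefix is the IPv6 documentation range, and all addressing here is invented):

```
! Dual stack on a HQ router – hypothetical addressing
ipv6 unicast-routing
!
interface GigabitEthernet1
 description HQ server LAN (dual stack)
 ip address 10.10.10.1 255.255.255.0
 ipv6 address 2001:db8:10:10::1/64
 ospfv3 1 ipv6 area 0
!
router ospfv3 1
 router-id 10.10.10.1
 address-family ipv6 unicast
```

IPv4 keeps running over plain OSPF as before, while OSPFv3 takes care of the IPv6 routes next to it.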
So, let’s see how everything fits together and how complicated things get… I’ll keep you posted with some technical solutions in the near future.