Python

Building tools is an important part of how I learn. Expanding on this, I built a tool to do some quick subnetting and another to increase the speed of the pingsweeper tool.

The Subnetter tool takes in a prefix, in quotes and with its mask, plus the prefix-length difference you want to subnet by. In the example below, 10.0.0.0/24 is subnetted into /26 networks.

#Louis DeVictoria 
#Script takes in a prefix and the prefix-length difference used to carve out subnets 
#Libraries 
import ipaddress

def Subnetter(prefix, change):
    #subnets(prefixlen_diff=change) splits the network into 2**change smaller subnets
    subnets = list(ipaddress.ip_network(prefix).subnets(prefixlen_diff=change))
    for i in subnets:
        print(i)

##Example Output 

Subnetter("10.0.0.0/24",2)
10.0.0.0/26
10.0.0.64/26
10.0.0.128/26
10.0.0.192/26
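
If you would rather specify the target mask directly instead of the prefix-length difference, the ipaddress module also accepts a new_prefix argument. A small variation on the same tool (my own sketch, not part of the original script):

#Same idea, but passing the target prefix length instead of the difference
import ipaddress

def SubnetterTo(prefix, new_prefix):
    for subnet in ipaddress.ip_network(prefix).subnets(new_prefix=new_prefix):
        print(subnet)

SubnetterTo("10.0.0.0/24", 26)   #prints the same four /26 networks as above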

Pingsweeper but Faster!

Something that became clear quickly is the need to do things faster. The subprocess module makes it possible to spawn new processes directly and even run them in parallel, which allows operations to be completed faster.

#Louis DeVictoria 
#Slow Pingsweeper Script 
import ipaddress
import os
from timeit import default_timer as timer

def pingsweepold(prefix):
    start = timer()
    #Expand the prefix into its host addresses
    hosts = list(ipaddress.ip_network(prefix).hosts())
    for i in hosts:
        #str() converts the address object to a plain string for the shell command
        ip = str(i)
        #Ping: count 1, numeric output only, wait at most 2 seconds
        result = os.system("ping -c 1 -n -W 2 " + ip)
        if result:
            print(ip, 'inactive')
        else:
            print(ip, 'active')
    stop = timer()
    print(f"{stop - start} seconds")


pingsweepold("10.245.0.0/26")
.....truncated output......
62.710192931001075 seconds

pingsweepold("public/26")
5.113516895999055 seconds


pingsweepold("public/24")
156.3443550189986 seconds
#Louis DeVictoria
import ipaddress
import os
import subprocess
from timeit import default_timer as timer

def pingsweep(prefix):
#Start a timer
    start = timer()
#os.devnull is a "special file" that discards anything written to it
    with open(os.devnull, 'wb') as limbo:
#Expand the prefix into its host addresses
        hosts = list(ipaddress.ip_network(prefix).hosts())
        for i in hosts:
#str() returns the host address as a plain string
            ip = str(i)
#Ping , Count 1 , Numeric Output Only , Wait 2 Seconds
            result = subprocess.Popen(['ping', '-c', '1', '-n', '-W', '2', ip], stdout=limbo, stderr=limbo).wait()
            if result:
                print(ip, 'inactive')
            else:
                print(ip, 'active')
        stop = timer()
        print(f"{stop - start} seconds")



pingsweep("10.245.0.0/26")
.....truncated output......
62.58549390799817 seconds

pingsweep("public/26")
5.2381483190001745 seconds

pingsweep("public/24")
155.7946709909993 seconds

Most of the performance gain comes from the ping switches that limit the wait time, which pingsweepold also uses, but there is still a further gain from the subprocess method. The times are not massively different for this tool, but it was fun to measure the time and performance of this tool set.

pingsweepold("public/24")
156.3443550189986 seconds

pingsweep("public/24")
155.7946709909993 seconds


We gained about half a second with the change. I hope you gained something from reading this, and I am thankful to share my journey with you.
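
If I wanted to chase a bigger speedup, the real win would come from launching the pings in parallel instead of waiting on each one before starting the next. A rough sketch of that idea, using the same ping switches as above (this is my own variation, not one of the measured scripts):

#Parallel variant: start every ping first, then collect the exit codes afterwards
import ipaddress
import os
import subprocess
from timeit import default_timer as timer

def pingsweep_parallel(prefix):
    start = timer()
    with open(os.devnull, 'wb') as limbo:
        procs = {}
        for i in ipaddress.ip_network(prefix).hosts():
            ip = str(i)
            #Launch the ping but do not wait for it yet
            procs[ip] = subprocess.Popen(['ping', '-c', '1', '-n', '-W', '2', ip],
                                         stdout=limbo, stderr=limbo)
        for ip, proc in procs.items():
            #Collect results; slow hosts now overlap instead of stacking up
            if proc.wait():
                print(ip, 'inactive')
            else:
                print(ip, 'active')
    print(f"{timer() - start} seconds")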

Resources:
https://docs.python.org/3/library/subprocess.html
https://docs.python.org/3/library/ipaddress.html

VXLAN/EVPN

One of the earliest forms of virtualization in computing was in the network itself: VLANs allowed for logical segmentation of the network. This worked well for a long time, but with the emergence of cloud computing the need to scale this virtualization grew. To scale, the network needed to grow beyond 4096 VLANs. Three technologies have allowed the network to scale into millions of virtual networks.

Spine & Leaf

Spine and leaf networks are designed to minimize the complexity of the network, minimize the features running on each box, and maximize bandwidth capacity. The goal is a non-contending network, or at least a well-understood oversubscription model.

What this means in practice: if you have a 1RU switch with 48 x 10G ports and 6 x 100G ports, you can connect all 48 ports to servers and run all six 100G ports upstream to the spine switches. That is a ratio of 1.25, meaning 25% more bandwidth is available upstream than down to the servers, which should allow all traffic to be non-contending. This topology design provides the method and bandwidth delivery to achieve cloud scaling.
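
The ratio is quick to sanity-check with the port counts from the example (my own arithmetic):

#48 x 10G server-facing ports versus 6 x 100G uplinks
downlink_gbps = 48 * 10    #480G toward the servers
uplink_gbps = 6 * 100      #600G toward the spines
print(uplink_gbps / downlink_gbps)    #1.25 -> 25% more capacity upstream than down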

VXLAN

VXLAN is the overlay technology that allows Layer 2 domains to be bridged over a routed IP network. It is a tunneling technology: VXLAN adds four headers (RFC 7348) that encapsulate the original MAC, IP, and port headers so the frame can be forwarded over a routed network. The headers direct the packet from the local switch to the peering tunnel endpoint. The routing to these endpoints can be done with IS-IS, OSPF, or BGP, but what is important to understand is that the peers need to discover each other. This is how switches indicate they are interested in participating in the Layer 2 domain.
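
To make the header stack concrete, here is a rough sketch of the encapsulation using scapy (scapy is not used anywhere in this post, and the addresses are only illustrative; the VNI matches the show output below):

#Outer Ethernet / outer IP / outer UDP / VXLAN header wrapped around the original frame
from scapy.all import Ether, IP, UDP, VXLAN

inner = Ether(src="00:00:00:aa:aa:01", dst="00:00:00:aa:aa:02") / IP(dst="192.168.1.10")
outer = (Ether() /
         IP(src="10.200.200.1", dst="10.200.200.2") /   #local VTEP to remote VTEP
         UDP(dport=4789) /                              #VXLAN's well-known UDP port
         VXLAN(vni=511000))                             #VNI for the Layer 2 segment
packet = outer / inner
packet.show()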

router# show vxlan 
Vlan            VN-Segment
====            ==========
411             411000
500             50000
511             511000
711             711000
1001            2001001

This was initially done with multicast, which makes sense: multicast is based on signaling up the tree that you are interested in joining a traffic group, and the VXLAN virtual interfaces (NVEs) map each VNI (VXLAN Network Identifier) to a group. This can also now be accomplished with the BGP EVPN address family, which has the added benefit of BGP traffic controls.

This segmentation technology eliminates costly spanning tree across switches and simplifies the configuration local to each switch. VLANs become locally significant, and VNIs are mapped across the fabric as the unique namespace, which supports roughly 16 million segments. This scales Layer 2 virtually over an IP fabric.

EVPN (Ethernet VPN)

EVPN is an extended address family of BGP. VXLAN allows a Layer 2 domain to be carried over a routed network but lacks a control plane for the routes; the control and data planes were both handled by the underlying IP fabric and multicast groups. With BGP EVPN, VXLAN handles the data plane while BGP advertises the various route types. This allows multi-tenant environments to scale while ensuring segmentation. EVPN allows tenants to be mapped into VRFs (IP-VRF or MAC-VRF), which are local, logical segmentations.

In order to follow the logical segmentation, what is important to remember:

  1. VLANs are configured with a VNI
  2. VNIs are associated to multicast groups or BGP
  3. VNIs carry both a Route Distinguisher & Route Target
    • The Route Distinguisher belongs to the VRF and separates the routing tables on the switch
    • The Route Target is the import/export policy tied to its own or another Route Distinguisher
  4. BGP is the control plane for the routes
    • MAC routes
    • IP address routes
    • Prefix routes
  5. There are separate routing tables for all of these.
# vlan -- vni MAC-VRF 
vlan 511
  vn-segment 511000

# Layer 3 Interface to IP-VRF Mapping 
interface Vlan511
  no shutdown
  mtu 9192
  vrf member dia-vdc
  ip address 172.31.1.2/29 tag 50000

#MAC-VRF Config 
 vni 511000 l2
   rd 10.200.0.1:511
   route-target import auto
   route-target export auto

#IP-VRF Config 
vrf context dia-vdc
  vni 50000
  rd 65101:50000
  address-family ipv4 unicast
    route-target both auto evpn

#vxlan encapsulation config 
interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  global ingress-replication protocol bgp
  member vni 50000 associate-vrf     #This is the IP-VRF 
  member vni 511000                  #This is the MAC-VRF 
    ingress-replication protocol bgp # BGP to pass Mac address routes 

#Show bgp info for MAC-VRF 
router# show bgp evi 511000
-----------------------------------------------
  L2VNI ID                     : 511000 (L2-511000)
  RD                           : 10.200.0.1:511
  Prefixes (local/total)       : 1/2
  Created                      : Mar 15 20:05:06.485798
  Last Oper Up/Down            : Mar 15 20:05:06.551859 / never
  Enabled                      : Yes
  Associated IP-VRF            : dia-vdc
  Active Export RT list        : 
        65201:511000 
  Active Import RT list        : 
        65201:511000 

#BGP Info for IP-VRF 
router # show bgp evi 50000
-----------------------------------------------
  L3VNI ID                     : 50000 (L3-50000)
  RD                           : 65101:50000
  Prefixes (local/total)       : 2/2
  Created                      : Mar 16 15:58:31.359963
  Last Oper Up/Down            : Mar 16 15:58:31.360000 / never
  Enabled                      : Yes
  Associated IP-VRF            : dia-vdc

Address-family IPv4 Unicast
  Active Export RT list        : 
        65201:50000 
  Active Import RT list        : 
        65201:50000 
  Active EVPN Export RT list   : 
        65201:50000 
  Active EVPN Import RT list   : 
        65201:50000 
  Active MVPN Export RT list   : 
        65201:50000 
  Active MVPN Import RT list   : 
        65201:50000 

Address-family IPv6 Unicast
  Active Export RT list        : 
        65201:50000 
  Active Import RT list        : 
        65201:50000 
  Active EVPN Export RT list   : 
        65201:50000 
  Active EVPN Import RT list   : 
        65201:50000 
  Active MVPN Export RT list   : 
        65201:50000 
  Active MVPN Import RT list   : 

#Route table for MAC-VRF 
router# show bgp l2vpn evpn vni-id 511000
BGP routing table information for VRF default, address family L2VPN EVPN
BGP table version is 123, Local Router ID is 10.200.0.1

   Network            Next Hop            Metric     LocPrf     Weight Path
Route Distinguisher: 10.200.0.1:511    (L2VNI 511000)
*>l[3]:[0]:[32]:[10.200.200.1]/88
                      10.200.200.1                      100      32768 i
*>e[3]:[0]:[32]:[10.200.200.2]/88
                      10.200.200.2                                   0 65101 65202 i

#Route Table for IP-VRF 
 show bgp l2vpn evpn vrf dia-vdc 
Route Distinguisher: 65101:50000    (L3VNI 50000)
*>l[5]:[0]:[0]:[28]:[192.168.1.0]/224
                      10.200.200.1             0        100      32768 ?
*>l[5]:[0]:[0]:[29]:[172.31.1.0]/224
                      10.200.200.1             0        100      32768 ?

There are several RFCs that explain this better than I can here, as well as an interesting Facebook whitepaper about BGP as the single protocol in the datacenter.

I hope this brief overview helps in your journey to understand more about the tech available to us today.

RFCs:

https://datatracker.ietf.org/doc/html/rfc8365

https://datatracker.ietf.org/doc/html/rfc7432

Docker

In a recent project, it was decided to deploy the new service with Docker containers, so a crash course in Docker became part of the scope of the project. Conceptually, I understood Docker as a way to abstract components of an underlying operating system without consuming the full resources of a virtual machine. There is the concept of images and containers: images are the packaged operating system and software, and containers are running instances of them.
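
The image/container distinction is easy to see with the Docker SDK for Python (the rest of this post uses the CLI; this is just a quick illustration and assumes pip install docker):

#Images are the templates on disk; containers are running (or stopped) instances of them
import docker

client = docker.from_env()                 #talk to the local Docker Engine
print(client.images.list())                #the images pulled onto this host
print(client.containers.list(all=True))    #the containers created from those images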

Docker Engine Install

For my Ubuntu 20.04 instance this was:

  1. Updating the local linux packages (as one does on all new installs)
  2. Installing the packages to pull the Docker Repo over HTTPS .
  3. Add Docker GPG Key
  4. Set up the Repository
    • Stable , Nightly or Test Repo options
    • Stable was selected
  5. Install the Docker Engine
    • Check the versions installed in the repo
  6. Hello-World
  7. Manage Docker as a non root user
  8. Enable Docker on boot
# 1. Update the local linux packages 
sudo apt-get update

# 2. Install packages for HTTPS 
sudo apt-get install \
  ca-certificates \ 
  curl \ 
  gnupg \ 
  lsb-release

# 3 Add Docker GPG Key 
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# 4 Set up the Repo 
deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu   focal stable

# 5 Install Docker Engine 
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
#Check the versions 
apt-cache madison docker-ce
root@server:/home/# apt-cache madison docker-ce
 docker-ce | 5:20.10.14~3-0~ubuntu-focal | https://download.docker.com/linux/ubuntu focal/stable amd64 Packages
 docker-ce | 5:20.10.13~3-0~ubuntu-focal | https://download.docker.com/linux/ubuntu focal/stable amd64 Packages
 docker-ce | 5:20.10.12~3-0~ubuntu-focal | 


#6 Hello World 
sudo docker run hello-world

#7 Manage Docker as Non root 
sudo groupadd docker
#Add user to docker group 
sudo usermod -aG docker $USER
#Add new group 
newgrp docker 
# Test you can run without sudo 
docker run hello-world

#8 Enable Docker on Boot default on Ubuntu 
sudo systemctl enable docker.service
sudo systemctl enable containerd.service


Running Containers:

Installing my project's container from a repo was easy, but then I needed to install more images.

#Container repo  
docker pull example.com/pathtocontainer 

#Run the container 
docker run --name example 

Docker Engine vs Docker Compose ?

After Docker Engine was installed, I was able to install the container image from the project repo. This was easy enough. What I did learn is that projects often need multiple images/containers to run, and Docker Compose is what is needed to run these.

#Install Docker Compose 
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

#Make the binary executable 
sudo chmod +x /usr/local/bin/docker-compose

#Validate Installation 
docker-compose --version
docker-compose version 1.29.2, build 5becea4c

Docker Compose is great: you make changes to the configuration in the YAML file docker-compose.yml. Inside it you can configure each of the containers, and you can create a volume or mount a container directory to a directory on the virtual machine itself.

version: "3.7"
services:
  nautobot:
    image: "image"
    env_file:
      - "local.env"
    ports:
      - "8443:8443"
      - "8080:8080"
    restart: "unless-stopped"
    volumes:
      - ./secret:/opt/nb/secret #This is a mount location 
      - ./secret:/var/lib/docker/volumes/nbvolume/_data # This is a volume 
volumes:
  postgres_data:
  nbvolume:

Docker Images:

Images are installed and we have now run the container, so let's check out what is installed and running.

#List running containers 
docker ps 
#List all containers 
docker ps -a 
#Inspect an image 
docker image inspect batfish/batfish:latest 
#Run an image 
docker run <IMAGE>
#Run a container with a specific hostname (DNS entry) 
docker run --hostname HOSTNAME IMAGE 

Docker Volumes:

One lesson learned quickly is the need to mount directories or use volumes. Containers do not keep their data when they are rebuilt, so make sure you create a volume and add it to the docker-compose.yml.

#Create the Volume
docker volume create nbvolume

#Check for the volumes 
docker volume ls

# Inspect the volume  
docker volume inspect nbvolume 
[
    {
        "CreatedAt": "2022-04-20T09:04:12-04:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/nbvolume/_data",
        "Name": "nbvolume",
        "Options": {},
        "Scope": "local" 
}
]

Then you can rebuild the service to connect the container to the storage volume.

docker-compose up --build 
#This outputs the log messages so you can determine if there are any issues with the container builds 

#Run detached (-d) once the build looks good 
docker-compose up -d --build    

Docker Install Linux Packages from Container

Containers are intended to be minimal installations, so if you need to install packages inside a container you need to enter it as root.

#Enter the shell as root 
 docker exec -u 0 -it  container_name bash

#install stuff 
apt-get -y install firefox
apt-get -y install vim

Docker Networking

You may also need to move containers onto a shared network. The following is how I was able to review the networks and move a container's network to one shared with another container.

#Inspect All Networks 
docker network ls
NETWORK ID     NAME                              DRIVER    SCOPE
345c8863e2dd   bridge                            bridge    local
d728aeb8fb17   host                              host      local
93ccefdc0b5c   container                        bridge    local
e61e468c0c0a   none                              null      local


#Connect container to a network 
docker network connect b46076c7a7b5 batfish/batfish 

Inspecting Networks

The bridge network is what connects the containers to the virtual machine's network, and you can learn the details of any network with the inspect command.

docker network inspect host

# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "345c8863e2dd24587aa1410b20ffd5092c8e30c58a8ec035f223541b5b70679b",
        "Created": "2022-04-04T22:34:26.153453995-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
        "Containers": {
            "c17dcdd680b4a766c557ba74dc5fa5335508cf30cbc67524f936e0470afeb344": {
                "Name": "affectionate_engelbart",
                "EndpointID": "16889b97db32a412731453a77ba99a5085375884db1e1d37de66386001d02117",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        
]

Learning container abstraction is an important component of implementing solutions. I hope this post helps you in your journey while enhancing my own understanding.

Python

One of the most important skills for modern network engineers is Python. My habit for learning Python was to use it daily and tie it directly to my daily needs. What gives me the most success is making basic tools that make my life easier.

Network engineering is all IP addresses, so working with the ipaddress module bridges daily tasks with continuously learning Python. Something easy to start with is a function that checks whether whatever is passed in is a valid IPv4/IPv6 address.

#Python3
#Lou D 
#Function to return True/False if the IPv4/IPv6 address passed in is valid
import ipaddress

def isIP(address):
    try:
        #ip_address() raises ValueError if the string is not a valid v4/v6 address
        ipaddress.ip_address(address)
        return True
    except ValueError:
        return False
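
For example, a quick check of the function above (my own usage sketch):

print(isIP("192.0.2.1"))      #True
print(isIP("2001:db8::1"))    #True
print(isIP("not-an-ip"))      #False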

Something a little more practical: a quick script to ping sweep a subnet.

#Python3
#Lou D 
import ipaddress
import os

def pingsweep(prefix):
    #Expand the prefix into its host addresses
    hosts = list(ipaddress.ip_network(prefix).hosts())
    for i in hosts:
        #str() converts the address object to a string for the shell command
        ip = str(i)
        os.system("ping -c 1 " + ip)

Simple and easy, and it will run some pings against a range of addresses. The return should look something like this:

Python3
pingsweep("8.8.8.8/31")
dns.google.
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=118 time=18.459 ms

Anyone who codes would tell you there is room for improvement here. For instance, there is another method to make DNS queries, using the socket module. You can see a sample below.

 
import socket

def revdns(address):
    #Resolve the address to its fully qualified domain name
    addr = address
    domain = socket.getfqdn(addr)
    if domain != addr:
        print(domain)
        return domain
    else:
        domain = (addr + " no revdns ")
        print(domain)
        return domain

if __name__ == "__main__":
    revdns("8.8.8.8")

I get the most value out of starting small and adding more functions as I go.

Modern DataCenter Topology

Modern datacenter network designs employ technology to promote virtual networks over the top of a Layer 3 underlay network. This design has multiple layers, and maintaining segmentation between the layers is important. Compartmentalizing each aspect of the design allows the engineer to facilitate the build: the topology, underlay, and overlay components are each designed within their own domains. This blog holds my notes on the topology design.

Before all the automation, overlays, and magical clouds can be built, the network design must scale to meet demand. How do you achieve scalability? Scale out.

Scaling out is the concept of adding capacity to meet demand. In a spine & leaf topology this means adding devices at the spine layer for east/west bandwidth, or devices at the rack layer for access capacity, while keeping features to a minimum and adding as much bandwidth and redundancy as needed.

These topologies are deployed in PODs. A POD is a Layer 3 micro-cluster used to deploy services. The access, or top of rack, switches connect to the fabric layer (spine) switches with 40/100G uplinks to each, and the top of rack (leaf) switches provide 10/25/40G access to the servers. The goal is to provide a "Fat Tree" where the uplinks from the ToR (Top of Rack) to the Fabric (Spine) layer are provisioned to allow for a non-contending network.

The Fat Tree design provisions the uplinks from the ToR to the fabric layer switches to provide non-contended throughput. A sample POD design has FSW (Fabric or Spine Switches) and RSW (Top of Rack or Leaf) switches. Each RSW has 3 x 100G uplinks to each FSW, and the servers have 2 x 10G uplinks to the RSW. If the RSW has 48 x 10G and 6 x 100G ports, this allows for the 1.25:1 model described earlier at full rate.

The goal of the design is agility. Building out these PODs with fixed-port devices supports predictable bandwidth and latency and avoids backplane bottlenecks from elephant flows in chassis boxes. This design approach minimizes the features needed, maximizes the physical resources of the switch, and avoids the software and other complications that come with chassis boxes. Chassis boxes are physically less flexible and encourage more dependency on "God" boxes.

What you do lose with fixed 1RU switches compared to a chassis is power and cabling efficiency, but you gain agility. Single rack-unit switch topologies can be deployed for specific use cases, reused for generations, and, with proper underlay/overlay design, avoid vendor lock-in. The key advantage of this design is fault tolerance: with fewer "critical" devices, switches are quicker to replace or redeploy. In my next blog, I will discuss the underlay routing design of these networks.

References:

Russ White – Effective Data Center Design Techniques

BGPAlerter

Something that kept me up at night was the idea that one of my organization's prefixes could be abused. After a few restless nights, I decided to look for a tool comparable to what BGPmon was before it was acquired.
I came across BGPAlerter, open source software for BGP monitoring. The program monitors BGP streams from public repositories from RIPE, Cloudflare, and NTT, and checks for BGP hijacks, path changes, and RPKI issues based on the ASNs and prefixes the user elects to monitor.

The platform is useful because it is a lightweight open source program I can use to monitor BGP streams from public repositories to ensure path security and stability. The software can be run from source code, Docker, or as a Linux service. I elected to install and run it directly on a Linux box.

#Modify the system IPs and apply the netplan config
netplan apply 
#Check firewall for ssh allowed
ufw app list
ufw allow ****
ufw enable 
#Change the system Host name
sudo hostnamectl set-hostname newNameHere
sudo reboot

#Check if SSH is enabled 
sudo systemctl list-unit-files | grep enabled | grep ssh

#Switch to user account 
sudo su bgpalerter
cd /home/bgpalerter/

#Install whois 
sudo apt update
sudo apt install whois

#Initialize Git
git init
git pull https://github.com/nttgin/BGPalerter.git

#Install BGP Alerter 
wget https://github.com/nttgin/BGPalerter/releases/download/v1.24.0/bgpalerter-linux-x64

#Mark the file as executable
chmod +x bgpalerter-linux-x64

#Check the installation and version 
./bgpalerter-linux-x64 --version

#Add prefixes to be monitored 
nano ~/prefixes.yml

#Install Node.js
sudo apt install npm

#Use Script to Auto-generate Prefix List

npm run generate-prefixes -- --a ASN,ASN  --o prefixes.yml

Getting started was easy enough: using the organization's BGP ASNs, BGPAlerter quickly built a list of prefixes seen on the RIPE RIS server and wrote a prefixes.yml file to store the list of monitored prefixes.
BGPAlerter uses the concept of "monitors". Monitors analyze the data flow and produce alerts.
Different monitors try to detect the different issues the administrator is interested in.

 monitorHijack
 monitorNewPrefix
 monitorPath
 monitorVisibility
 monitorAS
 monitorRPKI
 monitorROAS
 monitorPathNeighbors

The use of these monitors lets me watch for specific path issues and be alerted to them. The snippet below is the example from the GitHub documentation, but it works well for my implementation.

Example: The prefixes list of BGPalerter has an entry such as:

165.254.255.0/24:
   asn: 15562
   description: an example on path matching
   ignoreMorespecifics: false
   path:
     - match: ".*2194,1234$"
       notMatch: ".*5054.*"
       matchDescription: detected scrubbing center
     - match: ".*123$"
       notMatch: ".*5056.*"
       matchDescription: other match
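
To see roughly how those match/notMatch expressions behave, here is a small illustration using Python's re module on a comma-joined AS path (my own sketch, not BGPalerter's internal matching code; the AS path is hypothetical):

import re

as_path = "3356,2914,2194,1234"    #hypothetical AS path ending in 2194,1234

matched = bool(re.search(r".*2194,1234$", as_path))    #path ends with 2194,1234
excluded = bool(re.search(r".*5054.*", as_path))       #notMatch pattern not present

if matched and not excluded:
    print("alert: detected scrubbing center")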


These events can be fired into multiple alert channels such as Slack, email, Kafka, and of course syslog. I use the Slack integration, which helped detect a prefix that needed to be removed from announcement.

I have been using the program for a few months and it serves its purpose. Version 1.28 has introduced some new features, such as a RESTful API that active alerts can be pulled from.

#Helpful Commands 
#Start BGPAlerter
npm run serve&
#Confirm the Version in Use
npm run serve -- --v

Resources:
https://github.com/nttgin/BGPalerter
https://ris-live.ripe.net/

May 4th 2020

Networking Tools, Upgrades, Deployments and Troubleshooting.

Last week was a productive one. I started a new site build, reviewed security scans with NMAP, audited speed testing with iperf, troubleshot Nexus switches, and made some augments to my local environment.

This week, a client reported speed issues on a P2P circuit that egresses off a Nexus 3064. The device kept logging "MTM Buffer" messages; there were hundreds of these logs. The system messages between the ASIC on the interface and the CAM table were being left open and the CAM table was not updating, and the Layer 2 consistency check continued to fail. The recommendation from TAC was a reload of the switch. Following the reload, the output of (show system internal mts) was back to normal levels.

Iperf: Following this reload, I needed to test the transit service off the impacted switch. I wanted to test against the capacity of the server before I tested across the circuit, so I took a baseline speed test with iperf3. I turned the server on locally (iperf -s -p 5002), and once the server was on, I ran a test against it (iperf -c localhost -p 5002). I also learned about the 'nohup' command in Linux, which executes the specified command and ignores hangup signals. I then tested across the circuit successfully as well.

NMAP: A client has been running nmap scans for internal network monitoring. An issue was discovered: the existing method was not picking up all open ports. The client was using the TCP connect method (nmap -sT), which uses the underlying OS to establish a connection rather than sending raw packets, and this method missed an open rsync port. The method was moved back to the SYN scan (nmap -sS). The SYN scan can be problematic since it triggers reactions from security devices in the path, but I was able to whitelist the "attack" server in this case.

Over the last few weeks, I have discovered the need to expand and monitor my home network. One of my latest decisions is to expand capacity with a UniFi Dream Machine Pro. The device is a combination firewall and security gateway with 10G SFP+ WAN and LAN. There is also support for hosting the Ubiquiti controller directly on the appliance, DNS filtering, IPS/IDS, and direct support for the camera system. I hope to write more about the deployment in the next blog.

I am looking forward to the webinar on "History of IPv6: Past, Present, and Future" hosted by Nalini Elkins and Bob Hinden. Bob Hinden is a co-creator of the IPv6 protocol. There is a follow-up later in the month on "IPv6 Transition Mechanisms and DHCPv6". These are sponsored by ARIN.

The last item of this week's post is the Packet Pushers podcast. Ethan & Greg discussed the last decade of hosting the show and how their relationships with other professionals and vendors have changed over the years. The Slack channel is one of my favorite places to discuss everyday networking. I highly recommend it.

IPv6 – April 23 2020

One of my clients recently requested to deploy IPv6 access at multiple sites. There was a previous /28 assignment from ARIN that was partially deployed. I followed some of the previous information to model my deployment, but still wanted to spend some time reviewing IPv6 [RFC2460] and learn the why and how of deploying the protocol. I reviewed some material from the CCIE books and attended an ARIN-sponsored webinar on IPv6 fundamentals.

Originally specified in RFC 2460 and later replaced by RFC 8200, the IPv6 protocol brings five improvements: 1. increased address space, 2. header simplification, 3. improved extension support, 4. flow labeling, and 5. extensions to support confidentiality, authentication, and integrity.

Ethan Banks [Twitter @ecbanks] tweeted about the ARIN-sponsored webinar on IPv6 fundamentals, the first in a series of lectures led by Nalini Elkins. She has remarkable credentials and has authored two RFCs. The first hour covered the IPv6 address structure: the 128-bit address length, represented in hexadecimal, and the various address types, Global [2000::3], Private/Link Local [FE80::], ULA (Unique Local Unicast) [FC00::7], and Multicast [FF]. The second hour covered ICMPv6, SLAAC, Multicast Listener Discovery, and Router Advertisements for IPv6. I took plenty of notes from this course and am looking forward to the next webinar on May 7th, History of IPv6: Past, Present, and Future, which is led by Bob Hinden, a co-inventor of the IPv6 protocol.

After the review and the webinar, I felt empowered to build the new IPv6 deployments. I started by assigning address space to each site: a /32 global prefix per site, broken into two /48 prefixes [XXXX:AAAA:0::/48, XXXX:AAAA:1::/48], one for internal use and another for customer assignment. Each 48-bit network prefix is followed by a 16-bit subnet ID, followed by the host portion. This allows the assignment of a /64 on each interface/customer. It seems like a tremendous waste of address space on a point-to-point link, but it helps keep the address hierarchy.
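
The same hierarchy is easy to model with the ipaddress module from the earlier Python posts. A quick sketch, using the 2001:db8::/32 documentation prefix in place of the real assignment:

#Carve a site /32 into /48s, then a /48 into per-interface /64s
import ipaddress

site = ipaddress.ip_network("2001:db8::/32")   #documentation prefix as a stand-in
forty_eights = site.subnets(new_prefix=48)
internal = next(forty_eights)     #2001:db8::/48   internal use
customer = next(forty_eights)     #2001:db8:1::/48 customer assignment
print(internal, customer)

#The 16-bit subnet ID gives 65536 /64s per /48, one per interface or customer
print(next(internal.subnets(new_prefix=64)))   #2001:db8::/64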

OSPFv3 was deployed to support internal routing. OSPFv3 advertises the links themselves instead of the actual subnets, in part because interfaces typically have multiple IPv6 addresses and OSPFv3 uses the link-local address space for communication. BGP was turned up both internally and externally. The most interesting issue was creating a transit access list for IPv6 bogon/martian space.

My next phase of this project will be automatic address assignment for the space and some additional learning on the protocol. I look forward to continuing to make use of IPv6.


April 14th 2020

Despite my blog being dead since February, I have been active on my networking journey. COVID-19 has forced millions to work from home, including myself, and the home network's importance has come to the forefront.

Since everyone started working from home a few weeks ago, there have been reports from users of video and audio call quality issues. Prior to these events, I had ordered additional access points to expand the network. My internal monitoring (librenms) showed my firewall CPU utilization over 85% daily. I reviewed the Ubiquiti controller and USG (3P) logs and noticed that the firewall was not forwarding in hardware and I could not switch it back on. The beta IPS/IDS feature was enabled; although the service should still allow 85Mbps, it disabled hardware offload. I disabled the feature, re-enabled hardware offload, and users have reported improved performance.

As a result of the issues, it became clear my monitoring needs to improve. Since the issues were related to voice quality, I wanted to get a better idea of the latency of my network. Smokeping is a latency monitor that I installed as part of my librenms build, and it is already showing some smoke I will need to review over the coming days.

I am continuing my studying as well, which, albeit less exciting sometimes, is equally important. IPv6 has been the latest topic. Recently a customer tasked me with deploying v6 to an existing site. I spent time reviewing v6 material, watching videos as a primer, but reading the RFCs allows for my own interpretation of the technology. RFC 2460, RFC 4861, and RFC 4862 have been worthwhile reads. Yesterday, while scrolling through my Twitter feed, Ethan Banks over at Packet Pushers posted a webinar for an ARIN IPv6 course this week. I am looking forward to this webinar.

February 26th, 2020

Today I decided to take on the new class of Cisco certification for the CCNP Enterprise track. I booked the 300-401 exam for the end of April. I am not certain this is enough time to study for the exam, but it should motivate me to review every day. The ultimate goal is a CCIE number, which is no easy undertaking. If I pass the 300-401, I would need to take the lab exam inside of 18 months.

This morning's topic was OSPF stub areas. The ability to limit SPF runs in OSPF is key to scaling out the solution. OSPF stub areas prevent LSA Type 4 (ASBR) and Type 5 (External) from being announced into the stub area; instead, a default route is announced by the ABR. OSPF allows further traffic engineering by blocking Type 3 (Summary) routes as well, which is known as a Totally Stubby Area.

OSPF traffic engineering also allows for Not So Stubby Areas and Totally Not So Stubby Areas. I believe I understand how these work: the principle behind both is that there can be a need to redistribute external protocols into OSPF and to announce them to other areas of the OSPF graph.

In an NSSA or TNSSA, redistributed routes are introduced into OSPF and announced within the area as Type 7 LSAs. When the ABR floods this information into another area, it converts the Type 7 into a Type 5 (External) LSA.

I also learned about traffic engineering in a topology where a stub area can learn a default route from two different ABRs. Although the stub flag is negotiated between OSPF neighbors, the "no-summary" option has no bearing on the adjacency. This allows traffic engineering where one ABR filters Type 3 (Summary) LSAs while the other does not: traffic follows the longest-match prefixes through the ABR that still sends summaries, while the "no-summary" ABR provides the redundancy of the default route.