...
- Why does na-arc-3 have such poor network performance to the other na-arc nodes?
  - Pinging na-arc-[1,2,4,5] with a payload larger than -s 1490 drops all packets (see the MTU probe sketch after this list).
  - iperf tests show 10Gb/s between naasc-vs-3 (p5p1.120) and naasc-vs-5 (p2p1.120), so it isn't a bad card (see the iperf sketch after this list).
  - The iptables rules on na-arc-3 look different from those on na-arc-[2,4,5]; na-arc-1 also looks a bit different (see the iptables diff sketch after this list).
- Why does iperf show 10Gb/s between na-arc-5 and na-arc-[1,2,4]? How is this possible if the default interface on the respective VM Hosts is 1Gb/s?
- Why does iperf show 9Gb/s between na-arc-4 (which is on naasc-vs-4) and na-testcarta-2 (which is on naasc-vs-3) when there is only a 1Gb/s link between those two VM Hosts?
- Is putting all the production docker swarm nodes on the same ASIC on the same Fabric Extender of the cv-nexus switch a good idea?
  - I suspect not, because the production docker swarm nodes appear to use the 10Gb/s network, which is on cv-nexus9k.
- When I run ethtool eth0 as root on na-arc-{1..5} and natest-arc-{1..2}, the output is just Link detected: yes instead of the full report with the link speed.
- Why does natest-arc-3 have ens3 instead of eth0, and why is its speed only 100Mb/s?
  - virsh domiflist natest-arc-3 shows the Model as rtl8139 instead of virtio (see the virtio sketch after this list).
- Why do natest-arc-{1..3} have 9 veth* interfaces in ip addr show while na-arc-{1..5} don't have any? (See the veth-counting sketch after this list.)
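A 1490-byte ICMP payload plus 28 bytes of ICMP/IP headers is a 1518-byte packet, so the cutoff smells like an MTU or VLAN-tagging mismatch somewhere on the path. A minimal MTU probe sketch, run from na-arc-3 (the target host is just an example):

```bash
# -M do sets the Don't Fragment bit, so ping fails outright once
# payload + 28 header bytes exceeds the smallest MTU on the path.
# 1472 is the classic maximum payload for a standard 1500-byte MTU.
for size in 1400 1440 1460 1472 1480 1490 1500; do
    if ping -c 2 -W 1 -M do -s "$size" na-arc-1 >/dev/null 2>&1; then
        echo "payload $size: ok"
    else
        echo "payload $size: dropped"
    fi
done
```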
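For reference, an iperf sketch of the kind of run used above, assuming classic iperf2 (iperf3 takes -s/-c the same way but has different tuning flags); the addresses in angle brackets are placeholders for the VLAN 120 interface IPs:

```bash
# On naasc-vs-5: listen, bound to the p2p1.120 interface address.
iperf -s -B <addr-of-p2p1.120>

# On naasc-vs-3: push traffic for 10 seconds from p5p1.120.
iperf -c <addr-of-p2p1.120> -B <addr-of-p5p1.120> -t 10
```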
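To pin down exactly how the rules differ, an iptables diff sketch along these lines helps; it assumes root ssh to each node:

```bash
# Dump each node's rules, dropping the timestamp comments and the
# [packets:bytes] counters that legitimately differ between hosts.
dump() { ssh "$1" iptables-save | sed -e '/^#/d' -e 's/\[[0-9]*:[0-9]*\]//'; }

for host in na-arc-1 na-arc-3 na-arc-4 na-arc-5; do
    echo "=== na-arc-2 vs $host ==="
    diff <(dump na-arc-2) <(dump "$host")
done
```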
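The terse ethtool output is at least consistent with virtio NICs, which generally don't expose a link speed, while the emulated rtl8139 on natest-arc-3 reports its emulated 100Mb/s. If rtl8139 is the problem, a virtio sketch for switching the guest's NIC model (run on the VM host; the change needs a cold boot, not just a reboot):

```bash
# Confirm the emulated NIC model first.
virsh domiflist natest-arc-3

# Change <model type='rtl8139'/> to <model type='virtio'/> in the
# <interface> stanza of the guest definition.
virsh edit natest-arc-3

# The new model is only picked up on a full shutdown/start.
virsh shutdown natest-arc-3
virsh start natest-arc-3
```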
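Each veth* interface is normally one end of a pair created when a container attaches to a bridge network, so counting containers alongside veth interfaces should show whether natest-arc-{1..3} simply run bridge-attached containers while the na-arc swarm nodes use a different network driver. A veth-counting sketch, again assuming root ssh:

```bash
# Compare the number of veth interfaces to the number of running
# containers on each node.
for host in natest-arc-1 natest-arc-2 natest-arc-3 na-arc-1 na-arc-5; do
    echo "=== $host ==="
    ssh "$host" 'echo -n "veth interfaces:    "; ip -o link show type veth | wc -l;
                 echo -n "running containers: "; docker ps -q | wc -l'
done
```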
...