...
- Why does na-arc-3 have such poor network performance to the other na-arc nodes?
- Pinging na-arc-1 with any payload larger than -s 1490 loses all packets (see the ping sketch after this list).
- Why does iperf show 10Gb/s between na-arc-5 and na-arc-[1,2,4]? How is this possible when the default interface on the respective VM Hosts is only 1Gb/s? (See the iperf sketch after this list.)
- Why does iperf show 9Gb/s between na-arc-4 (which is on naasc-vs-4) and na-testcarta-2 (which is on naasc-vs-3) when there is only a 1Gb/s link between those two VM Hosts?
- Is it a good idea to put all the production docker swarm nodes on the same ASIC of the same Fabric Extender on the cv-nexus switch?
- I think not, because the production docker swarm nodes appear to use the 10Gb/s network, which is on cv-nexus9k.
- Why does ethtool eth0, run as root on na-arc-{1..5} and natest-arc-{1..2}, print only Link detected: yes instead of the full report with speed? (See the interface sketch after this list.)
- Why does natest-arc-3 have ens3 instead of eth0, and why is its speed only 100Mb/s?
- Why do natest-arc-{1..3} show 9 veth* interfaces in ip addr show while na-arc-{1..5} show none at all? (The interface sketch below counts these too.)
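A payload of 1490 bytes plus the 8-byte ICMP header and 20-byte IP header makes a 1518-byte IP packet, so losses above -s 1490 suggest an MTU or encapsulation mismatch somewhere on the path. A minimal probe sketch, assuming the hostnames above resolve and ping is Linux iputils (whose -M do flag forbids fragmentation, so an oversized probe fails at the path MTU instead of being silently fragmented):

```bash
#!/bin/bash
# Sweep ICMP payload sizes against each na-arc node and report which
# sizes get a reply. 1472 is the largest payload that fits a standard
# 1500-byte MTU (1472 + 8 ICMP + 20 IP = 1500).
for host in na-arc-{1..5}; do
    for size in 1472 1490 1491 1500; do
        if ping -c 1 -W 1 -M do -s "$size" "$host" >/dev/null 2>&1; then
            echo "$host payload=$size ok"
        else
            echo "$host payload=$size LOST"
        fi
    done
done
```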
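To make the throughput numbers reproducible, the same iperf pair can be rerun between any two nodes. A minimal sketch, assuming classic iperf (iperf2) is installed on both ends with its default TCP port 5001 open between them:

```bash
# On the receiving node (e.g. na-arc-5):
iperf -s

# On the sending node (e.g. na-arc-4): 10-second TCP test,
# reporting the rate at 1-second intervals.
iperf -c na-arc-5 -t 10 -i 1
```

If the reported rate genuinely exceeds 1Gb/s, watching the byte counters on the VM Host's physical interface during the test (ip -s link on the host) would show whether the traffic actually crosses the 1Gb/s link or stays inside a virtual switch.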
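The truncated ethtool output is typical of virtio NICs in VMs, whose driver has historically reported only link state, with no speed or duplex. A sketch that collects each node's default interface (discovering the name also handles the eth0 vs ens3 difference), its speed as both ethtool and /sys see it, and the veth count, in one pass. It assumes root ssh access to the hostnames above; 8.8.8.8 is only used to look up the default route, no packet is sent:

```bash
#!/bin/bash
# Gather interface details from all swarm nodes.
for host in na-arc-{1..5} natest-arc-{1..3}; do
    echo "== $host =="
    ssh "root@$host" '
        dev=$(ip -o route get 8.8.8.8 | sed -n "s/.* dev \([^ ]*\).*/\1/p")
        echo "default interface: $dev"
        ethtool "$dev" | grep -E "Speed|Link detected"
        # virtio sometimes exposes a speed here even when ethtool does not:
        cat "/sys/class/net/$dev/speed" 2>/dev/null
        # docker typically creates one veth per container network attachment:
        echo "veth count: $(ip -o link show type veth | wc -l)"
    '
done
```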
...