
On Jul. 25, 2022 Jeff Kern asked K. Scott Rowe to head a tiger team to investigate the various issues that have affected the ALMA Archive hosted in CV for the past few weeks to months.  The team was initially just K. Scott.


Documented Issues


Timeline of events

  • 2020-03-19: ALMA suspends science observing and stows the array because of COVID-19.
  • 2020-06-24: Archive webapps (aq, asaz, rh, etc, but not SP) moved to new Docker Swarm (na-arc-*) system.  See more.
  • 2021-03-17: ALMA re-starts limited science observations, resuming Cycle 7.  See more.
  • 2021-10-01: ALMA starts Cycle 8 observations.  See more.
  • 2022-02-03: Science Portal (SP) upgraded Plone, Python, RHEL and moved into Docker Swarm.  All other webapps had already been in Docker Swarm.
  • 2022-04-18: First documented report of performance issues.  Webapps moved to pre-production Docker Swarm (natest-arc-*).  See more
  • 2022-05-09: Moved Science Portal (SP) from Docker Swarm to an rsync copy on http://almaportal.cv.nrao.edu/ because of performance issues.
  • 2022-05-31: Moved Science Portal (SP) from the rsync copy back to Docker Swarm.
  • 2022-06-30: Tracy changed the eth0 MTU on the production docker swarm nodes (na-arc-*) from the default 1500 to 9000. The test swarm is still 1500.

Benchmarks

  • Using Apache Benchmarks every hour to load http://almascience.nrao.edu/ on rastan.aoc.nrao.edu
  • Using download script to get 2013.1.00226.S-small (no ASDM tarballs) every hour on cvpost-master.aoc.nrao.edu
  • Using download script to get 2013.1.00226.S-large (with ASDM tarballs) every hour on testpost-master.aoc.nrao.edu
  • iperf tests using iperf3 -B 10.2.97.72 -c 10.2.97.71 -P 1 -i 5 -V -f g -t 15 -w 1M
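A minimal sketch of how these hourly checks might be wired up with cron; the log paths and request counts here are assumptions, not the actual scripts:

```shell
# Hypothetical crontab entries (log paths and request counts are assumptions).
# Hourly Apache Benchmark load of the archive front page from rastan:
0 * * * *  ab -n 50 -c 2 http://almascience.nrao.edu/ >> /var/tmp/ab-almascience.log 2>&1
# Hourly iperf3 throughput check (requires `iperf3 -s` already
# listening on 10.2.97.71; flags match the command above):
30 * * * * iperf3 -B 10.2.97.72 -c 10.2.97.71 -P 1 -i 5 -V -f g -t 15 -w 1M >> /var/tmp/iperf3.log 2>&1
```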

Production docker swarm iperf tests, measured in Gb/s (row = sending node, column = receiving node; VM host in parentheses):

                na-arc-1      na-arc-2      na-arc-3      na-arc-4      na-arc-5
                (naasc-vs-4)  (naasc-vs-4)  (naasc-vs-3)  (naasc-vs-4)  (naasc-vs-5)
  na-arc-1      -             18            0.002         20            10
  na-arc-2      20            -             0.002         20            10
  na-arc-3      0.002         0.002         -             0.002         0.002
  na-arc-4      20            19            0.002         -             10
  na-arc-5      10            10            0.002         10            -
There is clearly something wrong with na-arc-3: every path to or from it measures roughly 0.002 Gb/s.
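The matrix above can be regenerated with a simple loop. This is a sketch, not the actual test procedure: it assumes `iperf3 -s` is already listening on every node and that the na-arc hostnames resolve; it only emits the commands so they can be reviewed before running.

```shell
#!/bin/sh
# Sketch: emit the pairwise iperf3 commands needed to rebuild the
# bandwidth matrix above.  Assumes an iperf3 server (`iperf3 -s`) is
# already running on each target node.
iperf_mesh() {
    nodes="na-arc-1 na-arc-2 na-arc-3 na-arc-4 na-arc-5"
    for src in $nodes; do
        for dst in $nodes; do
            # Skip the diagonal (a node testing against itself).
            [ "$src" = "$dst" ] && continue
            echo "ssh $src iperf3 -c $dst -P 1 -f g -t 15 -w 1M"
        done
    done
}
iperf_mesh
```

Piping each command's output through `grep receiver` gives the Gb/s figure for that cell.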


Test docker swarm iperf tests, measured in Gb/s (row = sending node, column = receiving node; VM host in parentheses):

                  natest-arc-1    natest-arc-2   natest-arc-3
                  (naasc-dev-vs)  (naasc-vs-1)   (naasc-vs-5)
  natest-arc-1    -               0.9            0.8
  natest-arc-2    0.9             -              0.8
  natest-arc-3    0.3             0.4            -
The test docker swarm nodes (natest-arc-*) are performing as expected.  The VM hosts have 1Gb/s links, so getting 80% to 90% of the bandwidth is about as good as one can expect.

Questions

  • Why does na-arc-3 have such poor network performance to the other na-arc nodes?
  • Why does iperf show 10Gb/s between na-arc-5 and na-arc-[1,2,4]?  How is this possible if the default interface on the respective VM Hosts is 1Gb/s?
  • Why does iperf show 9Gb/s between na-arc-4 (which is on naasc-vs-4) and na-testcarta-2 (which is on naasc-vs-3) when there is only a 1Gb/s link between those two VM Hosts?
  • Is putting all the production docker swarm nodes on the same ASIC on the same Fabric Extender of the Nexus switch a good idea?
  • Why does running ethtool eth0 as root on na-arc-{1..5} and natest-arc-{1..2} report only Link detected: yes instead of the full report with speed?
  • Why does natest-arc-3 have ens3 instead of eth0, and why is its speed 100Mb/s?
  • Why do natest-arc-{1..3} have 9 veth* interfaces in ip addr show while na-arc-{1..5} don't have any veth* interfaces?
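A few of the interface questions above can be chased with standard Linux tools. This is a sketch that reads sysfs rather than relying on the (currently unhelpful) ethtool output; pass each node's interface name (eth0 on na-arc-*, ens3 on natest-arc-3):

```shell
#!/bin/sh
# Sketch: per-node interface report from sysfs (Linux-only).
# Usage: iface_report <interface>, e.g. `iface_report eth0`.
iface_report() {
    dev="$1"
    # MTU as the kernel sees it (1500 vs 9000 is one of the open items):
    echo "mtu $(cat /sys/class/net/"$dev"/mtu)"
    # Count of veth* interfaces on this node (9 on natest-arc-{1..3},
    # reportedly none on na-arc-{1..5}):
    echo "veth $(ls -d /sys/class/net/veth* 2>/dev/null | wc -l | tr -d ' ')"
}
iface_report lo
```

On the real nodes, `cat /sys/class/net/eth0/speed` (in Mb/s) is a useful cross-check when `ethtool eth0` omits the speed line.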

To Do

  • Switch the production docker swarm back to MTU 1500, since the test docker swarm uses MTU 1500 and is performing better.
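The revert could be sketched as below. How Tracy originally applied the 9000 MTU isn't recorded here, so the persistent form is an assumption about RHEL-style interface configuration:

```shell
# One-off revert, run as root on each na-arc node; takes effect
# immediately but does not survive a reboot:
ip link set dev eth0 mtu 1500

# Persistent form (an assumption -- adjust to wherever the 9000 was
# actually configured), e.g. in
# /etc/sysconfig/network-scripts/ifcfg-eth0:
#   MTU=1500
```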

People (not necessarily team members)

  • K. Scott Rowe - Tiger Team Lead
  • CJ Allen - sysadmin
  • Tom Booth - programmer
  • Liz Sharp - sysadmin
  • Brian Mason - DRM Scientist
  • Zhon Butcher - sysadmin
  • Tracy Halstead - sysadmin
  • Alvaro Aguirre - ALMA software
  • Pat Murphy - CIS lead
  • Rachel Rosen - previous ICT lead
  • Laura Jenson - current ICT lead
  • Catherine Vlahakis - Scientist


Communication lines


References
