Eric, Matt, Victor, Nicolas, Paul, Yee

Nicholas: We should have the same interface for:
 * the user-facing tool
 * the inter-domain interface
These are the most important for inter-domain communications. Lucic is also doing the first implementation.

Topics:
=======
 * What Eric et al have been doing
 * What we've been working on
 * Outstanding question: how does this relate to the NMWG - who can attend that meeting at GGF?

GGF: If we can go, then we could also continue to the Chicago meeting, the week before the full member meeting in Indianapolis.

Eric:
=====
They have been trying to do revisions to the architecture that came out of the workshop.

Little tweaks: now talking about initiator and acceptor at the domain level and at the PMC level, instead of source/sink at the PMP level - the initiator is actually the sink, and the acceptor is the source.

They have code for Iperf and OWAMP. OWAMP access is via a web interface.

Victor: why have initiator=sink, acceptor=source for the PMP?
Eric: For Iperf specifically, you get a lot more data on the sink end. And to try to make things more compatible with the Dante stuff.

They've been working with the PMP/PMC mostly, but they've been having a lot of trouble with the performance of the data: OWAMP produces a large volume of data.

Talked with Warren about tying this stuff in with MonaLisa/???.

They've been trying to think about AAA. Matt has been experimenting with it; that will take place over the fall, with the CITI group at the University of Michigan. They are also looking at Shibboleth: it's a way of showing who you are, and assigning levels of trust to various groups. Shibboleth has only really been thought of with respect to web-access based stuff - not really generic enough for general use.

Finishing up full deployment on Abilene, also putting it on two campuses. Mostly they've been cleaning up the database. Trying to think of ways of keeping the collaboration going.

OWAMP data problems: a continuous flow of OWAMP data writing, with very little reading. They also have to look at old data when sanity checking, and that poses serious problems for table design.

Matt: the volume of data is due to the fact that they are doing a particularly large number of measurements - full mesh, continuous measurements.

The Demo: http://abilene.internet2.edu/owamp/status_map.cgi

Storing the minimum, 50th percentile, and 90th percentile (see the percentile sketch at the end of these notes).

When they start adding campuses, they won't be part of the full mesh of nodes. There will be measurements between the campuses, but not between the campuses and the routers.

Chessboard in the top right of the measurement page: shows the full mesh of recent measurements.

How does the architecture look from the Dante point of view? Looks good!

Does Dante do anything that the piPEs architecture doesn't handle? Depends on the web interface. At the moment they are trying different sorts of queries.
 - are there any sort of errors on the measurements? you *could*

They would like it to be possible for any tool to collect metadata, including error data. There might be three types of data for each tool: data that the tool collects, stuff generated on the fly, and stuff that's generated later in an aggregated way (see the data-types sketch at the end of these notes).

--
Nicholas would like to

---
Major difference:
For Dante, a PMP is a thing associated with a single host and a single measurement.
For piPEs, a PMP is a thing that has lots of tools.
---

So, this should be a non-blocking thing for the client. Could do: the answer to A is either "do nothing" or "ask again" (see the polling sketch at the end of these notes).

--
Meetings in early-to-mid October: in one week there is the GGF, the next week is Internet2.
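
---
Percentile sketch: a minimal illustration of the per-interval summary mentioned above (minimum, 50th percentile, 90th percentile over a set of OWAMP one-way delay samples). The function and field names are invented for illustration and are not taken from the actual piPEs/OWAMP code.

def summarize_delays(delays_ms):
    """Reduce a list of one-way delay samples (ms) to the three
    summary values shown on the status map: min, p50, p90."""
    if not delays_ms:
        return None
    ordered = sorted(delays_ms)

    def percentile(p):
        # Nearest-rank percentile over the sorted samples.
        idx = min(len(ordered) - 1, int(round(p / 100.0 * (len(ordered) - 1))))
        return ordered[idx]

    return {
        "min": ordered[0],
        "p50": percentile(50),
        "p90": percentile(90),
    }

# Example: summarize_delays([12.1, 12.3, 13.0, 25.4, 12.2])
# -> {'min': 12.1, 'p50': 12.3, 'p90': 25.4}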
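
---
Data-types sketch: a rough way of modelling the "three types of data per tool" idea - raw data the tool itself collects, metadata (including errors) generated on the fly while the test runs, and aggregates computed later. All field names here are hypothetical, not part of any agreed schema.

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class ToolResult:
    tool: str                                                        # e.g. "owamp" or "iperf"
    raw: List[Any] = field(default_factory=list)                     # data the tool collects
    runtime_metadata: Dict[str, Any] = field(default_factory=dict)   # generated on the fly, incl. errors
    aggregates: Dict[str, float] = field(default_factory=dict)       # computed later, in an aggregated way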
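
---
Polling sketch: one possible reading of the non-blocking client interaction discussed above - the client submits a request, then polls; each poll either returns the result or says "not ready" (so the client can do nothing, or ask again later). The service class and method names are stand-ins, not the real piPEs interface.

import time

class MeasurementService:
    """Hypothetical stand-in for the domain's measurement controller."""
    def __init__(self):
        self._jobs = {}

    def submit(self, request):
        job_id = len(self._jobs) + 1
        # In reality the PMC would schedule the test on a PMP here.
        self._jobs[job_id] = {"done": False, "result": None, "request": request}
        return job_id

    def complete(self, job_id, result):
        # Called when the measurement finishes.
        self._jobs[job_id]["done"] = True
        self._jobs[job_id]["result"] = result

    def poll(self, job_id):
        job = self._jobs[job_id]
        return job["result"] if job["done"] else None   # None means "ask again later"

def run_client(service, request, poll_interval=5.0, max_polls=10):
    job_id = service.submit(request)
    for _ in range(max_polls):
        result = service.poll(job_id)
        if result is not None:
            return result
        time.sleep(poll_interval)   # do nothing for a while, then ask again
    return None                     # caller decides whether to keep asking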