2003/10/30 UCL HEP/CS networking meeting
========================================

Peeps here:
Peter Clarke (Pete)
Andrea
Yee
Roy
Javier
?? CS PhD #1
?? CS guy #1
?? CS PhD #2
Peter Van Santen (PVS)
Saleem
Me.

Why are we here:
Pete: Sitting in a meeting at CERN, there were lots of people from all over
the place. It occurred to Pete that we communicate with all these people but
not with the people closest to us - CS. We don't do pure research - because
we have to get stuff working.

CS guy #1
=========
Things they're interested in:

Good protocols
----

Bridging?
----

Denial of Service
----
- It's basically architectural. The whole service is there to get stuff from
  one end to the other as quickly as possible, which is exactly what you
  don't want if you're trying to prevent DoS. With high-speed networks there
  is a great risk of DoS attacks.
- They are looking at architectures which might be resistant to DoS attacks,
  while at the same time trying to improve the architecture for normal
  traffic.
- Examples of resistant architectures:
  - the phone network - you can't disrupt current calls, but you can disrupt
    the ability to make new calls;
  - another network might restrict the ability to send traffic anonymously.
- Interesting problems of lightweight trust. Normal trust is impossible, as
  it is too computationally expensive to do the crypto - anything that does
  normal trust checking is itself a target for a DoS attack.

In the area of high-speed congestion control: background in congestion
control for streaming stuff (ish). More recently working on XCP:
- In the packet, the source adds what its window size etc. currently are,
  plus what it would like to do next time.
- Each router can then decide what to do based on that, and can reduce the
  window size to whatever it is capable of carrying.
- Two disadvantages:
  1) lots of bits in each packet;
  2) the routers need to do a lot of work on each packet - a few multiplies
     and a few adds, whereas routers can realistically only do a few adds
     and a few shifts.
- However, if it were deployed everything would work well, mostly because
  there are never any surprises: the window size for the next round trip is
  set explicitly by the hosts and routers on the path. (A sketch of the idea
  follows below.)

The conclusions from this are:
- you can't converge on a high-throughput flow without feedback from the
  network;
- with small transfers, you never even get beyond slow start (see the worked
  example below).
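A minimal sketch of the XCP idea as described above, in Python. The field
and class names are illustrative, and the alpha/beta control constants are
taken from the published XCP work, not from anything said in the meeting:

    from dataclasses import dataclass

    @dataclass
    class CongestionHeader:
        cwnd: float      # sender's current window (bytes) - what it is doing now
        rtt: float       # sender's measured round-trip time (seconds)
        feedback: float  # requested window change - what it would like next time

    class XCPRouter:
        def __init__(self, capacity, alpha=0.4, beta=0.226):
            # alpha/beta are the stability constants from the XCP paper
            self.capacity, self.alpha, self.beta = capacity, alpha, beta

        def aggregate_feedback(self, arrival_rate, queue_bytes, mean_rtt):
            # Hand out spare bandwidth and drain any standing queue. The
            # multiplies here are the "lots of work per packet" objection:
            # forwarding hardware prefers adds and shifts (e.g. 0.4*x is
            # roughly (x >> 2) + (x >> 3)).
            return (self.alpha * (self.capacity - arrival_rate)
                    - self.beta * queue_bytes / mean_rtt)

        def stamp(self, hdr, per_packet_grant):
            # A router may only reduce what the sender asked for, so the
            # window for the next round trip is never a surprise.
            hdr.feedback = min(hdr.feedback, per_packet_grant)
            return hdr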
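And a worked example for the slow-start conclusion, again only a sketch:
the window doubles each round trip, so an N-segment transfer completes in
about log2(N) RTTs - long before the window is big enough to fill a fast
path. The transfer size below is illustrative:

    def rtts_in_slow_start(segments, init_cwnd=1):
        cwnd, sent, rtts = init_cwnd, 0, 0
        while sent < segments:
            sent += cwnd  # one window's worth of segments per round trip
            cwnd *= 2     # slow start: the window doubles every RTT
            rtts += 1
        return rtts

    # A 100 kB transfer (~70 segments of 1460 bytes) takes ~7 RTTs and
    # finishes while still in slow start, however fast the network is.
    print(rtts_in_slow_start(70))  # -> 7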
Saleem
======
Most users assume that the network resources are available, and all the
goodies are too. For e-science it's probably true. The presentation is a set
of reasons why these things aren't necessarily all there.

The things that the users ought to want are:
- high capacity;
- upper layers - but not a complete middleware service;
- predictability / timely delivery - some confidence in how the service is
  provided, both short term and long term, i.e. the behaviour of the network
  under load. We don't know quite how the applications will make use of
  this, but people do want some sort of booking and protection of
  resources/services:
  - realtime stuff - video and voice;
  - non-realtime stuff - a file of a certain size should arrive within a
    certain time, but the data rate can vary (e.g. a 10 GB file due in 30
    minutes needs roughly 45 Mbit/s on average, however the rate
    fluctuates).

Things users don't realise:
- the applications need to be changed - apps need to be adaptable to
  feedback from the networks;
- security - proper end-to-end security;  } DoS attack
- reliability.                            } resistance

Security
----
- It's very easy to really mess things up with high-throughput transfers.
- Even just threat analysis is hard.
- PVS: CS have a new project just started - an Integrated Project, run by
  Thales (sp?). What will probably end up happening: IPv6 has signalling
  built into it. If the layers beneath had implemented a response to this
  signalling (they haven't, because they work as if it were IPv4), then the
  signalling could work. -> IPv6 has the ability to be signalled from above
  and to signal below. (A sketch of where this signalling lives in the IPv6
  header is at the end of these notes.)

Reliability
----
- There was a bit of a peak in this work when the telcos started thinking
  about using IP; otherwise there isn't a lot of work on reliability -
  predictions of performance and so on.
- Notifications: finding out about a problem from the network itself,
  rather than only discovering it when things have fucked up.
- There is a problem here wrt DoS: outage notifications would be a good way
  to mess up applications.

Other requirements
----
- Security
- Charging
- Recommendations for site policy
  - PVS: in some environments (e.g. Defense) there is no way the core
    network can do all the security; there have to be policies at all the
    different sites. There is a massive misunderstanding about how security
    can work in networks.
- Inter-domain issues
- Network weather maps - monitoring
- Unified resource control
- Next generation - e.g. you have to convince the HEP community that IPv6 is
  worthwhile (it is true that IPv6 is an investment in the future, not in
  the present). Saleem thinks it is vital that SuperJANET 5 has a
  development network. Pete: if no one uses the current development network
  then they probably won't do that.

Yee's talk
==========
The idea of his project is to transparently improve network performance for
applications. PVS: large companies aren't really interested in improving
just the performance of high-throughput networks; the major part of the
cost is getting the network in the first place :)
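Finally, the promised sketch of the IPv6 signalling point above - a minimal
Python rendering of the fixed IPv6 header (RFC 2460), assuming nothing
beyond the standard layout; the addresses and field values are illustrative:

    import socket
    import struct

    def ipv6_header(src, dst, payload_len, next_header=6,   # 6 = TCP
                    traffic_class=0, flow_label=0, hop_limit=64):
        # First 32-bit word: Version (4 bits) | Traffic Class (8) | Flow Label (20).
        # Traffic Class and Flow Label are the in-band hooks an upper layer
        # can use to "signal below"; routers that treat the packet as if it
        # were IPv4 simply ignore them, which is the problem noted above.
        first_word = (6 << 28) | ((traffic_class & 0xFF) << 20) | (flow_label & 0xFFFFF)
        return (struct.pack("!IHBB", first_word, payload_len, next_header, hop_limit)
                + socket.inet_pton(socket.AF_INET6, src)
                + socket.inet_pton(socket.AF_INET6, dst))

    hdr = ipv6_header("2001:db8::1", "2001:db8::2", payload_len=1024,
                      traffic_class=0xB8,    # DSCP EF (46) << 2
                      flow_label=0xBEEF)
    assert len(hdr) == 40  # the fixed IPv6 header is always 40 bytes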