APNIC home / Meetings / APNIC 24 / Program / APOPS / Transcript

APOPS transcript

Session one

Session two - IPv6 Operations

Session three

Session One

Wednesday 5 September 2007



Can we all settle down, please? And those who are outside, please come back in.

So I want to introduce the first of the three APOPS sessions that are going to take place for the rest of this day here in the Crystal Ballroom.

As was explained at the start of the opening plenary, APOPS is really the Asia Pacific Operations Forum, and what it runs is really the main plenary session - the main plenary session at APRICOT and at APNIC meetings. With the joint APNIC-SANOG event here in Delhi, what we have done is basically brand what would normally be the SANOG plenary session as the APOPS session, to indicate the joint meeting. So we have three sessions here. I'm going to chair the first one. In case you don't know, I'm Philip Smith, one of the chairs of APOPS. The second session this afternoon will be chaired by my co-chair, Hideo Ishii, who is sitting at the end there. And the third session this afternoon will be chaired by Gaurab.

So we have four presentations this morning. Before I start, I'd like to ask the presenters if they could speak clearly and slowly for the benefit of the stenographers. Likewise, for you, the audience, if you have any questions you'd like to ask at the end of each presentation, please use the microphones because the session is to be recorded and is available on the Internet as well. So please state your name and affiliation before you ask the question just so we know who you are.

I'll just mention a couple of other things. The Lightning Talks session is tomorrow afternoon. If you're interested in presenting a Lightning Talk, please let myself or Gaurab know if you're interested. An email has gone out. You can respond to that if you received it. Otherwise, contact us here.

Also visit the local conference website - www.conference.sanog.org - for up to date information about that. There is the meeting noticeboard, which I'll pop briefly up on the screen as well. And you can go there for, again, up to date information about the meeting.

So, let's move on to the presentations. The first presentation we have is from Abhishek Aggarwal, talking about isolating suspicious BGP updates to detect prefix hijacks.

Isolating suspicious BGP updates to detect prefix hijacks


OK. Good morning, everyone. Hi, I'm Abhishek Aggarwal, and I shall be presenting some preliminary work on isolating BGP prefix hijacks, which was done as part of my Masters thesis in Delhi, under the guidance of a professor and a doctor. The title of this talk is 'Isolating suspicious BGP updates to detect prefix hijacks'. 'Isolating' is underlined to indicate that the focus is on isolation: we are not trying to solve the BGP prefix hijack problem here.

BGP prefix hijacking has been studied in both the network operations and research communities for quite some time now. We have seen cases like the AS 9121 incident, in which around 100,000 prefixes were affected, leading to a network outage. That was an example of accidental misconfiguration leading to an outage. But prefix hijacks can also be done deliberately for profiteering purposes, such as sending spam, so automated mechanisms to isolate and classify suspicious BGP updates are genuinely needed.

Let me talk you through a basic prefix hijack example. The blue bubble indicates the network, maybe the Internet, and the white bubbles indicate autonomous systems. On the left, AS 52 announces a prefix, and the rest of the network starts sending traffic for that prefix to AS 52. On the right, AS 110 decides to hijack the prefix belonging to AS 52 and announces the same prefix. For some ASes, this new announcement is more attractive than the one from AS 52, so they switch and start sending traffic for the prefix to AS 110. As far as that part of the network is concerned, the prefix has been hijacked.

So normally, what we would observe is two ASes, AS 110 and AS 52, announcing the same prefix, and from that we should have been able to figure out a BGP prefix hijack. But there are valid cases on the Internet where such conflicts occur, as the next example shows. Here, AS 1 is multihomed: AS Y and AS X are its providers, and both of them announce the prefix belonging to AS 1 to the upstream Internet. So an AS upstream in the Internet will see two announcements for the same prefix with different origin ASes. This is a conflict, but it is not a BGP prefix hijack - it is a valid case.

So our objective is to isolate suspicious BGP updates for further analysis. The objective, as I said earlier, is not to solve the prefix hijack problem. While we are not solving the prefix hijack problem, we still believe the objective is worthwhile, because it would help network operators to look at just the few interesting, suspicious updates. These suspicious updates can then be investigated further, and actual BGP prefix hijack incidents can be detected.

So the basic approach we follow is to observe the state of a prefix: we analyse past BGP data to try and establish what is normal for a prefix. Once we've established the normal behaviour for a prefix, we can analyse new incoming updates in an online fashion and either isolate them as suspicious or classify them as normal.

To build the state information, we take a prefix and associate state with it. The origin AS is the state that we associate with a prefix, and we track changes to the origin of the prefix. Whenever the origin of the prefix changes, we monitor certain properties to establish whether it is a safe change or not. During the monitoring period, we are able to establish what the normal changes are for the state of a particular prefix, and we use this knowledge when doing the classification.

So these are the main features that we use to classify state changes as safe or suspicious. The first is the change in percentage hold time of the conflicting ASes. If we have a prefix which is advertised by AS 1 and AS 2, and say AS 1 holds it for 70 hours and AS 2 holds it for 30 hours, then the percentage hold time of AS 1 would be 70%, that of AS 2 would be 30%, and the difference would be 70% minus 30%, i.e. 40%. The second feature, as new updates come in for a prefix, is the change in AS path length. The third is the AS path relationship: we go further and investigate the exact nature of the overlap between the AS paths. There are three main categories we define: overlapping, crossing and distinct. Two paths overlap if they share a common segment; two paths are said to cross each other if they intersect at some points but are otherwise different; and two paths are distinct if they are completely independent.
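
To make the first and third features concrete, here is a small illustrative sketch - my own reading of the talk, not the authors' code; in particular, the contiguity test used to separate 'overlapping' from 'crossing' is an assumption:

```python
def hold_time_difference(hold_hours):
    """Percentage hold-time difference between the two origin ASes of a
    MOAS prefix; hold_hours maps origin AS -> hours it held the prefix."""
    total = sum(hold_hours.values())
    percents = sorted((100.0 * h / total for h in hold_hours.values()), reverse=True)
    return percents[0] - percents[1]   # majority minus minority

def path_relationship(path_a, path_b):
    """Classify two AS paths as distinct / overlapping / crossing.
    Assumption: 'overlapping' means the shared ASes form one contiguous
    run in both paths; isolated intersection points mean 'crossing'."""
    shared = set(path_a) & set(path_b)
    if not shared:
        return "distinct"
    def contiguous(path):
        idx = [i for i, asn in enumerate(path) if asn in shared]
        return idx == list(range(idx[0], idx[-1] + 1))
    return "overlapping" if contiguous(path_a) and contiguous(path_b) else "crossing"

# The example from the talk: AS 1 holds the prefix for 70 hours, AS 2 for 30.
print(hold_time_difference({1: 70, 2: 30}))          # -> 40.0
print(path_relationship([10, 20, 30], [40, 50, 60])) # -> distinct
```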

Now, our data collection set-up. We apply this approach from the point of view of a single autonomous system, so we use the data collected from one router, and we use one month of data. During the monitoring period, we processed a total of around 600,000 updates, in which we found a total of 124 MOAS prefixes - that is, prefixes with more than one AS announcing them. These were our observations with regard to the parameters I described earlier: in this graph, the prefixes are on the X axis, and on the Y axis I have the hold-time difference. What we can observe is that most of the differences are close to 100%, which indicates that in most of the cases where a prefix had more than one origin, a single origin possessed it for most of the time. This was true for the majority of prefixes, and it was our first observation.

In this figure, on the X axis I again have the prefixes, and I have plotted the percentage hold times: for each prefix, the upper graph shows the majority origin and the lower one the minority origin. Looking at the lower section of the graph, we see that the majority origin has a shorter AS path. What this implies is that for prefixes which had more than one origin announcing them, the majority origin had a shorter, more attractive path. So this establishes a negative correlation between the change in percentage hold time and the change in AS path length. If we combine observations 1 and 2, what we claim is this: whenever the state of a prefix changes and a new AS announces that prefix, if the percentage hold-time change is negative - a shift from a majority to a minority possessor - then the AS path change should be positive; that is, the new path should be longer. If that is not the case and the new path is actually shorter, that breaks the negative correlation we have established, and we become suspicious - and that is exactly how a hijack is supposed to spread. So we further investigate such cases and look at the nature of the AS path overlap. On assessing the nature of the AS path overlap, we found that for almost 88% of the conflicts the conflicting ASes have overlapping paths; the other two cases, crossing and distinct paths, were around 5% and 6%.

So what we conclude from this is that during a conflict, if the two AS paths are overlapping, we regard it as normal; if they are distinct or crossing, we deem them suspicious.

So, putting it all together, we would say a potential hijacking AS would have a low percentage hold time - a negative hold-time change - a shorter AS path, and a distinct AS path relationship. These metrics are used in two phases. The first is the warm-up phase, where we run through some part of the data and build the state information needed for the analysis; in the second phase we use a classification tree to analyse new data against the state information we have built. The classification is presented in the form of this tree. I will not discuss all of it; I want to emphasise this branch. As we are monitoring the updates, if there is an announcement that causes a change in the state of a prefix - it is from a different origin AS - it causes a conflict. Then we analyse the three parameters, starting with the percentage hold-time change. If the percentage hold-time change is negative, that means we are shifting away from the majority possessor. If the AS path change is also negative, that is a violation of the correlation: it means that suddenly a new origin AS has popped up which has never possessed the prefix for any significant time, yet is providing a more attractive path - which is how we expect a hijack to spread. We then go further and analyse the actual AS path overlap relationship. If the paths are distinct, the update is highly suspicious; if they cross or overlap, it is suspicious.
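
The branch just described can be sketched in a few lines of code - an illustrative reading with my own parameter names, following this branch of the tree as presented:

```python
def classify_update(hold_time_change, path_length_change, relationship):
    """Suspicion level for an update that changes the origin AS of a prefix.
    hold_time_change: change in percentage hold time (negative = shifting
    away from the majority possessor); path_length_change: change in AS
    path length (negative = new path is shorter); relationship is one of
    'overlapping', 'crossing', 'distinct'."""
    if hold_time_change >= 0:
        return "normal"   # moving toward the majority possessor
    if path_length_change >= 0:
        return "normal"   # new path is longer, as the negative correlation predicts
    # Correlation violated: a minority origin is offering a shorter path.
    if relationship == "distinct":
        return "highly suspicious"
    return "suspicious"   # crossing or overlapping paths

print(classify_update(-40.0, -2, "distinct"))   # -> highly suspicious
print(classify_update(-40.0, 1, "distinct"))    # -> normal
```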

So we ran the classification on one month's data. We processed a total of around 600,000 updates, and out of these the MOAS conflicts numbered 958. Of those flagged as suspicious, 60% were highly suspicious - the negative AS path change case. Eleven of them were medium suspicion, with the same characteristics except that the AS path relationship was overlapping. And there were 67 of low suspicion, in which the AS path relationship was distinct.

One interesting thing we encountered was this high-suspicion case: a /24 prefix which was active. Its origin AS was 5050, and this origin possessed the prefix for around 99.9% of the time. The new origin provided a shorter AS path, as we can see, and it had previously possessed the prefix for only a very short time.

So this update was flagged as highly suspicious by our system and, according to us, needs further investigation. This, in fact, forms the basis for further work on this study: after isolation, we will try to validate these cases, by contacting the prefix holders or by probing in the data plane.

So the main conclusion from this study is that the percentage hold-time change and the AS path change are useful features for isolating suspicious BGP updates. We are currently continuing this work, looking for more data so that we can run it on different data sets to validate our findings, and we want to build a community database of prefix hijacking incidents which can be treated as ground truth for further studies. Thank you for your time. I shall take questions.


Are there any questions?


I'm Taiji from JPNIC. Thank you for the very interesting research. I'm looking forward to seeing this. One question: you used past data for detecting the prefix hijacks, right? Did you give any consideration to the use of registry data or IRR data?


We are approaching the study from a single autonomous system's perspective. We wanted to build a system which can be deployed at an autonomous system so that it can immediately start processing the updates coming into that system. So the IRRs, and the way they collect their data, do not fit into our scheme of things.


Thank you.


My question is: you're taking data from one network - have you done any analysis looking at a portion of the live Internet? Are you looking at information feeds from the public Internet at all for your analysis?


We haven't been able to do it on public Internet data yet. Our concern is with the prefix hijacks, and those would actually be more visible in live Internet data.


This is probably quite an interesting area of security research, so I wonder if you're cooperating with people in the security industry? Is there any correlation between what you're doing and the current state of play there - any cooperation between your research and the security community?


It's a hot area. I picked it up as my Masters thesis, based on a NANOG paper. The community is pretty active in this.


Is the code you've used publicly available?


Yeah, I can give it to you offline.


I think it would be useful and interesting for people to have as a tool.


I would love to share it. Thank you.


Any other questions?

I have one. The data that you're looking for - does something like a live BGP feed make sense, or are you looking for a static BGP table?


A live BGP feed would be ideal, but even a static BGP table would be good for us. We have a system in place such that a live feed can be fed into our systems, and we can also do the offline analysis.


It sounds to me like you could get a live BGP feed from one of the ISPs here in India - there will be lots of people who would probably be willing to do that, I'm sure. And places like Route Views and the rest have lots of BGP data. I'm not saying you should just do that, but there are plenty of places you could get feeds from.


There's a lot of historical data from different locations, and there could be a lot of additional, very interesting information from analysing that - just to see where past hijacks have taken place, the length of time they were in effect for, and then correlating that with other security events. Just a point of interest.


Thank you.


Thank you very much.


Automated system administration - Devdas Bhagat


Next up, we have Devdas Bhagat talking about automated system administration.


OK, so this is more from my experience of working at a hosting provider rather than from a network operator's point of view, but the same logic still applies. Even if we're looking at managing networks - and the same thing was said this morning - you should try to keep stuff identical and simple.

System administration mostly says we're all individuals: every system is supposedly unique; everything has got something different - different IP addresses, maybe different operating systems. At the end of it, though, it provides the same service. Somebody makes a bit of a change, and then another change, and at the end of it all your systems have drifted apart. They all become individual systems, and somebody needs to remember how to manage each one. It does not matter how much documentation you have; people will not read it.

First problem: these are unique systems, and they're hard to replicate. If such a system goes down - a crash - we have a problem. You lose the data for any reason: oops. You can't rebuild it, because it's all in somebody's head. This person gets hit by a car: "Sorry, dude, can't maintain that system, because nobody has the knowledge of it."

Am I going too fast? Services end up spread all over the place, and at the end of the day you still can't manage them - they can be painful to manage. But they're not actually all that different. What we have is a bunch of identical machines providing the same services. Some are different, but most of them are the same. What we need is a system that lets everything converge together: every change that you make brings all your systems to a more homogeneous environment. You change one machine, and every machine of that class is exactly the same. They may have different physical specs, but once you get past that, it's still the same: the same mailbox here, the same mailbox there. I don't want to worry about what is different on those boxes.

So, the first of the standard "solutions": we have a problem, so let's have more people to fix it. That's the first thing everybody does - it's getting more complex, let's hire more people; it's taking up too much time, let's hire more people. But more people is more complexity. Your problem is then not in the system environment as such, but in the communication between people.

As Fred Brooks put it, adding more people to a late software project makes it later, and there's a fairly good analysis of why. Again, at this point you have more complexity. A lot of companies like to do this next one: make it a process! I'm not sure if 'processify' is a word, but it sounds good. You have the methodology, you have all the other stuff, and you end up with blanket checklists. You make a process and follow it religiously, and if it fails: "Sorry, dude, somebody else fucked up, it's not my fault." There's a problem with this, and ITIL makes it obvious. There is a configuration management database, the CMDB, and everything should go into the CMDB. You have a change process for everything: any change is a change request, you have a bunch of people work on it, and it goes into the CMDB. "I don't know how to do this - let's put it in the CMDB." At the end of the day, nobody knows what's going on in the CMDB. Your complexity is just sitting in the CMDB. There's still no documentation, no clarity. Nobody knows.

"Go away or I will replace you", is the standard small shell script. You take their jobs, replace them with a small problem and, "Now you can go off and do better stuff." It should be scripted. Follow the methodology. If you need to do it twice, you make it a program. You make it a small program and you keep extending it bit by bit as you need. You can end up with a 35,000 line transcript, which we actually have at this time in work. A lot of people do this because most of the assignments go, "I'm not a programmer." How do you point and click. They actually let you do that. But it's not very commonly known. If you're going to go to the programmer side of things, you write most of your stuff in the memory management. You don't worry about how to combine stuff. It gets repeated identically all the time. You don't just apply them to our side.

The benefits of automating stuff: auditability, and it makes work consistent. Those are major benefits. How often have you walked in to work, looked at a problem and gone, "Oh, shit, someone made a minor typo last night, and we've been losing stuff - rejecting mail - for the past 12 hours"? Or somebody made a change on Friday night, you walk in Monday morning, and for the past 48 hours there has been a screw-up.

More benefits: it makes work testable. You can make your changes in a virtual environment first and know that they will not damage anything before they go out - you can make sure, "OK, I can roll out this software." When you're servicing 50,000 users on a single box, a mistake can mean 100 calls to your helpdesk.

You can actually have a process that says: this has to go on to the test infrastructure first, it must be documented, and the documentation is produced automatically - all before it gets deployed into production. It makes you faster and lets you scale bigger. My goal for the end of this year is that my entire department - 15 people - will have no routine sysadmin work as such left: maybe a few helpdesk calls, but no more manual work. That's it. For that we need tools, and currently the state of the tools is not very good. No-one does this except for John McCormack. How many people here do I see doing it? One. I sympathise.

OK, this is what sysadmins do: this is how we need to make it work on 100 boxes, so let's write down all the steps we need to do. There are better ways to approach the problem.

But the rest of them are all aimed at basically replacing this.

The next few slides: I'm most familiar with this tool, so I use Puppet configs. You have a whole bunch of people who have already tested it, and that's the only practical way it gets tested - otherwise it would take about a year's worth of testing before you're willing to deploy.

OK, let me jump through this. This is my example config: it deploys postfix on the box, configured the way I need it. I spent 16 minutes writing the definition for the first box. The second one was a copy and paste; the third one, the same. Deployment time for this has gone down from two hours to five minutes, and if I need to deploy 10 boxes, it's still the same five minutes, because most of the manual five minutes was: log in to the box, do the change, close it. It takes me more time to do that kind of work by hand than to do this. If I want to change the configuration, I can do this: I just add, say, a blacklist file there, and a refresh action - actually postmap, that's a postfix-specific thing; basically you're rebuilding lookup maps - and I can automatically get it deployed out to all my boxes. Puppet does that for you. I no longer have to worry about "Have I forgotten that box?" It will be in the same state as any other box; they're all identical using Puppet.
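
The slide itself isn't reproduced in the transcript; a minimal Puppet manifest in the same spirit - all class names, file paths and module names here are illustrative, not the speaker's actual config - might look like:

```puppet
# Illustrative: install postfix, keep it running, and rebuild the
# blacklist map with postmap whenever the source file changes.
class mail::postfix {
  package { 'postfix':
    ensure => installed,
  }

  service { 'postfix':
    ensure  => running,
    require => Package['postfix'],
  }

  file { '/etc/postfix/blacklist':
    source => 'puppet:///modules/mail/blacklist',
    notify => Exec['postmap-blacklist'],
  }

  exec { 'postmap-blacklist':
    command     => '/usr/sbin/postmap /etc/postfix/blacklist',
    refreshonly => true,
    notify      => Service['postfix'],
  }
}
```

Updating the blacklist then becomes a one-line change to the source file: Puppet pushes it to every box in the class and refreshes the map and the service.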

Alternatively, I could define my boxes like this, and reuse the class for anything else. So if I have a website which needs a database, Puppet will start off finding that I don't have it, and then Puppet will install it all for me, including the configuration. I'm not interested in knowing how it all works underneath - I don't care if it's Solaris or Red Hat. I don't want to know all that; I just want it on the box.

This works for migrating between operating systems and between hosts. You need to express your policy and normalise it: define it once, and then simply make a reference to it everywhere else. There's a lot of mathematical theory that goes into building these databases - building them is like building a relational database for your system config.

Benefits - you don't need that many administrators. Yes, I was told I should not put this slide in, because a lot of people say we don't need all those admins. You describe what you want, and it's no longer the sysadmin's problem how it gets done. Most of the time, when things go well, you're no longer worried about the fine details of how things are implemented, or about differences between systems. You say, "OK, I want this on the box," you define it, and Puppet will take care of it for you - Puppet will do the scripting. It's no longer a problem for the sysadmin. This is a big change. And what happens when a box is down while changes are pushed out? If it misses one change, that's fine - it picks it up with the next change. Two changes, three changes, four changes, and then you realise that a box has been down across two changes. You cannot maintain client state centrally here. Don't complicate it: let the clients figure out the stuff - what their current state is, what the state should be - and synchronise the two.
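
The "current state versus desired state" idea can be illustrated with a toy sketch - not how Puppet is implemented, just the convergence model, with made-up package names:

```python
# Toy model of convergent configuration management: each run compares the
# box's current state to the declared desired state and applies only the
# differences, so a box that missed several pushes converges in one run.
DESIRED = {"postfix": "installed", "ntpd": "installed", "telnetd": "absent"}

def converge(current):
    """Mutate `current` toward DESIRED; return the actions performed."""
    actions = []
    for pkg, want in DESIRED.items():
        if current.get(pkg, "absent") != want:
            verb = "install" if want == "installed" else "remove"
            actions.append(f"{verb} {pkg}")
            current[pkg] = want
    return actions

box = {"telnetd": "installed"}   # a box that was down for several changes
print(converge(box))             # installs postfix and ntpd, removes telnetd
print(converge(box))             # -> [] : already converged, a no-op
```

The second run being a no-op is the point: there is no per-client change history to replay, only a declared end state to reach.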

Recommendations - Google does it. They have their own tools that they don't publish, which kind of sucks. One company manages lots of servers, spread across locations, with just four people - no monkeys running around; if somebody goes on vacation or whatever, there are still four people. Not all things can or should be automated, but quite a bit can: the decision-making process you shouldn't automate, but the implementation you should.

Not all the tools are equal; they have their pros and cons. The tools are starting to offer similar things, but they have completely different views of the box at this point in time.

Some tools have great reporting, but they don't have the same detailed model of the box's internals that Puppet does. Puppet has fairly poor reporting: it can dump state, but there's not much more reporting than that - it's there, but it's still being built. People don't like changing the way they work, but this is fairly simple: you spend four hours, six hours building a config, and then, when you start deploying boxes, your untrained guy can walk in and do it in 10 minutes - no training or anything required. This is a big change to your management policy, though. People are used to logging in to boxes and working on them directly. What you need to do instead is check your configs in. After that, everything else is automated: you just check your configs in to version control, the tool checks out your config and applies it for you, and it's OK.

Now, problems. You need few administrators, but you need them with a far higher skill set: they have to understand what they're doing. You can have a bunch of monkeys doing the grunt work if you need to, but in general you're replacing them with software. So you need fewer people - four people is more than enough - but all of them need to be good at their job. They need to be able to decide and implement policy: "I want X, Y and Z on these boxes; this is what the system will require; this is what needs to be on the boxes, and this is what must not be on them." It takes time, and it requires management buy-in. That's a problem, because you still need that test infrastructure - hopefully everybody has one, but a lot of smaller companies don't. It's hard to get buy-in, because most of them will prefer to say, "It's easier for me to get cheap, unskilled labour rather than somebody skilled enough to use this tool." You get the standard kind of job ad, "I need someone with X years of experience" - the standard joke being, "I need somebody with seven years of experience on Windows 2000," or something like that. OK. Questions?


That was a fantastic presentation. A lot of this material is very useful and needs to be, I think, communicated to the larger ISP audience in particular. Are there any forums or perhaps central locations where best common practices are documented? So this sort of approach you're talking about here is...


Infrastructures.org is a website about this, and there is the League of Professional System Administrators website. There is also a mailing list on automated config management, but at the moment it's pretty quiet - I've not seen any traffic on that list for the past few months.


How do I do this? How do I approach this and take up the software that is out there? You've shown there are many packages, and a lot of people have put time into developing these, but how do I, as a small service provider, go about automating all my processes and procedures? Most people, I would think, probably do it their own way - they put their procedures together in their own way. Is there any definitive kind of reference? Are there books that are published that take you through it?


There are a couple of books, but they're old; there is no current edition of best practices. Infrastructures.org is probably the best quick reference to get started. They have a paper that they have published, and a process that they follow there. That was published in 1994, and it's still speaking about the fact that we need to automate stuff today. I'm not sure how well the idea has been communicated around. But, hey...


I think the opportunity exists for material about this.


There are forums: if you're looking at IRC, there is a #infrastructures channel, and one for Puppet. There are probably others for all the other products as well.


How long has the Puppet product been around?


For about three years now. 3.5 years.


Fairly widely deployed?


No, but it is growing. It's probably one of the more interesting tools, at least according to what I'm hearing on LOPSA. There are people looking at it and deploying it, rather than just figuring out what to do with it. The Puppet website has a whole bunch of recipes, so you can go there, copy and paste those, and do your work.

MARK SEWARD: There are references there?


There is.


That's the point of presentations like this, I guess.

PHILIP SMITH: Anybody else? No other questions? OK. Thank you very much, Devdas.

Experience in wide deployment of wired/wireless/fiber in Kathmandu valley - Samit Jana

Our next speaker is Samit Jana from WorldLink.


Good afternoon. I'm Samit from WorldLink Communications. My presentation is a little bit different from the others - it's not too technical. I'm not doing dos and don'ts or anything about technologies; I'm presenting our own experience of how we were able to scale a layer 2 network. We're not using Q-in-Q, just a simple layer 2 network extending over the metropolitan area of the Kathmandu valley.

So this is the outline of my session: the challenges we face in Nepal in deploying broadband. Number one, we have political instability, which makes it difficult to make long-term investments and introduce new technologies.

We're still connected to the Internet backbone via VSAT, which is expensive and very slow.

Similarly, many people still don't know what the Internet is - there is a lack of education and awareness. And the GDP per capita is very low.

Our landscape is also very difficult: we have plains, hills and, as you realise, mountains, so covering a large area is also very challenging.

But still, we have managed to introduce broadband and provide cost-effective broadband solutions. Our vision mainly focuses on the needs of the people, which is to hook them up to the Internet: our main vision is to connect everyone in Nepal to the Internet at an affordable rate.

For those who don't know, Kathmandu is the capital city of Nepal. It's a very small city - 280 square miles - and the population of the valley is about 2.8 million; Internet users probably number around 3 million across the whole country. According to the ISP Association of Nepal, we have 33 operational ISPs currently, with 60,000 active user accounts across all ISPs.

As I explained earlier, we are still connected via VSAT. Fibre has been acquired from VSNL, but none of the ISPs have procured any transit over it yet. So this is how Kathmandu looks - it's very small.

The diameter from here to here is approximately 60 km so it's pretty small.

So, WorldLink: we started in 1995 with a single connection, but currently we manage 102 servers and we subscribe to over 65 Mbps of satellite bandwidth. We have over 25,500 active accounts and 300 employees, and an Internet market share of 41%. Our network covers the whole valley as well as all the major cities within Nepal. Currently we hold a /18 of IPv4 address space. We also collocate servers, and we run some unique applications.

So this is what our network looks like. Most of the remote places are still connected via VSAT; the blue lines are the wired links, and recently we managed to put in wireless links to many of the places we couldn't otherwise reach - the red lines. These are all back-up links, which can span up to 100 km between hubs.

So this is a typical ISP set-up, nothing so special. We are connected via VSAT to the Internet in Hawaii. We have a collocated server in Hawaii, where we have web and DNS servers to serve the international customers. We have bandwidth managers as well as bandwidth compression at the remote end. We use a Linux bandwidth manager for the QoS and rate limiting, and this compressor is one where we hacked the GBS source code - I think maybe within three or so months we'll make that open source for the use of everyone. It saves 10% of satellite bandwidth. That means, while we subscribe to 65 Mbps, we are effectively passing more than 70 or 72 Mbps, something like that.
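
The roughly 10% saving the speaker quotes depends heavily on the traffic mix. As a toy illustration only - this is not the hacked GBS compressor described in the talk - deflate-style compression shaves a lot off text-heavy traffic like web and e-mail, and almost nothing off already-compressed payloads:

```python
import os
import zlib

def savings(payload: bytes) -> float:
    """Fraction of bytes saved by deflate-compressing a payload."""
    return 1 - len(zlib.compress(payload, 9)) / len(payload)

# Text-heavy traffic (web pages, e-mail headers) compresses well...
text = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 50
print(f"text-like: {savings(text):.0%} saved")

# ...while already-compressed traffic (images, video) barely shrinks,
# which is why the blended saving over a real link is modest.
print(f"random-like: {savings(os.urandom(4096)):.0%} saved")
```

A real link-layer compressor works on packets rather than whole payloads, so its blended figure lands well below the text-only ratio - consistent with the ~10% quoted.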

So we have one switch running a bunch of servers to manage the load balancing for services like SMTP, POP, DNS, web, etc. Here - this is our upstream, and we peer with an exchange from here. We have routers which connect to PoPs outside the valley, via VSAT or wirelessly. Similarly, we have a NOC here. Even though the NOC is behind the firewall, we have access to all of the network in case of any failure. And we have all the servers and bandwidth managers.

So why do we need wireless broadband? Because, until 2002, we were fully dependent on telco leased lines and point-to-point wireless connectivity, and, as we all know, that is very expensive and cannot scale much, so we thought that point-to-multipoint connectivity would be a scalable solution.

So in late December 2002, we introduced the Motorola Canopy point-to-multipoint platform for the first time in Nepal. We started with static IP allocations to all the customers, and by the end of 2003, 400 new medium-sized corporate customers had subscribed for the Internet and VPN data service. As soon as we introduced the wireless broadband, our bandwidth subscription jumped from 8 Mbps to 18 Mbps in one year, and the price went down to $10 per month.

Oops, what have we done? We had leaked the private LAN into the public network. Our network was down a couple of times due to all these things. I don't think I need to mention them all, because it's natural - if you extend the LAN, you will get all of this. And there were a few related problems which stopped us from scaling the network. At this point in time, we thought that we would stop doing it. We thought that this might be impossible, but things changed.

As soon as Motorola introduced new firmware with packet filtering and NAT in the wireless subscriber module, many of our problems stopped. But still, we set up a 24/7 proactive wireless monitoring team within our network - it runs packet sniffing and filtering - and we deploy broadband routers to connect directly to the PCs of the customers. We run scripts to reboot a module automatically if there are any GBS errors, and we automated a lot of things. And we were encouraged that Layer 2 could be scaled.

In late 2003, we launched a PPPoE-based service, like traditional dial-up. We introduced hourly and volume-based broadband connectivity. The service was very successful - we had over 700 customers within one year. Again, the price went down, to around $90 for 60 kbps per month. Currently we have 67 access points servicing the Kathmandu Valley.

So this is what our clusters look like. We have six access points covering 360 degrees, with 25 MHz of separation; in some clusters we even have two access points. This is what our tower looks like.

This is a subscriber module. This photo was taken from my house.

If the customer is a VPN subscriber, we only allow the VPN connection. If the customer is using a static IP address, then we allow all IP traffic, and so on for all the others.

So there were not so many major problems in this phase, only a few bottlenecks. We added additional access points, upgraded the classic APs to newer ones, and we even segmented the PPPoE customers and the static wireless customers. Then we ran out of our /19 of IP addresses and acquired another one. We became confident that Layer 2 could be stable.

So why do we need last-mile Ethernet? Because we need to migrate the dial-up customers to broadband. Most of the dial-up customers are paying more to the telco than to the ISP: they are paying $10 to $20 for the Internet, as well as $12 to $25 to the telco for the phone line. And we cannot migrate those end-users, the home users, to the expensive broadband, because the subscriber module is still very expensive - it costs around $300 to $400 for one module. So we needed a last-mile solution. The solution is simple Ethernet. Sounds stupid, but it worked.

So, in the first place, what we did on the electricity poles: we designed our own waterproof metal boxes, and in the metal boxes we mounted an eight-port VLAN-capable switch - and the switch costs less than $20. The poles are hardly 100 metres apart and all the switches are cascaded linearly. This way, wherever we have wireless connectivity, we can put in last-mile Ethernet.

So this is what it looks like. Nothing so special. It's connected either by fibre-optic connectivity or by wireless. We have a remote switch out here: if a switch hangs, we can power-cycle the switches. And we have custom cables bundled with electrical wire to supply power to all the switches.

So we have Ethernet segments of up to 20 cascaded switches covering around two kilometres. One Ethernet segment cannot exceed 100 metres, but we are operating up to 150 metres. Every customer is kept in a VLAN, and only PPPoE connections are allowed. By the end of 2006, we had over 4,200 customers being served by the Ethernet. Until now, we have deployed 600 kilometres of Ethernet cable and 6,500 VLAN switches. So this is how the VLAN on the last-mile Ethernet is configured: every port is on a separate VLAN, and the uplink port is configured as multi-VLAN, a member of all the VLANs. On the other switches, each port goes to a VLAN, and that port either goes to another switch, or to a subscriber module, or to the Catalyst switch port.

So the major problems we faced in this are switch-related, because we are using very low-grade switches. We cannot migrate to high-end switches, because that would increase the total cost to the users. Sometimes the switches go down along with the customer's PC, so we started to use surge protectors. Sometimes the switch hangs, so we installed remotely accessible power controllers to power-cycle the switch - they can take the switch and reboot it automatically. We replaced the power module with an 80-to-100 AC one. As you go on extending the power, the voltage gradually drops; our first version of the power cable had a low grade of copper, so we increased the size of the copper wire, and we changed the power supply so it can operate at both low and high voltages.

We have occasional Ethernet cable and switch breaks. And this one is very interesting: faulty Ethernet cables and customers' broadband routers creating a loop in the network. It creates something like a broadcast storm, and it brought our network down a couple of times to a complete halt. It's something like taking a cross cable and connecting two ports of a switch: the traffic will loop and nobody will be able to connect. The problem is not in the metro Ethernet, because the Catalyst has a loop mechanism which can disable the port as soon as it sees the looped packet - it puts the port in the error-disabled state, so that segment is cut off. But the problem is in the wireless, because we don't have that feature in the wireless, so the problem still exists in our wireless network; however, we have a mechanism to detect that and isolate the segment very fast.

So, as the number of clients increased, the subscriber modules were incapable of handling the load, so we migrated to an optical fibre network - we gradually started to replace the subscriber modules with fibre optic. We have already deployed 450 kilometres of fibre-optic cable.

So this is the metal box mounted on the electric pole, and this is the inside of the box. You can see the switch and the link, and also the wooden box to protect from electrical surges coming up the electricity pole, and this is the power distribution switch. You can see this customised cable: here we have the data cable and here we have the power wire.

So these are the features which we use - not so common. We use RSTP spanning tree, storm control, MAC security and protected ports, and we also use port-based MAC and IP filtering. We use all of these.
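
On a Cisco Catalyst access switch, the feature set listed above might look roughly like the following. This is a hedged sketch only - the talk does not give actual models, interface numbers or thresholds, so everything here is an illustrative assumption:

```
! Rapid spanning tree for fast reconvergence (illustrative config)
spanning-tree mode rapid-pvst
!
interface FastEthernet0/1
 description customer port (hypothetical)
 switchport mode access
 switchport access vlan 101            ! one VLAN per customer port
 switchport protected                  ! block direct port-to-port traffic
 switchport port-security maximum 1    ! one MAC per customer
 switchport port-security violation restrict
 storm-control broadcast level 1.00    ! cap broadcast storms
 spanning-tree portfast
 spanning-tree bpduguard enable        ! err-disable the port on customer loops
```

The `bpduguard`/err-disable behaviour is what the speaker describes cutting off looped segments on the metro Ethernet side.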

So this is what our network looks like. Not so complex. We have two different rings, one for wireless and one for optical fibre. You can see the wireless extended over the SN up to the end-users. This runs RSTP, and we have a failover time of less than five milliseconds. Since all the switches are unmanaged switches, to find out whether a segment is alive or not we have an IP switch at the end. We even have a hotspot running from the same network, so if any customer requires a static IP, then they need to have their own CPE at their end and we add them in to a switch. And we have FreeBSD servers and a Linux bandwidth manager on the edge.

So the main challenging part is monitoring the switch network. How do we do that? Along with OpenNMS and the Nagios tools, we developed our own application to track the active MAC addresses and analyse the packets in the network. We mirror a port on the aggregation switch, which captures all the packets to one server. Actually, I'll go to the next slide, which will be a little bit clearer. We have a packet analyser out here. This is our aggregation switch. This is a mirror port, which sends all the packets of this whole network out here, so it captures all the packet information. It stores it in a database for the historical data, and on the fly it sends it to our monitoring servers. So this monitoring server can see, on the fly, whether a user is alive or not, which port it is connected on, what traffic it's sending, how many packets it's sending - everything - and whether that user is online or not; it talks to the RADIUS server as well. So if anything changes in the database, it can directly send the changes into the memory of this monitoring server.

So this is what our monitoring looks like. These are all our PoPs. Green means it's active, red means it's inactive for some period of time, and grey means some activity is going on. If you click any of these sections, then you'll get this type of topology - based on all the information, it can automatically create a network topology. The green means the switches are alive, and the black means it has been dead for the last 100 seconds - that means it's not getting any packets from that segment. Every second the segment changes colour, so eventually it fades to black.
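
The colour logic described here - green while packets are seen, fading to black after 100 seconds of silence - can be sketched roughly like this. This is a hypothetical reconstruction, not WorldLink's actual code; the class name, threshold and colour scheme are all assumptions:

```python
import time

DEAD_AFTER = 100  # seconds with no packets before a segment is shown black

class SegmentTracker:
    """Tracks the last time a packet was seen from each switch segment."""

    def __init__(self):
        self.last_seen = {}  # segment id -> timestamp of last packet

    def packet_from(self, segment, now=None):
        """Record that a mirrored packet arrived from this segment."""
        self.last_seen[segment] = now if now is not None else time.time()

    def colour(self, segment, now=None):
        """Bright green while fresh, shading towards black as silence grows."""
        now = now if now is not None else time.time()
        seen = self.last_seen.get(segment)
        if seen is None:
            return "black"          # never heard from: treat as dead
        age = now - seen
        if age >= DEAD_AFTER:
            return "black"
        # Fade green -> black: 255 at age 0 down to 0 at DEAD_AFTER seconds
        green = int(255 * (1 - age / DEAD_AFTER))
        return f"#00{green:02x}00"

tracker = SegmentTracker()
tracker.packet_from("pop1-sw3", now=1000.0)
print(tracker.colour("pop1-sw3", now=1000.0))  # fresh segment: bright green
print(tracker.colour("pop1-sw3", now=1101.0))  # >100 s silent: black
```

In the real system the `packet_from` events would come from the mirror port's packet analyser rather than manual calls.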

So if you click on any of these sections, it will pop up this type of applet, and you can see which user is connected on which port. Green means this user is alive. If you click on a port, you can get all the user's information - his MAC address, his download speed, his IP address - and if you want an on-the-fly graph, you can get that. We are still optimising this. We can graph anything; we can grab it from the server, so that's not a big problem. If the user is connected and alive, it will dynamically create the graph on the fly.

So this has helped us quite a lot in monitoring our wireless network.

So currently we are pushing for these additional features on the subscriber module. We have already sent a request to add these features. If they add them in their future releases, then it's fine; otherwise, I think we need to hack it or do some reverse engineering. With the metro Ethernet, we plan to replace all the unmanaged switches with managed switches.

So our future plans: get a fibre backbone as soon as possible, get our own terrestrial wireless connectivity, and we are planning broadband over power. That's the end of my session. Thank you. Any questions?


Mark again. You're saying you use a custom-modified FreeBSD to enhance your satellite connectivity. What does that change in terms of latency? You've already got a latent -


I'm not sure because I'm not a programmer, but in our test it's less than one millisecond.


Oh, really? That's very quick. No large changes in the amount of buffering you have to do? It's pretty quick?




OK, second question for you. I notice all the metro Ethernet you use brings it back to where you've got the PPPoE servers and everything. Is the amount of peer-to-peer traffic travelling on your network an issue? Does it mean that customers at their location - wherever, at home - talking to other homes in the valley all have to traverse your network back to the core, to the PPPoE server, and turn around there and come out again? Does that cause capacity issues for you?


Not yet. Because the traffic is currently only the web browsing and e-mail. So a lot of people there are not using any VOD or any live streaming. It's basically only the Internet and e-mail traffic. Capacity is not an issue right now.


People aren't bittorrenting in Kathmandu?


Most of it is web and e-mail. We have a gigabit - capacity has not been an issue. We have a capacity of 95 Mbps of traffic, so bandwidth capacity is not an issue.


You're running VLAN spanning tree for redundancy. Do you have any situations where the failover time becomes a problem? You're saying it's failing over in five milliseconds - is that regular? Is that normal?


As the number of switches grows, the recovery time will gradually increase. We currently have 14 or 15 switches and we're getting a recovery time of four or five milliseconds. If you add more switches, the recovery time can go higher. At that point, we can use MST to segment a section and get a better recovery time.


What percentage of your switches are still old, unmanaged boxes, versus managed 3,500s, or whatever you've got?


Almost all.


They're all unmanaged?




That's a lot of switches. Thank you, that was a very good presentation. Thank you.


It's not so much a question. I'm a big fan of pragmatic networking like this. I've done some similar stuff back in New Zealand, but this is 100 times bigger than what we've done, so it's really good to see.

Secondly, what are the alternatives that are offered? Is the incumbent telco offering DSL or wireless or some similar sort of product? And how does that fit in with what you're doing?

Basically, what does the incumbent provider offer?


OK. You mean to say that - basically, our main competitors are also using the same wireless that we are using, but they are not using the metro Ethernet type that we are deploying. Some are deploying television and Internet on the same cable, so that is one alternative. And we don't have any DSL connectivity yet; they are introducing that soon, in one or two months. Besides that, I don't think there are other alternatives. Maybe some are providing a wireless service as well, but not much.


OK, great, so the sorts of technologies you're using are accepted as the way it's done in Kathmandu.




Cheap and nasty. That's good to hear.


We need to focus on the need of the people. We need to make them aware of the use of the Internet. So once you create a market, then the technology will automatically come.


That's great. Thanks.


Gaurab from SANOG. I should know about this because I'm one of the customers on your network. I was wondering what kind of Layer 3 protocols you use and how fast your routes converge. I think it's pretty fast all the time, but I would be interested to know what Layer 3 you run on top of this and how it's scaling right now.


We use typical OSPF and BGP in our Layer 3 network.


And for the end-users? You don't do any routing - it's all backhauled back to the core routers? Even with the PoPs, all the way - static routes and VLANs?




Works well?


Yes. It's all backhauled to our PoP, to the PPPoE servers.


That's good, thank you.


I think in the interests of time, we probably should move onwards. Very quick.


I'm Shengyong Ding from China Telecom. I want to know if this is very profitable in your area, in your city? I know in China Telecom, that the broadband is not very profitable now, but I want to know how profitable it is in your country? Is your service very profitable?


Yes, of course.



Thank you.


OK, thank you very much for your questions and to Samit. Thank you.


Trans-oceanic systems - check list - Barry Greene

So the final presentation is from Barry. It's on trans-oceanic systems. While he's getting set up, just a request from the folks running the network and infrastructure here. If you see a plug plugged into the wall, do not unplug it, OK? Let me be very clear about that. A plug was unplugged at the back and a fair amount of infrastructure was switched off. If you need to plug your laptop into something, there are lots of empty seats here and lots of power boards for you to plug your laptop into. Hopefully we will not have to make this announcement again.


Move up front.

My name's Barry Greene. I'm from Cisco Systems. Some of you know me because I've been out in the Asia Pacific for quite a while. Some of you don't know me. Usually you see me as a security guy. I've been doing that for six or seven years. Before that, when I was living in Asia Pacific, one of the things we did was build backbones across oceans.

Coming back out and looking around here, one of the things I'm concerned about - and I'm going to make this brief because I'm between you and lunch - is that people are not realising that your link across the ocean - satellite links, terrestrial links - is something you need to engineer. To give an example: with big providers in the US, Europe, places like that, people are moving beyond what it used to be, which was just Internet back and forth between two peering routers; people are going back and saying, "I want video and voice and lots of different things." Your interconnects are a core component of your business. It's about business. As Vijay was talking about earlier in his keynote talk, you've got to think about your business as you go through.

To give you an illustration of why I'm concerned: I'm seeing a lot of different providers, as the market increases out here, who don't realise the dynamics of an engineering principle. The engineering principle is: as you get a bunch of stuff coming across your backbone, you have choke points, and if you want full control over your business and your networks, you've got to have places where you can police the traffic, control it and have visibility. DOCSIS, out of the cable community, has a lot of functionality that you put inside the house. The reason you put it in the house, in the broadband CPE or whatever you want to call it, is because there's a lot of stuff going on inside the house.

You've got kids doing games, people doing peer-to-peer, people doing video on demand, the voice services, all these sorts of things. To do that effectively, and not have your Helpdesk constantly called by your customers, you've got to be able to control both sides of the link. So, in industry terms, it's shown that this principle is valid: you've got to have full control over both sides of the link. You see that in the standards work.

On a trans-oceanic system, you as an ISP have a bunch of stuff coming down the pipe, and if all you do is have a circuit out to a service provider out there, what do you do? I've run into four different providers here describing congestion and BGP sessions getting knocked out, and it sounds like they're overflowing the buffers. Do you control your upstream router? No. So the service provider is not very helpful, right? These sorts of scenarios are fine if you're an enterprise customer, but you're running your business over this, so - just like you do things on your side of the link - you need to be able to do things on the other side of the link, because this costs you money. Right? That's kind of the core principle.

Think about it from a very simple standpoint. Go to your bosses, your executives, and say, "If all we've got is a link across the ocean, really we have no control over our business, because lots of stuff is coming down the pipe and we're paying for it. We need positive control - QoS - we need to control both sides of the link." As you send packets up, you have packets come down, and you pay for those packets because they're coming across your link. And then what happens? Somebody drops a packet. You have lots of spam coming through - what happens? You drop the packet. You don't have any business control over that. If you want to do QoS for a customer, you can't do it on one side of a link; you have to control both sides of the link.

So this is one of the things that's a concern. If you want positive business control - and it's something that Philip and I cover when we do the workshops, and Philip says he's still doing it in the workshops - you've got to be able to control both sides of the link and engineer it as a system. What I mean by that is, as you saw in the last presentation, there are a lot of things happening in the service provider in-country.

Correspondingly, there's supposed to be something on the other side of the link too, engineered as a backbone system going back and forth. The question that was just brought up by a colleague of mine here, about what happens when you get peer-to-peer between local PoPs - it's the same sort of principle; you need to be able to engineer through that. Think of it as both sides of the ocean. So my critical advice - and I'm going to give a suggestion of what we can do about this - is don't be misled by service providers who say they're your peers. A lot of service providers say, "Buy a circuit off me. I'll give you a good price." To them, you are a customer. You're somebody they can make money off. They don't really care about your business success. All they care about is that they collect payment and you keep calling them up to upgrade the circuit, right?

Sales engineers from the service provider say, "Come in. I know all about this stuff." But they're sales engineers. Their job is to sell stuff. Beware. If you're smart, when any sort of sales engineer comes in, you use them as data points. You don't copy exactly what they say; you use it as data points. You can collect data points.

The same thing goes for vendor engineers. They're out there to sell equipment. They may not understand your business. But sales engineers from a service provider or a vendor are good points for collecting data. And then really watch out - because, over the years, I've seen this over and over and over again - watch out for the consultants from aid organisations. "I'm a consultant from country X. I'm here to help you. Please listen to my advice. I've never run a network in my life, but I'm a consultant." Right? Over and over, different service providers and businesses have been messed up because they listened to the consultants, right? So watch out for that.

And beware of anybody who wants to have you dependent on them. Alright? Like, for instance, "Hey, connect to me and I'll allocate IP addresses to you from my IP address block." Right? What you're doing is setting your business up to be dependent on them: you're taking prefixes from an upstream provider, building your network out of them and - guess what? You're stuck. Right?

This is why we've got APNIC. That's why all the policies and everything that has gone on with APNIC over the years exist: so that each service provider, as they run a business, can actually have control over their business and not be dependent on any upstreams. You have choices, and when you decide to go from one connection to another connection, from one backbone to another backbone, you have that choice and you have that control over your business.

So the advice here is: do your own engineering. This segues into Vijay's talk - you can't let people outside do your engineering for you; you've got to do your own engineering, you've got to empower yourself. And the advantage of forums like this group, APOPS, is that you can tap into your peers from all over the place, right? The advantage of having a mailing list on APOPS is that you can actually ask for help and find out what to do with this or that engineering problem.

So one thing that I started to do is brain-dumping. Earlier this year, I ran into a couple of service providers who were asking, "How do I build my link across the ocean?" They were going to buy a port from somebody in the US, and I said, "Wait a minute," and I started brain-dumping. Realising, coming here, the power of the wiki, we can get a massive brain dump going on. So my suggestion to the APOPS coordinators is to use the APOPS alias and get people to brain-dump: write down principles, techniques and tricks of how to run a successful business when you've got high-latency links - engineering across high-latency links, operations over those high-latency links - and we can brain-dump it.

And then, since we don't have an APOPS wiki right now, I've got a public wiki out there we can use. I created an area on it, and we can brain-dump the knowledge from different people in there. There are different approaches. One easy way to classify it: there are particular approaches that work when you build below one gig, and there are particular approaches above one gig, because you have different traffic profiles and engineering things that happen with circuits and buffering - what you can do and what you cannot do with them. So there's a lot of different knowledge that we can brain-dump and tap into. That way, when you need to build these things out, the knowledge is there - and everybody always has the issue, when new engineers come into the organisation, of how they come up to speed. Right?

And we also have service providers out there with the knowledge set; we can tap into that, and you can tap into other people's hard-knock experiences. They can say, "I tried this and it didn't work. I do this now." Share that out.

That's all I have to say. This is actually like the executive summary of the slide presentation I'll put up. There's a bunch of other slides that talk about principles to kick off the dialogue, but I don't want to get between us and lunch now.

Questions? Thoughts?


No questions for Barry?


I want to make a comment regarding what you say about trans-oceanic networks. I think there are practical issues for Asian ISPs in setting up, let's say, a PoP in the US, because people cannot easily manage a router in the US: they have issues buying equipment over there, and they have issues setting up and maintaining routers over there. So, on the other side - I think you have to think about how you can set up and manage the router on the other side of the ocean. That's what I wanted to say.


Good point. For every service provider that I've worked with to get a trans-oceanic system up outside their country - Hong Kong, Singapore, the US, Europe - it's a matter of learning the techniques of how to do that. To get started, there are service providers in the United States, in Europe and in Hong Kong who will lease you routers. And all the vendors - I'm a vendor, and our competitors, colleague vendors out there - will set up sales contracts where you can buy in your country and they will deliver it onsite at the co-lo; there are agreements around that. We've had problems with these things in the past, but it's a matter of empowerment, teaching people how to work it. It's not a huge hurdle.


I want to make a comment on this also. I think when you set up a router on the other side of the ocean, you shouldn't rely on your ISP to lease you the router, because then you're stuck with your ISP with that kind of configuration too. You probably need to find an independent equipment reseller or a system integrator to help you do that, and then you have more flexibility to connect with different ISPs and to switch over, things like that.


Good point. Document points like that, and then point people to them - how do you find people to help out? Speaking as someone who stands at a microphone helping out, these are things that would be worthwhile, in my opinion, for APOPS to get written down, right? So the knowledge is not locked up in people's heads, and people have access to it.


There are a lot of other Internet service providers that are at your scale, whatever scale you're at, globally, and so, if you're sort of at that stage of trying to go from being a small multi-national or a national ISP to a small global ISP, there are a lot of other ISPs who are not in direct competition with you who are based in other countries and other parts of the world, but who have exactly the same issues that you do, right?

I mean, you may be based in India, they may be based in South Africa or in Brazil, but you're both trying to get equipment into London or New York or something like that and so what I've seen is a lot of really good support between ISPs that are of comparable size that are dealing with exactly the same issues but who aren't in competition with each other because they're based in different places and obviously, you know, there are a lot of small ISPs that are perfectly competent that are based in New York or London or Hong Kong who would love to be able to make a little bit of extra money by helping you, you know, do remote hands on your equipment and, you know, they'd love to get the experience of working on a little bit bigger equipment, a little bit larger network and dealing with engineers who are doing global-scale routing and, you know, they may be willing to get up in the middle of the night and go work on your equipment for you for almost nothing in exchange for that experience.


I'd like to add that I know a couple of ISPs that do that, and I think the best example is in South Africa. They started as a small ISP, they put a PoP in London and, the next thing we knew, they put their own PoP in New York, because they were paying their telecom for the STM circuit and depending on the telecom to get up and running. Now they're expanding into Hong Kong, because they realised the benefit of running from South Africa up to London as well as eastwards to Hong Kong, and they seem to have scaled pretty well. I don't think they have staff in New York or London or Hong Kong, apart from the three people working for the entire network ops group in Johannesburg. So it seems to work pretty well. And if you are going to put up a wiki: the Kiwis used this approach, putting their servers across the ocean in the US, because the telecom wouldn't play ball with them. They put all the content that the telecom's users get out in San Francisco and serve it more cheaply from there - which, of course, crosses the ocean and probably gives the telecom a bit of a hard time, where some services are slow and certain services are fast from different ISPs or different places.

Actually, with one of your former speakers, Samit - I'm one of his customers - I know that when they put their servers across the satellite in Hawaii, they saved their satellite link at least four to five megs of spam. They realised that when they put the spam filters on the other side of the satellite link, they dropped five to six megs of spam and junk off the satellite. It's a very valid point. Thanks, Barry, for bringing this up.


Right. And you engineer it as a system and you've got these characteristics of good points. I don't see an engineering principle over high-latency links talked about a lot, because there is a saving - like, you know, where you can see bandwidth saved right now, like in the spam issue - but there's also, if you put servers that talk to each other on either side of the link, you get TCP window sizes increased. Instead of having your window size way low, it's cranked up, because you're able to have a little bit more effective sort of dual-proxy sort of thing with it. So anyway, if you're interested in it, we'll start up a conversation on APOPS. It's something I'm going to be writing about because I've got customers out there. And just to give a scenario, this isn't just for the ISPs.

What got me shocked about this isn't so much the smaller and medium ISPs; it was actually large ISPs that got me shocked and appalled about this. In one service provider in one country in Asia Pacific - and this is, you know, the number two provider in the country - they didn't have their own link across the ocean for various reasons. They decided they wanted to do that. At first, their design was they were going to buy a circuit. Luckily we got some people in there and said, "Actually, you get no control over that. Here are some suggestions."

And they designed a system across the ocean and then we introduced them to different buddies, right, and the nice thing about APOPS is APOPS is plugged into things like NANOG and others. You've got buddies. This guy sitting next to me, Philip Smith - you put a router across the ocean and, if you want to find people you can peer with, like Bill points out, there are small ISPs who would welcome peering with you if you drop into an exchange point or co-lo.

How to find them? Go to your buddy Philip here, go to NANOG or RIPE, and Philip will introduce you to those people. Go to NANOG and hook up with someone like Vince Fuller, and you come out of that two-day meeting with 16 peering agreements. Done. The circuit was up for two days, they went to NANOG and got 16 peering agreements besides their transit service on it, and that's the power of groups like APOPS, NANOG and talking to peers and things like that. Anyway, we'll move this over to APOPS. I'm around during lunch, and thank you.


Thank you very much, Barry.


OK, before the presenters go and before everybody goes for lunch, we've got some gifts to give to the presenters. I was hoping that somebody from the ISP Association would be here but I don't see anybody to hand over the gifts, so Hideo, if you could - for the four presenters.

Sorry to run into lunch a little bit like this. We've got an hour for lunch. Lunch is out on the terrace patio so you come out of the room and turn right.

This afternoon, we've got the continuation of the APOPS session. So we've got 90 minutes for three ISPs who are going to talk about their IPv6 deployment experiences. So this is a real operational issue that these ISPs have faced.

That session will be chaired by my colleague here, Ishii. So please come back here at 2:30 sharp.

In parallel with that, we also have our APNIC Fees working group meeting, meeting in Regal 1 and 2. From memory that's downstairs when you go back into the hotel.

So those are your choices for this afternoon.

Please enjoy your lunch and we'll see you at 2:30.

(End of session)

APOPS Session Two - IPv6 Operations

Wednesday 5 September 2007 1430-1600


Can everybody take their seats, please?


So, let's get the ball rolling and we'll start the session. The first topic of session two is IPv6 deployment. So for this presentation, we'll talk about deployment and we'll go from there. So let's start.

First, Yves Poppes, on IPv6 deployment at Teleglobe. You can start.

IPv6 Deployment at Teleglobe - Yves Poppes


Thank you very much. Good afternoon, ladies and gentlemen. It's a great pleasure to be in New Delhi. The last time I was in New Delhi was early 2005, when there was an IPv6 session organised, because IPv6 was one of the points on the agenda of Mr Maran - it was the sixth point. And little did I know back in early 2005 that one year later, VSNL would buy Teleglobe. So now I'm part of the family. IPv6 is something we've been involved in quite early from a Teleglobe perspective. And, of course, an internal question I get is, "Did it pay off to have an early mover advantage?" I had a colleague who doesn't like IPv6 - and I think he doesn't like me - who was asking the question last year, "How can you spend one third of your time on IPv6 when it only generates 1% of the traffic? So I suggest we should cut your end-of-year bonus." But I had some answers for him.

So what I would like to share with you are some ideas on the perceived drivers for IPv6 and there are quite a number of them and everyone has their preferred drivers. And the second aspect - are there really early mover advantages we can speak of?

First of all, to put things into context, we're a member of the Tata Group. We don't have too much time to explain the Tata Group, but basically the Tata Group took an interest in VSNL, then decided to expand overseas. To that effect, they created VSNL International, which is based in Singapore. And with VSNL International, they made two acquisitions - one is the Tyco cable network, which gave Trans-Pacific and Trans-Atlantic capacity into Europe. And then Teleglobe, which was acquired in February of last year. So basically we're now part of the VSNL International structure. So I report in to Singapore.

The result of this acquisition is that you have a variety of lines of business. Wholesale Voice was one of the major reasons VSNL bought Teleglobe; the combined entity carries 21 billion minutes of international voice traffic. This is the biggest carrier in the world. Mobile - particularly in the Americas on the mobility side - essentially covers both GSM and CDMA. GSM is the European standard and CDMA is the North American standard. And global transport services, which is the overseas capacity, is the other contribution.

If we come back to our Internet - and we're all quite close to the subject - if we read the declarations of everyone in the industry: Internet proliferation, trillions of things connected, and obviously the IPv4 address space is a bit too small to connect all these things and give an address to everything. The Internet is becoming a key ingredient in our economic infrastructure, on the same level as electricity and roads. It's part of our social structure. We can wish the Internet would go away, but it will never go away. On the contrary, it's getting closer and closer.

And reading on the plane coming down here, 'Business Week', the last issue - they basically have their scenario; the issue is on the end of work as we know it. The themes are technology, outsourcing, globalisation, etc. Basically what they say is that we will have our billions of wireless sensors attached to buildings, retail products and so on. The same recurring theme.

As we all know, we came out of a major recession around the year 2000 with the famous dot com bubble, and a lot of us were victims of that phenomenon. Looking forward, a lot of thought was given to where the next billion revenue sources are. Everyone agrees a number of factors have to be met. One is we need wireless broadband. Everyone is talking about IP convergence, convergence on the access and the transport. We need multi-functional devices. Look at our cell phone - it becomes multi-functional. It has to be always on, always reachable, nomadic. And good with security. All of us who are familiar with the IPv6 arguments will recognise a lot of this in there. And if we move around, we have mobile ad hoc networks. We have networks involved when we're on the plane. We have the nodes moving. And the latest is MANEMO. One thing which has happened in our industries is the blurring of distribution models. Basically, there used to be a time when you were in telecom, or in music, or in printing, or in home entertainment, and you had a discrete little niche. Everything is mixed up in an e-world. It is disruptive to our existing business models, be it from a carrier or an industry point of view. If we extrapolate into the future, it's a kind of 'Star Trek' kind of scenario. Basically everything which can be dematerialised will be dematerialised. The transporter beam has not been invented yet, but maybe we'll have enough addresses for each and every one of our atoms.

Now the one factor which is really spectacular is the growth of mobile. At the end of the first quarter of this year, we had 2.8 billion mobile devices, and probably today the three billion mark has been reached. Here in India, last month, I think 8 million cell phones were added to the network. 8 million is quite a number - practically two times the population of Singapore. So if you look at that growth, it is incredible. I used to have the same slide two years ago, and basically what I had there at the time was that three billion would be reached in 2010. And I changed it to 2009, then to 2008, and now it's 2007. Making a prediction is never easy, especially when it's about the future.

Now, mobile devices and the Internet. In fact, when Vint Cerf was in Bangalore, he said the huge growth of the Internet will come from mobile phone users, not computers. The logic there is that you have many more cell phones or personal devices than computers. So, basically, you need the famous cell phone to be your computer, to be your end device. So you have a kind of convergence there on the end device, which I think is quite clear when you look at the iPhone. I think we even have an iPhone in the audience. It is a spectacular device. Now from our friends at Google, there's talk of a gPhone, although they deny it exists. And Microsoft - I was looking up my shares, and I have shares in RIM - if you have a Blackberry, you know what it is. There was a rumour Microsoft would buy RIM, so the shares went up a good 20%. I was wondering if I should sell them or hang on to them. So basically it's good business to see it all evolve.

If we continue on the mobile track: the mobile people, every year at their main event - it used to be at Cannes in France, but that became too small, so they took it to Barcelona; the second edition was in Barcelona. Basically, if you listen to what was said in Barcelona, they're waiting for the major breakthrough beyond voice and SMS. The Short Message Service has been one of the spectacular successes which nobody expected. It generates billions of dollars in revenue and it uses the signalling channel from telephony. It's amazing. And, of course, if you're talking about high-speed links, you have similar issues where VoIP could destroy the revenues of some carriers, including some of our revenues, because we make money from roaming. So if I can avoid roaming charges, why shouldn't I? So VoIP is something, and VoIP over data, over mobile, is something else. Of course broadband could trigger an explosion in the use of social networking. We're all familiar with YouTube and things like this. Nokia thinks only a fraction of email accounts use mobile - only 5% of them. So there's a market there. Of course the iPhone has been considered a threat by major operators, so they tried to react. I was reading yesterday that the rumour is that now Apple would announce a ring tone service, downloads on the iPhone, and that could be an announcement next week. Again, everything is moving.

3G as mobile access - we mentioned that. It is growing by leaps and bounds. One of the components is IMS, the famous IP Multimedia Subsystem. There are conferences on the topic. People think lots of money can be made. I once labelled IMS as the Internet Metering System, and the mobile telephony people nearly threw eggs at me. They said, "You in the Internet world don't know how to bill. We in the mobile world know how to bill."

Now, IP address-based billing. If you have a scenario where you bill someone on the IP address, it's better they have a permanent address. In the hotel upstairs, it's a 10.10.something address - I'm behind NAT. They would never be able to bill me for a specific IP address service. That's an argument for IPv6 - even if they don't care about IPv6, that argument will work, because everyone wants to be able to bill their customer.

Data and mobility. I think the Japanese example is quite striking. If you look at this slide, since 2002, when 3G was introduced, and you look at it today, the majority of these telephones are 3G. Just imagine what will happen in China soon, when the 3G licences are issued. There will be a real explosion again there in data.

Now what do they use their famous 3G handsets for? Every now and then you talk on that cell phone, but it's email, Internet search, etc. - again, all data-oriented applications. So your mobile device will be your computer and your everything.

IP Telephony. It is disruptive. It has been disruptive for our telephony revenue at Teleglobe and it continues to be. If you cannot beat them, join them - we bought one of the major international VoIP providers. If you look at it today, in North America, we have about 400 providers, and BB in Japan is the biggest one. The numbers still increase, and with the growth of Google and Microsoft and eBay, again, they will change and continue to change the world. And Skype - we all know this phenomenon; the people behind KaZaA are behind it as well. Six months before eBay bought Skype, someone said they would not pay $100,000 for Skype, and six months later it was sold for $2.6 billion. Again, to make predictions is always dangerous.

Windows as an IPv6 deployment catalyst: Windows Vista gives preference to IPv6. If you want applications like collaboration, you need to be able to reach the address on the other side. And I like a sentence that was said at the American forces conference: "We are betting our business strategy on IPv6 and this." IPTV is another catalyst. Those of you who go to IPTV conferences will have seen the sessions. Comcast ran out of their private address space, and basically they're looking at IPv6 and its deployment. A little statistic I have here I got from the 'Economist'. Basically the interesting part is you have the IPTV component, which is local distribution of television, and then you have the Web TV component. As an international carrier, I like the Web TV component. I had a meeting here, and some of the Indian TV producers want to have the Indian communities overseas watch their programs using Web TV. In Canada, for example, we have quite important Indian communities. So this will generate a lot of potential traffic.

Number of networkable devices. If one looks at this statistic, which was in a presentation in early 2005, then if you accept 17 billion networkable devices, I think the four billion IPv4 addresses are not enough. If you include others, you reach much more. These figures could be optimistic, but if you look at the hand-held subset categories, or the cell phones, they projected 2.6 billion. It's 3 billion today. So, basically, that statistic at the end of 2004 was, at least in that category, too conservative.

RFID - another one of these phenomena. I think some of you must follow the efforts in Korea. Basically the idea is to associate an IPv6 address of sorts with each tag. Everyone had thought it would go very fast, but take-off has been slower than expected. But again, it's getting there. And, again, why? Traceability. The major ones - bank notes, passports. The Swiss came up with the idea to have an RFID in every bank note, so you will go through Customs one day and they will say, "Why do you have $264 in your pocket?" Prevalence of digital access is another one of these drivers - the DSL, cable, WiFi and WiMAX phenomenon.

National policies are quite an important factor. We were mentioning the 10-point agenda here in India. Of course in China we have the NGI project. Korea and Malaysia are pushing for IPv6 adoption. In the US, the Department of Commerce is pushing for 2008. Why is everyone pushing? Basically, to improve the contribution of information technology to the gross domestic product.

On the defence side of national policies, I think there again the military wants MANEMO, because they want to have automatic configuration.

What does IPv6 bring to the table? It solves the address shortage. It restores end-to-end communication. It makes roaming easier. Battery life is even better. And with autoconfiguration, you have mobile networks, etc. And with permanent addresses, we have all the advantages we took for granted in telephony.

Early mover advantage - from a Teleglobe point of view, we moved early. One of the reasons was we were involved in the NGI in 1995, when the old Internet became commercial. I had a call and he said, "You have a new cable in the Atlantic. Why don't we use that for the next generation of research network?" I was part of a board in Canada. It's the first time I heard about IPv6. They said we will join together in Chicago with another company. The rest is history.

So basically, if you look at it today, we have that famous global footprint, which we got through the Tyco acquisition. We're running the global network dual stack, both v4 and v6. Circling the globe is one of the major arguments, because you will remember the Taiwan earthquake in December last year, when basically most of the cables were cut to the west and all traffic had to go to the east. Two years before, there was the earthquake in Algeria, when all the traffic between India and Europe, for example, had to go via the US and then to Europe.

Now there's explosive growth, and that's thanks to YouTube and MySpace. Four or five years ago, nobody would have anticipated the impact of user-generated content. So basically that results in about 600+, 700+.

Now the bonus is IPv6 in the core, because only the traffic growth could justify this expensive system. So thank you, YouTube and user-generated content.

Now, if I look at it as a tier 1 carrier, IPv6 traffic is still minimal. Traffic growth depends on adoption in tier 2 networks. The success of the tier 2 networks depends on the applications. They're looking to Windows Vista and other applications. And will it be Vista, or the exhaustion of IPv4 addresses, or a combination? We don't know.

I had that colleague of mine who didn't like IPv6 and the time I was spending on it. So the answer I gave to him at the collective review was, "It gave high visibility and an early mover advantage. It gave us a differentiator in the marketplace. If I only sell IPv4 and you sell IPv4 and IPv6, where will they get their service from? From the one who has IPv4 and IPv6, even if they don't use IPv6 today." If one looks at the RFQs we had last year, we had about 60 major requests for transit. About half of them gave points to IPv6 and about 10 of them made it mandatory. So it was an advantage to support IPv6. So my colleague, he didn't argue anymore. But the next step is to stimulate growth of IPv6 traffic, otherwise it's, "Do you still spend one third of your time on IPv6 and still only generate 1% of the traffic?" I have to be careful there.

In India, it has started on AS4755; it is available as we speak. So, thanks to Gaurab, we have set up a network, and there will be more on India at the next IPv6 summit, which is confirmed for December 12 and 13. Please come and visit us there. A lot of new things about IPv6 in the region should be said and announced there.

Now today we have Gaurab and the IPv6 session with SANOG. In conclusion, some thoughts. I think all our human activity is affected by telecoms - no doubt about it. I think we have digital lifestyles; we make friends, even get married, on the Internet. Business processes from design to consumer are Internet dependent. Some people are addicted to the Internet - in Amsterdam, they have a detox centre for addicts; a man was spending 18 hours a day on the Internet and didn't eat anymore. I think there will be all these networks in motion. I think we will be moving around with lots of information, communicating and picking up speed. Of course, we will be in ad hoc networks - so where does the carrier make revenue if we are all part of the big network? The downside with Ambient, Pervasive, whatever you call these networks, is that there will be no place to hide anymore. So if we have an advantage, don't forget: whatever advantage you have, someone will take it away from you. Thank you.



Any questions? Thank you very much.

Next we have Bijal Sanghani.

IPv6 Deployment at FLAG Telecom - Bijal Sanghani


I'm going to give you an overview of the FLAG network. We've got multiple STM backbone links which mostly sit on the cable system. It's diversely routed and fully redundant between core PoPs. We're in the process of upgrading to multiple 10-gig links and these will sit on the T640 platform. IP transit is the main driver for capacity growth on the network. Our customers are typically ISPs, specifically around the Middle East and Asia. FLAG has big expansion plans and you can follow the links on the slide for more information.

This is our fibre optic networks. All these sites are now live. You probably can't see very well from the back. I'm sure the slides will be up on the website.

This is our global IP network and our peering points. Apologies, because this slide is slightly out of date. India Mumbai is missing on there for a start, and, yeah, I think that's the only one that's missing from there.

OK, so a bit about our building blocks. We've got a single-area level-2 IGP. The metric is based on distance and round-trip time. We use this as our primary method to control traffic flow. We do sometimes have to adjust the metrics artificially; for example, if there's a cable cut or something like that. We've got a global AS number, 15412. Communities are used extensively and all routes are tagged when entering the network. You can find a list of the communities that we use in the RIPE Whois database. Our prefix filtering is done by AS-set, so customers need to register their routes with the appropriate registries and maintain their AS-sets. Routes are not accepted if they're not registered.
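As a rough illustration of what ingress tagging and AS-set-based filtering can look like on a Juniper router - all names, prefixes and community values below are hypothetical, not FLAG's actual configuration - the customer's AS-set is typically expanded into a prefix-list by a tool such as bgpq or RtConfig, and the import policy then tags and accepts only registered routes:

```junos
policy-options {
    /* hypothetical community value; FLAG's real communities are
       documented in the RIPE Whois database */
    community cust-routes members 15412:1000;
    /* prefix-list generated offline from the customer's AS-set;
       unregistered routes never appear here */
    prefix-list as-customer-v4 {
        192.0.2.0/24;
    }
    policy-statement from-customer {
        term registered {
            from {
                prefix-list as-customer-v4;
            }
            then {
                community add cust-routes;
                accept;
            }
        }
        /* anything not in the generated prefix-list is rejected */
        term default {
            then reject;
        }
    }
}
```

The generated prefix-list would be refreshed whenever the customer updates their AS-set in the registry.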

Both LDP and RSVP are enabled on the network. All traffic is label switched and follows the IGP metric. RSVP is used for load balancing and for special traffic engineering if needed - again during cable cuts, for example, to shift traffic and reroute. We have a primary LSP set up with fast reroute so, if a link goes down, the next-best route is used. To control LSPs and multiple links, we use link colouring, bandwidth, least-fill and re-optimise.
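A minimal sketch of such an RSVP-signalled LSP in Junos, using hypothetical names and addresses, and assuming an admin-group is what implements the link colouring mentioned above (this is not FLAG's actual configuration):

```junos
protocols {
    mpls {
        admin-groups {
            transpacific 1;          /* "colour" assigned to certain links */
        }
        label-switched-path pop-a-to-pop-b {
            to 192.0.2.10;           /* hypothetical egress PE loopback */
            bandwidth 2g;            /* reserve bandwidth along the path */
            least-fill;              /* prefer the least-loaded of equal paths */
            fast-reroute;            /* pre-signal detours around failures */
            optimize-timer 3600;     /* periodically re-optimise the path */
            admin-group {
                include-any transpacific;
            }
        }
        interface so-0/0/0.0 {
            admin-group transpacific;
        }
    }
}
```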

The services that FLAG offers include IP transit; direct routes - that's our customer and peer routes only; global Ethernet; VNAP; layer 2 VPNs, based on Kompella; layer 3 VPNs; multicast; and IPv6, which is what I'm going to talk about next.

So, IPv6. We talked about it for ages but had all the excuses not to deploy it: there was no demand, no business case, no time, too busy, too hard to do. We finally completed the implementation in December last year, and this was due to customer demand in Asia. Since then, we've had a few more customers connect in Asia and we've seen growing interest in the Middle East.

We currently offer IPv6 transit to our existing customers for free. It's offered globally and customers can connect locally. Support relies on a few senior engineers offering best effort, with no SLAs, and support is done via e-mail only.

This is our allocation, which we have from RIPE. It has been broken down into various /48s for customers and internal assignments. The /48s have then been further broken down into /64s for our backbone links and customer WAN links. All FLAG PE routers are dual stacked, so customers can connect without additional physical or logical connections or a tunnel. However, we do support tunnels if needed. Customers can choose to run v6 and v4 in one connection or run v6 on a separate logical connection.
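To make the dual-stack point concrete, a customer-facing PE interface carrying v4 and v6 on the same logical connection might look like this sketch, using documentation prefixes (192.0.2.0/24, 2001:db8::/32) rather than FLAG's real allocation:

```junos
interfaces {
    ge-1/0/0 {
        description "Customer X - v4 and v6 on one logical connection";
        unit 0 {
            family inet {
                address 192.0.2.1/30;         /* existing v4 WAN link */
            }
            family inet6 {
                address 2001:db8:ab:1::1/64;  /* /64 cut from a customer /48 */
            }
        }
    }
}
```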

We use the same routing policy and BGP communities, except that dampening and blackholing are not supported yet. We've basically tried to keep everything on the network as similar to v4 as possible.

We've got a congruent ISIS topology with v4, and we have separate iBGP meshes for v4 and v6. This makes it easier to apply different policies to v6 if need be. The current implementation is native IPv6, so v6 traffic is not label switched.
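In Junos terms, separate meshes simply means two internal BGP groups, one per address family; a sketch with hypothetical loopback addresses:

```junos
protocols {
    bgp {
        group ibgp-v4 {
            type internal;
            local-address 192.0.2.255;
            family inet {
                unicast;
            }
            neighbor 192.0.2.254;
        }
        /* a parallel full mesh over v6 transport; policy applied to
           this group affects only the v6 sessions */
        group ibgp-v6 {
            type internal;
            local-address 2001:db8::ff;
            family inet6 {
                unicast;
            }
            neighbor 2001:db8::fe;
        }
    }
}
```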

With FLAG's NGN expansion plans and the new P routers coming in, we're probably going to go for a 6PE design in the future. Currently we don't have any tools for prefix filtering or config generation - this is all work in progress - so customer prefixes are updated manually when we get e-mails sent to the e-mail address that was on the earlier slide.

Monitoring is done via MRTG or using the CLI on the routers. We also have Arbor support in v6 as well. The current version of JUNOS that we're running does not support v6 cflowd export, so we don't have any stats for v6 at the moment. There's no v6 content on there, and I'm not really sure where it is.

OK, we rolled out v6 on all our routers with no problems. The actual work took about three days. The most time-consuming part was putting the IPv6 addresses on all the backbone interfaces and establishing the iBGP sessions. New configuration was needed on the routers; this included interface addresses, firewall filters, policies and BGP. It actually took longer to write the design documentation and planned work procedures, and to worry about it, than to actually do it. Juniper has supported v6 for quite a while; however, we were running an older version of cflowd, which had a security bug, so we had to upgrade that before deploying. This delayed the deployment - the original plan was to have v6 on the network by August last year. However, saying that, I'm not sure what the consequence would have been if we had had v6 on the network and then found out there was a security bug.

Sprint's existing router connected to us was not v6-enabled, so we needed a tunnel to the nearest v6-enabled router, and for this, on our network, we needed a Juniper Tunnel PIC. There were problems with the initial BGP session, as we saw that it kept flapping. We then realised that was due to an MTU size problem. So even though we had path-mtu-discovery configured, Sprint had to lower the MTU on the tunnel interface, and we found that it stabilised. Another problem was that when a new IP backbone link is added, the IPv6 address has to be configured at the same time as the IPv4 address; otherwise IPv6 packet forwarding will break and all iBGP sessions using that backbone link or traversing it will drop.
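The fix amounts to clamping the tunnel's v6 MTU below the physical path MTU so the GRE-encapsulated packet still fits; a sketch with hypothetical endpoints (1476 being 1500 minus 20 bytes of outer IPv4 header and 4 bytes of GRE header):

```junos
interfaces {
    gr-1/2/0 {                            /* GRE interface on the Tunnel PIC */
        unit 0 {
            tunnel {
                source 192.0.2.1;         /* local v4 tunnel endpoint */
                destination 198.51.100.1; /* remote v4 tunnel endpoint */
            }
            family inet6 {
                mtu 1476;                 /* leave room for IPv4+GRE overhead */
                address 2001:db8:ffff::1/64;
            }
        }
    }
}
```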

We also needed to allow ICMPv6 on the management filters for lo0. For Ethernet connections - for example, on connections to the Internet exchanges - this had to be done. This was a change in policy from how we do it for IPv4.
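A simplified sketch of what allowing ICMPv6 through a lo0 protection filter can look like; the filter and term names here are hypothetical, and a production filter would permit considerably more than this:

```junos
firewall {
    family inet6 {
        filter protect-re-v6 {
            term allow-icmp6 {
                from {
                    next-header icmp6;    /* neighbour discovery, PMTUD, etc. */
                }
                then accept;
            }
            term allow-bgp {
                from {
                    next-header tcp;
                    port bgp;
                }
                then accept;
            }
            term default {
                then discard;
            }
        }
    }
}
interfaces {
    lo0 {
        unit 0 {
            family inet6 {
                filter {
                    input protect-re-v6;
                }
            }
        }
    }
}
```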

We needed to adjust the firewall and bogon filters a few times after we deployed IPv6, as we didn't realise the site-local address space had been replaced by unique local v6 addresses. I think this came out in an IETF meeting just before we deployed v6 on the network. That's something we need to keep on top of for the future.
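The change boils down to carrying both the deprecated site-local range and the unique-local range that replaced it in the v6 bogon list; an illustrative (not exhaustive) sketch:

```junos
policy-options {
    prefix-list v6-bogons {
        fe80::/10;       /* link local - should never appear in BGP */
        fec0::/10;       /* site local - deprecated by RFC 3879 */
        fc00::/7;        /* unique local (RFC 4193) - replaced site local */
        ff00::/8;        /* multicast */
        2001:db8::/32;   /* documentation prefix */
    }
    policy-statement reject-v6-bogons {
        term bogons {
            from {
                prefix-list-filter v6-bogons orlonger;
            }
            then reject;
        }
    }
}
```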

As for the future of v6 in FLAG, we'll carry on supporting basic IP transit. We have an open peering policy at all the Internet exchanges that we're at. We're particularly interested in peering with academic and research networks, in the hope of seeing more v6 traffic on the network.

Unfortunately, it's unlikely that we'll do much more with v6 at the moment, due to other projects and since it's non-revenue-generating, it's not the company's primary focus.

We would like to offer SLAs on the service and we'll be able to do this once our NOC is fully trained and we have their support.

We'll keep tracking v6 developments and keep filters updated and we're ready to serve customers when they need it.

Currently, we don't have any plans to offer any transition gateway services.

IPv6 is still a specialist subject in FLAG. It's been hard to train everyone to support v6 in the NOC; even though we've tried to keep everything as similar to v4 as possible, it's still very new to operations. We're not really sure how the customers are using it - whether it's just a regulatory requirement or if they're actually using it for some real applications. We would like to get v6 connectivity to the corporate IT network, but that would probably be a bigger challenge than rolling it out on to the production network. I think we've been brave, but we do worry about our filters not being updated and possible v6 bugs that might be around. Unfortunately, we don't have enough time to keep track of all the changes on top of everything else that's going on, as we don't have a dedicated v6 person in-house.

Finally I'd like to say thanks to George, Wai-Kay, Lillian and Che-Hoo for making IPv6 a reality in FLAG.

Any questions?


I guess this applies to both equally. Have you done any analysis into transition traffic on the networks? That is, you were saying before, there's a small amount of v6 traffic that is carried natively across the network. Has there been any sort of further looking into transition of them, so any 6to4 traffic, any awareness of how much of your v4 traffic is actually carrying and encapsulating v6 traffic. Is that being potentially used as a predictor of future growth?


We're seeing such a low amount of v6 traffic.


In terms of what's being natively carried across the network?


In terms of v4 traffic that's actually carrying encapsulated v6. That's not been looked at?




I think that would be quite interesting. I thought I'd ask.

Thank you.


Any other questions?

OK. Thank you very much.

OK, the next presentation is IPv6 migration challenges for large networks, from Aruna Pidikiti, from Bharti.


IPv6 Migration challenges for large networks - Aruna Pidikiti


Good afternoon to everybody. I am Aruna from Airtel and I am the manager of network operations, based out of Delhi.

Today my topic is IPv6 migration challenges for large service providers. I have put this in the context of our current data network.

I'm going to give a brief overview of Airtel, then talk about the drivers pushing us towards IPv6, the migration challenges, the design considerations we are going to take into account, and the deployment plans.

We are the largest telecom service provider in India - the first service provider with a footprint in 23 circles, almost all the circles in India. We have 42 million subscribers and we have fixed lines in 94 cities across the country. We use a VSAT network, basically to reach remote locations. And we have a fibre network across the country of approximately 40,000 kilometres. And we have international connectivity through i2i and SEA-ME-WE-4 for both services. We also provide integrated solutions using this entire infrastructure.

This is the product portfolio of Airtel where we provide the data services, mobile, collocations, we provide the integrated solutions. So this I will be explaining later specific to the data services.

About the infrastructure which we have: we have approximately 120 PoPs across the country, with a capacity of 25 gig, peering with all the major Tier-1 service providers, and we also peer with NIXI for our domestic traffic and with major local ISPs.

This is about our ISP infrastructure. We have an MPLS infrastructure of approximately 120 locations with five international PoPs. We provide L2 and L3 VPNs, we have NNIs with various carriers, like SingTel and in Hong Kong, and we provide multicasting and QoS.

We have the ATM and frame relay infrastructure, and we provide wholesale bandwidth to carriers. We also have downstream carriers who further sell bandwidth to their subscribers. We have the VSAT network through which we provide connectivity to remote locations where our network is not reachable by either fibre or copper. Through VSAT, we also provide L3 VPNs, and the MPLS and VSAT networks are both integrated. We have metro Ethernet in all major cities, WiMAX for access, and GPRS and EDGE.

So that is an overview of the services we currently have in our network. Based on these services, what are the drivers for Airtel's IPv6 integration?

This is basically the predicted broadband growth for the kind of network we have. We have all kinds of access: dial-up services, DSL - where the major growth is - connectivity through fibre or copper, Ethernet customers, WiFi and also WiMAX.

As for where the growth in our network currently is: we have 1 million subscribers, and the growth month on month is approximately 35 to 40 gig.

Moreover, on this broadband, we need to give more IPs to the subscribers. Why do we need more IPs? Because of the services: every home user now wants an IP phone, an IP for the TV, for music systems and for more appliances. That is why more IPs are needed.

In summary, why is IPv6 integration required? Because the IPv4 address pool is approaching exhaustion, and broadband users are increasing - the Indian Government is very focused on broadband subscribers, so this requirement is growing day by day.

There are also the IPTV services, which are a potential market in India, Class 5 VoIP, and other applications, like Mobile IP, which are going to come in the future. So basically these are the drivers pushing us towards IPv6 integration.

The next topic is migration challenges: if we want to migrate our network towards IPv6, what challenges do we face?

First, we need to assess the network - every aspect, every element. We divide it into mainly two phases: one is the infrastructure, the devices in the network; the second is the servers, the OSS, and all those kinds of applications. In the assessment we have to see what the current hardware is, what the memory sizes are, what kinds of interfaces we have and what the current CPU utilisation is, because when you integrate, memory and CPU utilisation may increase - when you move from IPv4 to IPv6, your look-ups are no longer on 32 bits. You need to look at all these aspects. What is the current software version, and what version do you have to upgrade to? Because this is not equipment we are deploying fresh now.

The network we have is almost five or six years old. That means all the elements deployed since day one are elements we need to assess: what capability do we have? This assessment needs to be done in each part of the network. Within the infrastructure, we can again categorise into core and access.

In the infrastructure, we have the core network - every large service provider has core and access. Looking at the core first: when we integrate, does the core have to be IPv6 aware, or can it remain IPv6 unaware? If it has to be IPv6 aware, what infrastructure do we have, and what kind of integration can we do? Can the existing elements support it? Do we need to upgrade? If we have to upgrade, what is the investment plan? Everything has to be assessed, and it is a big challenge, actually. If we make the core unaware - case one - we probably need not make any changes there and can start only with the access. And the access actually starts from the customers. Everything starts from the customer: is the customer ready right now to have IPv6? The applications start from the customer, so the customer has to be ready too.

If we go for native IPv4 and IPv6 only on the edge, then both the customer's equipment and the PE - the edge equipment - have to be IPv6 capable. That means we need to check the compatibility of both the hardware and the software, and of the routing protocols.

Specific to our network, we have an access network through broadband. If you look at the elements: the customers' equipment connects to DSL modems, which connect to DSLAMs, which connect through the metro Ethernet to the broadband RAS. So when we are integrating, we need to look at each and every element, not only the customer equipment. There are more routers and switches in between. Is the DSLAM supported or not? Is the broadband RAS supported or not? The broadband RAS further has to talk to the backend for authentication and billing purposes - we bill based on utilisation and based on time - so are all these mediation and billing servers on the backend supported or not? And dial-up: does the RAS we have for dial-up currently support IPv6? In this way, every element has to be assessed to see what compatibility we have when we want to integrate or migrate to IPv6.

Similarly, the approach I have taken is basically along service lines. When we want to integrate, it is better to approach it service line by service line rather than the entire network in one go. So the first thing is the access part - broadband, VLANs or dial-up. The next one is IPTV. If we want the IPTV network to be an IPv6 network, is it supported on the set-top boxes - and we now have various vendors of set-top boxes - is it supported by the OSS and BSS, and are the DSLAMs supported? Because we have to enable multicast on the DSLAMs also. These are the elements we need to look at.

Then Voice over IP - I believe every large service provider has these kinds of services. For Voice over IP: are the soft switches capable? If not, what do we have to do, and what software is required? The IP phones, the media gateways, the DHCP servers, the NMS, the DNS and the BSS/OSS part - everything has to be assessed before we start any integration on that service.

One aspect is the network infrastructure, which we assess service by service; we have seen what challenges we face, or what aspects we need to look into, when we want to integrate the infrastructure. But for a large service provider, the infrastructure is not the only important thing. An equally important aspect is the monitoring tools. What kind of NMS do we have? What tools do we have for customers, what logging, what kinds of reports do we provide to the customer, what troubleshooting methods do we have? For all of these, the NMS is a very important aspect. Our current NMS creates the topology based on IPv4. Is this NMS capable of handling IPv6? Can it show the topology based on IPv6? That also we need to see.

And the troubleshooting system - there is the capability to have an automatic troubleshooting system. What tools do we have in terms of network protocols - does SNMP support IPv6? These are the main things: is dual stack supported on these NMS applications? That is one aspect we need to check, and it is the major challenge in terms of network monitoring and management.

The other important aspect is network security. We have firewalls and appliances in our network - are these firewalls supported right now, and what upgrading do we have to do? It is basically a broad exercise where every element has to be mapped to see its compatibility and what operations we have to do. And threat protection: can the packet filtering that happens right now on IPv4 happen the same way in the new network, or do we need to upgrade some elements? And secure connectivity: can we connect over IPsec, and can we use the same hardware where we have hardware-based IPsec? Probably not - in a few cases we may need to change the hardware. That is one of the major challenges in terms of security.

As I was explaining, OSS/BSS is a big challenge: the mediation servers and the billing servers we currently have run on IPv4. We need to look at the licensing part and see how we make them dual-stack enabled. And one important thing is lawful interception. In India, every ISP should have a lawful interception facility - any packet going out of India should be able to be monitored. So is the existing infrastructure - not just for the Airtel network, but for all networks - capable of dual stack, is it compatible with IPv6? And the customer portal we provide: after the integration, we should be able to offer the same services to the customer without any compromise on quality and service.

And, as I explained, there is FCAPS - basically our fault, configuration, accounting, performance and security processes.

What are the other challenges? Though we may be ready to overcome all of these, is the customer ready? Why do we need to make all this investment if the customer is not ready? Probably one or two customers will be ready for IPv6 - do we need to make this kind of investment for only two or three customers? And what are the operational issues if we migrate from IPv4 to IPv6? Even with the existing, stabilised IPv4 infrastructure we have many operational issues - upgrade issues, software issues, downtime, card issues. The same issues can happen with IPv6, probably even more, because we don't know how it behaves. And training: people have to be trained on this. Without training, we will not be able to maintain the services.

So these are the major challenges if one has to migrate from the existing IPv4 to IPv6. Maybe a few services I have not covered, but for a large service provider, these are the major ones. We also have the GPRS network; if we have to integrate the GPRS network, we need to see what infrastructure we have and how we upgrade those elements. Like any service provider, we have to look at the challenges we will face when we upgrade.

Based on these challenges, we have set out a few design objectives for our network.

This is basically the overall scope of the IPv6 deployment. The scope starts from the services, as I explained, and also covers the access and the core: what deployment strategy do we use, what IPv6 services can we provide once we do this integration, and what routing protocols can we use?

So, looking at the overall scope, we will be approaching it service-wise.

Our plan is basically to do it in three phases. The first phase is to do it at the access level. Then slowly come to the core level and make IPv6 work within the core infrastructure. The next one is interconnecting with other IPv6 service providers.

The first phase, and the first objectives, are at the access level, so what are the design objectives? What design aspects do we need to take care of? First, IPv6 services to the end-users, with a service-wise approach: first the VPNs, then broadband, and so on, service by service, as we move forward. Less impact on the existing services, minimum operational issues, low-risk deployment and low cost - keeping the current market in mind, low cost has a very big impact. And easy troubleshooting.

There are many deployment strategies for integrating IPv6 at the access while the core remains IPv6 unaware. Four types of methods are there: tunnelling methods, IPv6 over dedicated lines, IPv6 over MPLS backbones, or IPv6 using dual stack. From the analysis we did, keeping our network in view, we will be approaching it with IPv6 over MPLS, because we have the existing network and the operational cost is comparatively less; compared with the tunnelling methods, this approach is good because troubleshooting is easy, and so is scalability.

Other design considerations: the addressing design. We already have our IPv6 allocation, so we need to plan - whatever we do, we should do it right the first time. What is our addressing scheme? How do we plan the addressing? Then the access design, because here we have an access network that may come in and terminate at different elements - if we have to move towards IPv6 integration, what should our access design be? The PoP design; the core - how it looks; the edge design; and the RRs. So we take the access paths - all these lines: the VLANs for broadband, the WiFi, the dial-ups, the metro network - and connect them to the PEs, the provider edge routers, which further connect to the MPLS backbone. This is what we are planning; currently it is not like this.
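As a rough illustration of the kind of first-time-right address plan being discussed, Python's `ipaddress` module can carve a hierarchy out of an allocation. The /32 (the documentation prefix), the /40-per-PoP and the /56-per-subscriber sizes below are illustrative assumptions, not Airtel's actual scheme:

```python
import ipaddress

# Assumed allocation: the IPv6 documentation prefix stands in for a real /32.
allocation = ipaddress.ip_network("2001:db8::/32")

# One /40 per PoP gives 2^(40-32) = 256 PoP blocks (the talk mentions ~120 PoPs).
pops = list(allocation.subnets(new_prefix=40))
print(len(pops))      # 256
print(pops[0])        # 2001:db8::/40

# Within a PoP block, one /56 per broadband subscriber: 2^(56-40) = 65536 each.
subscribers = pops[0].subnets(new_prefix=56)
print(next(subscribers))  # 2001:db8::/56
```

Laying the hierarchy out on nibble boundaries like this keeps delegation and reverse-DNS boundaries clean, which is part of what "doing it right the first time" buys.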

Currently it is as the arrow I showed on the earlier slide indicates. This is our approach for the integration.

And the PoP design. In the PoP, basically, we will have our OSS/BSS part, with the DNS servers, the colocation servers and all these things located there. In the PoP, we will connect to one of the major PE routers; further, we will have dual switches in the network, connected to individual segments, keeping security in mind.

Core design: there won't be any changes in the core design because, as explained in the previous slides, we already have an MPLS network. In the MPLS network we run the IGP as IS-IS, and the iBGP peering runs between the loopbacks. For IPv6 we can use the same method: on the MPLS network we run LDP, the sessions talk between the loopback addresses, and using 6PE and 6VPE we can use the same LSPs. So, in summary, if we want to integrate, there won't be any changes in our MPLS core.

But there will definitely be changes at the edge, because we need to make it dual stack. The core-facing interfaces will have no changes, but the customer-facing interfaces definitely will. We have to do the management configuration for troubleshooting and the security configuration for the security aspects. There is also the PE-CE routing design and, for 6PE, the PE-to-PE routing design. These changes have to be made in the edge design. For the RRs: currently we have them in full redundancy, and we will keep using them, keeping the operational issues in mind, with dedicated boxes.

OK, that is all on our design considerations. Moving to the deployment plan: once the design considerations are finalised, how are we planning to go ahead?

We will start with a service - with the VPNs, or with broadband, or the normal VLAN service - where there is customer demand; identify the elements we actually have to upgrade; and work through all the elements.

We have a task force for this. Otherwise, you know, if you look at the current operational scenario, everyone is busy with operations, OK? So we have to have a dedicated task force working only on this: to plan, to set up a testbed, to test the services and the network with one or two customers, and then move forward with other customers.

Then plan for the dual-stack routers: whatever we want to do, we can segregate the routers. We can do it on the same router, or we can have a separate router as well.

Then set up the DNS so that look-ups happen for both IPv4 and IPv6, and make sure the OSS and BSS have IPv6 readiness - only then will the broadband connectivity work. And then, once these are ready, interconnect these routers, as we are planning on the IPv6-over-MPLS infrastructure.
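The dual look-up being described - one name resolving over both IPv4 (A) and IPv6 (AAAA) - can be sketched with the standard resolver interface; `localhost` here is just a stand-in host name:

```python
import socket

def dual_lookup(host, port=80):
    """Collect resolved addresses per family; a dual-stack-ready DNS setup
    should populate both sets once AAAA records are published."""
    out = {"IPv4": set(), "IPv6": set()}
    for family, _stype, _proto, _canon, sockaddr in socket.getaddrinfo(host, port):
        if family == socket.AF_INET:
            out["IPv4"].add(sockaddr[0])
        elif family == socket.AF_INET6:
            out["IPv6"].add(sockaddr[0])
    return out

print(dual_lookup("localhost"))
```

A dual-stack client typically tries the IPv6 results first and falls back to IPv4, which is why publishing AAAA records only after the path works matters.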

We have many services on the MPLS network - we have the VPNs, we provide QoS, multicast, etc. - but we are not touching anything in the current MPLS core backbone.

On this IPv6-over-MPLS we can deploy various scenarios, and we will deploy them based on customer requirements. One is tunnelling at the CE, where there is no impact on our PE routers - only the customer end changes - and there are L2 VPNs. The third option is IPv6 on the edge router, that is, 6PE. We have chosen this method because of the low cost, as I explained, and since we already have this network, we feel it is more scalable.

So the core will be ready with this architecture. The MPLS core will be unaware of IPv6. The PEs will be upgraded to dual stack, with IPv6 reachability among the 6PEs via MP-iBGP, and IPv6 packets transported from 6PE to 6PE. So there won't be any change in the MPLS architecture, OK? Since the core routers are not dual stack, end-to-end troubleshooting of IPv6 will probably be a big issue, but that will be taken care of when we move the core to dual stack as well. So, to start with, we will integrate on the MPLS infrastructure without making the core dual stack; we will move the services first, and then we will deploy dual stack in the core also, OK?

Then we enable IPv6 peering, and we can also provide IPv6 transit services - services to enterprise customers and also to home users.

So this is basically our plan or the approach towards the deployment plan.

So, in summary: now is the time for everyone to start towards IPv6 deployment. Before doing that, we have to look at the migration challenges in 360 degrees - the analysis has to be done in all directions. A major one is investment and planning; a few of us will probably have to take approval from management for this migration, and when we do, we have to show them: what new business are we going to get when we integrate? What gets written off? All this has to be taken care of, along with a phased approach to migration.

So that's all from me.

Any questions?

Thank you for this.


My first question is when you are talking about migration, you mean you are dropping IPv4?




You mean you are taking down IPv4?


It's not a complete migration.


That is not migration. It is co-existence. Migration suggests you're breaking IPv4 and you're not breaking IPv4.

OK, the other question I had is that you said customers are not ready. Maybe you consider customers not ready because they don't have an IPv6-enabled CPE. What is the reason you think customers are not ready?


Actually, customers are more or less comfortable with IPv4 right now. As for IPv6 - we have 1 million subscribers right now, and we have not had even a single request for IPv6 services.


Do you think customers understand what's IP?




Do you think customers understand?


Yeah, I do. In India, our customers are more knowledgeable than service providers.


My view is that your customers, especially if they're using Windows Vista, are already using IPv6 without you knowing, and I can show you that. We can do measurements in your network and show you that you have much more IPv6 traffic than you believe. I can demonstrate that to you right now in five minutes, OK? So I think saying the customers are not ready is wrong. Maybe the customers are not asking for IPv6, but in fact, today, most of the applications running in Windows Vista, and most of the applications that are coming, use IPv6 by default, and if you don't provide an IPv6 service, they use automatic transition technologies like 6to4. If you are providing customers with a public IPv4 address, they will use 6to4; otherwise, they will use Teredo.
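The 6to4 behaviour described here is mechanical: a host with a public IPv4 address forms a /48 under 2002::/16 by embedding its 32 address bits (RFC 3056). A minimal sketch, using a documentation IPv4 address rather than any real subscriber's:

```python
import ipaddress

def sixto4_prefix(public_v4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 /48 that a host builds from its public IPv4 address."""
    v4 = ipaddress.IPv4Address(public_v4)
    # 16 bits of 2002::/16, then the 32 IPv4 bits, then 80 zero bits.
    prefix_int = (0x2002 << 112) | (int(v4) << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

print(sixto4_prefix("192.0.2.33"))  # 2002:c000:221::/48
```

This is why an ISP can carry IPv6 traffic "without knowing": the host derives its prefix itself, and the packets ride inside protocol-41 IPv4.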


On this conference network we have both 6to4 and native v6, and all I see is one flow per hour of v6 traffic. So something you say is pretty wrong. I don't think anybody does that.


We need to check how many users we have here with Vista. I can show you graphs taken in ISP networks where the traffic which IPv6 -


Maybe you're skimming the graph. We're running the network here and not seeing traffic and I'm sure everybody in this room has a v6-enabled computer.


That's the question. How many people are using Vista here?


Mac is native v6. I still don't see traffic.


Macs don't support Teredo, for example.

The last question I have: you talked about a big number - a million customers, if I am not wrong. Have you already got your allocation for v6?




You already got your allocation from APNIC?


Yeah, yeah.


You have a big number - a million customers?


Currently on broadband we have a million subscribers.


A million subscribers - and what is your idea for the prefixes you will provide them? A /48, I guess, or a /56?


The current IP allocation?


For your IPv6 subscribers?


IPv6 subscribers we don't have right now.


What is your plan? You are going to provide them with what?


I'm not sure what kind of subnet we'll be giving to the customers. Depending on the customer requirement, we will decide on that.


Yeah, because if your allocation is, I believe, a /32, right, then it's probably really, really short for the number of subscribers you have. You will need to reconsider that very quickly.
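The sizing concern raised here is plain arithmetic: an allocation of length `a` yields 2^(s − a) per-subscriber prefixes of length `s`. Assuming a typical /32 LIR allocation (the talk does not confirm Airtel's actual size):

```python
def subscriber_prefixes(alloc_len: int, per_sub_len: int) -> int:
    """Number of per-subscriber prefixes available from an allocation."""
    return 2 ** (per_sub_len - alloc_len)

print(subscriber_prefixes(32, 48))  # 65536 /48s: short for a million subscribers
print(subscriber_prefixes(32, 56))  # 16777216 /56s: ample headroom
```

So with /48 per subscriber a /32 runs out well before a million customers, which is exactly why the questioner suggests reconsidering the allocation or the per-subscriber size.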




OK. Thank you.


OK. Thanks.


Actually, my question to Aruna is you've done all this planning. How long do you think it will take to migrate?

Bijal said they took months and months of planning, three days of work. Have you looked at how long it would take to migrate your network? Your network is a bit more complex than FLAG.

Actually, the point about Bijal might be that they migrated the production network, which is for the customers, while they still can't migrate their internal IT to v6. So maybe that says a lot about why customers don't want v6 even when providers can offer it more easily. That probably says a lot.


I guess related in some way to what Gaurab and Jordi are saying: one of your drivers for the deployment of v6 was the potential for the v4 free pool to be exhausted, so that you're no longer able to obtain further v4 addresses. How does your plan, as it stands at the moment, cater for the demand when, say, you have a dual-stack service for the customers but you are out of free v4 addresses? Are you currently allocating public v4 addresses to broadband subscribers, or private space, and how does that change in your v6-enabled service?


Sorry, come again?


So your subscriber broadband service as it stands today is v4 only. Are you using 1918 address space, private address space, or using public address space?


We are using public address space.


So presumably you have a pool of v4 address space which is reducing.


Definitely, yes. And we do use it on the dynamic mode.


So you dynamically allocate whatever you're using. As the predicted exhaustion of the v4 free pool approaches - 2009 to 2018, whenever it actually comes about - do you have a plan for, once that pool is gone and you're no longer able to obtain predictable amounts of v4 address space, what you do with your v4 demands? Assume for a moment that the demand for native v6 traffic doesn't increase over what you have right now. Do you have a plan to deal with the case where v4 demand stays the same or grows, while v4 address availability reduces, drops off, goes away, as the predictions are currently pointing towards?


See, currently we are actually at the very initial stages. We don't have any plan right now but, moving forward, yes, keeping in view what you have said, we may do something like that.


Is NAT the most obvious -


Currently NAT is not in the network - for a few services, GPRS kinds of things, NAT is there, but in the other networks we have not enabled any NAT. We will not be able to use NAT because we have to provide accounting logs to the regulatory people.


Even so - if, out at the points of public connection, you have several NAT gateways and record information there that can correlate between public and private address space, you can use NAT. There's no requirement that says you can't. So it is a possibility?


It's possible, yes. We can do that.


Have you considered what to do then?


We can have NAT. Suppose we have a NAT network: we can see which user logged in, what time he logged in, and what IP was allocated - that can be seen in the logs.
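The accounting requirement described - answering "who was behind this public address and port at this time?" - is what per-session NAT logging provides. A toy sketch (field names and the in-memory list are illustrative, not any real BRAS or NAT implementation):

```python
import time

nat_log = []  # in a real network this would be an indexed, persistent store

def log_translation(user, private_ip, public_ip, public_port):
    """Record one NAT binding as it is created."""
    nat_log.append({"ts": time.time(), "user": user, "private": private_ip,
                    "public": public_ip, "port": public_port})

def who_used(public_ip, public_port, at_time, window=3600):
    """Map a (public address, port, time) observation back to a subscriber."""
    for rec in nat_log:
        if (rec["public"], rec["port"]) == (public_ip, public_port) \
                and rec["ts"] <= at_time <= rec["ts"] + window:
            return rec["user"]
    return None
```

The practical cost is log volume: every session creates a record, which is one reason carrier NAT complicates, but does not preclude, regulatory accounting.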


But back to the question of whether there is a plan for increasing v4 content demand being dealt with in your v6 deployment scenario - there is no case that deals with that? You're assuming that your v4 pool will stretch far enough to cater for it, or that you'll have to turn NAT on?


I'm not clear on...


Take, for example, a host that is dual stack right now: it can see any v6 content, and that's fine as long as there is a v6 transit path. If it needs to connect to a v4 host somewhere on the Internet, it needs to have a v4 address. If your available v4 pool has shrunk to the point where you don't have enough public v4 addresses to provide that connectivity, you have two choices: you either do some sort of proxying from a v6 connection, say on your CPE, and turn that into v4 native traffic, or you do NAT.


Currently we don't have a plan, actually. That's what I am saying - we are at the initial stage. Probably when we don't have v4, we will see. Currently we have enough v4. We are a very large member of APNIC; we have enough v4 pool available in our network.


Is there any tie-in in your future predictions to, say, Geoff Huston's v4 address exhaustion reports? Are you making your own predictions on when your v4 resources will run out?


It could be another three or four years. That's why we have just started our plan, and we'll be having a testbed - testing will happen. We'll see how we integrate or migrate and what kind of infrastructure we need. We'll be working on that, actually.


It's far enough away not to worry about it just yet. OK. Thank you very much.

ADARSH SINGH: I am Adarsh from HCL. I have this question basically for Gaurab, who was talking about migration from the backbone. I'm from a service provider background and I've been communicating with Gaurab in the IPv6 forum. The point is that the end customer today is not asking for IPv6 applications. The end-user, in India and outside India, is not asking for IPv6, and that's the core reason that, although most of the service providers in India are IPv6-ready, there is still not much traffic on it. I wanted to share that with Gaurab and the team out there.


You see, if the customer asked right now, we could do that too. As he rightly said, the customer is not asking now. If we get the demand, we will definitely move - demand from the customer would drive us to move faster towards the co-existence and integration strategy. Thank you. Thanks.



Thank you very much.

So finally we have to give a gift to the speakers.

OK, we can start the tea break.

Then the next session starts at 4:30 sharp.

OK, thank you very much. That's all.

(End of session)

APOPS Session Three

Wednesday 5 September 2007 1630-1900


Can everybody please take their seats?



Good afternoon, everyone. Those standing on the door, can you either come in or go out? Maybe they can't hear me?

So welcome back. We start with the third APOPS session today. We will begin with a bunch of updates on exchange points - updates from NPIX, BDIX and NIXI - followed by an ISOC update. OK. Thank you.

So we'll start with the BDIX update, and then we'll follow on with the BGP aggregation report by Philip Smith. Then we'll go into the security session, which will start maybe 30 or 45 minutes from now, and carry on from there. I would now like to invite the speaker for the BDIX update.


Good afternoon, everybody. I will give an update on BDIX.

For the newcomers, a bit of history. Inception was in August 2004, funded by UNDP and organised by SDNP. The initial plan was connecting ISPs through radio, but that didn't really work out.

The problem is that in Bangladesh we still have a problem with content. Still, we do have traffic - we see some traffic, and we see a rise in traffic. There is another IX in operation, called BSIX; fortunately we are now peering and working together. Altogether, 21 ISPs are peering at BDIX. In May 2006, the aggregate traffic at BDIX was 2 Mbps and total international capacity was around 200 Mbps. Now the traffic is quite significant - it is running around 7 Mbps - and total international capacity is 800 Mbps at this moment, going to 1.2 Gbps.

New challenges - we are facing some. The local traffic volume is still very small, but we have been working on VoIP for more than eight years. Then, one month ago, the new BTRC policy was declared, and it comes with four different licences - I don't know if the slide is visible from far away. One is the IGW licence: two licences will be given for international voice traffic, just for voice, delivered to the local mobile and fixed providers. Then two more licences for international data traffic, which will also deliver to mobile providers. And the last is the IX licence, the Internet exchange licence. They are basically focusing on international Internet traffic coming in, and there are another two organisations we have to beat to get those licences. The IX should also handle local traffic - that is the focus: local traffic should be kept separate from international Internet traffic - but it will take time to rectify, because they already have everything in place, so they cannot do anything at this moment. But BSIX and BDIX don't need any licensing at the moment - that's the good news for us. There are IP telephony licences for the ISPs only; that is good news for ISPs. We're hoping we can introduce a telephony network to encourage people to get more broadband connectivity, get some business out of it, and then at the IX we should see a good amount of traffic. That's all for today. Thank you very much.


Thank you so much. Thank you. Next up, I'd like to ask Jatinder Kumar from NIXI to come up.


Thank you very much. I have worked in IT for about 25 years, and this is how we started the Internet exchange in India. It started as a not-for-profit organisation under Section 25 of the Companies Act 1956. In India, any company which is to be formed has to go through the Companies Act, and Section 25 is for those companies which are not for profit - that means we normally only cover our expenditure, or plough any surplus back.

In July 2003, the initial funding came from the Department of Information Technology, Government of India. It was about a million US dollars, which helped us to set up four exchanges.

This is a neutral Internet exchange - that means it is not controlled by any international provider. It is a company run by a board of directors, in which five members are from the Government and some from Indian colleges, and nine members are from elsewhere.

In fact, when we started in 2003, our only line of business was Internet exchanges. But subsequently, in 2004, the Government took a decision that the .IN top-level domain operations would also be handed over to NIXI, so that project came to us as well, and it now has more than 2.77 lakh domain names. We set this up and we carry out registration through 46 registrars located all over the globe - in Germany, the US and many more countries - who take the bookings. Incidentally, I'd like to give you some background: this registry was previously run by a government organisation, one of the societies, and in 10 years they were only able to book 6,600 domain names. Imagine - 6,600 in 10 years, whereas in 2.5 years we have crossed 2.77 lakh. Internet exchange operations are present at four places, all metros: Chennai, Delhi, Mumbai and Kolkata. NIXI is planning to extend its Internet exchange project by establishing more NIXI nodes in other states, so that more states have Internet exchanges. The targeted places for 2007 are Mohali in Punjab; Bangalore, Karnataka; Hyderabad, Andhra Pradesh; and Jaipur.

This graph gives you a rough idea: in September '03, when we started operations, we did not have any ISPs connected to us, and this is how we have grown. Today we have 54 ISP connections across the four exchanges which are operating. The number of ISPs connected at the four places: Delhi - 18, Mumbai - 21, Chennai - 11 and Kolkata - 4. Mumbai is the commercial capital of India, so that's where we have the maximum. Thank you. I was given instructions - no more than six slides, no more than five minutes. Any questions are welcome. Anybody? Perhaps it was too short to ask any questions.


Are the ISPs at different exchanges connected to each other? Can one in Kolkata and one in Delhi be connected?


They are not but we have a plan and very shortly we are going to connect all four cities.


Will the other seven be connected?


Frankly speaking, in India we have three types of ISPs: Category A, Category B and Category C. Category A is the largest, operating across the country. The majority of ISPs connected are Category A, and we want Category B and C to be connected also. Thank you.


Thank you very much. Up next, we have Serge Radovcic.


I've been sick the last couple of days and I have to go to the little boys' room quite often and fast, so can everyone keep the bags away from the door, because I may have to run very quickly in that direction. Thank you for that. I'm just here to give a short presentation on the European IXP scene. Next slide, please. A brief history of the exchanges in Europe: the earliest exchanges came around 1993. Nick Onslow was the first Internet exchange we had. After that there was steady growth through the years, until we get to this point around 2001 to 2003, where we see this little burst of IXPs appear. This bubble of three years represents about 40% of today's IXPs in Europe - it was quite a boom time for the IXPs. Since then growth has been quite steady; last year we again had lots of new IXPs. About two-thirds are not-for-profit and I would consider one-third to be commercial.

This is a map of Europe. All the black spots aren't where the IXPs are, by the way - it might be a bit misleading. For example, you'll see down the bottom, Malta, a tiny little island - they've got an IXP. Iceland has an IXP. Luxembourg, a landlocked country in Europe - they have an IXP. There's an IXP in every corner of Europe. France has 19. The UK has 14, with about 9 or 10 of them in London. Germany has 13, but they're spread all over the country, unlike in some of the other larger countries. Sweden has nine, and I'd point out Russia also has eight. I'm not going to make my five minutes. There are 103 active IXPs in Europe today, in 31 countries and 96 cities, with 282 sites across Europe. Some of them have only one site; however, some of the larger IXPs have around 26 sites, and in about three or four of those cases they are interconnected internationally.

This is a look at the statistics of all those IXPs, aggregated. Peak traffic - all the IXPs in Europe don't peak at the same time; that would be wonderful. But a lot of the northern IXPs peak at around 9 o'clock at night, and Scandinavia at around 10 o'clock at night - after working hours. Currently this graph represents around 80% of the IXPs in Europe: 60% comes from publicly available statistics and the other 20% comes from contacts I have at the IXPs. The last 20% is a grey zone - I don't have a contact, or they don't have the information to give me.

Currently, peak traffic has risen to 1.125 terabits per second, aggregated across all the IXPs in Europe.

This is a breakdown of the big boys in Europe - the top 20 IXPs. If you're interested in going to Europe and peering, have a look at this list; this top 20 represents more than 80% of the traffic. Top of the list is Amsterdam. This was taken last week, but I think on Sunday they broke the 300Gbps barrier - it could be the biggest single peering point. I couldn't get accurate figures for some, so I didn't throw them all in. Just for Kurtis, Netnod have two in the top 20, and there are another three in Sweden that just missed out. So that's the top 20.

I've been looking at the stats for quite a number of years, and what I've done here is circle the summers. You can see that from about 2005 there's quite a big drop in traffic over the summer. This is due to the weather and the summer vacations in Europe - huge traffic drops. The most interesting one is 2006: we had a long, warm summer, and the football World Cup was in Germany, and no-one was at work - they were all watching the football and the traffic really dropped. That's the funny dip you see in 2006. 2007 really showed how much the weather influences traffic. In April 2007 we had four weeks of really warm weather - not what the Indians call warm, but what the Europeans call warm; 25 degrees is unheard of. Everyone finished work and went outside to the terrace or to the beach or whatever. Then that weather stopped and it started raining, and the traffic went back up again; then the holidays kicked in and the traffic went down; and now the holidays are over, so the traffic has gone back up again. Almost 500 million people in Europe tend to go on holiday at the same time. I have a little baby and went on a holiday, and it's crazy on the roads - 500 million people on the road. To you guys in India that's nothing, you do that every day. But anyway - sorry, I'm at six minutes. There are 4,000 participants at IXPs in Europe. This is going to be made available, so you can download the slides.

Trends in ports: 10Gbps ports are on the way up, and 10Mbps ports have just about disappeared in Europe. That's ports. Fees - I've highlighted the 1Gbps price, as this is the one port all IXPs in Europe tend to offer. It is 826 euros, and that is only the connection, so you might also have to pay a one-time fee or recurring housing costs, etc. Lastly, this is a breakdown of the switches, for those vendors out there who might be interested. There are around 400 switches in use at IXPs around Europe. Force10 are also entering the market, and Glimmerglass - they are all patrons and all get involved with the IXPs. I'm sorry I had to rush it - I went 7 minutes and 30 seconds. Gaurab, you'll make the presentation available?




OK. Feel free to contact me. I'll be around for the next couple of days if you have any questions. Can I take any questions now? Are there any questions? One question? No.


How many of the Internet exchanges connected to you have IPv6?


We have 36 member IXPs in Europe. I'd say about 30 of them - more than 80% - are implementing IPv6. The take-up is slow but it's getting there, and there's a big push, especially for the take-up. We're doing what we can. It's slow, much slower than it should be, but it is there. There are a couple of smaller IXPs - there's one in Switzerland that has around 30-40 members, and they have 100% take-up, though it's not an IPv6-only exchange. Thank you.



I will do an update on NPIX. It's all volunteer staff - CEO and everything. We're celebrating five years of NPIX in Nepal. There are 12 members now, and one is in the process of joining. There is about a /16 of aggregate address space behind the exchange. Prefixes leak out for traffic engineering and things like that, so I stopped counting the number of prefixes, but the address space is around a /16. There has been 160% annual traffic growth - it's been going up and up and up. A second location is upcoming: we have only one switch, but we will very soon establish a second switch. There are also issues with power, with access networks and things like that, but things will be happening pretty soon. We are using a Cisco switch, a looking glass, mailing lists and the works.

As of a few minutes ago, this is what it looked like. You can see that yesterday was a holiday - a festival, right, so a holiday yesterday. And then people go to sleep, and there's almost no traffic from 3 o'clock to 5 o'clock, and then it goes up and up. Then there are tea breaks and lunch breaks.

This month, you can observe that on all the weekends the traffic level goes down, and during the week it picks up; then there's a big day, and Fridays slow down. This is what it looks like over here. Some of those down times are not real - it's the access links going down badly. Growth has been pretty massive over the years: when we started five years ago, traffic was around 32, and we were happy about that. Questions? Thank you. APPLAUSE

Next up we have Philip Smith with the BGP aggregation report.

BGP aggregation report - Philip Smith


Sorry, everyone, for being slightly disorganised up here. I didn't realise the cable I was using didn't reach up to the podium. Let me start it again.

I'm going to try and go through this pretty quickly. I've called it the aggregation report, but it has really become all about deaggregation - the deaggregation of the Internet routing table as we see it at the moment. This all grew out of work that I did with co-authors within the RIPE Routing Working Group in the European Internet industry. I did the work with Mike Hughes. It grew out of work that LINX, or at least the LINX membership, had been doing, trying to come up with aggregation recommendations for their membership. There's a whole big history behind that; Michael Titley has been at one of the SANOG meetings before and talked about it.

We tried to come up with some kind of documentation of current practice in the industry for proper aggregation of announcements to the Internet. So the RIPE-399 document discusses the history of aggregation and deaggregation, the impacts on the global routing system, and the available solutions, and it comes up with some recommendations for ISPs.

I'll quickly go through those. The history we covered included the clean-up efforts after the migration from classful to classless routing. There was a report, started by Tony Bates, to encourage ISPs to move over from classful to classless routing. It was mostly ignored through the late '90s - apart from by marketing departments, who noticed their ISPs were at the top of the list and thought it was a good thing, whereas in fact it wasn't. So it kind of languished and more or less disappeared. I took it on because Tony moved on to bigger and better things with his employers, and then Geoff Huston started getting a bit more interested, so he is now the current custodian of the CIDR Report.

The document also has an introduction to the Regional Internet Registry system, and it goes through some of the claimed causes of deaggregation. These are some of the excuses and reasons that I and my colleagues got when we were trying to encourage ISPs to be better about aggregation. Announcing /24s means that no-one else can DoS the network - reduction of DoS attacks and miscreant activity. Announcing only the address space in use attracts less noise - we're attracting some noise here at the conference, but not a huge amount of noise. We got answers saying, "Mind your own business." People were saying to me, "If you keep annoying me, I'll tell people to stop buying Cisco."

Leakage of iBGP outside the local AS is another cause - but eBGP is not iBGP; they're very different. Traffic engineering for multihoming - that's the favourite excuse: all the /24s are caused by traffic engineering! Have a look at them and you'll find they are not. I reckon about 50% of the excess announcements are due to traffic engineering. Other people say, "It's all the legacy assignments that were made." But the registries have been around a long time - the RIPE NCC, at least, has been around since 1993 - and if you look, the deaggregated announcements are pretty evenly balanced between the legacy assignments and the allocations coming out of the registry blocks.

So much for excuses; now on to impacts. Deaggregation affects router memory: it shortens router lifetime, as vendors underestimate memory growth requirements, and the depreciation life cycle is shortened. The vendors love it because they can sell more stuff. The same goes for router processing power - you need to upgrade the route processor functionality. Vendors really love all this deaggregation because they can sell more kit more often. Then there's routing system convergence: the bigger the routing table, the slower the convergence. This is easy to demonstrate. Network performance and stability suffer - it takes longer to converge and you end up with a more unstable network. And you can't just say, "It's probably your PC; reboot it and maybe the Internet will work properly after that." It all has an impact on the network: you get longer down time and unhappy customers.

Solutions - there are lots of them, surprisingly. The CIDR Report has been around since 1994, and there's a wonderful web page which goes with it: you scroll to the bottom, there's a box, you put your AS number in and it will tell you how you're doing. Could you ever want a better tool than that?

I've been doing a routing table report which looks at the routing table within each RIR region. Rather than the CIDR Report giving the world's worst 30, I can do it for each region. There have been filtering recommendations, loads of BGP tutorials, training, registry awareness efforts and so forth. There was a CIDR Police activity as well, which a few of us were involved in, and that fell flat too. There's the NO_EXPORT community, which some people use, and the NOPEER community (RFC 3765), which the community seemed to want but nobody seems to be using. AS_PATHLIMIT is still working its way through the IETF IDR Working Group. So the recommendation from RIPE-399 is announcement of the initial allocation as a single entity, with the expectation that subsequent allocations should be aggregated with it if they are contiguous and bit-wise aligned - if the registry gives you the next block, you can put them together - not spraying out /24s in the hope that something actually works. The same thing applies to v6 as well: if you look at the v6 routing table, it's heading towards the same messy disaster. So there's the CIDR Report and the routing table report - plenty of tools available. In my routing table report I've also put in a deaggregation factor, which shows where the deaggregation is happening, rather than just the global deaggregation.
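[Editor's note: the "contiguous and bit-wise aligned" condition above can be checked mechanically. As an illustration - not part of the talk or of Philip's tooling - Python's standard `ipaddress` module collapses adjacent, aligned prefixes into a single supernet, which is exactly the aggregation RIPE-399 recommends:]

```python
import ipaddress

# Two contiguous, bit-aligned /16s (say, two successive allocations)
# collapse into a single /15 announcement.
allocations = [
    ipaddress.ip_network("198.18.0.0/16"),
    ipaddress.ip_network("198.19.0.0/16"),
]
print(list(ipaddress.collapse_addresses(allocations)))
# -> [IPv4Network('198.18.0.0/15')]

# A non-contiguous pair cannot be aggregated and stays as two routes.
scattered = [
    ipaddress.ip_network("198.18.0.0/16"),
    ipaddress.ip_network("198.51.100.0/24"),
]
print(len(list(ipaddress.collapse_addresses(scattered))))
# -> 2
```

[The example prefixes are the RFC 2544 benchmarking and RFC 5737 documentation blocks, chosen purely for illustration.]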

I also do something where I work out the aggregation that's possible: I take the routing table and aggregate it, using the view that I have. I can actually reduce the Internet routing table by 100,000 prefixes simply by doing this. That's quite a significant number - almost 45%.
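[Editor's note: as a toy sketch of the idea - not Philip's actual tooling, which works per origin AS over a full BGP view - the deaggregation factor is simply the number of announced prefixes divided by the minimum number left after maximal aggregation:]

```python
import ipaddress

def deaggregation_factor(prefixes):
    """Announced prefixes divided by the minimum needed after aggregation."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    aggregated = list(ipaddress.collapse_addresses(nets))
    return len(nets) / len(aggregated)

# A /22 sprayed out as four /24s scores 4.0;
# announced as the single covering block it would score 1.0.
sprayed = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
print(deaggregation_factor(sprayed))
# -> 4.0
```

[A factor of 1.0 means a perfectly aggregated table; the 1.92 global figure quoted below means roughly twice as many prefixes are announced as strictly needed.]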

So if you look at the global Internet table - today it crossed 230,000 prefixes in my view. The deaggregation factor is about 1.92. That means if I do my aggregation thing, I can probably pull the table down by 100,000 prefixes or thereabouts. If we look at ARIN's region, there are 109,000 prefixes originated from there. In the RIPE NCC region - the Internet in Europe has been obsessive about doing the right thing and aggregating - there are 58,000 prefixes. On the next slide, the Asia Pacific region has a deaggregation factor of 2.1. Latin America is heading off in front at 3.73 - even though they're announcing only 17,000 prefixes, they could reduce that by almost a factor of four if they bothered to aggregate a little.

So let's look at the graph that shows what's been going on - this might even do something. The yellow line is the global increase in deaggregation. The green line is AfriNIC; you can see them bouncing around quite a bit - they had steadied off, but now they're shooting up again. LACNIC was founded around 2002, and they've just headed off into the stratosphere. If you look at the RIPE NCC, it's fairly stable, increasing slowly. This is the ARIN region - almost steady; they're not deaggregating much at all, which is quite impressive. And the APNIC region is steadily heading onwards and upwards.

So let's look at the potential savings. The top 20 deaggregators in Africa - not that interesting, I don't think. If you look at the Asia Pacific region, in this part of the world there are names who could be doing a lot better. Some do know they have an issue and are working hard at it; others are like, "Mind your own business." If you look at some of the possible savings, Chinanet know they can make changes and they're trying - rather than announcing 500 /24s, they can probably reduce that a bit, and so on through the list. Fairly major reductions can be made; you can see there's an impressive amount of savings if people pulled their fingers out and did something. Look at North America - spectacular deaggregation there. Covad Communications could pull it down to 9,000 if they tried hard. And so on - lots of examples like this. I'm e-mailing this list pretty much every week to various operational mailing lists. Take a look at it - you can see what you can do to improve your lot. Look at the European one again: in the Europe and Middle East region it tends to be the edges rather than Western Europe - it tends to be the newest service providers who are throwing prefixes all over the place.

Observations: there's a pretty huge gulf between the operational practices of the older and some of the newer parts of the Internet. ISPs in the older part know they get an address block from the registries; the newer part still think they're getting lots of Class Cs, and then they spray them out. RIPE-399 is only a recommendation - it's not that you must do this. I'd go stronger and say that you should do it, because it's good practice. If you like getting bigger and bigger routers and a slowly converging network, that's fine. I imagine there will come a time when the global providers get upset at carrying 230,000 prefixes and start chopping the table down. It has happened before; I can imagine it will happen again, once they really don't like buying new equipment every few months from the vendors.

Conclusion - newer Internet is growing rapidly as is the deaggregation. RIPE-399 now exists. Go read it. Implement it in your network. That is all I have. Any questions at all?


The slide about Europe - you said it's one of the better regions. Is that something recent?


It has always been that way. If you look at the graphic up here - let me pop it up somehow. Let me see; it takes a while to come up. If you look at the RIPE region - if you look back to February 1999, it was 1.2. So it's pretty spectacular.


I'd like to make an announcement that we are on the eve of starting a version of SANOG in the Latin American region. One of the things we were discussing formally before we started this meeting - are you able to hear at the back? - so, the point is that we are going to start an operations group very soon in the Latin American region, and one of the first things to discuss is this, the aggregation factor. So stay tuned. We hope we can participate in this.


I'll be delighted to help. I'm not doing this presentation to complain; I'm doing this presentation to offer help. If you people want help, I'm happy to give it.


It's OK.


On some of the policy mailing lists, there are some saying that provider independent assignments will make tables explode to incredible sizes, and I was wondering what your take would be on that one?


My take on all this: /24s account for just over half of the IPv4 table already - so whether you provide PI space or not, would it make any difference? The only people who need IPv6 provider independent address space are those who say, "I need to multihome." If you have your own address space because you need to multihome, that seems an entirely reasonable use of v6 addresses. Trying to multihome with somebody else's address space is really hard, so rigidly sticking to ISP-only assignments doesn't work. Multihoming, to me, is legitimate; random end-sites getting random bits of space would be harder to justify. But on the other hand, I work for a router vendor and we'll sell you bigger routers.


We've seen this for a long time, and one day we'll find an appropriate way to thank you for your tireless efforts in continuing to present this and offering your help. I'm a bit saddened people don't take this help from such a nice guy as you more often, because when you look at the graph next to you, people aren't paying attention, are they? It looks to me like this is getting worse and worse over time, and people are wondering why. There is one school of thought that we will see trading markets for v4 addresses, and that routing slots will run out and providers will start to charge you for them - you asked for it. And my question is leading more towards this: short of having police or electric shocks - a lot of this is rumoured to be traffic engineering by the people who don't care, the "mind your own business" people, and because they don't have the capability to handle this, the way it works is very poor - this report is one of the few tools that people actually have. There are some discussions at the IETF about doing more about this, using other tools and other means.


Clearly this path isn't working. The current multihoming solution, whether for v4 or v6, doesn't work - it doesn't scale. Any work that's happening to look at anything else is a good thing. I know of one activity that looks pretty promising from my point of view. Again, whether it's going to be deployed, or will be accepted for deployment, is another issue altogether.


As Philip mentioned, many years back a colleague of ours and I worked together in an initiative called the CIDR Police. We took the weekly top 10 list - well, the top 5 off the top 10 list - and we said, "Do you know that, one, you're doing this, and, two, do you want some help?" 60% of the people were like, "Oh, I didn't know. Can we get some help?" So a lot of this is really a big education issue. Of course, we had a few out there who said, "No, no, no, we're going to deaggregate because we found out it's the best thing. We don't want to listen to other ideas." We talked to others and they said, "OK, yeah, we can do that." They did it, and then had security issues because they had hijacks, and so they went back and deaggregated again. So education is one of the things that maybe we can go back to.

Right now we don't have anybody doing that. That's something we did for about two years, every week spending about six hours knocking on doors and saying, "Pay attention to what you're doing on your network." If anybody here wants to take on that task, raise your hand - we can help out. One of the interesting things is that while we were doing this, we had networks - like a network in Turkey - that said, "We need help," and you'd have friends around the industry who volunteered and helped them out.


But what most of the data shows is not that people need help doing this - they are doing it just fine, deliberately. And that's the problem, right?


I'm from Cisco, and one thing I'd like to understand better is this traffic engineering issue - it's something we need to look into a little more and understand what exactly happens. I understand from one of the providers in India that they can only get a limited maximum link speed from many upstream providers. This is actually one of the reasons we see prefixes being announced as more specifics - to spread the load so they can push traffic across those links. I'm not sure whether this is really part of the problem that's causing the deaggregation. I do understand there are providers who are making mistakes and they need to correct them, but I'd like us to look more at this issue: if the constraint is what an upstream provider can deliver on the international side, is there a way of addressing that?


It's an engineering problem. This debate started on NANOG yesterday, so it's going on right now. Jumping to the conclusion that everybody is doing it intentionally is wrong. One of the things that has made me interested, and that I'd like to get somebody to look at, is the bottom 50%, because that's something I'm looking at. We keep our focus on the top - what's going on at the bottom? At the bottom there's a lot about the big providers and how customers multihome - customers who multihome and go out to two different service providers' links - and there are things the big service providers can do. I'd be interested to see some research: if anybody here is a researcher and wants to get involved in this, look at the bottom 50%. There are things that can happen there.


We're running out of time. If there's enough interest in talking about deaggregation and other things, we have a BoF slot tomorrow. Can I get a show of hands - how many people would like to come back to this topic, to report on it and discuss it further? OK, I'll make arrangements with the local host; one of the rooms downstairs will be used for a BoF on deaggregation. Next up, Anne Lord.

IETF/ISOC updates - Anne Lord

ANNE LORD: I'm a senior member of the Internet Society. I'm going to present a short update on the activities of the Internet Society. For those of you who may not be familiar with it, its mission is to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world. That's a bit of a mouthful, and we tend to summarise it by saying 'the Internet is for everyone'. It was established in 1992 as a not-for-profit charitable organisation. It is global, but has a local perspective through the 80 or so chapters which operate around the world - that includes the recently formed ISOC Chennai here in India, and also ISOC Delhi, which is the subject of rejuvenation efforts in a BoF after this session. We also have 26,000 individual members and around 150 organisational members, including the likes of Google and Microsoft.

Sole focus of the Internet Society is, not surprisingly is the Internet, and it does that through activities in education, policy and standards. So in the education area, it fosters opportunities for technical education and skills transfer and capacity building throughout the world. In the policy area, it's active in promoting policies that support Internet growth and providing leadership on issues that address the future of the Internet. And in the standards area, it is actually the organisational home of the IETF, the Internet Engineering Task Force, where the standards are made.

In terms of activities in the policy area, ISOC is active in promoting its policy principles and values that basically support the evolution of the Internet as an open, decentralised platform which supports innovation, creativity, and economic and social advancement and it's active in policy making bodies like the ITU, it's a member of ITU-D and ITU-T and also in OECD, where it's been invited to act as a representative of the technical Internet community.

ISOC is also active in leading the Internet governance debate and has been a participant in the WSIS process, the World Summit on the Information Society and now through the IGF, which is a multistakeholder forum where all of the parties and stakeholders are meeting on an equal footing. It's a dialogue forum. So the governments are coming together, civil society, ISPs, business interests and other stakeholders.

So basically, ISOC is involved in defending and promoting the Internet model. By the Internet model we mean the open, transparent and bottom-up processes that have governed the Internet's technical administration, as we know them from meetings like the RIR meetings, the IETF and so forth.

It's doing that by regionalising and localising Internet governance discussions as much as possible. Earlier this week, in fact, we had a seminar in which ISOC, APNIC, the government, civil society and the Internet Service Provider Association came together on a panel to discuss Internet governance issues for India and for 2008. The next IGF is in fact in Rio, and the one after that, in 2008, is hosted by the Government of India, so an important year is approaching for India.

In terms of education activities, ISOC supports regional network operations groups such as this one - it funded the fellowship program here, which enables individuals to participate. It also organises, alongside RIR meetings and operator meetings, INET days which bring together technologists and policy makers to discuss issues of regional and national importance. ISOC facilitates country ccTLD workshops and tutorials by providing hands-on technical training through its partners, and it's involved in the research and development small grants program operated by Pan Asia - grants are available for R&D and ICT projects - and it provides support to miscellaneous workshops in things like IPv6 and routing security and so forth.

Priorities for the Education Department are really to expand support for training programs, working with partners such as APNIC and SANOG, to promote IPv6 education, awareness and training, and also to focus in the future on wireless and security training. It's looking at increasing its efforts to bring policy makers together with technologists, to inform and educate them, and to deepen its engagement with regional and national organisations like the Pan Asia grants program and UNDP. It's focused on enhancing awareness and visibility of the IETF, and it's doing that through the ISOC fellowship program to the IETF. The aim of the program is to raise global awareness about the IETF, to foster a better understanding of it, and to create opportunities for individuals to participate and become involved; it's especially focused on people from developing countries.

The first pilot was in May 2006, and the application process is now open for IETF 70 and 71. IETF 70 is being held in Vancouver in December, and 71 is being held in Philadelphia in March next year. The application form and details are all at that URL, and the deadline for applications has been extended by one week, to the 14th of September. It's a very competitive application process - there were over 80 applications received for IETF 68 and 69, and in fact only five fellows are selected for each IETF, so you can see how competitive it is. There are a lot of applications from the academic community - professors and researchers - as well as from operations people who are involved in the technology.

So from this region, some of the past fellows have come from Sri Lanka, Pakistan, Nepal and Mongolia. So far nobody from India has been selected. So I guess that's throwing it out to all of you.

The mentorship program, really, the way it works is that every fellow is actually paired up with a very experienced IETF participant, so that they are not completely left on their own when they arrive at the IETF. It can be quite a daunting experience when there's 2,000 attendees at any one IETF. So they help with a fellow beforehand, preparing them for the meeting, advising them what kind of sessions to attend, what would be appropriate to match their skills with particular work areas that are taking place during the IETF and they help network them once they are there with people who have like-minded research interests and active areas of interest. There's also a fellows mailing list which every fellow will join once they come back and one of the criteria for being selected as a fellow is that you will come back and you will share your experiences with people in your community, so that might be for example with an ISOC chapter or a forum like this.

ISOC also has a number of publications that might be interesting to this community. It's got the IETF journal. I've got a copy here and there are copies outside. This journal is really just summarising some of the hot topics and debates that are being discussed at an IETF and it sort of gives you a kind of kick-start into the IETF. It's published three times a year and you can find an online version at that URL just there and people who are interested in educational materials might be interested to know about the online repository we have which is called the workshop resource centre. That's managed by NSRC and it's an online repository of presentations and material, training materials from Internet conferences such as this from all around the world. And you can find it at that URL.

One other thing that might be of interest to this community is the project funding initiative. The purpose of the project funding initiative is to assist ISOC members and chapters in any activities that might advance the mission and goals of ISOC. The scope of any projects is not limited but tends to focus on capacity-building projects and projects that enhance access, for example projects that are aimed at disadvantaged sectors of the community like the elderly or women or youth and aimed at getting them online or they might be - the recent projects were things like building telecentres in Sierra Leone and Liberia in Africa, or projects that provide core skills training like IPv6 or routing.

Up to $10,000 is available in two instalments of $5,000. You need to provide matching funds in any application, so you need to be working with other funding partners and agencies.

There is a competitive application process. It's held twice a year. The second round for 2007 was actually opened at the beginning of this week and closes at the end of the month. And all of the details are actually available on the website at the URL that I've given and if you're interested to know about it, just come and talk to me.

Just lastly, a little note about the chapters. ISOC chapters are very important to ISOC because they deliver a very local perspective whilst working within, you know, ISOC's overall mission. They advance ISOC's mission on a local level. They also inform ISOC about what's happening at the local level. ISOC Chennai was formed earlier this year in August and if anyone is from the Chennai area and is interested to join, they should contact Siva. He's at this meeting and I can introduce you to him. Or you can send him an e-mail. ISOC Delhi used to be run by Kapil and another. We're holding a BoF today at 6:30 to talk about how to rejuvenate ISOC Delhi. If you're interested, please do come along and join us. And that's all.

Thank you very much for listening. Thank you.


Thank you, Anne. Any questions? There's an ISOC BoF - ANNE LORD: I just talked about it.


Anne, of course, on behalf of SANOG and APNIC, I would like to thank ISOC for their sponsorship and fellowship program which helped 20 people come to this meeting from all over the AsiaPac region. So thank you very much. ANNE LORD: You're welcome.


So we'll now get back to the...


Now I'd like to invite the speakers for the next session, which is the security session, followed by the NSP-SEC BoF. I'd like to invite Mr HS Gupta, Yoshinobu Matsuzaki, Ed Lewis and Barry Greene to come up and join us.

Promoting network security: a service provider perspective - HS Gupta

I'd like to welcome Mr Gupta from BSNL. He's based near Delhi. And today he's going to talk about a service provider's perspective on security.


Good evening.

I am HS Gupta. I am working with BSNL. Today I am talking about a service provider's perspective on promoting network security.

A little bit about BSNL. It is Bharat Sanchar Nigam Limited, it's the largest telecom service provider of India. It offers fixed line, mobile, Internet, broadband, VSAT, hosting, all kinds of services.

So the agenda points are: the importance of network security for a service provider; challenges in enhancing security in a service provider environment; various security threats; the role of the service provider in enhancing security; the role of the customer - because with the increasing customer base, increasing broadband penetration and increasing bandwidth, the customer plays a very important role in the overall security environment; ways to minimise security threats; and finally conclusions.

So network security is very important for a service provider: to maintain service availability and the SLAs which have been agreed upon, and to reduce service outages. It also results in reduced manpower and support costs. Customer satisfaction is increased and public image is definitely maintained. Revenues are maintained and the possibility of getting involved in litigation is reduced. So all these factors are very important for a service provider. If the overall environment is secure, then the service provider is able to maintain all these things.

There are various challenges in enhancing security in a service provider environment. The first and foremost is the multiple services being offered: Internet through various mechanisms like narrowband and broadband, then PSTN, mobile, VPN, hosting and co-location and others. Another thing is that a service provider like BSNL has reach throughout the country, with more than 400 to 500 PoPs, and in all those places we have different kinds of network elements. Coverage is very wide, so it is a challenge to secure all those devices throughout the country and to see that none of them is left open and results in a security breach.

And there is increasing use of IT, because of the faster roll-out of services and the need to manage the large subscriber base, which is now running into millions of customers. You definitely need good OSS and BSS systems, so there is increasing use of IT, and that portion of the network also has to be secure.

Then there are different vendors now, because there is no single vendor with which a service provider can work. A service provider has to interact with a large number of vendors, and all these solutions need to be integrated. Whenever integration is involved, one has to be careful about security.

Then managing multiple vendors - this is a particularly important problem in the public sector, because you can't buy from and be tied to a single vendor. Whenever expansion happens, the next vendor can be different from the one you chose earlier. Then there are a number of maintenance contracts with different vendors for maintenance of hardware and software. A lot of these vendors' employees enter the premises, so it is a challenge for a service provider to see that all those vendors and their employees comply with the security policy and things like that.

Then systems and processes, they have to be updated and they should keep pace with technology and of course rapid technological evolution which is happening, so it poses another challenge to the service provider.

The number of attacks and vulnerabilities continues to grow, and a lot of sharing happens amongst the hacker community and those kinds of people. As soon as some vulnerability is discovered, it is discussed, so one has to be really proactive and ready for this kind of thing.

Then applications and products - their defaults continue to be insecure, so one has to make sure that whatever defaults are there are made secure.

Then one also has to look at the cost, and at the ease of availability of service, ease of operations and things like that, so a balance has to be maintained between overprotection and underprotection.

Then, of course, new services and applications which are coming, they add to the complexity.

Now, various security threats. For any customer who uses or connects to the Internet, there are security risks and privacy risks, and broadband poses a higher security risk because the bandwidth is greater and because of its always-on nature. Earlier, a customer might be on dial-up for only 50 or 60 minutes, but now there is 2 Mbps always-on connectivity, so if a few hundred or a few thousand customers are affected, the whole service provider also gets impacted.

Other security threats relate to e-mail: spam, phishing, cybercrime, forged e-mails. A lot of such cases come up daily, and a lot of interaction goes on with the law enforcement agencies to deal with them. And spam, as everybody knows, is growing like anything.

Then the impact of spam: it increases the hardware sizing on the service provider side - the requirements on the mail servers and applications increase, and the bandwidth requirement increases. The customer quality of service gets impacted because of spam. Customers also get impacted because they have to download more data and spend more time doing so, and, of course, there is also a loss of productivity.

Then there are cases of open proxies. Because a lot of customers' PCs are not fully updated, they become targets of third parties who use those PCs to send spam or to launch attacks. This type of open proxy situation is quite prominent; a lot of such issues arise daily. There are published lists of the open proxies present in the network.

Then we have viruses, worms, spyware, and open mail relays as further security threats. Then we have distributed denial of service attacks - somebody has called these the modern weapons of mass destruction. Then we have botnets, intrusion, malicious traffic, and the multivendor scenario: as the number of vendors increases, the service provider has to react, so it poses a security threat. And managing multiple kinds of hardware and software.

And, of course, the number of services.

Then we have things like application and OS vulnerabilities, former employees, insider threats, hackers - and, in fact, a lot of phishing sites are hosted on customer machines, so customers don't even know that somebody has hosted a phishing site until the ISP tells them, and then they have to remove it.

So all these things reduce confidence in online activities.

There have been live cases where phishing sites for a number of banks and the like have been hosted on various customer connections. Then we have a lot of mail scams, like the Nigerian scam e-mail offering you some amount for some consideration - a lot of such mails are received by customers. Then defacement of websites has been done. And malicious traffic increases CPU utilisation and network utilisation like anything.

So to deal with all such issues, the equipment and applications are definitely becoming more advanced, but the basics - policies and procedures - are very important alongside the different security tools. We have tools like access control systems, intrusion detection and prevention, firewalls, anti-virus, anti-spam and vulnerability assessment.

And another important thing is baselining of the network. This is very important, where one keeps a tab on what is the usual course of activity during the time when the attack is not there and traffic is normal. So one must baseline the network, like this is the normal traffic during the normal course of action. So when any unusual activity happens, then definitely one can usually come to know that, OK, CPU utilisation has increased beyond the normal levels or the network utilisation has increased beyond the normal levels or maybe our outbound traffic has increased much more than inbound traffic. If things like this can be tracked, one can immediately come to know that something is suspicious. Then out-of-band management needs to be there and time synchronisation has to be there to correlate different activities which are happening across the network.
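The baselining logic described here can be sketched in a few lines - a hypothetical illustration (the metric, thresholds and numbers are assumptions, not BSNL's actual tooling): summarise normal-hours measurements as a mean and standard deviation, then flag readings that deviate by more than a few standard deviations.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarise normal-hours measurements (e.g. CPU %, or the
    outbound/inbound traffic ratio) as a mean and standard deviation."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, n_sigma=3.0):
    """Flag a reading that deviates from the baseline by more than
    n_sigma standard deviations - e.g. outbound traffic suddenly
    exceeding its normal level."""
    mu, sigma = baseline
    return abs(value - mu) > n_sigma * sigma

# Normal-hours CPU utilisation samples (percent), then a suspicious spike.
normal = [22, 25, 24, 23, 26, 24, 25, 23]
baseline = build_baseline(normal)
print(is_anomalous(24, baseline))  # False - within the normal band
print(is_anomalous(80, baseline))  # True - well beyond normal levels
```

In practice the same check would run per link and per device, feeding the out-of-band monitoring the speaker mentions.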

Documentation and physical security - all these are very important tools which need to be deployed and coordinated, because all these things have to work together; if they work in isolation, there is no point. Everything has to work in coordination.

So a service provider has to protect its own infrastructure from customers, employees and the outside world. That is very important to maintain the service levels and the network. The service provider also has to help protect peers, because if peers are affected, then you may also get impacted. Then customers have to be made aware of Internet security, because anything that happens to customers matters, and the customer base which is there right now and projected over the next three or four years is going to be huge. Broadband itself is predicted to grow to 18 million or so, 27 million by 2010.

So huge customer base is expected to come up, so customer awareness is definitely required to make Internet a secure place because attacks which are targeted to a particular customer can and do affect a service provider infrastructure.

Then customers should also be protected from outside world as also from each other.

So the customer has to play an important role in improving the overall security. Customers should be made aware of various Internet security developments, fraud developments and so on. They should be encouraged to use virus protection and firewalls, and they should be told about current security threats through workshops and interaction with them. This results in improving the overall security environment.

They should also be taught about restricting access to their Internet, leased line or broadband connection, visiting only trusted websites, turning off the computer when not in use, and downloading and installing new patches as needed. In that way, the customer plays a very important role in the overall security environment.

So, in short, the ways to minimise security threats are deployment of proper technology, increasing customer awareness, increasing employee awareness, and updated systems and procedures. And one definitely has to keep up with the latest trends in security, so one is not behind the attackers and knows what is happening. Security is not just a technology problem - technology definitely helps a lot, but 80% of the risk can be avoided by taking basic precautions.

So, as networks become less secure, the cost to defend them is increasing. Prevention is always better than cure, so prevention is definitely the foundation of security.

So conclusions are - security is not to be treated as a mere hardware and software issue.

Static and passive approach to security is inadequate. A proactive approach towards security has to be there.

Customer and employee awareness, as I mentioned earlier, is important.

And then, point solutions are no good. One has to take a holistic view and see that, end to end, all the deployed security solutions are working in a coordinated manner, with an overall view of them. Otherwise, deploying the best firewall or the best intrusion prevention system alone will not help.

Then, while designing the network, security needs have to be kept in mind. And systems and procedures must be in place to deal with multiservice, multivendor, multihardware and software network. So concentrating on preventive aspects will be cheaper and effective.

So that's all I wanted to say. Thank you.


Mr Gupta, you raised an interesting point about the normal baseline. You have a very large network and I imagine you must be using some kind of tools to monitor all that. So how do you actually establish what you mentioned, the baseline for normal? Is there any kind of automation which you use, or is it done manually?


There are certain tools, but most of them are scripts, that kind of thing. They are used to keep a record of what the normal course of activity is and what the current activity is. And certain solutions have definitely also been deployed.


So, in the end, any abnormality in the data is reported to certain people, and then it gets passed on and some action is taken?




OK. Thanks.


Hello. I'm from Guavas. Again, an interesting presentation. Are you seeing, or are your customers willing to pay you for doing proactive security, whether it's managed firewalls or managed IDS?


That is what I'm talking of. There is a separate service altogether - managed security service. But the customer base, which is in the millions - they are the real customers.


What about your enterprise?


Enterprises, of course - enterprise customers, when they take the solutions, are much more aware and much more educated than the normal retail customers, and their numbers are also very small, so you can take a very focused approach with enterprise customers. But when it comes to retail customers, the numbers are like 2 million, 3 million connecting to your network, and then the problem becomes difficult. As I said, everybody has two or three connections, and if their PCs are affected, it will definitely affect the upstream links to the aggregation points, wherever the customer traffic is getting aggregated. So it makes a difference if the customers are taken along. It's not a cost but an education outreach to the customer, so as to make the overall environment secure.


OK, so the business case is still focused on, you know, securing your whole network, your core network?


Yeah, definitely, because you have to protect yourself first. If you are not protected, nothing else works - that is the first step, of course.


OK, thanks.


Any more questions? None?

Thank you very much.


The next speaker is Yoshinobu Matsuzaki from IIJ. Matsuzaki-san is going to talk about the root server attack from IIJ's perspective.

OK. We've been asked to make a five-minute break so that the smoke can clear out.

OK, we are going to restart so people on the outside can come in now. Probably need a bit of help there.

OK, we'll get going again. Thank you very much.

Root attack - end user view


OK. Good afternoon. This is Yoshinobu Matsuzaki from IIJ. Today I would like to present my study about DDoS.

There was a DDoS attack against the root servers this February. It started around 10:00 UTC and the attacker used UDP packets with destination port 53, the DNS port.

The packet was a usual DNS packet and it contained bogus data, so it seems the packet was malformed.

Since then, several reports have been published about how much traffic there was, how operators worked together against this attack, and so on.

So then my question is was there any effect for end-users?

In this presentation, I'd like to show my analysis about this.

IIJ provides a DNS cache service for its customers, as every ISP does. End-users send DNS queries to our cache server. If the answer is in the cache, the server simply replies to the end-user. Otherwise, the cache server asks the other servers about the name.

And so, if there was an attack, I would probably see some delays there, and that would cause an effect for end-users.

During the attack, there was capture data from our DNS servers, so I matched the queries and the replies one by one and checked the response delay and the loss using a custom script.
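The matching step can be sketched like this - a simplified, hypothetical reconstruction of the kind of analysis described, not the actual script used: pair each query with its reply by transaction ID and query name, compute the per-query delay, and count unmatched queries as losses.

```python
def match_queries(queries, replies):
    """Pair each DNS query with its reply by (transaction id, qname)
    and report per-query response delay; unmatched queries count as
    losses."""
    reply_by_key = {(r["id"], r["qname"]): r["t"] for r in replies}
    delays, losses = [], 0
    for q in queries:
        t_reply = reply_by_key.get((q["id"], q["qname"]))
        if t_reply is None:
            losses += 1
        else:
            delays.append(t_reply - q["t"])
    return delays, losses

# Toy capture: three queries, one of which never gets a reply.
queries = [
    {"id": 1, "qname": "example.org.", "t": 0.000},
    {"id": 2, "qname": "example.net.", "t": 0.010},
    {"id": 3, "qname": "example.com.", "t": 0.020},
]
replies = [
    {"id": 1, "qname": "example.org.", "t": 0.045},
    {"id": 3, "qname": "example.com.", "t": 0.050},
]
delays, losses = match_queries(queries, replies)
print(losses)  # 1 - the example.net query was lost
```

The delay list then feeds the histograms shown in the following graphs.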

This graph shows the response delay to end-users of the cache server.

The horizontal axis is response delay, 0 to 6 seconds, and the vertical axis is packet count. The graph shows that almost all the DNS replies were returned in less than one second. The attack started around 10 am, here, but we cannot see significant delays on this graph.

I zoomed in on the part of the graph with less than 1-second delays. The vertical axis is the packet count again. This graph shows that almost all the DNS replies were returned in less than 100 milliseconds. And, again, the attack started around 10 am, but there are no significant delays on this graph. So I changed the method.

I picked out A-for-A queries. An A-for-A query requests the A record for an IP address. Usually we do not need to look up an IP address because it is already resolved; I heard that some buggy software sends A-for-A queries. The A-for-A query name looks like an IP address, so the last label of the query name is a number - in this case, 1. And it is a typical query that only the root servers can reply to, with NXDOMAIN, because there is no such domain on the Internet.

An end-user sends the A-for-A query to the cache server, then the cache server sends a query to a root server. The root server replies with NXDOMAIN and the reply goes back to the end-user. We see many A-for-A queries to the cache server constantly, so I thought we could estimate the root servers' performance by checking the delay for A-for-A queries.
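Picking A-for-A queries out of a trace comes down to checking whether the query name is itself an IP address. A minimal sketch (the function name is illustrative, not from the talk):

```python
import ipaddress

def is_a_for_a(qname):
    """Return True if the DNS query name is itself an IP address
    (e.g. '192.0.2.1.') - an A-for-A query, which ends in a number
    and which only the root servers will answer, with NXDOMAIN."""
    try:
        ipaddress.ip_address(qname.rstrip("."))
        return True
    except ValueError:
        return False

print(is_a_for_a("192.0.2.1."))   # True
print(is_a_for_a("example.org."))  # False
```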

So this is a graph of the result. The blue line indicates response loss per second from root servers; after 10 am, we can see many losses. The green dots indicate the response delay of each reply from the root servers; the delay increased a little after 10 am. And the red dots indicate the response delay to end-users. It is interesting that it stayed stable despite the root server delays. It seems the cache server sends a query to several root servers at once and uses the reply for the end-users.

Next, I checked the response delay to end-users. This graph shows the mean and median of the response delay from the cache server to end-users. Again, the attack started at 10 am; there are some changes around 10 am on the mean graph, but the median is very stable, so I can say only a few replies changed a lot. Then I picked out .uk queries only, because .uk is not popular in Japan, so we thought we could see the effect of the attack against the root servers more clearly that way. But it seems very stable, like this.

And next I picked out .org queries and replies. We see significant delay on the mean graph, so I suppose something happened to .org.

I plotted the delay and the loss in the same way as the other graphs. The green dots indicate the delay from the .uk servers: no loss, and the delays are very stable. So the .uk servers were very stable during the attack.

Next, this is for the .org queries. The blue line indicates response loss: there were some losses around 10 am, like this, and we see some response delays here, indicated by the red dots, as well. I heard that the .org servers were also attacked at the same time as the root servers. This graph shows queries per second towards each root server; this cache server prefers the M, F and I root servers, among others.

Next I focus on the response delay of each root server. This is the response delay from the M root server. The left graph shows the delay during the attack, and the right side is the delay one week later. There was an attack against the M root server, but it was very stable.

The M root server uses anycast to improve its performance. IIJ has peering with them, like this, and provides transit for the M root servers.

During the attack, as IIJ provides transit for the M root server, IIJ carried the attack traffic as well, like this. But our cache server selected another M root instance, because it is the nearest instance from the server, so the attack traffic did not affect our cache server. Of course, IIJ cooperated with the operators of the root servers and took the necessary actions during the attack.

This is the response delay of F-Root server. It's very stable too.

And next, this is the I root server delay graph. We see delay and loss during the attack. One week later, it is very stable, so this was caused by the attack.

In detail, the DNS cache server has a server selection capability: our DNS cache server selects stable servers automatically. This graph shows the root server selection of this cache server, in percentages.

So 40% of queries go to the M root server constantly, and 20% each go to F and I.

After 10 am, the cache server selected F instead of I, and around 17:00 the server selection came back to usual.

I compared this graph with the delay graph of the I root server. When there was delay on the I root server, here at 10 am, the cache server selected F instead of I. And when the delay and the loss were gone, the server selection came back to usual. Like this.
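The behaviour can be modelled roughly like this - a toy scoring model with made-up numbers, not the cache server's real algorithm: rank servers by smoothed RTT with a heavy penalty for loss, and the ranking flips from I to F when I degrades during the attack.

```python
def rank_servers(stats):
    """Rank server instances, best first, by smoothed RTT plus a
    heavy penalty for response loss - a rough model of automatic
    server selection in a resolver."""
    def score(item):
        rtt, loss_rate = item[1]
        return rtt + 5.0 * loss_rate  # losses dominate the choice
    return [name for name, _ in sorted(stats.items(), key=score)]

# (RTT seconds, loss rate) per root server - hypothetical values.
normal = {"M": (0.020, 0.0), "F": (0.030, 0.0), "I": (0.025, 0.0)}
attack = {"M": (0.020, 0.0), "F": (0.030, 0.0), "I": (0.200, 0.4)}
print(rank_servers(normal))  # ['M', 'I', 'F']
print(rank_servers(attack))  # ['M', 'F', 'I'] - F overtakes I during the attack
```

When I's measured delay and loss recover, its score drops back and the original preference returns, matching the graph described above.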

IIJ uses a different implementation for its cache servers, and I heard that BIND 9 also has a server selection feature. Of course, this feature depends on the implementation, so it may behave differently if you use BIND 9.

So I can say this is restoration at the application layer.

And these are more root server delays. There were some slight delays on the B root server after 10 am. This is the G root server: we see a lot of delays and losses during the attack, and one week later it is stable, so this is the effect of the attack.

This is K. We see loss during the attack and, interestingly, we see some delays one week later, but I'm not sure what is happening there. Maybe some link is congested or something, I think.

And this is I. We see loss during the attack, but we see more loss on the one-week-later graph. Strange - I'm not sure what happened.

And I counted the number of queries to the root servers by queried TLD. The server sent 1.2 million queries to the root servers in these 12 hours, and most of them were for invalid TLDs. 90% of the queries were A-for-A, because there were a few users who sent A-for-A queries. The other invalid TLDs were typos, internal domains and so on.

Then only 0.4% of queries had a valid TLD. Most of them were .arpa, and the others were .jp or something. Because only TLD information comes from the root servers, we do not need many queries to them.

Conclusion. There was an attack, but we can say that the effect on end-users was minimal or ignorable, because anycast worked fine and application layer restoration - the server selection - worked fine. And, thanks to the long TTL, cache servers need to send queries to the root servers only sparsely.

But we found delays on .org response. We need further research about this.

Thanks. Questions?


Maybe you mentioned this but how big was the attack? Do you know?


Sorry. I don't have the actual traffic.


Kurtis, I think this is a question for you. How big was the attack in late January, early February around Christmas, after Christmas, on the roots?


What's the question?


Attacks on the root servers on the 6th?


How big it was?


Yeah, how big.


So, um, some of the root servers were attacked, the only public one was L. I don't think anyone has said much else about this except my colleague made a presentation. So maybe you can conclude that we had some operational data as well.

I don't think there's been any disclosure of how big it was in terms of bandwidth. I'm not sure that we know that. Given the way the attack hit the anycast root servers, it's very hard to estimate the aggregate attack. I would say that the system works. We had enough capacity to handle it.


So no more questions. I'll ask Ed Lewis to come up. Thank you Matsuzaki-san.


For those who've been asking for slides, whatever we've been able to collect so far is available on the local conference website, conference.sanog.org/slides. We're working on the second session's slides; the IPv6 and the IXP updates are all up there. And as I get more of those, I'll put them up there, assuming I get them. Thanks. Ed Lewis from Neustar will talk about surviving DDoS. I'll let him introduce himself.

Surviving DDoS - Ed Lewis


My name is Ed Lewis. I work for a company called Neustar in North America. We do some work trying to counter DDoS attacks. What I'm going to talk about here is countering DDoS attacks from the position of providing information - not as an ISP or someone doing e-commerce necessarily, but as someone who's putting out information that can be easily placed around the network.

So the question here is: how does a provider of information counter DDoS attacks? There's a trade-off: defending any kind of system that's simple and constrained is pretty easy, but a system that's functional enough to serve the world is really rather complex, so we make some trade-offs. This is a thought we'll come back to at the end of the presentation. The first thing is what a DDoS is, so we have an understanding of what we're trying to defend against. The second is a strategy for defending against it - a theoretical defence, an overall idea of what we want to do to get rid of a DDoS attack. I'll talk about anycast a bit, because there are people who still have some fear of using anycast and don't know if it really works; I want to describe it a little, though I won't go very deep into it. And finally I'll talk about the two strategies that are used to effect the defences, and the different approaches to those two defences.

So the first part is DoS attacks - denial-of-service attacks. The early kind of denial-of-service attack is: if I flood a lot of traffic into a process, I can slow it down, and the purpose of slowing it down is that no-one else gets into it. That's a denial of service. I might also want to just crash the entire process; that's done basically by throwing a lot of traffic at the process until things fall apart. For a person providing information, the person running this process, this attack really hits on my turf - I see it happening at my feet; it's taking me out. A network-level denial of service is a little bit different. In this case, the traffic is coming in through an ISP or some other exchange point or network connection, and the actual attack is happening outside of my premises. This is where I am: I am the customer router; I own the service element, my process. The attack is happening outside my door. That means that inside my company I don't see a problem - I get very little traffic, because all the crashing is out here at the ISP, which is absorbing the traffic coming towards me. So that's a problem, because now I have to go to my ISP and have them help me. A DDoS attack is much more complex, because now my ISP sees a lot of traffic coming to my front door that can't get in to me, and what's worse, the traffic is coming from other ISPs, so this one ISP can't see the entire attack - it just sees where the attack is focused.

So now the strategy: how do I defend against denial of service? The basic idea is, if I can get rid of the bad packets fast enough, they aren't a problem any more. How do I do that? There are three things involved in handling these bad packets. First of all, you need to know it's a bad packet. If all the packets are bad, that doesn't matter anyway, because there are no good ones around. The idea is that you have good packets coming in and bad packets trying to stop the good packets, and I have to figure out which is which.

Once I identify a packet as being bad, what do I do with it? How quickly can I get it out of the way? Sometimes I just handle it, because at that point it's too late. Sometimes, if the packet is going to demand a lot of work out of me, I don't want to do the work. People may have heard of TCP SYN attacks years ago: I open a connection, but it ties up the resources on my computer. That's an example of why you dispose of a packet as fast as possible.

The other part of the equation is to give yourself more time to act, because a packet comes in, I look at it, identify it and figure out what to do with it. If I can do that fast enough before the next one comes in, I'm OK. So the third part is how fast they come in. I can't decide how quickly they're going to be sent to me - if you look at the previous slide, the attackers sending these packets are not asking me; they're just sending them. What I can do is worry about how many places they send the packets to: basically, spread the load across lots of servers and increase the amount of time between packets arriving. If I want to apply a little math to this, the equation I used to use was: if the time to identify plus the time to dispose is greater than the time between arrivals, you have a problem. For those who are more familiar with protocol diagrams like this, this is the sending of the packet from client to server, and the period after that, where I identify it and dispose of it, is the time we're spending on the bad packet. If the packets are coming in like the red lines here, you can see that this one arrives before I'm done with the first one, and over time I build up a backlog - that's when I have a problem: I'm getting packets too quickly. If I manage to set things up so the next packet arrives after I've had time to breathe after getting rid of the attack packet, I'm going to survive. It will slow me down a bit, but it won't stop me keeping up with my workload. I want to force packets not to be in the red area but in the green area.
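The inequality can be checked with a tiny single-server simulation - the numbers are hypothetical and purely illustrative: when identify-plus-dispose time exceeds the interarrival time, the backlog grows without bound; otherwise the server keeps up.

```python
def last_packet_wait(n_packets, service_time, interarrival):
    """Waiting time of the final packet at a single server that spends
    service_time (identify + dispose) on each packet, with packets
    arriving every interarrival seconds."""
    free_at = 0.0  # when the server next becomes idle
    wait = 0.0
    for i in range(n_packets):
        arrival = i * interarrival
        start = max(arrival, free_at)  # queue if the server is busy
        wait = start - arrival
        free_at = start + service_time
    return wait

# Green area: identify + dispose (0.8s) < interarrival (1.0s): no backlog.
print(last_packet_wait(100, 0.8, 1.0))  # 0.0
# Red area: identify + dispose (1.2s) > interarrival (1.0s): backlog grows.
print(last_packet_wait(100, 1.2, 1.0))  # roughly 19.8s behind by the last packet
```

Spreading the load across more servers is exactly a way of increasing the effective interarrival time at each one, pushing it back into the green area.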

So getting back to the generic idea of defending against DDoS, there are three parts. The first part is identifying the packet. There's a lot of ways to do that. If an attack is new, you have to sit down, look at the suspect packets and find a pattern. Once you find a pattern, you can start filtering it away. We can take the time to identify once, and then just work on recognising repeat attacks - that's number one. Disposing of them - we've decided we just get rid of them - is number two. I'm going to spend most of this talk on the last part here, the interarrival time, which looks at: how do I make packets arrive to me more slowly? I do it by having more places to serve from.

It's easy to say, if you have a lot of traffic, add more capacity, but there are different ways of doing that. It's not just about having more things; it's about putting them in good places. One of the important building blocks is anycast.

So let's look at anycast basics. This is a single-homed service on one provider, so it's pretty basic. You put a host on an ISP, the world sees it. Multihoming - you take that same host and you attach it to two different ISPs, giving people two different ways to get to you. Anycast is pretty much the same as multihoming, except you have separate machines on different ISPs. The thing is, you have to coordinate those machines, and it's that coordination that says you want a service here at this point, or that you want to make sure that traffic only goes to one machine at a time while you're serving a customer. Anycast gives you more instances throughout the network. You can also, with some help from the routing - and I'll talk about that in a second - have anycast serve parts of the network for you: you control which parts of the network go to which of your instances. And again, it's ideal if it's a stateless type of service. DNS is pretty much stateless. E-commerce - you might want to think about that. It might work; it depends how quickly your transactions occur. Without anycast, the world looks like this: all of the ISPs have to send traffic from ISP to ISP and so on to where you are homed. With anycast, I can have certain ISPs see lots of localised traffic, and some ISPs carry transit to where you are. Earlier on, I said that if I can simplify the whole thing, it's easier to defend.
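
As a toy illustration of the "divide and conquer" point (entirely my own model, not from the slides): with anycast, every instance announces the same address, and ordinary shortest-path routing delivers each client to whichever instance is closest, so the load - including any attack load - is split between sites.

```python
# Toy model of anycast load-spreading. Instance names and distances
# are invented for illustration.

from collections import Counter

def route(costs):
    """costs: {instance: routing distance as seen by one client}.
    Anycast delivers this client's packets to the lowest-cost instance."""
    return min(costs, key=costs.get)

def spread(clients):
    """Count how many clients land on each anycast instance."""
    return Counter(route(c) for c in clients)

clients = [
    {"tokyo": 1, "delhi": 4},   # a client near Tokyo
    {"tokyo": 5, "delhi": 2},   # a client near Delhi
    {"tokyo": 3, "delhi": 6},   # another client closer to Tokyo
]
# One unicast server would absorb all three clients' traffic;
# anycast divides it between the two sites.
```

With a single unicast server, `spread` would put every client on the same box; here the same address splits the clients (and any attack sources among them) across sites.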

If we come down here, I'm simplifying the flow of traffic through my service. But anycast is a trade-off. It saves addresses: I can put 25 servers around the world and only have one IP address - one /24 in the routing table - used for all of them. I can divide and conquer in the network: I can put servers all over the place and pick and choose where I can serve better. On the other hand, anycast does cost more. You have to have places as well as machines, and you've got to coordinate all of this - the routing activity and the application activity. To digress for a second: DNS has a fallback mechanism. If you have DNS servers out there and you go to one all the time and it suddenly has a problem, you go to a second one. DNS has that built into the application layer. Anycast helps you deal with routing problems in another layer. So you have to be careful that, when your routing-layer fallback mechanism and your application-layer fallback mechanism are both interacting, they don't clash. So it takes some engineering to do anycast right, but it is pretty solid technology.

Anycast is routing magic, that's what it is. It's deciding where I send the packets so they reach the service point, and I have to make sure that I do it reliably. Basically, to make this reliable, you have to have a stable routing table. People have said that routing is stable in one part of the world and not so much in another. People do not want to do IPv6 anycast yet because IPv6 routing is not as stable. As soon as IPv6 routing is stable, anycast there will be just as good. But anycast is a lot about the routing.

This is the final part of the talk.

I want to talk about two approaches where people have used anycast to handle DDoS. One is excess provisioning - that's a slide coming up. The other way of doing this is what I call pre-positioning: putting data in different places. Anycast is involved in both, and that's what I want to talk about on this topic.

Excess provisioning says: I'm going to put out as much capacity as I possibly can, and if there's an attack, I'm going to pull more capacity in. Of course, you have to have servers around the world you can call up, and you want to be able to work with the ISPs to help get rid of the flow.

In an excess provisioning approach, you will have one server here, and this black line here is the normal operating capacity - you're paying for that rate of service. You have a reserve, where you say to your ISP, "I may want to burst higher", and that's what that's for. You may want reserve servers down here; it costs money to operate them, but you want to have them there. And you may want to talk to this other ISP out here, which is the source of the traffic, and have filters applied there before the traffic even gets to your ISP. So you have to have not just more hardware and the ability to run the hardware - you have to be able to talk to people with whom you don't otherwise have a business relationship.
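
The arithmetic behind this slide is simple enough to write down. This is a sketch under my own naming, not the speaker's: you survive as long as the attack rate stays under normal capacity plus whatever reserve - burst bandwidth and shelved servers - you can bring online.

```python
# Illustrative capacity check for excess provisioning. All names and
# numbers are invented for the example.

def survives_attack(attack_rate, normal_capacity, burst_reserve=0.0,
                    reserve_servers=0, per_server_capacity=0.0):
    """True if total provisioned capacity covers the attack rate."""
    total = (normal_capacity + burst_reserve
             + reserve_servers * per_server_capacity)
    return attack_rate <= total

# e.g. a 10 Gb/s attack against 4 Gb/s normal + 2 Gb/s burst reserve
# + 2 spare servers at 3 Gb/s each is survivable; without the spare
# servers it is not.
```

The limit the speaker mentions next is exactly the point where these reserve terms run out: hardware on the shelf and burst agreements with ISPs are finite and cost money whether or not an attack ever comes.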

The limits of this are, first, how much excess capacity you can afford to have on the shelf, and second, how many different ISPs out there you can talk to. There are many ISPs in the world. You say to them, "The traffic is coming from this part of the world, can you stop it?" They may be reluctant to help you in that situation.

Now, the other approach that I want to talk about is called pre-positioning, which is basically deploying servers closer to where the data is going to be used. What this counts on is that I will update my information over time, but I can do those updates any time I want. There's also the time to look data up, and that's what I'm going to try to protect.

I'm going to try to put data ahead of time in different places, and use routing to control who gets to see what in the network.

And the best kind of diagram I have here - this is one ISP, and every ISP runs DNS for its customers. These are all the different customer networks, whether they are home users on a cable network or enterprises or whatever. They all get default DNS as part of the service from the ISP. The ISP then has a recursive DNS server down here which it asks for information, so people will say, "Find me www.apnic.net", and the recursive server has to go out through the network and find that. If the information is sitting down there, having been put in by the DNS service provider, it is served safely within the network. The same DNS server will also be out in the public Internet, with other ISPs getting to that. What happens in the normal situation is that this server is only seen by this recursive DNS server, through routing magic. The route for this address will never be seen out at this point here. All traffic for it will go down here, and anything outside here will stay out there. This will stop some of the ISP-to-ISP DDoS traffic. What we're trying to do is stem cross-ISP traffic.
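
The scoping trick described here can be modelled in a few lines. This is an illustrative sketch with names I've invented: the in-ISP copy of the DNS data sits behind a route that is never announced externally, so only that ISP's recursive resolvers reach it, and everyone else falls back to the public server.

```python
# Toy model of scoped anycast routing for pre-positioned DNS data.
# ISP names and server labels are invented for illustration.

def query_lands_at(client_isp, hosting_isp, internal_route_up=True):
    """Where does a DNS query end up under scoped routing?"""
    if internal_route_up and client_isp == hosting_isp:
        return "internal-server"   # traffic stays inside the ISP
    return "public-server"         # backup route out to the world

# A DDoS sourced inside the hosting ISP is both sourced and sunk
# there, so the flood never crosses over to other ISPs.
```

Note the fallback branch: if the internal route is withdrawn, queries still resolve via the public server, which is the back-up path the speaker describes next.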

If this machine were to break, there would be a back-up route to go out to the world. We don't want to lose the ability to get to the DNS, but we want to keep as much traffic as possible within the ISP. The strategy here is, if I can keep all of the traffic going to the DNS within the same ISP, then when a DDoS attack is turned on and says, "Everybody that's listening to me, send traffic to this DNS", this ISP will see all the customers that have infected machines, source the traffic and sink the traffic, and the ISP can handle its part of the load. It won't have to go and contact the other ISPs. That's one of the things we'd like to see happen.

The advantage of pre-positioning is that this approach puts more control in the hands of an ISP. When there's a DDoS attack, the ISP can say: there is too much data flowing here, I can see where it's coming from and where it's going to, and I can identify it more easily. It's not co-mingled with good traffic.

Also, when there is no attack going on, having a source of data closer to me is a good thing too. That's another benefit.

So, the conclusion of this talk is that DDoS attacks are going to be there. We can't stop the attacks from occurring. What we have found through some of the attacks we have seen is that you can't always scale past the size of the attack. The attackers are not paying for their capacity - they're stealing it.

You find that putting data servers in the ISP benefits the service provider getting the information out, making it more accessible, and it benefits the customers of the ISP - they get faster turnaround time. But one thing people are concerned about is that the ISP has to trust the service provider. They have to be able to trust the people who are running the DNS to do the right thing when they put that machine in as part of their network, and that's the one question people have on this.

So with that, I'll open it up for discussion and questions.


No questions for Ed?

OK. Thank you very much, Ed. I think we are heading towards the BoF with Barry. But before that, a few formalities here, and also to give the stenographers a bit of rest from very fast-talking people. While Barry is being set up, is anybody from ISPAI here in the group?

(End of session)