Due to the difficulties capturing a live speaker's words, it is possible this transcript may contain errors and mistranslations. APNIC accepts no liability for any event or action resulting from the transcripts.

Wednesday, 26 August 2009, 11:00-12:30

SRINIVAS CHENDI: OK, we're going to start in a minute. Please take your seats. Welcome back. I would like to invite Philip Smith to explain what APOPS is and why you're here in this room today. After that, he's going to pass the chair to Tomoya Yoshida, who will be chairing the session this morning. Thank you.

PHILIP SMITH: OK, good morning everyone. Sorry for the delayed start of this APOPS session. Before we get into the presentations, I just wanted to quickly cover what APOPS is, as Sunny was saying.

Just a quick background: the three of us are chairing APOPS, and we pretty much try to organize the operations forum that we have here today. Myself, Philip Smith; Tomoya Yoshida, sitting on my right here, who will be introducing the speakers for the morning session; and then at the back of the room, Maz, who will be chairing the second session. We have a website you can go to, and a mailing list if you want to participate. APOPS has been a part of the APNIC meeting for quite a few years now. We started out as a mailing list, but we wanted to do a little bit more than a mailing list, so what we did was gather the operational presentations, those with operational interest, into one general session rather than having the special interest groups as we had before.

So, there was a general call for contributions, and the SIG chairs worked with the APNIC chairs and the Secretariat. Really, without much further ado, we'll look at the agenda slide and I will hand the introduction of the speakers over to Tomoya.

TOMOYA YOSHIDA: My name is Tomoya Yoshida from NTT Communications, and I am one of the co-chairs of APOPS. I will be the chair for this morning's session. This time we have five presentations. As you can see, we have DNSSEC deployment in New Zealand from Andy; IPv6 address text representation from Seiichi; the third one is careful planning is needed for introducing NAT, from Ashida-san; the fourth one is challenges in large IP network deployment, from Echo Liu; and the last one is the strategic value of introducing v6, from Cancan from China Telecom. We have just one and a half hours, so we have about 15 minutes for each presentation. First of all, we have Andy, who'll be talking about DNSSEC deployment in New Zealand.

ANDY LINTON: Um, well, I'm really pleased to be here. As my affiliation on the program says, I work at the University of Wellington. One of the other things I do in New Zealand is I'm a director on the Domain Name Commission, which is the organization that sets the policy for .nz. Our structure is a policy-setting group, the Domain Name Commission, and we have a registry operation, the New Zealand registry service, which runs the nameservers and registry servers for New Zealand. We've had a long-held view that we should go to DNSSEC. We took the stance that we would wait until NSEC3 was completed, so that zone walking would not be something we would have to contend with, and that we would wait until the root was signed before we deployed.

At the ICANN meeting in Sydney, there was a very strong and clear indication that the root would be signed by the end of the year, and if that doesn't happen then, it will be fairly soon after that. So we've moved from a position of "if the root gets signed" to one of "when the root gets signed", and we think that will be close to happening this year or very early next year.

As I say, I work on the Domain Name Commission, involved in setting the policy for the New Zealand domain. It is not my job to do the implementation; that's something that our registry service will do. So we're thinking about what the policy issues are, and we're looking at some things that would perhaps be useful to ask questions or challenge people with, some ideas to say: these are things that you may need to think about if you haven't already done that.

So, the technical implementation: I'm going to talk a little bit about what we're going to do, but I'm not going to go into a lot of detail. Basically the thrust is that we will use a piece of software that's being developed at the moment called OpenDNSSEC, and you can see the URL at the top of the page there and you can go and have a look at that. That's open software being developed for Unix and Linux systems, and I believe it also runs on some other platforms, and it can be scaled to various levels and so on. The thing that's important is the diagram here. At the top, we have the unsigned zone, which is roughly where we are now, and what we would do is add in some extra complexity to this.

We have the Signer Engine and the KASP there, which data will eventually flow through to give us a signed zone. There are some side processes there.

The thing that I would like to talk mostly about is this component here, at the bottom right.

At the bottom right of the slide is the key and signing policy, and that's what I'll spend some time on this morning. OK, just so that people are aware about this OpenDNSSEC software: I'm not going to go through these, but there are a number of features in there that will make this thing easier for people. While I was waiting this morning, I actually downloaded the software from the site, and the actual process of configuring and building it is relatively straightforward for someone who has done some software installations before. We believe that this is where our future lies; we will be using this process. It works with versions of UNIX, it works with multiple zones, and you can have different policies, which is useful, because we have a number of domains at the second level in New Zealand. We have the domain, which is a name where people can't get a registration unless they're an official Government department or Government agency.

But we also have other domains which are open domains where people can register, like, which is similar to .com. And there are other features in here that we're going to talk a little bit about.

So the things that are exercising us, the things that we're concerned about, are a number of issues with policy. When you decide that you're going to do DNSSEC, you really have to be aware that this is a one-way thing. Once you turn DNSSEC on, it's not going to be a trivial process, if possible at all, to meaningfully turn it off again. So, when you do this, you get into the process and then you have to commit and you have to stay there.

We have some new stuff to deal with. We have keys, and all of the stuff that goes around with key management. Once we've done this and the zone is signed, people will want to know that there's a process that says: yes, this has been signed, and there's not a way for someone to subvert the process and get information in there that's incorrect or malicious. And I think it is probably fair to say that there are few people who fully understand DNSSEC. I probably include myself in that group, because I wouldn't claim to be a DNSSEC expert. I understand the principles, but I think there is a limited resource there, and this is quite similar in a way to some of the problems we have with the deployment of IPv6, in that plenty of people say, "Oh, yes, we should do it."

But one of the issues is training people and having people understand how it works. And because we have the complexity and the extra hardware and extra management processes, it changes the structure, and so we increase the cost of the process.

So in New Zealand, we have a ccTLD with registrars looking after domain name registration, and the registrars have new things to think about. Today, when you come to the registrar and say you want a new domain name, you're in a position to say "Here's my name and address", they check that the domain nameservers are working, and that completes it. With DNSSEC, the registrars have to do more work than that. You can't simply make a back-up of the zone and restore it; the process becomes much more complex if you have a failure and you want to restart things and change things. And our registry will have certain standards for keys, but so do the registrars: they need to have standards for managing keys.

What are those going to look like?

So, if you're a registrant who has asked for a domain, and you go to the registrar and they hold the keys, what happens if you want to change registrar? If you've registered with one registrar and you want to move to a different one, how would you deal with the keys? Can you move the key? Or do you have to go to the new registrar and create a new key? Should the registrars cooperate with the key rollover? What if they don't? What if they've gone out of business? Should there be a process for keys being held somewhere so that if something bad happens, there is a way to get at the keys? And what do we do when registrars fail? You can sort of think: what does all of this have to do with the number space, which is what all of us are interested in? But when we start implementing DNSSEC for the forward zones, for names mapping to numbers, there will need to be a meaningful process for doing the reverse domains as well, and so that will be one that will happen for ISPs, for Regional Internet Registries, Local Internet Registries and the National Internet Registries.

That is something that they'll have to deal with.

So, for registrants, for the end-user, for you and me who register a domain name: you can't change your mind. You decide that you're going to do this and you're stuck with it. So, you need to be careful that you don't oversell this, and we need to be careful that we're not saying that this will solve all of the problems when we're going to create some new ones. And certainly, in the early period, what's likely to happen is we're going to see a very restricted number of registrars you can go to. Currently in New Zealand, we have a couple of hundred registrars. It's likely in the early days we might have one or two who support DNSSEC, in the same way that we have a small number who support IPv6 records.

If you have a registrant who is holding a private key, do they have to go to the registrar to do that? Will the registrar be the weakest link here?

So, it breaks the current model of the way many things work at the moment. And what happens when your key is compromised? In the same way as when your passport or your credit card gets compromised, what happens then? That has to be worked out in advance. It's very interesting to think about this as a technical problem, but if you think about it just as a technical problem, where you go into the domain nameservers for the ccTLD, or even for your own domain, and do all of the work that gets the zone signed, there's a set of problems that lie outside of that which are the things that will really make or break this. And so, some technical considerations: how often will your ccTLD allow key rollover in the domain names?

Why is that important? If you have a domain name that's important and you're adding in new information, every time you do a zone update you have to think about how that gets propagated, to make sure that it is signed with the new records, which are validated, and you can check them out. And what's the equivalent in DNSSEC of checking delegation servers? We haven't got all of the answers to this, but we know that these are questions that need answers. And we think we're not unusual; these questions are going to have to be answered, so these are things that you can think about and say: have we thought about this, how do we deal with this? Now, when you come to APNIC meetings, one of the things that's promoted and pushed by APNIC is that we do training, we help people understand how things work.

We're going to have to do this sort of thing for DNSSEC. We're going to have to think: who is going to be responsible? Is it the registrars? Is it the registry? Whose job is it to make sure that DNSSEC comes in and starts making the domain name system more reliable and robust? Do we just educate, or do we do promotion? Do we go out there and say, "It would be fantastic if you did this", or say, "It's available, come and get it"? Some of the other registries that have done DNSSEC have pushed this; they've pushed and said it is a good idea to think about doing this.

So, what resources are the registrars going to need? As the core registry in New Zealand, we have an agreement with our registrars that says: you will meet these standards, you will behave in this way ethically, and so on. We're going to impose new standards on them; we're going to change their cost structure. How are we going to deal with those things? Will there need to be, before someone can be a DNSSEC registrar, a process of accreditation, where we say: these are the contact details for abuse complaints, and we're not going to allow people to hold on to domains and hold them to ransom, and so on? Do we have a special process there?

And then we're pretty much at the price. This clearly changes the pricing structure: you increase the complexity, there's more stuff to manage, and the costs go up. What does that mean for prices? Most TLDs operate on a cost-recovery basis, so they try to get the money back to cover their operations, and so it will actually be them pushing that cost back to the user.

Could we charge more for DNSSEC? So, we've got something that we want, that is desirable behaviour, and we want them to take it up. How do you persuade them that it is something to do? One method is that you can mandate it. That's going to be hard. Another method would be to make it cheaper. So perhaps we get to the point where we say: if you've got DNSSEC, your costs will be $10 a year, but if you haven't got DNSSEC, it will be $15, because you're more likely to cause trouble on the Internet. Those are things we haven't made decisions on, those are things that we're thinking about, and I think those are things that we all need to have some thought about.

So we haven't got answers for all of these; we have the questions. I'd love to hear if people have a set of answers for me, but I would also like you to go away and think about the questions for your own domain, whether it is at the domain level, whether it is for your enterprise or for your registry, or the whole range of things that are there. So, that's it from me. Any questions now?

TOMOYA YOSHIDA: So, are there any questions?

ANDY LINTON: I'm happy to talk to people afterwards if people want to talk to me; I know people are sometimes shy. So I'm very happy to talk or discuss, either directly or by e-mail. I would love to have a discussion.



SRINIVAS CHENDI: May I request all of the other speakers to come onto the stage.

TOMOYA YOSHIDA: So the next speaker is Seiichi Kawamura, who will be presenting about IPv6 address text representation. Could you please come to...

SEIICHI KAWAMURA: Hello everybody. I'm from NEC Biglobe; we're an ISP in Japan and we have IPv6 in our networks. I'm going to take 15 minutes of your time to talk about the problems encountered with IPv6 address text representation.

First, I'm going to present the problems, and then I will go on to talk about how operators can get around those problems, or at least try to get around them.

This is a brief review of how IPv6 addresses are represented in text. The examples in blue, except for the very last one, are from actual implementations. These are all the same address from different implementations, and you can see it is really hard to tell. But RFC 4291, which is the IPv6 address architecture, says that all of these are fine. So, what is the problem with this?

There are a couple of problems. One of the biggest things we had is searching for a particular address in files: for example, text files or Excel or PowerPoint files or whatever. You try to find an IPv6 address in a file and you're very unlikely to match on the first try. That one single address can be written in many different ways, and this is the same in text files and diagrams and whatever. Operators, in their daily operating routines with IPv4, have to search for addresses in these files, and I don't think that's really going to change much with IPv6.

And what's more depressing is that it is not just engineers that have to deal with IP addresses. There are people who are not engineers who also deal with IP addresses and manage addresses, so it is going to be a problem for them too.

And here's a quick example of what happens. This is a sample trace route from my network to someone else's network. The prefixes have been changed to the documentation prefix.

This is the actual traceroute, and say I would like to take the address in the grey box and search for it in a file, like an address management file, in Excel or whatever. If those files were written in a way that did not match that representation, the search fails. This is just a simple example, but you start to see that it can cause problems in times of troubleshooting.

And here's another example. You might have a directory full of router configurations and you want to search for an address to see on which router it was configured, so you'll have to try many possibilities to actually find the address. This is one of the things I'll talk about later: how to get around it.
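One way operators can get around this, sketched here in Python as an illustration (the regular expression is a deliberate simplification, and the `find_address` helper is hypothetical, not from the talk), is to normalize every candidate address before comparing, instead of searching for one textual spelling:

```python
import ipaddress
import re

# Rough pattern for candidate IPv6 literals; a simplification for this sketch.
CANDIDATE = re.compile(r"[0-9A-Fa-f:]*:[0-9A-Fa-f:]+")

def find_address(target: str, text: str) -> list[str]:
    """Return every token in `text` that denotes the same IPv6 address as
    `target`, however either one is written (zero-padding, ::, case)."""
    wanted = ipaddress.IPv6Address(target)
    hits = []
    for token in CANDIDATE.findall(text):
        try:
            if ipaddress.IPv6Address(token) == wanted:
                hits.append(token)
        except ValueError:
            continue  # looked address-like, but was not a valid IPv6 address
    return hits

config = """
interface eth0
  address 2001:0DB8:0000:0000:0000:0000:0000:0001/64
logging host 2001:db8:0:0:0:0:0:1
"""
print(find_address("2001:db8::1", config))
```

Both spellings in the sample configuration are found from the single compressed query, which is exactly the property a plain text search lacks.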

Another problem is with logs. Logs from different applications will show different output. For example, application A might take an IPv6 address and spell it out in full, while application B will take the same address and make it very short. And here is a quick example. The first red box address is from a kernel; the second is from a routing process. It's the same address, but it is shown differently. And if you try to take these addresses and match them with some firewall logs, the first thing you're going to have to do is take the addresses and reformat them into the same format, or else you're going to have a hard time finding the actual logs that you're trying to look at. So this gives me a very big headache.

Configuration auditing can be a problem too. If you switch to a different router and try to find an IP address that is configured, the same address will show differently. So this will be one of the confusing things.

So, we thought it would be very nice to have a canonical format for IPv6 address text representation, and I think it would be good if it was fairly widespread. IPv6 implementations are already out there today, and changing everything is basically impossible. So, we tried to find what is most common, and we did find one, and I'll talk about that later. We also want the canonical format to be compliant with RFC 4291, and also, operators are human, so we think it would be better if the canonical format was human friendly. So we wrote an informational document and submitted it to the IETF, and this is not just for developers, but for software engineers, everyone. The title of the draft is draft-ietf-6man-text-addr-representation-00.

The original title was draft-kawamura-ipv6-text-representation-03. What this document does is talk about the problems that I've just talked about. It also defines a canonical format: if you ever have doubts, follow the canonical format. It's not just for developers, as I mentioned earlier; it's very important for operators to know that there are differences in implementations. And it's also important to note that this is about text representation; it is not trying to regulate input. For example, if I configure a router and I put in an extra zero, I would still like the router to take the address, and then represent it in the canonical format. And so, this is the canonical format.

There we go.

And there are six rules. Rule number one is to omit the leading zeros in each 16-bit field. You'll write, for example, 2001:db8::1. Number two, use :: in the place that shortens the address the most. Some implementations choose whether to put :: on the first string of zeros or on a latter string of zeros; you want to use it in the place that shortens the address the most. And if there's a tie-breaker, number three, then shorten the former.

Number four, use :: to shorten as many consecutive zero fields as possible. If you have four fields of zeros, don't shorten just two of them and leave the rest; shorten all four.

Number five, use :: only when there are two or more consecutive zero fields. And number six, use lower case. This has been checked with traceroute and ifconfig, or ipconfig, or whatever your preference, on major PC operating systems, and they will show this kind of output.
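As an aside not from the talk: Python's standard `ipaddress` module happens to produce a compressed, lower-case spelling that follows these same rules, so it can be used to check or generate the canonical form. A minimal sketch:

```python
import ipaddress

# Three spellings of the same address, as different implementations
# might print it in logs or configurations.
variants = [
    "2001:0db8:0000:0000:0001:0000:0000:0001",
    "2001:db8:0:0:1:0:0:1",
    "2001:DB8::1:0:0:1",
]

# str() on an IPv6Address yields the compressed, lower-case form:
# leading zeros dropped, the first of the equal-length zero runs
# shortened with ::, and lower-case hex digits.
canonical = {str(ipaddress.IPv6Address(v)) for v in variants}
print(canonical)  # all three collapse to one spelling
```

Note how the tie-breaker rule shows up: the address has two equal runs of zero fields, and the former one is the one that gets compressed.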

And major routers do have the same implementation, or something that comes close to it.

And since this is an operators' meeting, I have some hints for operators, and this is actually what we do in our ISP, or what we try to do. What we do is ask vendors how IPv6 addresses are represented. The easiest way is to point to the draft and ask: is this compatible? The thing is to be aware of how the implementation is done, and this is not just with routers; it is also so with software development and open source. Another thing we try to do is avoid using zero fields in the first four fields; we reserve those for loopbacks or DNS, and this avoids any :: confusion. And also, we try to ask everyone in our company to try to represent addresses in the same way. You can't always do this, but we try.

But since we can't force them, we try to make tools: tools that help write IPv6 addresses in the canonical format. So that's about it. Is there time for questions?

TOMOYA YOSHIDA: Any questions?

SEIICHI KAWAMURA: If you're interested in knowing about what kind of implementations do what kind of representation, I'll try to look through a few and maybe show you examples, so please contact me during lunch or during tonight's social. And I can talk to you about this. Thank you.

TOMOYA YOSHIDA: The next presentation is from Hiroyuki Ashida.

HIROYUKI ASHIDA: In this presentation, I'm talking about NAT. Before the subject, let me talk about my company a little. My company provides TV broadcasts, broadband Internet access and telephone. My job is technical design and construction of the DBS network and the backbone. We are here on the slide: about one million household users.

Why am I talking about NAT? Most customers are using IPv4, and all of those customers use IPv4 private addresses, with large-scale NAT provided since 1998.

We have 10 years of experience with large-scale NAT, and I'm listing some proposals on CGN based on my experience. There are many ISPs examining the introduction of LSN before IPv4 exhaustion. Today I give some technical advice on LSN from real-world experience: technical and quantitative knowledge, based on analysis of actual traffic and actual equipment.

I have three items. I have to say first: I don't always recommend LSN. I think IPv6 is the best solution for IPv4 exhaustion. Our customers are in local activity.

In the future, we are going to share IPv4 addresses among our customers, and we will have to keep providing our services with the same quality.

My experience is based on this model, after 10 years.

You see these pictures; they are famous pictures for LSN. This is Google Maps. These pictures ask us: how many sessions should we provide for our customers?

Using this technology, I've captured the traffic. This is a regional POP on a weekend night, with over 7,000 customers. I counted 360,000 sessions. One user uses about 50 sessions on average.

Next, I counted in different network sizes, different services, different areas. If the size is different, the average number can be different; I think this is an effect of statistical multiplexing. On the other hand, I found no correlation with access speed.

I have seen big differences between areas. In area one, many young people are living. Area two is a regional area where more people access the network.

This slide shows an average of 50-300 sessions per user. However, there are different conditions: if the block is small, there are many sessions per user. And I have seen a big difference between these regions, and there is no correlation with access speed.

If we provide this, we should introduce a very large-scale system. This result is very close to my experience. We should also consider the routing issue. If you separate the network, you can see this. However, if it's mixed, you will have to use policy routing.

This means we cannot use this approach. The next issue is about IP addresses. If the ISP provides LSN, the ISP will use 10/8. If the customer also uses 10/8, there might be a duplicate. It may be no problem, because most home routers use 192.168. But in some cases, you cannot use 10/8: it has already been assigned to existing hosts, for example cable modems and VoIP terminals. And customers generally use 10/8 for VPNs or enterprise networks.

This slide shows my actual experience. This user accessed a server through the NAT and was denied access, because she was going through the NAT with a shared source IP.

For example, with N users sharing one IP address, address consumption speed slows down to 1/N. But address consumption will not stop.

Say we consume 1,000 addresses per month. If 50 customers share one IP address, the consumption rate becomes 20 addresses a month. A /24 then works for only about 12 months; a /21 works for about 100 months.
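The arithmetic on this slide can be sketched directly from the figures above:

```python
# Figures from the slide: 1,000 addresses consumed per month, and
# 50 customers sharing one public IPv4 address behind the LSN.
growth_per_month = 1000
sharing_ratio = 50

# N-to-1 sharing slows consumption to 1/N of the original rate.
shared_rate = growth_per_month // sharing_ratio
print(shared_rate)  # → 20 addresses per month

# How long a given address pool lasts at that rate.
lifetime = {p: 2 ** (32 - p) // shared_rate for p in (24, 21)}
print(lifetime[24])  # → 12 months for a /24 (256 addresses)
print(lifetime[21])  # → 102 months for a /21 (2,048 addresses)
```

The 102 months for a /21 matches the "about 100 months" quoted in the talk; sharing stretches the pool by the sharing ratio, but it never stops consumption.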

To summarize the presentation: the management of port numbers, the routing problem, and the management of IP addresses. If you introduce NAT, these affect the implementation. And deploy IPv6 together with it.


TOMOYA YOSHIDA: Any questions? Everyone is very quiet. This is very important. Are there any comments for Hiroyuki, from the people who have experienced this?

HIROYUKI ASHIDA: I have more detailed data on my laptop. If you have more questions, you can ask me after the session. Thank you.

TOMOYA YOSHIDA: Thank you. The next speaker is Mrs Echo Liu. And the title is challenges in large IP networks.

ECHO LIU: I work on network design, and we work on many networks, helping customers design and manage their networks. So today I'm very pleased to share my experience about network deployments.

Here is my agenda. First of all, the challenges we face in large network deployments. Then we will have a summary of typical practice, and typical management levels. I will talk about SNMP-based network management. Finally, I will speak about the Command Line Interface. With the growth of networks, we are expanding IP infrastructure into hundreds or even thousands of routers. So the challenge is: how do you manage this network?

The common solution here is simply divide and conquer. We divide the network into more manageable modules. You can divide your network into north, east, south and west areas, for example, and each area can be managed by different groups.

Ideally you would use only one vendor, but problems can arise when there are many vendors. If the languages are different, how can we understand each other?

There are two ways. One is a common international language, and the other is a translator who can help.

In network management, these are SNMP and the CLI: SNMP as the common language and the CLI as a translator. Because network operators work at this level, with the CLI they will need to know the different languages as well. The two can be combined in order to get a better result.

SNMP is usually used for PM and FM, performance and fault management. For configuration changes and troubleshooting, the CLI is preferable. The CLI is used for day-by-day management and applications.

Because SNMP is the more general and more widely used, I will not spend time on it today. Instead, I'll focus on multi-vendor issues involving the CLI. It can be tricky.

The CLI can be a good translator. To get the configuration, different vendors use different commands. Vendor 2 uses 'display current config'; vendor 3, 'admin display-config'; vendor 4, 'show running'; and vendor 5, 'show configuration detail'. So operators need to pay attention to these differences, and this can make their jobs difficult. Here is a sample of CLI outputs.
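The "translator" idea here can be sketched as a simple lookup table. The vendor numbers and command strings below follow the slide, but treat them as illustrative placeholders rather than an authoritative command reference; `command_for` is a hypothetical helper:

```python
# One abstract operation ("show the configuration") mapped to each
# vendor's own CLI syntax, per the slide. Placeholder strings only.
SHOW_CONFIG = {
    "vendor2": "display current config",
    "vendor3": "admin display-config",
    "vendor4": "show running",
    "vendor5": "show configuration detail",
}

def command_for(vendor: str) -> str:
    """Translate the abstract show-config operation into one vendor's command."""
    try:
        return SHOW_CONFIG[vendor]
    except KeyError:
        raise ValueError(f"no show-config command known for {vendor!r}")

print(command_for("vendor3"))  # → admin display-config
```

Tools in this space essentially maintain such tables per operation, so the operator asks for "the config" and the tool speaks each vendor's dialect.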

Vendor 5's configuration output includes lots of information. If you want to construct a network model, or systematic data for network optimization and simulation, vendor 5's configuration output is very good.

We have an example here where sometimes the same word has different meanings with different vendors; they can be quite different. In some cases, you have to be extra careful.

In this slide, you can see R1, R2, R3, R4. In vendor 1, this means path protection. In vendor 2, it means all the nodes can automatically create protection against link failure.

And in vendor 2, in addition to the others, you also need to configure R1, R2, R3.

And you can see from this picture: some vendors treat MPLS-TE tunnels as interfaces, while others just call them Label Switched Paths, and they configure the protection level differently.

Some vendors assume the unit is kbits per second, where others assume bits per second; if you configure a certain value, on one vendor it might mean one meg. So we have a lot of challenges from the CLI, and a lot of multi-vendor issues. Let's talk about using the CLI to discover the topology. Each router has an OSPF database; it can show the whole OSPF database, and you can parse it. You can look at this picture for the discovery process: R1 and R2, 3, 4, 5, and up to 6.

So this is a case where you can use the CLI output to discover the topology. This is another example of how the CLI can discover an entire MPLS-TE topology: with vendor 1, you can show the TED database extensive.

We can also get other useful information from these outputs. All this information can be used for further modelling.

After you get the topology information, you can draw the graphic on this page. You can see from the pictures there are four layers there. Making use of this, in a multi-vendor network, we can discover the topology with no problem.

Furthermore, you can also extract some internal state by parsing these outputs, and you can monitor the tunnels.

To sum up, I will use this picture to draw a summary. You can use the CLI method here: use the CLI to discover the configuration, then go to the next step of network optimization and simulation. You need to verify the result against the network, and after you make sure everything is OK, then you can deploy it.

This is basic network management using SNMP. OK, that's all. Thank you.


TOMOYA YOSHIDA: Any questions? It's so quiet. It's hard work for everybody. Is there any good solution to the configuration issue? How about the people who have experienced this?

SEIICHI KAWAMURA: Hi. That was a pretty interesting presentation. We don't run TE tunnels in our network, but yes, we run a multi-vendor network, and vendor A won't have a particular command while vendor B will have a different type of command. Even the simplest and oldest implementations can be represented differently by each vendor. I think it's worth operators knowing the differences, especially as we have to go and learn each and every implementation; that's part of our job right now. Can it be made better? I don't know. Can NETCONF make our life better? Maybe, maybe not. I won't get into that, but I think the same as you.


ANDY LINTON: You've identified many of the problems of using the CLI, because each set of commands is different, but you actually have to create tools to match this. For some of the show commands you ran, you can drive them with a piece of software and say: I want to see the config, so issue this command. The other piece that seems to me to be in this area is tools like RANCID, which are useful for doing some of this work. And I think another piece that is certainly in this area is the work you might want to look at with RPSL, the routing policy specification language, which deals with this kind of problem: you can express the policy in a canonical way and then use it to build router configs. I think it's not a complete picture, but there are a number of different ways to do this.

Of course, you exchange one problem for five different tools that you have to learn. But they make life easier because, by geography or by function, you have a particular tool that helps you with a particular set of functions. Always interesting. Thank you.

ECHO LIU: Any other questions?

DAPING LIU: Hi. Thank you for your presentation. Could you show me the previous slide? We get a lot of information from the engineering aspect. Do you have any idea of how to summarize this information for management? We hope that, going up to the executive level, we can still find this information. Just a comment.

ECHO LIU: I hope my manager can answer your questions. We have to discuss these issues offline. Maybe tonight I can talk. OK, thank you.



The next speaker is Cancan Huang.

CANCAN HUANG: Thank you. I'd like to thank APNIC staff for helping me when I was in trouble. OK.

I'm pleased to be here to have this opportunity on behalf of China Telecom to share all our experiences on the subjects of IPv6 deployment with you.

Well, the title of my presentation is IPv6 in China Telecom: Policies and Trials. As we all know, China Telecom has one of the largest and most complicated networks in the world, so it will be a huge project for us to deploy IPv6.

I don't have a lot of time in the next 15 minutes, so I'll make a start.

OK, let's look at who will be involved: equipment providers, service networks, and service terminals.

This is a project which will last for many years, so it's necessary for us to formulate a plan and detail everything at each stage. Let's see the next two years. CT's goal here is for the IP backbone and about 1-2 self-running services to fulfil end-to-end IPv6, and for sample MANs to be able to provide end-to-end IPv6 services by 2011. In the progress of evolution, there are a number of principles we have to work with. First: protecting investment, keeping the user experience, reducing impact on the network and minimizing price. Second: fundamental networks come first, service networks soon after, and CP/SPs come later. Conducting tests at selected points comes first; popularizing all round comes after.

And existing services need to be seamlessly moved.

First, let's look at the network part. For the backbone, we use dual-stack for transition, and for the metro network, we use dual-stack transition too. A small number of subscribers will use tunnels in case they don't have IPv6 ability. For a small number of applications, protocol translation mechanisms will be put into use. Old DNS software systems need to be upgraded to support IPv6 records, and newly built DNSs are required to support IPv6 access. Newly built dual-stack network management systems should support the IPv6 MIB database. The AAA system doesn't need to be changed; it only needs to support IPv6 subscriber authentication through a software upgrade. Now the details of the deployment policy for services. New services should be IPv6 enabled as early as possible. Those platforms that directly exchange information with client terminals should be IPv6 ready as quickly as possible.
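One small, concrete piece of that DNS upgrade is simply publishing both address record types. A hypothetical zone fragment (the names and addresses here are illustrative only, not China Telecom's) might look like:

```
; A dual-stack host publishes both an A (IPv4) and an AAAA (IPv6)
; record; older DNS software must be upgraded before it can serve
; or resolve the AAAA type.
www  IN  A     192.0.2.10
www  IN  AAAA  2001:db8::10
```

A dual-stack client will then query for both types and can reach the service over whichever protocol it has.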

Back�end management systems are not so urgent.

Mobile core networks and soft-switching networks should be transitioned to IPv6 gradually, according to the requirements of services. Newly built IMS systems should be equipped with IPv6 directly.

For third parties, CT should push the government to work out mandatory policies to give incentives to the CPs/SPs to introduce IPv6.

And for IT support systems: existing systems should be equipped with IPv6 gradually, and new systems should support IPv6 directly. For existing terminals: set-top boxes should continue using IPv4, and cell phones that don't support IPv6 should continue using IPv4. Soft terminals can support IPv6 by upgrading, and new terminals can enable IPv6 functions directly. We have done a lot of work here.

First of all, we have participated in the CNGI construction which includes core network, management centre, exchange centre and residential network. And we have developed many applications on this platform, including multimedia conference, radio surveillance system, and so on.

CNGI is a test network, so applying this to the commercial network is very important. We have just finished a trial in which a certain network has been constructed to support IPv6. In addition, we are designing and building an IPv6 network for the World University Games to be held in 2011. China Telecom not only tests the network here, but also develops products. The industry chain includes three parts: content providers, network providers and end-users. I think the three can be deadlocked: each keeps waiting for the other two parts to go first. China Telecom has decided to break this deadlock and deploy IPv6, and to help the other two parts, content and end-users, to deploy quickly and easily.

First, let's look at a picture. Before the public can explore the IPv6 Internet, they have to set this up, and there are too many options to choose from; the subscriber's operating system may not support all of them at the same time. The subscribers do need assistance to help them solve this problem, and we provide our customers a product named the intelligent communication assistant. It helps users set this up automatically and get whatever information they want on the Internet, using either IPv4 or IPv6. But some customers don't like to install it, and sometimes it is not convenient to install it on a cell phone. In this situation, how can IPv6 users browse IPv4 content? China Telecom can provide a translation gateway.

Through the gateway, the customer can get an IPv6 website translated from an IPv4 website: the customer requests the IPv4 website and is transferred to an IPv6 website translated from it. We're now designing products to help providers translate information from IPv4 to IPv6 easily and quickly.

Well, to sum up: we first covered the deployment policies for IPv6, that is, who will be involved, CT's goals for the next two years, the principles, and the detailed deployment policies. Then we talked about what we have done with IPv6 deployment, including CNGI, the commercial trial, product development, and the intelligent communication assistant. Thank you.

TOMOYA YOSHIDA: Thank you very much. Any questions?

DAPENG LIU: Thank you. I'm from China Mobile. I have a question regarding applications: most applications nowadays can only support IPv4, not IPv6, so what is China Telecom's consideration of this issue? Even though the network can support IPv6, if the application cannot support it, the customer cannot use the application. Thanks.

CANCAN HUANG: First, we should push the government to make some policies to give incentives to deploy IPv6, because the content providers are not controlled by China Telecom; they must deploy their own applications by themselves. But we can ask the government to help us stimulate them to provide applications supporting IPv6.

And, second, as I showed at one point, we have to do something to help the content providers translate IPv4 information to IPv6 information, and we are now developing this IPv6 upgrading helper.

DAPENG LIU: What's the difference between the two?

CANCAN HUANG: I think it's similar: HTTP and URLs. Not all applications, just websites for now. We plan to expand it to other applications.

RAN HUNG HWANG: OK, thank you.

CANCAN HUANG: Thank you.

RAN HUNG HWANG: I want to ask you: how much of an increase do you expect from deploying IPv6 within the next two years? The revenue increase for IPv6? Your income?

CANCAN HUANG: It's out of my... I don't know! I'm just an engineer.

RAN HUNG HWANG: I have the same question and I'll follow up. Would the money be better spent on getting addresses transferred from somebody else, so you don't have to spend money on IPv6?

CANCAN HUANG: Why don't we spend any money on it?

RAN HUNG HWANG: If you can get addresses transferred from somebody else, from China Mobile, for example, will you still spend money to deploy an IPv6 network? Essentially, you're providing the same service.

CANCAN HUANG: We're now deploying. Are we going to deploy IPv6, is that your question?

RAN HUNG HWANG: No, my question is: if there are IPv4 addresses available to you, if you spend a lot of money to get IPv4 addresses, will you still spend money to deploy IPv6?

CANCAN HUANG: Yes. We think that we need to.

YI CHU: So you have technical reasons for this?

CANCAN HUANG: From the technical aspect.

ANDY LINTON: I have some thoughts on that. First of all, the question was whether you can get IPv4 addresses; I assume you can do that, but they'll be a bit more expensive because they'll be in short supply. The other question is not whether you have IPv4 addresses but whether the people you are communicating with have IPv4 addresses. If you get to the stage where there is even a reasonable proportion of people on the Internet who have only IPv6, or limited access to IPv4, perhaps you won't be able to talk to them. There's a duality: it's not just about what you can do and have, it's about what other people will be able to do as this thing grows.

I think the other thing to consider is that a number of us here, with the grey hair sort of thing, remember when dual protocol stack networks were a very commonplace thing, while there's a very large number of people in the room who have never known anything other than a single protocol stack, because that's been the common thing for them. But a dual stack network isn't anything new. People talk about the large amount of money people will spend on deployment, but if you do it in a gradual way, as we're talking about here, it doesn't have to be a major budget item. Some people identify it as being a few percent of their total IT spend. It may be different in your organization, but, yeah.

TOMOYA YOSHIDA: OK. Do we have any last questions? OK, so this is the end of the morning APOPS session. Thank you to all the speakers and questions.


SRINIVAS CHENDI: Thank you for chairing the session. There will be lunch in the Terminal Room; you can bring your lunch and join the session there if you want to, or you can stay here. We're going to have a discussion about IPv6, so if you have any ideas to share, please join that one. It's going to be in the lunch hour, so you have two options. And also, I do apologize for the delay today; the morning session overran. We're going to come back in one hour's time, back in this room, for the next half of the day. Thank you very much.


Wednesday, 26 August 2009, 1400-1530

SRINIVAS CHENDI: I would like to welcome the chair of the session for the presentation.

MATSUZAKI YOSHINOBU: OK, let's get started. This is the APOPS session and we will have five speakers. Part one is APIX update from Katsuyasu Toyama from JPNIC.

KATSUYASU TOYAMA: Thank you. I'm from JPNIC, and today I would like to talk about the new APIX forum, making this presentation on behalf of all of the volunteers. Maybe almost all of you are not familiar with it. APIX stands for Asia-Pacific Internet Exchange, so it is a forum of the Internet exchange providers in the region. The object of the forum is to share the technical, operational, and business issues and solutions regarding Internet exchanges.

So, I would like to talk about the background of this. As you know, Internet exchanges themselves have become more and more important infrastructure for the Internet, and there are many Internet exchanges in the Asia-Pacific region. But, until now, we have had no forum of Asia-Pacific IX providers to share IX technology, operations, business and so on.

You can see here a kind of model of the APIX forum.

So, some volunteers from exchange points in the Asia-Pacific region gathered last night and we had a brief meeting about this activity. They were from China, Hong Kong, India, Japan, Nepal, Singapore and so on, and there were also advisors from APNIC. We discussed whether this activity is necessary, what kind of topics we should discuss, and other things like schedule, and yesterday we finally agreed to step forward with this activity.

The scope of this activity and the discussion topics of the APIX forum are here: things like exchange point architecture and technologies for Internet exchanges, and also some operational issues. Sometimes we have to make requests to vendors, because the switches that we use are sometimes lagging and do not implement the functions that we need; this forum can be a kind of pushing organization for that.

And also, from the operational point of view, there is the traffic at the Internet exchanges: we can analyse the trend of the traffic and look at the traffic trend. We also need tools for the operation of Internet exchanges, and there are standardization and peering issues, research and development, and education. Those kinds of issues and topics will be discussed in this forum. So, we decided on the next step, and we are planning the first meeting in Kuala Lumpur in March 2010, alongside the APRICOT meeting. Rather than a very formally organized forum, we will start on a small basis: first, we gather in a small room and discuss these kinds of things, and after that, we look at the next meeting.

So, by next March 2010, we will discuss on the mailing list our charter, scope, membership and those kinds of things, and also prepare the first meeting.

Thank you very much. This is a brief introduction of the APIX forum.

MATSUZAKI YOSHINOBU: So the next speaker is Geoff Huston.

GEOFF HUSTON: Good afternoon all, I'm with APNIC. I was asked to give you an update this afternoon about AS numbers. So, I thought that I would talk to you about AS numbers... yet again!

There are only 65,536 of the little ones. Of these 65,536, you can see 31,750 of them in BGP, and there are another 14,983 out there that we can't see. Of the little numbers that are left, 9,200 are now happily residing with IANA, and there are only some left there, in case you're interested. If you like it in colours, here are the colours: that's kind of a map of the entire space. The early ones on the left are the AS numbers of the early '90s. They go from blue to red for some reason; nobody likes using old AS numbers, they like bright shiny new ones.

But the real issue is: how long have we got to go? This is a graph of the daily allocation rate of AS numbers since early 2006. We basically get through somewhere between 8 and 20 AS numbers every single day, and the long-run average is 12.3 AS numbers every single day, which is quite remarkable. There are some interesting little artefacts in there. I like what happened right at the end of last year: you kind of wonder if the slump of AS number allocations from 18 to 8 is something to do with the global financial crisis. And you kind of wonder whether, instead of looking at the stock market index, you should look at the AS number index as an indication of global economic performance!

Anyway, to figure out how long we have got: if you take the numbers and plot them cumulatively, you get a pretty colour graph like that, and the only real indication there, in the bottom right, is when everything turns into disaster. When will that happen? Sometime in February 2011, IANA will give away the last block of numbers, and the first RIR to exhaust, if nothing else happens, will be RIPE, which is going to run out of its little batch of AS numbers sometime in September 2011. That's two years from today.

So you sort of think: we should have done something about this! We should have figured this out earlier. This slide dates from March 2003, and when we did the maths back then, we worked out the exhaustion of the AS number pool. We figured, we're engineers and we're professionals, so we said: we could do some standards about this. And because the IETF is blindingly fast, we would allocate an entire 24 months to do the minor changes to BGP. The RIR policy process, of course, takes six months. Vendors, who know what they're doing and are also professionals, would do the minor changes for BGP, and by 2008 it would all be done. Yes? No! Oops! The IETF started work in 2001, and after great deliberation, made the four proposed changes to BGP and published the RFC six years later.

By God, we're quick when we put our minds to it! So, the IETF has done its work. The RIRs started a policy proposal back in 2005 and eventually got something through, and the whole idea was that it offered some clear milestones to vendors, to assist them in what was going on: 2007 to make 4-byte numbers available on request; 2009, unless you said otherwise, you would get a 4-byte number; and by 2010, four months from now, it was all meant to have been done.

Well, you know, the policy is in place, but let's see how we're going with this. What about vendors, are they co-operating? Actually, they've finally done something. In the last six months or so, we're starting to see some implementations of BGP that support 4-byte numbers. And if you want to know about your favourite vendor's BGP, the URL at the bottom gives you the full up-to-date information if you want the details. So OK, we're almost there. Three out of four.

What about you guys? What about the deployers, what about you people? Have you been listening or have you been asleep? Wake up!

So, let's have a look at the numbers and see what we should really be doing. This is the history of allocation data for 32-bit AS numbers. The red line is the total, and it looks impressive until you realize that the scale only goes up to 300. So, we've actually managed about 300 4-byte AS numbers over the whole period. APNIC has done a lot, RIPE has done a lot; ARIN has done nothing, LACNIC has done nothing. And let's have a look at how many are in BGP: 70. So, as far as I can see, things are looking pretty slim. There are 70 advertised numbers, and 221 have been handed out but we can't see them. Over that same period in 2009, we allocated 683 2-byte AS numbers and 294 4-byte numbers. So, you know, you guys just aren't doing the job.

Whoops! So, how do we help you do the job? Well, the first thing is presentations like this. Go figure! Secondly, start actually upgrading your equipment. And maybe we should also address some common misunderstandings about 4-byte AS numbers and BGP. Let me address these.

"I need to upgrade all of my BGP for 4-byte AS numbers." Do you need to do that? Of course you don't. If you have a 2-byte AS number, leave it alone, it just works. All of the 4-byte numbers work through you through tunnelling. Do you need to upgrade your BGP systems? No. Only if you're deploying a new AS number in a new network do you need to worry about this. Everyone else can go back to sleep.

But OK, I'm an ISP and I have customers, and they come to me with AS numbers, and someone comes to me with number 4 billion and something: can I cope? Do I need to upgrade my BGP just because a customer has a big number? No. Things technically work just fine. Even if your customer has a very big AS number and you have one of the old versions of BGP that only does 16 bits, everything will work just fine.

But... if you're running one of those fancy operational support systems that make the coffee and do whatever else you want, and if it is keyed to a database where the primary key is an AS number, watch out: things are going to get just a little bit confusing, because all of these upstreams and customers using the big numbers will appear on your router as AS 23456. And you might start confusing one customer for another, and generally, I've been told that when you start doing that, that's a very bad thing. A Bad Thing.
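The AS 23456 collision described here is easy to sketch. The following Python fragment is a toy model, not any real BGP implementation: it shows how two distinct 4-byte customers look identical to a 16-bit-only speaker, and how "asdot" notation renders a 4-byte number. The customer ASNs are invented for illustration.

```python
AS_TRANS = 23456  # placeholder a 2-byte BGP speaker sees for any 4-byte ASN

def as_seen_by_2byte_speaker(asn):
    """What an old 16-bit-only BGP implementation records for this ASN."""
    return asn if asn < 65536 else AS_TRANS

def asdot(asn):
    """Render a 4-byte ASN in 'asdot' notation: high16.low16."""
    return str(asn) if asn < 65536 else f"{asn >> 16}.{asn & 0xFFFF}"

# Two hypothetical 4-byte customers collapse to the same database key:
customer_a, customer_b = 131072, 196608
print(as_seen_by_2byte_speaker(customer_a))  # 23456
print(as_seen_by_2byte_speaker(customer_b))  # 23456: the same, a bad primary key
print(asdot(customer_a))                     # "2.0"
print(asdot(customer_b))                     # "3.0"
```

This is exactly why keying an OSS database on the ASN the router reports, rather than on the customer's registered ASN, breaks once 4-byte customers appear.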

So, what else do people do in BGP? They have fun. Can I use 4-byte numbers in BGP communities? No, it won't work just yet, because we spent all of that time on the standards for 4-byte BGP and we're still working on the standards for 4-byte communities. Draft-ietf-blah-blah-blah, it's still sitting there. Any decade now, it will pop out.

What about if I upgrade BGP: will BGP crash as soon as I try to run 4-byte BGP? Oops! You know, it actually hasn't been an unmitigated success. Certainly, the latest security advisory I saw from Cisco a few weeks ago said that if you manage to cram 1,000 AS numbers into an AS path, and you need to work very hard to do that, you will find that the router will be less than happy. So, for those kinds of folks, the max-AS limit statement is your friend, and if you can read the fine print, that's the advisory. But it's not just that. BGP is a very weird protocol: if you tell me something that I don't understand, I'm going to have to kill myself! Because every time BGP sends an update that's weird, I send you back a notification saying "that was awfully weird", and then I shut down the session and die.

Then we start up the session again. You send me the same update. I don't like it. I shut down the session and die. This will go on forever! And we have noticed that there are some issues in the 4-byte and 2-byte transition, in particular with the AS confederation elements: if you're cunning enough to get one into the 4-byte AS path, bad things will happen.

On the other hand, some people have also noticed, 20 years later, that if you manage somehow to get AS 0 into the update path, you die. That's a bad thing, and if you tell me a bad thing, I have to shoot myself! So, BGP itself is not exactly the world's cleverest standard, and we've actually started to figure out that maybe we should do something else: if you tell me something bad, maybe I should just not listen. It's a lot easier than killing myself, and that standard is coming out any decade now.

What about if I see AS 23456 and I'm a new, modern 4-byte speaker? That should never happen, should it? So, if I see it, is the Internet going to crash and die? Get real!

So, here I see on the 4-byte system the number of people announcing 23456. These people are weird. Who are they? They're down there; pick up the presentation pack. But if your name happens to be JSC Telecom... what are you doing? Similarly, TKT Telecom, and Bharti Airtel, one of ours I believe. So yes, 23456 does exist. It may be something abnormal, but quite frankly, it doesn't make a difference. BGP still works, oddly enough. So calm down, don't worry. And of course, we haven't really come of age unless we see bogons. What about 4-byte bogons, have they got enough to start lying yet? Of course, we're professionals, we do this for a living. And 6% of the 4-byte AS numbers announced are bogons.

DIANET, whoever you are, don't do that. 207690136 is too big a number. Stop now.

So, if you want more details, rather than my ranting, go and read all of the following things up here. In particular the RFC, great reading. And the RFC about the RFC, trying to fix the bits they got wrong, which is the "bis" one. There's also an Internet-Draft that goes through this in great detail, some reports on how long we have to live before we all die, and a wiki to add your information. And that, I think, is my 15 minutes.

Thank you very much, any questions?


TOMOYA YOSHIDA: Could you please go back to the slide with question number three. I believe that most vendor equipment implements the draft; I believe that Cisco is ahead of the other vendors?

GEOFF HUSTON: That is correct. Some of the vendors have leapt ahead and put some communities in. However, if you're using a community for me to send a signal to you, even if I'm a 2-byte BGP speaker, I need to know about 4-byte communities, so I need to upgrade my BGP to know about the full communities to send it to you, so things do get a little bit weirder than you think.

TOMOYA YOSHIDA: Yeah, that's why I wanted to clarify that point. And another point: what about the support systems? We have to do a lot of work there.

GEOFF HUSTON: As I said here, operational support systems, things that configure routers, things that describe the network more than the router itself need to figure out how to handle large numbers. Absolutely. Yes.


GEOFF HUSTON: Thank you.

MATSUZAKI YOSHINOBU: So the next presentation is George Michaelson. The DNS systems?

GEORGE MICHAELSON: Yes. So, this is where we do the laptop dance and the next four minutes are lost as I try to get this to work. Is that displaying correctly? We're on. I'd like to talk about the day in the life data and do some comparison between the 2008 and the 2009 data. What I would really like to do is win a competition with Geoff: we've had a private agreement to see who could get the most slides into a 10-minute slot. I have ten more than him, but he has more content on his slides, so I think, technically, although I will win, he will have actually won!

So, as you know, APNIC sits high in the DNS hierarchy for reverse address management under in-addr.arpa, as master for the Asia-Pacific region, but we also provide secondary DNS for the other RIRs so they have good RTT within the region. We have three locations: Brisbane, Tokyo and Hong Kong, and these are really quite good locations.

So, we've got two forms of data: the stuff that's about us, the addresses we have in the region, and the stuff that we do for other people. And the question stands: what are these things doing? What are they up to?

This is the monitoring system, which is a product called DSC, and as you can see, this is about a month's view, and there's a regular pattern of behaviour: pretty much, in one month, most days will be the same. If we drill down and look at a week's view, you'll notice some extra data coming out of here, with interesting spikes emerging; the far right is where we lost data in the collection system. Then you get down to the level of detail of the day. What is going on here? There's something happening every day across our systems that we need to know about. And DITL, day in the life, is a mechanism that's allowed us to go back and look at DNS data, find out what is going on with it over time, and maybe do some research.

So, day in the life: it's a continuous collection of data over at least one 24-hour period, and it's organized by CAIDA. The data warehousing is provided by OARC, and the data I used here come from OARC, from ISC, and also from a company called Measurement Factory.

The first collection was three years ago and had four participants; it was really just a small local exercise. We had the fourth collection earlier this year, and as you can see, it's grown to be a really very large view of what's going on: 37 participants and 4 TB of collected data.

And it was really very simple for us. What we did was include a small passive packet capture device. It's a box well designed to fail safely if there's a power loss, so we were quite comfortable that it wouldn't interrupt the system.

This is to give you an idea of how much data we had to process to be involved in the collection. You can see, between the two graphs (sorry, blue is a bad choice), that it has just about doubled over the collection period. OK, now a brief pop quiz. If you were running a DNS system in 2008, do you think you would use the same IP address on your DNS in 2009? Well, I would; I don't really go out and randomly change the addresses of my infrastructure. So, if you saw these addresses in 2008, how many of them do you think you would see in 2009? I certainly expected that I would see a lot. Well, I was wrong. The answer is that only about one third of the addresses are actually seen persistently, over a one-year measurement, as using our systems.

There's pretty much an equal three-way split there. So, what are we seeing? This seems counterintuitive: stable addresses are not what we're seeing. So it begs the question, what is going on here? If you're doing reverse DNS, don't you think that you do a lot of it? I mean, if you bothered to invest energy to look up reverse addresses, you're not only going to do one, right? So the curve shape would probably be the usual bell curve: a lot of addresses doing a lot of lookups, a small number doing a few, and a small number doing an obscene amount. Instead, for most of the queries we see, we only see that IP address once or twice in the activity over 24 hours, and they only do one or two things; there's a declining curve of addresses that do 24,000 or 1 million; and there's a very small number, really only about 10, who do millions and millions and millions of queries.
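That shape, a mass of sources making one or two queries plus a tiny heavy-hitter tail, is the kind of thing you can pull out of a capture with a few lines. Here is a hedged sketch in Python, using made-up addresses rather than real DITL data:

```python
from collections import Counter

# Hypothetical query log: one source address per query, skewed the way
# the DITL data was: many one-off sources and a few very busy ones.
queries = [f"192.0.2.{i}" for i in range(1, 61)]  # 60 sources, 1 query each
queries += ["198.51.100.1"] * 40                  # a moderately busy resolver
queries += ["203.0.113.9"] * 5000                 # one very heavy hitter

per_source = Counter(queries)  # queries seen per source address

def bucket(n):
    """Smallest power of ten that is >= n, used as a histogram bucket."""
    label = 1
    while label < n:
        label *= 10
    return label

histogram = Counter(bucket(n) for n in per_source.values())
print(histogram)  # bucket 1 dominates; the big buckets hold a handful of sources
```

Plotting that histogram on a log scale gives the declining curve described above rather than a bell curve.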

And that was really not what I expected. It seems that with what we call infrastructure, certainly myself, but also other people I talk to, we haven't really understood what is going on here.

So, we talked a bit about the ideas, and we got a feeling about what is emerging: maybe it's firewalls and probe tests and other applications, so there's a lot of work here to find out what's really going on. The other thing is that when people talk about how many DNS servers are out there that have to be upgraded to take account of behaviours, if it really is infrastructure, the number is probably far lower than we thought. I'm suggesting that the number of boxes that do DNS in this space is very high, but the number that are infrastructure boxes buried in machine rooms is actually quite low.

By comparison, I love this graph, mainly because of the way the lines sit on the number scale. What you're seeing is the variation in the protocol used across the two years, and the thing here is that you get this amazing 10:100:1,000:10,000 ratio between true v6 and true v4. I would like to say that it was an artefact in the slide ware, but it wasn't: looking at 2008 and 2009, it was really that consistent over the two measurement times.

You know what was funny? The other thing I'm noticing is addresses that are using v6 as the preferred transport; that means infrastructure such as this is actually using v6, including v6 over tunnels. The other observation is that 6rd, which is the French initiative to have the tunnel end points in the ISP, is detectable as measurable activity in this process. That has a very strong message for people who are interested in improving v6 uptake: that kind of intervention, local to your customer base, can be seen in a global context, and it might give you signals that you can improve v6 uptake.

OK, so, is there enough v6 usage that we're going to head off the emerging problem? Well, unfortunately, no. OK, so I have some more details on tunnels.

This is the comparison of the overall v6 and v4 ratio. The top two lines show that over the three days of this exercise, we were seeing on average 100,000 unique v4 addresses, and that was completely consistent over the two separate years: no significant change in the identities using your service, by volume, in this measure. But if you look at the bottom two lines, the red is the 2008 count and the green is the 2009 count for v6. It's a log scale. We went from somewhere around the 200 to 500 mark up to 1,000: double. So, there was significant growth in v6.

However, if I compare that with the secondary DNS services we run on behalf of the other RIRs, you'll see the behaviour was divergent. They were already doing a significant volume of v6 and didn't see that surge in load. So this is not something which is consistent across the two kinds of DNS. So there is some optimism here, there's room for hope in the v6 uptake, but it is still two orders of magnitude less than for v4.

It's not just about the Asia-Pacific; these addresses came from everywhere in the global Internet. One of the lessons learned in this exercise, the first time around, was that we had too large a sample size, and we can see that there.

The other observation that I want to make is that I really don't think that we understand what reverse DNS is, and we're going to have to do a study. I am going to have to go through some of the slides quickly. I want to show something that relates specifically to the Asia region DNS. You'll notice in the curve shape, there's a huge spike on one day. Well, it turns out that that is almost totally coming from east Asia, and if you drill down into east Asia, it is coming almost completely from Japan. What we're able to expose from this is that we can measure, down to the national economy level, the behaviour that we're seeing in the DNS. Geoff, you won!

Yeah, do I carry on? Do I run on? Or do I give up? OK.

I can't talk as quick as him, despite what people say!

So, if we do the comparison against the rest of the world, the other thing that's nice here is that you can see there's a time-frame shift between people looking up the European addresses and people looking up the Asian addresses. I think this is a very strong reinforcement that aspects of what's going on here relate to the specific localities. This isn't just machines running 24/7; it has engagement with local time zones.

The other observation that I'd like to make is that we're able to do an analysis that maps the reporting down to economies, and we can actually show different relativities in the uptake of v4 and v6 by economic region. Even within the east Asia region, which is of course home to a lot of people here, we can see some emerging differences.

The other point to make is that there doesn't seem to be a strong relationship between the DNS use of addressing and allocation figures. So when we talk about how much address space we've given out, that doesn't correlate strongly with the amount of address space we see in use in the network. But I think people have to bear in mind that giving out addresses is only part of deployment; that's not the whole question here. There is the issue of to what extent people can use the addresses for services once they've been given them.

So, we think we're going to do this again for 2010. If two points give a line and three give a trend, we'd like to see continuity here. It's actually an achievable exercise because we have the technology deployed, and we have fairly significant DNS changes coming for APNIC. Thank you very much.

RANDY BUSH: Isn't this talk a little mistitled? Shouldn't it be "A Day in the Life of the DNS"? And what this is, as far as Internet measurement goes, is a very particular point of view with no control, so you don't know whether you're measuring changes in the DNS or changes in the Internet?

GEORGE MICHAELSON: You are right that it was mistitled. The presentation focused specifically on DNS, but I would point out that DITL is more general; although it has large parts of DNS information, it is not just about DNS. The data we provide is DNS-focused. To the other part, I actually have wondered sometimes what it is we're measuring, and Geoff and I have had long-running arguments about just what it is we see. But the observation of that variance in the source IP addresses querying really interested me. Because if we're genuinely sampling around 500,000 IPs in a 24-hour window, and these are not infrastructure and they vary, there's an argument that this has some relationship to actual end-users, and there may be better measures people can come up with, about what percentage of allocated networks are visible, that measure Internet growth.

It's possible. I'm not going to make a strong case, because I'm not convinced myself, but I wouldn't reject this as one of the potential measures. You're right and on the money with that question, Randy.

ROLAND DOBBINS: I think it was a very interesting presentation, and data is always good. Regarding your comments about reverse queries, we do see reverse queries, and there's actually been a lot of work done on this. There's a presentation in particular, for 2006 and 2007, where they actually gave some demonstrations of reconnaissance using that.

GEORGE MICHAELSON: Geoff Huston said that we should look at it with binary search on the source addresses querying, as that builds up with the population. The other observation is that if these people are prepared to sign the appropriate agreement, the DITL data is there for them to use as a confirmation of what they see. So, I may not be able to do the analysis, but at least I can help get the data out. Maybe you can get some people to do the work.

ROLAND DOBBINS: Absolutely, those are very good points.

MATSUZAKI YOSHINOBU: Next speaker is Ji-Young Lee.

JI-YOUNG LEE: Hello, everyone. My name is Ji-Young Lee. I'm from KRNIC. My talk is about the DDoS attacks. You may already have heard about these incidents happening in Korea; maybe not? I'll explain them shortly.

This is a DDoS timeline. When the first attack occurred, the first targets were in the US, such as the White House, the Department of Homeland Security and some others. The second attack occurred on July 7; this time the targets were some web pages in the US and some in Korea. Targets in Korea were the Blue House, which is the presidential office in Korea, the Ministry of National Defense, the National Assembly, seven more websites and the Naver portal. Since the first attack in Korea occurred then, we call it the July 7 DDoS attack. The third attack occurred on July 8, and the fourth attack occurred the next day, July 9.

The targets were mainly Korean websites, for example government web pages, banks, local portals and computer companies. Before the fourth attack, some experts and investigators found out the attack schedule and the target victims, so they announced it and warned the public to prepare for the attack. Only one home page was out of service; the other web pages were OK. This is the comparison of DDoS attacks between the past and now. Traditionally, hackers make some malicious code and distribute it through home pages or P2P sites. When computers download malicious code from a compromised home page, they are infected and also compromised.

Using email and messenger, they distribute malicious code. When the hacker sends a real-time command to the zombie PCs, those PCs send massive traffic to the victim's system. So the victim's system is flooded with traffic from the zombie PCs, and this traffic overwhelms the victim's systems and resources. But this time the attack was a little different from the traditional attack. The main difference was that the targets and the attack schedule were programmed into the malicious code, so there was no communication between the zombie PCs and a command-and-control server. That made it really difficult to track down the location of the hacker or the server.

Some zombies were scheduled to delete the data on their hard disks. This slide shows how we reacted. First of all, our organization, KRNIC, collected zombie IP addresses from the victim sites and sent them to the Korean ISPs. Currently we have 127 ISPs in Korea. We uploaded vaccines to the major Korean portals and game sites and recommended that Internet users update them. We opened KRNIC Whois to the victim sites to identify the zombie PCs. Some ISPs were already aware of the zombie IP addresses from their IDS, or they were provided with the zombie IP addresses by us. So they contacted the subscribers and had them update their vaccines and delete the malicious code. In some cases, they disconnected their access.
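The routing-of-reports step described here can be sketched as grouping reported zombie IPs by the ISP prefix they fall in, so each ISP receives its own list. The prefixes and addresses below are documentation examples, not real KRNIC data:

```python
# Hypothetical sketch: group reported zombie IP addresses by ISP prefix
# so each ISP can be notified about its own customers. Prefixes are
# RFC 5737 documentation ranges, purely illustrative.
import ipaddress
from collections import defaultdict

ISP_PREFIXES = {
    "isp-a": ipaddress.ip_network("203.0.113.0/24"),
    "isp-b": ipaddress.ip_network("198.51.100.0/24"),
}

def group_by_isp(zombie_ips):
    """Returns {isp_name: [addresses]} for the reported zombie IPs."""
    report = defaultdict(list)
    for ip in zombie_ips:
        addr = ipaddress.ip_address(ip)
        for isp, net in ISP_PREFIXES.items():
            if addr in net:
                report[isp].append(ip)
                break
    return dict(report)

reports = group_by_isp(["203.0.113.9", "198.51.100.20", "203.0.113.77"])
```

In practice a registry would use its delegation database rather than a static table, but the matching logic is the same idea.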

This table shows the number of zombie PCs at the major ISPs in Korea. The number actually differs from investigator to investigator: some say the number of zombie PCs was 200,000 and some say it was 170,000. But this is the confirmed number of zombie PCs investigated by our organization: a total of 77,875 zombie PCs were confirmed, and within four days, 97% were cleaned up.

This slide shows the lessons we learnt from this incident. The first is that it is helpful if ISPs distribute vaccines to protect their customers and their networks. In Korea, actually, there are some ISPs who freely distribute vaccines and recommend that users update them.

And secondly, keeping correct data is very important. It is not easy to identify command-and-control servers and zombie PCs, especially when their addresses are translated; it is really hard to track them down. And to identify the location of zombie PCs, we need collaboration among many countries. So we would welcome your advice, your help, or any kind of collaboration.

This is it. Thank you.



ROLAND DOBBINS: I have a couple of questions. Thank you, it was a very interesting presentation. In your presentation, you stated that some advance notice was given of the attack. Can you be a little more specific? I was unaware of that. As far as I know, the larger operational security community seemed to be unaware that advance notice was provided to any organizations, either in the RoK or elsewhere. That's my first question, if you could expand on that comment.

The second question I have concerns the conclusion that the entire attack was driven by timers. If you can describe the data, I'd be interested to hear that as well. Thank you.

I'll restate the second question. You stated that the attack was entirely driven by timers. This is the first time that I have heard that particular assertion, and I would love to understand the data behind it. Thank you.

JI-YOUNG LEE: Originally, I'm not a security expert, so I can't explain the very technical parts. Actually, there was a news release. OK, I explained the four attacks that happened in Korea. From the first to the third, we didn't know the schedule. But for the fourth one, there was a news release saying that there were several targets and attacks scheduled. They said that from 6pm there would be an attack in Korea and seven or eight web pages were targeted. So we monitored the system, and we actually saw one home page go out of service at that moment.

ROLAND DOBBINS: Was that announcement made only in Korean, or in English as well?

JI-YOUNG LEE: I'm not sure about English ones. It was a news release from the broadcasting companies, so even a person like me knew about this kind of attack schedule.

BILL WOODCOCK: If I understood your presentation correctly, there were about 77,000 infected PCs that were identified, and you contacted 130 Internet service providers in Korea about this. And you said you succeeded in cleaning up 97% of the infected PCs. I'm curious how many participating machines in the attack were outside of Korea; how many of the 77,000 were not customers of Korean ISPs? I don't need a number. I'm curious whether the majority of them were in Korea.

JI-YOUNG LEE: Yes, I understand. At the time, we were also very curious about the origin of the attack and about participants from other countries. But there was no single source; there were several addresses involved. So I can't say exactly or correctly, but I heard more than 50 countries were involved in these kinds of attacks.

BILL WOODCOCK: You believe 97% of the attackers were?

JI-YOUNG LEE: No. Of the 77,000 zombie PCs in Korea, 97% were cleaned.

BILL WOODCOCK: Of the ones in Korea?

JI�YOUNG LEE: Only in Korea, yes.

BILL WOODCOCK: So, there were many more than 77,000 total participants; it's just that the others were outside of Korea?

JI-YOUNG LEE: It hasn't been tracked yet. Because they are in Korea and KRNIC has control over our ISPs, we can get numbers from the ISPs in Korea, but for the overseas cases, we're not sure.


TERRY MANDERSON: I notice the Korean AS takes pride of place in the number of BGP updates seen in many of the reports. Is there any correlation between this event and why those BGP updates are still so high? And can you perhaps draw any conclusions from that? Is there continuing work going on?

JI-YOUNG LEE: I'm sorry. I didn't find any correlation between the updates and those kinds of attacks, so I can't answer your question properly.

TERRY MANDERSON: Thank you. Perhaps for some further investigation.

MATSUZAKI YOSHINOBU: Thank you. Next is "The Emperor's New Cloud", from Roland Dobbins.

ROLAND DOBBINS: Hello, everyone. I'd like to thank the APNIC committee for selecting this talk. My name is Roland Dobbins; many of you know me from my 10 years at Cisco. This is something of a follow-on presentation to Ms Lee's presentation, and we're going to talk about some of the details. I entitled it "The Emperor's New Cloud" as a play on a fairytale. If you're familiar with the fairytale, you'll understand the reason for the title of the talk shortly, and if you're not familiar with it, you can look it up on the Internet; you might find it interesting. It's very important to keep in mind that there is a lot of history in the systems that we use to build this thing that we call the Internet. There are problems that we have inadvertently created over time, and it requires fresh thinking and fresh approaches to overcome those problems. So, over time, what we've seen is the evolution of the threats that people face on the Internet.

We've seen a move from a very manual type of exploit that required a high degree of technical skill towards more sophisticated tools that have actually greatly increased the threats to service providers and to their end customers on the Internet. At the same time as the sophistication of the tools has gone up, the amount of technical knowledge required on the part of a successful attacker has gone down. And so, this means that in the year 2009, almost anyone can become an Internet super-villain without having a strong technical background. This has greatly changed the dynamics, over the last decade, of what we consider the Internet security posture.

The number one security threat online today is botnets. When we talk about DDoS, spam, phishing, identity theft, fraud, and just about every other type of online misbehaviour and crime that you know about, it is generally botnets that are behind the attacks. They empower individuals to harness legions of computers, unbeknownst to the owners of those computers, to commit the crimes. Botnets have been with us for a long time and go back to the days of Internet Relay Chat, IRC, where they were used for benign purposes such as managing channels, and they've evolved into a pervasive attack platform. We're seeing botnets that are composed of home routers, and we've also seen proof-of-concept botnets for mobile phones. So the botnet is with us and will stay with us.

The attackers will continue to try to compromise machines in order to build the attack tools that they use for various purposes.

DDoS attacks are a fact of life on the Internet today. As we're having this discussion here at APNIC 28, there are many DDoS attacks taking place throughout the world right now. Subjectively speaking, based upon the data that we have access to at Arbor and based on our own experiences working to defend against DDoS attacks over the years, we see that approximately 15% of DDoS attacks are criminally motivated extortion attempts. The criminals demonstrate that they can take down a website or other type of online property, and then they demand protection money to stop the DDoS and let the affected site come back up.

About 15% of the DDoS attacks that we observe appear to be criminal retributions. These are criminals who are going after people who are working to defend against their criminal activities. For example, phishers and spammers who hire the same bot masters to send out their spam e-mail and their phishing e-mail, who hire the bot masters to DDoS anti-spam organizations, for example. We also see a lot of attacks where one set of criminals is attacking another set of criminals in order to try to take over the victim's botnets or disrupt the activities in different kinds of ways. We see that roughly 1% of the DDoS attacks that we observe appear to be ideologically motivated; political, religious or ethnic conflicts tend to spark about 1% of observed DDoS attacks, and about 69% or 70% is just kids making trouble, people making trouble for themselves and for other organizations.

So, that's the breakdown that we tend to see today. Anybody, any organization, can be the target of a DDoS attack, either as a deliberate target or as a victim of collateral damage. It can happen to anyone. It's not just high-profile organizations who are subject to DDoS attacks. Outbound DDoS can be just as devastating to the end customer and to the service provider as inbound DDoS. When you have botted hosts on a broadband access network, an enterprise LAN or a 3G network which are sending out DDoS traffic, in the case of wireless networks they're consuming scarce spectrum resources. They can end up causing collateral damage to other users who are on those same enterprise LANs or those same broadband access networks.

So outbound DDoS from the bots can be just as disruptive as inbound DDoS from the bots. It's very important to have situational awareness. It involves having visibility into your network and traffic, understanding the traffic patterns that you're seeing, being alerted to abnormal traffic or other types of network behaviour, and being able to investigate them. It also involves things that have nothing to do with technology. It involves following global events in the news, understanding when there are high-profile political, ideological, ethnic or other types of conflicts in the world that may spur ideologically motivated attackers to launch attacks. It also involves a knowledge of history, understanding the significant anniversaries of various groups which are coming up, because sometimes those seem to spur DDoS attacks as well.
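The "visibility and alerting" part of situational awareness can be sketched as flagging traffic samples that deviate sharply from a rolling baseline. The window size and threshold below are illustrative assumptions, not recommendations:

```python
# Minimal sketch of baseline-based anomaly flagging: alert on any interval
# whose packet count exceeds the mean of the preceding window by k standard
# deviations. Parameters are illustrative, not operational guidance.
from statistics import mean, pstdev

def flag_anomalies(samples, window=5, k=3.0):
    """samples: per-interval packet counts. Returns indices of intervals
    whose value exceeds baseline mean + k * std dev of the prior window."""
    alerts = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), pstdev(base)
        if samples[i] > mu + k * max(sigma, 1.0):  # floor sigma against flat baselines
            alerts.append(i)
    return alerts

traffic = [100, 104, 98, 101, 99, 103, 5000, 102]  # pps per interval
alerts = flag_anomalies(traffic)
```

Real flow-telemetry systems are far more sophisticated, but the core idea, a baseline plus a deviation threshold, is the same.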

And again, the bad guys attack one another with regularity. When the bad guys attack one another, they're often attacking you or your friends or your family, because they are attacking compromised machines which are used by the attackers themselves as command-and-control centres, for example.

So, an attack by one criminal organization on another still harms the victims of crime among the general Internet user population.

Cloud computing is the trend. We've been hearing about what is called cloud computing for the last 18 months to two years. Basically, we can summarize the model by saying it seems to involve a shift away from a lot of localized processing on local devices, moving more processing and data to centralized services and data centres. We hear a lot of talk about security for the cloud, security concerns.

Most of the security talk about the cloud seems to centre on issues of privacy, confidentiality and separation of data in the infrastructure. Those are all important issues and there's a lot of work to be done in those areas. But the security elephant in the room in terms of cloud computing, that we don't seem to hear a lot about, is DDoS attacks. DDoS attacks are actually the number one security threat to the cloud model. The cloud model by definition involves remote computing, where you have users and you have applications and data that are hosted in a different location. If a denial of service attack successfully takes out cloud resources, that means that potentially many, many users do not have access to their applications or their data. And a DDoS against one particular user, one particular organization making use of the cloud infrastructure will almost certainly, again by definition, because clouds are shared architectures, cause a lot of collateral damage to other organizations who are making use of the same cloud provider or cloud data centre and applications.

Cloud providers don't seem to talk a lot about DDoS. Most of them don't tend to talk about security at all. But when they do talk about security, they're focused on privacy and confidentiality. They don't seem to lead with the fact that they have robust defences against DDoS attacks. Why is this? In some cases it may be that it simply hasn't occurred to them. In some cases, organizations tend to be somewhat close-mouthed about their security capabilities, and in some cases, perhaps they don't have defences available. In the online security research community, we see a constant stream of papers and presentations which talk about many different types of compromises and exploits against these systems.

However, with a few notable exceptions, we don't see a lot of security researchers talking about DDoS. Why is this? My answer is that in many cases, security researchers may think that DDoS is a solved problem. They're mistaken.

In other cases, it may be that DDoS is hard; it's a complex problem, and perhaps it is easier to do some research or write a paper on another exploit against Microsoft Windows or the Mozilla web browser or another service than it is to roll up their sleeves and tackle DDoS. When it comes to the cloud, the reality is that we've all been dependent on the cloud for many, many years. A few years ago, I learned that most people these days do not type URLs into the web browser and they don't use bookmarks. What they do is go to the search engine of their choice, like Google or Yahoo or Bing, and they simply type in what they're looking for and get a list of results. A site may be a resource that they use multiple times a day, but they don't make use of bookmarking; they go through the search engines.

Imagine what your Internet life would be like for one day if you lost access to the mail or the instant messaging or the social networking applications which you use to maintain contact, not only with your friends and family, but with your professional colleagues. DDoS is a big problem. It is a security threat to the cloud model, and therefore it is a security threat to us all: not just cloud providers, not just ISPs, not just network operators, but ordinary Internet users.

Something that Geoff touched upon in his presentation talking about BGP: he noted various conditions that weren't really accounted for when the BGP code was written some two decades ago. The reality is that even though we like to think of the Internet as this high-tech matrix full of cool stuff, it has an underlying infrastructure which goes back, in many cases, 25 years or more. We can see what happens at layer 7 with all of the cool whizz-bang applications, but all of the underlying protocols which allow this to take place were mostly designed for use in laboratory environments, academic settings and other closed domains. We owe a debt of gratitude to their designers; the protocols have been a success beyond the wildest dreams of their authors.

Nevertheless, security was not generally a primary concern when most of the protocols were designed. As a result, we have a foundation for the global Internet which is very open to abuse, and this is something that we need to consider. To counter that, over the last couple of decades, and especially in the last 15 years or so, we've seen a large increase in the amount of operational security knowledge and best practices which folks have devised, written about and evangelized. So we have a lot of tools, techniques and processes which can be used to mask and compensate for some of the shortcomings, from a security standpoint, of the Internet infrastructure. Unfortunately, even though folks have done a lot of work creating these techniques and documenting the processes, they seem many times to be honoured more in the breach than in implementation.

We also see the same types of problems with regard to Internet operational security that we see in any large group undertaking. We see a pervasive disconnect between the people who design the architectures and the people who deploy and maintain them. We see disconnects between the people who design and write applications, the people who run them, and the networking staff who have to actually ensure that the applications and their data are made available to users on a global basis. We see disconnects between different operational groups within the same service provider and enterprise organizations, much less across organizational boundaries. We see disconnects between different groups responsible for different aspects of security within single organizations and functionally across organizations.

We see disconnects and mis-set expectations between management and technical and security personnel. There's also something of a Pollyanna-ish attitude, where because the Internet is big and because there are lots of people and lots of applications on the Internet, there is an attitude of "Me? Why would anyone attack me? I'm not a target; that's something that happens to someone else over there." The reality is that, based solely upon the criminal motivations described earlier, we're all targets; we're all targets for DDoS, either directly or as a result of collateral damage.

Finally, there seems to be, in regard to security, a curious inability or unwillingness to properly assess abstract risk models and apply them to one's own situation. I'm not sure why this is. It may be a psychological coping mechanism, but it also contributes to the generally low security posture that you see across the Internet today.

So, moving into the specifics of last month's RoK attacks. As was said in the earlier presentation, the first attacks observed were against US target sites on July 5, here in the Asia-Pacific; that was July 4 in the United States, because of the International Date Line. This has significance: we don't believe that the date was chosen at random. July 4 is the Independence Day holiday in the United States, and most people were away for the long weekend. They tend to have the Friday off or the Monday off, depending on how the dates fall, so it is a three-day weekend. So we believe the day was deliberately selected, both for its political significance and for the generally low level of operational readiness that some of the organizations which were attacked may have had during this holiday period.

So it was a weekend, a long weekend and a holiday weekend. Many people take vacation time around this long weekend, so there were some slow responses in some cases. The attacks continued over this long holiday weekend in the United States into the following week. The press and the blogosphere started to pick up on this on Monday, July 6 in the United States; that would be July 7 here in the Asia-Pacific. As was indicated, the first attacks observed against targets in the RoK were seen late on July 7, 2009. It is very interesting to note that July 8, 2009, happened to be the 15th anniversary of the death of Kim Il-sung, the former leader of North Korea, and it also happened to be an officially declared national day of mourning in North Korea.

There was a lot of confusion and lack of communication, which hindered responses in many cases, and as was indicated, the botnet started to self-destruct over the following weekend.

If we take a look at the attack vectors: we saw TCP/80 flooding at play, which is the most common kind of DDoS attack, along with UDP/80 packet flooding. That is a largely nonsensical attack, except for the number of interrupts caused by the packets. We see this fairly often. One thing about the folks who write the attack tools and the folks who use them is that in many cases they don't know a lot about TCP/IP and they don't know a lot about networking, and so we see some nonsensical traffic. I believe this may be a commonly enabled default setting in some shared attack tool code that we see in the botnets. We also saw a lot of ICMP echo-reply flooding, commonly known as ping floods. This is something that we have seen commonly in many DDoS attacks over the years.
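For defenders, sorting observed packets into the flood categories just listed is the first triage step. This is a hedged sketch; the packet-record fields are a simplified assumption, not a real capture format:

```python
# Illustrative classifier for the flood types described above:
# TCP/80 SYN floods, UDP/80 floods, and ICMP echo-reply (ping) floods.
# Packet records are simplified dicts, an assumption for the sketch.
def classify(pkt):
    """pkt: dict with 'proto' and optional 'dport'/'flags'/'icmp_type'."""
    if pkt["proto"] == "tcp" and pkt.get("dport") == 80 and "S" in pkt.get("flags", ""):
        return "tcp80-syn-flood"
    if pkt["proto"] == "udp" and pkt.get("dport") == 80:
        return "udp80-flood"          # the nonsensical traffic noted above
    if pkt["proto"] == "icmp" and pkt.get("icmp_type") == 0:
        return "icmp-echo-reply-flood"  # ICMP type 0 is echo reply
    return "other"

pkts = [
    {"proto": "tcp", "dport": 80, "flags": "S"},
    {"proto": "udp", "dport": 80},
    {"proto": "icmp", "icmp_type": 0},
    {"proto": "tcp", "dport": 443, "flags": "A"},
]
labels = [classify(p) for p in pkts]
```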

There was also a layer-7 component to the attacks. We saw valid TCP sessions doing HTTP GETs with just a slash after them, to fetch whatever the base index page was for that particular site. This is indicative of a lack of prior reconnaissance. With attackers who are serious about taking down their targets, we will see them do reconnaissance, and they will identify URLs which cause database queries and things like that. We didn't see that during these attacks.
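The bare "GET /" pattern described is easy to spot in web logs: many sources requesting only the root page, repeatedly. This sketch assumes a simplified (source, request-line) log format:

```python
# Sketch: find sources that repeatedly issue only bare "GET /" requests,
# the unsophisticated layer-7 flood pattern described above. The log
# format and threshold are illustrative assumptions.
from collections import Counter

def bare_root_getters(loglines, threshold=3):
    """loglines: (src_ip, request_line) pairs. Returns the set of sources
    whose count of bare 'GET /' requests meets the threshold."""
    hits = Counter(src for src, req in loglines if req.startswith("GET / HTTP"))
    return {src for src, n in hits.items() if n >= threshold}

log = ([("203.0.113.5", "GET / HTTP/1.1")] * 4
       + [("198.51.100.9", "GET /index.html HTTP/1.1")])
suspects = bare_root_getters(log)
```

In practice you would also look at request rate per source and user-agent strings, but repeated bare root GETs are a strong hint of exactly this kind of tool.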

We also saw some HTTP GETs for a string referencing China. I have no idea of the significance of that.

I have no idea what the significance of that string was; if you know, we would love to hear from you. We also saw a lot of protocol-zero packets. This is very interesting: there are 256 possible protocol numbers, TCP is protocol 6 and UDP is protocol 17, and in reality most of the other protocols are unused or used only by specialized applications.

We see the attackers from time to time use non-standard protocol numbers, because access control lists tend to be built around TCP, UDP and ICMP. If they haven't taken other protocols into account, then attack traffic can make it through the access control lists and filtering mechanisms and have an impact.
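The fix for that gap is a default-deny policy: only explicitly allowed protocol numbers pass, so protocol 0 and other oddities are dropped. The protocol numbers here are the real IANA assignments; the filter itself is a toy sketch, not a real ACL implementation:

```python
# Toy default-deny protocol filter. ICMP=1, TCP=6, UDP=17 are the actual
# IANA IP protocol numbers; anything else, including protocol 0, is denied.
ALLOWED_PROTOCOLS = {1, 6, 17}   # ICMP, TCP, UDP

def permit(ip_proto):
    """Default-deny: True only for explicitly allowed protocol numbers."""
    return ip_proto in ALLOWED_PROTOCOLS

decisions = [permit(p) for p in (6, 17, 1, 0, 255)]
```

A router ACL achieves the same effect by ending its permit lines with an explicit deny-everything rule rather than implicitly permitting unmatched protocols.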

The attacks were relatively small in nature as DDoS attacks go: about 5-20 megabits per second, and around 50-100 kpps, relatively low packet rates. The largest attack against a single target that we saw was about 140 megabits per second and 500 kpps. There were some unconfirmed reports of 2 Gb/s, but we have no data to support this. There were some standard headers and some non-standard headers.
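A quick back-of-envelope check on those figures, assuming the packet-rate numbers in the talk were kilopackets per second (kpps): average packet size follows directly from bits per second divided by packets per second.

```python
# Sanity-check arithmetic: average packet size implied by a flood's
# bit rate and packet rate. The kpps interpretation is an assumption.
def avg_packet_bytes(bits_per_sec, pkts_per_sec):
    return bits_per_sec / pkts_per_sec / 8

size = avg_packet_bytes(140e6, 500e3)   # 140 Mb/s at 500 kpps -> 35 bytes
```

An average around 35 bytes is consistent with small-packet floods made of bare headers, which matches the packet-rate-heavy character of these attacks.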

The attack methodology, as Ms Lee indicated, was timer-driven. During the attacks (and I was directly involved in mitigating some of the attacks in Korea), it appeared that the attacker would shift his target after noticing that a site had been successfully defended. Could that be an artefact of the timers? Possibly. But it happened many times, so it seems a little too coincidental. The attack traffic itself never varied, which indicates a lack of technical acumen on the part of the attacker. Skilled attackers actually look at the effect the attack is having on the target, and if the defender successfully defends against one attack mechanism, the attacker will shift methodologies.

We did not see this during these attacks. We believe that the conjunction of the USA July 4 holiday and the 15th anniversary of the death of Kim Il-sung defined the time frame for the attacks. We saw outages from the attacks. We did not observe a DNS attack component during the attacks; we believe the DNS load was collateral damage, with people refreshing and trying to get to the sites, and perhaps the bots doing lookups as well. As for the botnet, which was discussed in the earlier presentation: it was based on old malware code, roughly five years old. There were multiple command-and-control servers in various countries outside of the RoK. About 95% of the bots were inside of the RoK; many of the bots located outside the RoK were PCs which belonged to RoK nationals who were studying or working abroad.

About 130,000 bots were verified, with unverified reports of up to 200,000. The malware that was used to compromise the machines was Korea-specific. It wasn't technically innovative, but there was some social engineering involved which was a little bit clever. The bots used timers to self-destruct. As for the impact: some USA governmental sites experienced total or partial service outages. Some of them recovered quickly and some of them recovered relatively slowly. ASPs and commercial sites that were attacked largely didn't flinch. In our observation, many of the RoK governmental sites attacked suffered significant outages.

Whoever did the target selection had a greater knowledge of RoK online culture, because the target selection for the RoK was more clueful than what we saw in the USA. News sites and governmental sites that people actually used seemed to be targeted in the RoK. In the USA, most of the sites targeted don't really matter to ordinary citizens, and so the impact on ordinary users seemed to be greater in the RoK as opposed to the USA.

The overall impact of the attack was coloured by the attacker's lack of technical acumen. These were stupid and unoriginal attacks, and the observed impact was not due to any kind of innovation or scale on the part of the attacker; it was due almost solely to the unpreparedness of the defenders. Organizations with strong communication plans and opsec teams who knew who their SPs were did very well during the attacks. Those with scalable architectures who had deployed the well-known best current practices did well. If they had visibility into their network traffic and the ability to react, they did well. Those with resources like blackholing did very well.

Organizations with no communication plans, with no contacts within the organization, within the larger operational security community, or with their service providers fared very poorly. Organizations without strong scalable architectures suffered continuous outages. People who couldn't look at their network traffic had trouble even knowing that there was an attack taking place. Organizations who had perfectly good reaction tools did not understand what was happening, nor how to react.

Firewalls and IDS/IPS in front of servers are contraindicated. DDoS attacks are attacks against state. We saw firewalls going down under very, very low PPS. Do not put your servers behind firewalls. It does not help.
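The point about attacks against state can be made with simple arithmetic: if a stateful device creates one flow entry per unique attack packet and entries don't time out fast enough, even a modest flood exhausts the table. A minimal illustrative sketch (the table size and packet rates below are hypothetical, not figures from the talk):

```python
def seconds_to_exhaust_state(table_entries: int,
                             attack_new_flows_per_sec: int,
                             legit_new_flows_per_sec: int = 0) -> float:
    """Rough time until a stateful device's flow table fills, assuming
    every attack packet creates a new state entry and no entries expire
    before the table is full (worst case for the defender)."""
    total_rate = attack_new_flows_per_sec + legit_new_flows_per_sec
    return table_entries / total_rate

# A hypothetical firewall with 1,000,000 state entries, hit by a
# modest 20,000 pps flood of unique-flow packets:
print(seconds_to_exhaust_state(1_000_000, 20_000))  # 50.0 (seconds)
```

Under these assumptions the device fails in under a minute at a packet rate that a stateless router or server would barely notice, which is consistent with the "very, very low PPS" failures described above.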

We have ways to defend. I have a lot of slides here and not really enough time. The main thing I want to get across - and the slides are available to download - is that there are methodologies developed for dealing with operational security. We need to have a good architecture. You need to have the right tools for the right job. There are various network infrastructure and hosted application best current practices, and they should be implemented, including for DNS. You need to have the right people for the right job. We've learned that there are a lot of best current practices that, if they had been implemented, could have resulted in little or no impact from the attacks. We keep repeating the mistakes of the past over and over again.

Organizations need to plan ahead. These were ordinary attacks, and they were dumb attacks, and yet they had a disproportionate impact.

There are lots of tools and techniques available today to harden your network and applications against attack. We need more education. The cloud has the potential to bring a higher degree of resilience and security to ordinary Internet users, if cloud providers do the right thing. Automation is a good thing, but it is no substitute for a resilient architecture and hard work. Thank you very much. Questions?


MATSUZAKI YOSHINOBU: Thank you very much. Any questions?

ROLAND DOBBINS: OK, thank you all very much.

MATSUZAKI YOSHINOBU: Tomorrow we have lightning talk sessions so there will be short lightning talks.

SRINIVAS CHENDI: Thank you for chairing the session, and thank you to all of the speakers for your talks and for keeping within the time limits. We will take a short break for afternoon tea, and after the break, we will look at the Policy SIG, setting the scene and briefly looking at the policies that will be discussed tomorrow morning.

Also, we have a voting booth. We are going to go to the Gong Wang Fu Museum, and they're organizing a 30-minute tour of the museum. The first bus leaves at 6:30 and the last bus at 7:00. But otherwise, we'll come back at 4:00 for the Policy SIG - setting the scene. Thank you.