[GNU-Friends]
Breaking the domain name monopoly
By marcus, Section Diaries
Posted on Thu Feb 12th, 2004 at 02:33:22 GMT
Have you ever used a domain name? Of course you have. Many of you even possess one. That usually means paying an annual fee to the company or organization responsible for managing the top level domain you use. But why do you need to pay an annual fee to uphold your registration? And why do you have to pay at all for something as virtual as a domain name? Read on to find out, and to see what can be done about it.


In Germany the organization responsible for managing the .de top level domain is DENIC eG. Under certain conditions, you can become a member of DENIC and make domain name registrations yourself. DENIC is a non-profit organization, but the heavy bureaucracy involved ensures that you can only become a member if you are willing to spend tens of thousands of Euros on annual fees, courses, and staff.

As a non-member, you can register a .de domain name at DENIC directly, for 116.00 EUR per year. Or you can register it through a company that is a member of DENIC, for a price that the company sets itself. Of course, you can also register domain names through companies that have contracted with DENIC members (or with other companies further down the chain). Because these companies usually bundle the domain name with other services, such as web hosting, and have many clients, they can offer a better price. In Germany, you can get a domain name for under 10 EUR a year.

This got me thinking. You have to pay a fee to register an entry in a name server? And then you have to pay an annual fee to make sure they don't remove the entry again? I can understand that somebody wants money for the boring task of typing entries into a database. But why do they want money for not deleting entries from a database? That must be an even duller task. And after all, why can't I type in the entry myself, in a web form?

Of course, the answer is pretty obvious. Domain names are a scarce resource. They are scarce because everyone wants the same or similar domain names; so much is clear, but why is that the case? The answer lies in the way domain names are used. Domain names are first and foremost a human interface. Real people have to type them in manually in various places: in the URL entry field of a web browser, as part of the URL, or in the address field of an email client, as part of the email address. Because of that, you want the names to be short, memorable, and convenient, but also unique and fail-proof.

On the other hand, there are many possible applications for the same convenient name. The letters "coke" can be a synonym for two kinds of drugs, one based on caffeine and one on cocaine. The difference is that the first is legal and protected by a trademark in the food sector, while the other is illegal and not protected by any trademark. This is a realistic example (there have been domain disputes over the coke trademark), and there are many more. The problem is well known in the area of trademarks, which is why trademarks (in Germany) always apply to a certain sector of the economy only. A trademark in IT will not protect you against similar names in the food business; the reason is that mistaking one for the other is unlikely because of the context. With domain names, the context might be hidden or unknown.

Being able to have domain names for many different (but similarly named) things, and having very short, memorable, unique, and robust names, are conflicting goals. If you want to make sure that a name like coke is not used for anything but the caffeinated drink, then you must also register all the variations (including misspellings) of the name (or sue anybody else who tries to use such names). Furthermore, companies are often forced to defend their trademark actively, or they might lose its legal protection. There are many patently absurd examples of this.

If a resource is contended, people start to charge money for it. This is true even if the resource is completely virtual, as domain names are, and there is no real value behind it to back up the price. The reason is that the contended resource "domain names" is then tied to another scarce, contended resource: money. We have already figured out how to distribute money, and this provides an easy way to decide how to distribute domain names (the answer, in both cases, being that whoever has more money gets more of the resource).

However, this whole analysis only holds true if domain names must also be globally unique. Only then is a central, hierarchical organization required, which is what causes the conflicts to arise. Here we enter the technical aspects of the domain name issue. Domain names are not only used in the human interface, but also in the technical implementation of the internet. The service that maps domain names to IP addresses is DNS (the Domain Name System). It is used in the routing of mail, for example, or in finding the right server for a given URL. It is crucial to understand that the system is designed on the assumption that domain names are stable and globally unique. While the IP addresses they are mapped to are also globally unique, they are not stable: the IP address of a web server providing a certain URL might change, and so might the IP address of a mail server. The consequence is that the domain name is the primary name identifying the destination of an HTTP request or email.

This is a crucial design fault. It means that the same entity (the domain name string) is used for two totally distinct purposes: as a human-friendly identifier, and as a destination identifier in routing. This mixture is the reason why registering domain names is a requirement today, while the fact that domain names are used by humans causes them to be contended.

I propose that these two applications be separated and implemented by two different entities. Only the names used in the technical implementation of routing protocols etc. must be globally unique. There is no requirement for these identifiers to be human-friendly; they can be long and convoluted, even numbers. However, they must be assigned in a stable way. IP addresses are not stable, so they don't fit this purpose. The names used in the human interface (browser, email client) must be human-friendly and easy to use. They must be short, memorable, and much more. However, they do not need to be unique! A user in Germany can very well associate the string "gov" with the German government site, while a user in France could mean the French government site by the same string "gov". Longer, less frequently used strings like "fr-gov" could be used by German users to access the French government site.

Now, surely you will ask who gets to decide which string maps to which unique identifier. And here the solution is simple: the user gets to decide, of course! He can entrust this assignment to a service, which can be just another service provided by his internet provider. He can also entrust it to internet communities. He can use several lists from several locations, and sort them by priority. If there is a conflict, the priorities can be used to resolve it, or the user could be prompted to select the right identifier from a multiple-choice list (setting it as the default for the future). The configuration possibilities are, of course, endless. If the user lacks a certain identifier, he can use a generic or specific search engine to find it.
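To make the idea concrete, here is a minimal sketch of such a user-side resolver. Everything in it — the names, the identifier strings, and the two lists — is invented for illustration; the point is only to show how prioritized, locally managed lists could resolve the same friendly name differently for different users.

```python
# Hypothetical sketch: a user-side resolver mapping human-friendly
# names to stable global identifiers via several prioritized lists.
# All names, lists, and identifiers below are invented examples.

def resolve(name, lists):
    """Return all candidate identifiers for `name`, best priority first.

    `lists` is a sequence of (priority, mapping) pairs; lower numbers
    win. A single result can be used directly; several results would
    be offered to the user as a multiple-choice list.
    """
    candidates = []
    for priority, mapping in sorted(lists, key=lambda pair: pair[0]):
        if name in mapping:
            candidates.append((priority, mapping[name]))
    return [identifier for _, identifier in candidates]

# This user trusts two lists: a personal one and the provider's.
personal = (0, {"gov": "id-de-gov-0001"})           # user's own entries
provider = (1, {"gov": "id-fr-gov-0001",            # provider defaults
                "fr-gov": "id-fr-gov-0001"})

print(resolve("gov", [personal, provider]))
print(resolve("fr-gov", [personal, provider]))
```

A German user with this configuration gets his own "gov" entry first, while "fr-gov" falls through to the provider's list — exactly the conflict-resolution-by-priority described above.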

I want to take this concept one step further. I can imagine that local lists of such identifiers could be sent as part of email messages (in a specific MIME segment) and installed into the user's list on request. Similarly, web sites could provide identifiers in anchor tags (links) or meta tags (headers). The user could then install a set of such identifiers. The user could also choose to prefix a set of such identifiers with a common tag, putting them into a common namespace, or otherwise manipulate them before installing them.
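The installation step with an optional namespace prefix could look like the following sketch. The names and identifiers are again invented; the interesting property is that a prefixed import can never shadow the user's existing entries.

```python
# Hypothetical sketch: installing a set of identifiers received in a
# message, optionally prefixed with a common tag so the imported
# names land in their own namespace. All entries are invented.

def install(local, imported, prefix=None):
    """Merge `imported` name->identifier pairs into a copy of `local`.

    With a `prefix`, each imported name becomes "prefix:name", so
    imported entries cannot collide with existing local ones.
    """
    merged = dict(local)
    for name, identifier in imported.items():
        key = f"{prefix}:{name}" if prefix else name
        merged[key] = identifier
    return merged

local = {"shop": "id-local-shop"}
imported = {"shop": "id-acme-shop", "support": "id-acme-support"}

merged = install(local, imported, prefix="acme")
print(sorted(merged))   # the local "shop" entry survives untouched
```

Without the prefix, the imported "shop" would overwrite the local one — which is why letting the user manipulate a set before installing it matters.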

I am also sure that it could prove useful to connect such tags with the public keys used for encryption. There are certain properties shared by both (trust issues, for example), so we already have some experience in handling them. And they are useful together, in the sense that an identifier can then also be used to identify a key, and a key can provide a reason to trust an identifier (or a set of them). Also, if you consider it, you might find it useful that we already have two such trust models (the web of trust in OpenPGP and the hierarchical trust model of X.509).

As soon as you make the separation between global identifiers and human-readable identifiers, you will find that the global identifiers are no longer a scarce resource, but can be freely and generously allocated from an arbitrarily complex name space. The human-readable identifiers are still a scarce resource (humans can only remember names of a certain length), but, and this is the crucial difference, they are managed locally and thus are not automatically (or globally) contended.

It is possible to implement such a new system on top of the existing system, by mapping such user-defined names to domain names. In this case, sub-domain names (foo.domain.tld) could be used to provide free or inexpensive domain names to many users. This could be enough to build a prototype and experiment with the idea.
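Such a prototype could be very thin: a local table translating user-defined names into ordinary (sub)domain names, with real DNS only used for the final step. The table and domain names below are made up for illustration.

```python
# Hypothetical prototype sketch: user-defined names are mapped onto
# conventional (sub)domain names, so the existing DNS infrastructure
# does the actual address lookup. All entries here are invented.

friendly_to_domain = {
    "gov": "bund.example-registry.de",      # this user's shorthand
    "fr-gov": "gouv.example-registry.fr",   # longer name, other country
}

def to_domain(friendly_name):
    """Translate a user-defined name into a conventional domain name."""
    return friendly_to_domain[friendly_name]

# A real client would now hand the result to the normal resolver,
# e.g. socket.gethostbyname(to_domain("gov")) — omitted here so the
# sketch needs no network access.
print(to_domain("gov"))
```

Because the table lives on the user's machine, two users can map "gov" to entirely different subdomains without any global coordination — which is the whole point of the experiment.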

However, for the real thing, you would have to eliminate the domain name concept altogether. My suggestion is to use OIDs as replacements. OIDs are globally unique identifiers allocated by a standards body. OIDs are hierarchically managed, just like the IP name space; however, the OID space is much larger (in fact, infinitely large). OIDs are a fully generic concept. A company would allocate a single OID for all purposes, then assign sub-OIDs for individual services, like mail and www, and announce the OIDs plus their A/MX-style mappings to IP addresses to routing services, and the OIDs plus human-readable identifiers to its clients and users (and to services that provide such lists).
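A sketch of the allocation side, with everything invented (the OID arc 99999 and the addresses are placeholder examples, not real registrations): the company extends its one OID per service, and only the OID-to-address table ever changes when a server moves.

```python
# Hypothetical sketch of the OID scheme: one company OID, sub-OIDs
# per service, and a routing table from stable OIDs to unstable IP
# addresses. The OID and the 192.0.2.x addresses are invented.

COMPANY_OID = "1.3.6.1.4.1.99999"

def sub_oid(base, arc):
    """Allocate a sub-OID by appending one more arc to the base OID."""
    return f"{base}.{arc}"

MAIL_OID = sub_oid(COMPANY_OID, 1)   # stable identifier for mail
WWW_OID = sub_oid(COMPANY_OID, 2)    # stable identifier for www

# Stable OIDs point at unstable IP addresses; when a server moves,
# only this table is updated, never the OID itself.
routing = {MAIL_OID: "192.0.2.25", WWW_OID: "192.0.2.80"}

print(MAIL_OID, "->", routing[MAIL_OID])
```

The OID plays the role the domain name plays today in mail routing and URL resolution, but since the space is unbounded and sub-allocation is delegated, there is nothing left to contend over.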

If this were done, central domain name registration would be superfluous.

I want to point out that I am sure that organizations like DENIC do other things besides creating bureaucracy and circulating money. Some of the things they do are certainly a great service to the internet community. For example, they sponsor root DNS services, and those are also needed in the above model (to map stable global identifiers to unstable ones such as IP addresses). However, at least in Germany, DENIC is non-profit, so, everything else being equal, the companies can only benefit by being freed from the burden of costly domain name management, while providing routing services remains indispensable. I am not sure there would be any fiscal disadvantages to losing the domain name business. Of course, I don't have any studies or numbers on this, so if you know more, leave a comment.

A second note: the user interface would become more complex with my suggestions. However, compare this to the complexity of public key cryptography. It is critical that public key cryptography be understood and used by most computer users in the future. Given that the problem of mapping identifiers to resources has many analogies in real life (looking someone up in a phone book, etc.), and can be tied to other technologies like search engines and cryptography, I am confident that feasible solutions are possible.

A third note: it is not clear whether all users would benefit. If the software is monopolized, so can be access to the human identifier mapping services. Users of proprietary software are particularly at risk, as are users of huge ISPs. Individual companies would then control the default mappings for their user base. While this can, at least theoretically, be corrected by each individual user at home, it can be a huge problem for public terminals, such as those in libraries and colleges. However, a similar situation already exists (think of internet filters, and battles over default desktop configurations), so I am not sure whether this would become a permanent problem. Users are increasingly aware of such issues, so with good public standards and good alternatives, I think enough pressure can be built up to soften such effects.

I want to thank Harald and Neal for their comments on this idea, in particular their criticism.
