By Vinton G. Cerf, VP, chief Internet evangelist, Google
2011 may prove to be a pivotal year in several dimensions. The Internet Corporation for Assigned Names and Numbers (ICANN) is expected to have allocated all of its remaining IPv4 address space to the Regional Internet Registries by the end of March, or sooner. This places even more pressure on the implementation of IPv6, with its 128-bit address space (versus 32 bits for IPv4). The emergence of standards for the so-called Smart Grid (a program initiated by the departments of Energy and Commerce) will heighten demand for additional address space, owing to the large number of “smart” appliances that will inhabit the electrical grid in the future. At the same time, billions of mobile devices are becoming Internet-enabled, further increasing the need for additional Internet address space.
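The difference in scale between the two address spaces is easy to quantify; the short sketch below simply computes the sizes implied by the 32-bit and 128-bit address widths mentioned above.

```python
# Compare the IPv4 and IPv6 address spaces (32-bit vs. 128-bit addresses).
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")       # about 4.3 billion
print(f"IPv6: {ipv6_addresses:.3e} addresses")     # about 3.4 x 10^38
print(f"Ratio: {ipv6_addresses // ipv4_addresses:.3e}")
```

The roughly 4.3 billion IPv4 addresses are fewer than one per person on Earth, which is why smart appliances and Internet-enabled mobile devices together exhaust the space so quickly.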
A major, worldwide test and demonstration of widespread IPv6 connectivity is now proposed for June 8, 2011. Google plans to participate, along with many other companies whose businesses are deeply dependent on the growth and connectivity of the Internet.
At the same time, the global Domain Name System is also expected to grow at the top level, with the introduction of new non-Latin top-level domains and, more generally, new generic top-level domains. This expansion highlights an important matter: improved defense of the Domain Name System against various attacks on the integrity of the domain name/IP address binding. The mechanism for responding to this need is called Domain Name System Security Extensions, or DNSSEC. It is now deployed at the root zone of the Domain Name System by ICANN and VeriSign, and is increasingly in use among top-level domains. DNSSEC should be a major focus of attention in the campaign to secure the Internet.
Finally, 2011 is certain to see substantial movement toward the use of cloud computing, for both economic and pragmatic reasons. On the pragmatic side, cloud-based systems can expand or contract delivered computing and memory capacity in real time in accordance with demand. Moreover, software application updates can happen uniformly and almost instantly, compared with the piecemeal updating of laptop and desktop applications, which improves interoperability during software upgrades. On the economic side, the cost per unit of demand is amortized over aggregate average demand rather than provisioned for aggregate peak load, making cloud-computing methods more economically attractive.
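The peak-versus-average economics can be illustrated with a toy calculation. The demand profile and unit price below are entirely made-up numbers, chosen only to show why provisioning for the peak around the clock costs more than scaling capacity with actual demand.

```python
# Illustrative (invented) hourly demand for one day, in server-units.
hourly_demand = [20, 15, 10, 10, 15, 30, 60, 90, 100, 95, 90, 85,
                 80, 85, 90, 95, 100, 90, 70, 50, 40, 35, 30, 25]

cost_per_server_hour = 0.10  # assumed unit price, in dollars

# Fixed provisioning must cover the peak load for all 24 hours.
peak_cost = max(hourly_demand) * len(hourly_demand) * cost_per_server_hour

# Elastic (cloud) provisioning tracks actual demand hour by hour.
elastic_cost = sum(hourly_demand) * cost_per_server_hour

print(f"Provisioned for peak: ${peak_cost:.2f}/day")
print(f"Scaled with demand:   ${elastic_cost:.2f}/day")
```

With this profile, the elastic approach pays for 1,410 server-hours instead of 2,400, and the gap widens as demand becomes burstier.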
There are multiple providers of cloud-computing services, but standards for interoperation are not yet defined. In consequence, there is an important need for well-defined ways to extract information from the cloud in the event the user wishes to move data from one cloud to another. At Google, we’ve termed this “data liberation.” Ultimately, it will be important to establish standard ways to move data between and among clouds operated by distinct entities, providing choice for government agencies and conferring a kind of genetic robustness on the cloud-computing environment. Whether clouds are shared or private, government-owned or operated by the private sector, it should be possible for them to interwork.
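In the absence of interoperation standards, one pragmatic path is exporting data in a self-describing, provider-neutral format. The sketch below is a minimal illustration of that idea, using JSON and an invented envelope format; the record fields and the "example-export/v1" label are assumptions, not any provider's actual export schema.

```python
import json

# Hypothetical records held by one cloud provider.
records = [
    {"id": 1, "name": "Ada", "email": "ada@example.com"},
    {"id": 2, "name": "Vint", "email": "vint@example.com"},
]

# Wrap the data in a small envelope naming the (invented) export format,
# so a receiving cloud knows how to interpret what it is importing.
export = {"format": "example-export/v1", "records": records}
portable = json.dumps(export, indent=2, sort_keys=True)

# Any other provider can reconstruct the data without proprietary tooling.
imported = json.loads(portable)
assert imported["records"] == records
```

The point is not JSON itself but the property it demonstrates: the exported bytes carry enough structure that no single vendor's software is required to read them back.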
Plainly, any shared environment raises the question of protection of information while it is in the cloud. Only authorized parties should have access to the data to which they are entitled. Strong authentication methods, data labeling for access control, end-to-end encryption during transport and possibly encryption while in storage may all play a role in enhancing the protection of cloud-based information. Cloud operators will need to demonstrate their ability to protect information. They will also need to help develop standards so that the protections can be replicated if and when data is moved from one cloud system to another.
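“Strong authentication” can take many forms; one simple, widely used building block is a keyed message-authentication code. The sketch below shows the idea with Python’s standard-library `hmac` module; the shared secret and the request string are hypothetical, and a real cloud service would layer this under a full protocol such as TLS.

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned between a client and a cloud service.
SECRET_KEY = b"example-shared-secret"

def sign(message: bytes) -> str:
    """Produce a keyed HMAC-SHA256 tag for the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check a tag in constant time to resist timing attacks."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"GET /records/42")
assert verify(b"GET /records/42", tag)      # authentic request accepted
assert not verify(b"GET /records/99", tag)  # altered request rejected
```

Only a party holding the key can produce a valid tag, so the service can tie each request to an authorized client; the same keyed-hash primitive also underpins the data-labeling and integrity checks mentioned above.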
Taking these points, and many others not mentioned, into account, 2011 is shaping up to be a pivotal year in the evolution of networked computing.
I hope the Mayans were wrong about 2012! ♦