On Interplanetary Internet (IPN)

In their ACM Turing Award lecture, one of the future trends presented by Cerf and Kahn is the extension of the terrestrial Internet to interplanetary communication. Here’s a brief discussion of the Interplanetary Internet.

For 40 years, NASA has used point-to-point radio links to send data from the surface of Mars directly to the Deep Space Network stations located at three sites around the globe. Vint Cerf, together with some of his colleagues at the Jet Propulsion Laboratory (JPL), decided to improve communication between spacecraft by using a richer networking architecture than a point-to-point radio link. Cerf reasoned that since TCP/IP works well in the terrestrial Internet, the same architecture could be extended to Mars. The plan has the added advantage that the environment around Mars itself is a low-delay one.

The problems arise from interplanetary communication itself. The first problem is that the speed of light is too slow: a round trip between Mars and Earth takes roughly 7 to 40 minutes, and TCP/IP does not work well with such long delays. Another problem is celestial motion: objects in space are constantly moving, and a transmission can be disrupted when another body or spacecraft moves between the endpoints. To solve these issues, they designed a network architecture that tolerates the long delays and disruptions between communicating spacecraft, which they call the Delay/Disruption Tolerant Network (DTN). They then implemented and tested the model here on Earth.
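To get a feel for those delays, here is a back-of-the-envelope calculation (a sketch; the distance figures are approximate, since the Earth–Mars distance varies continuously with the planets’ orbits):

```python
# Rough one-way signal delay between Earth and Mars at the speed of light.
# Approximate distances: ~55 million km at closest approach,
# ~400 million km near solar conjunction.
SPEED_OF_LIGHT_M_S = 299_792_458

def one_way_delay_minutes(distance_m: float) -> float:
    """Time for a radio signal to cover distance_m, in minutes."""
    return distance_m / SPEED_OF_LIGHT_M_S / 60

closest = one_way_delay_minutes(55e9)    # roughly 3 minutes one way
farthest = one_way_delay_minutes(400e9)  # roughly 22 minutes one way
print(f"round trip: {2 * closest:.1f} to {2 * farthest:.1f} minutes")
```

This reproduces the 7-to-40-minute round-trip range quoted above: any protocol that expects an acknowledgement within seconds simply cannot work under these conditions.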

In January 2004, rovers landed on Mars programmed to send data directly back to Earth over a point-to-point radio link, at an expected rate of 28 kbps, which is very slow. When the rovers turned this radio on, they overheated. The team then realized that there were X-band radios on board the rovers and in the orbiters around Mars. These orbiters had previously been used to map the surface of Mars, and since that project was already finished, they reprogrammed the orbiters and the overheated rovers: the orbiters collected the data from the rovers, stored it, and, once positioned correctly, transmitted it to the Deep Space Network stations on Earth. In other words, they used the store-and-forward technique familiar from terrestrial mobile networks.
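The store-and-forward idea can be sketched in a few lines (a toy model, not the actual flight software; the node names and API here are invented for illustration):

```python
# Toy store-and-forward relay: a node holds data until a contact with
# the next hop opens, instead of requiring an end-to-end path to exist
# at the moment of sending.
class RelayNode:
    def __init__(self, name):
        self.name = name
        self.stored = []            # bundles waiting for the next contact

    def receive(self, bundle):
        self.stored.append(bundle)  # store: take custody of the data

    def forward(self, next_hop):
        while self.stored:          # forward: drain when a contact opens
            next_hop.receive(self.stored.pop(0))

rover, orbiter, earth = RelayNode("rover"), RelayNode("orbiter"), RelayNode("DSN")
rover.receive("surface images")
rover.forward(orbiter)   # rover-to-orbiter contact window
orbiter.forward(earth)   # later: orbiter-to-Earth contact window
print(earth.stored)      # ['surface images']
```

The key point is that the rover and Earth never need to be in contact at the same time; each hop only needs an eventual contact with the next.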

When the Phoenix Lander arrived on Mars, it was not programmed to send its data directly back to Earth; instead it used the same store-and-forward technique as the reprogrammed rovers. They then planned to standardize the store-and-forward approach for interplanetary communication.

The resulting solution is to use the Delay/Disruption Tolerant Network with the Bundle Protocol for communicating spacecraft. The details of DTN and the Bundle Protocol are presented in [1, 2, 3]. Another transport protocol, named Saratoga, is proposed in [4] to run on top of the User Datagram Protocol (UDP); Saratoga is used by satellites to communicate with the ground stations on Earth.


[1] Vinton Cerf, et al., Interplanetary Internet (IPN): Architectural Design, Jet Propulsion Laboratory, The MITRE Corporation, 2001

[2] Kevin Fall. A delay-tolerant network architecture for challenging internets. In Proceedings of the 2003 conference on Applications, technologies, architectures, and protocols for computer communications (SIGCOMM ’03). ACM, New York, NY, USA, 27-34. DOI=10.1145/863955.863960 http://doi.acm.org/10.1145/863955.863960, 2003

[3] Kevin Fall, Wei Hong, Samuel Madden, Custody Transfer for Reliable Delivery in Delay Tolerant Networks, 2003

[4] Lloyd Wood, Wesley M. Eddy, Will Ivancic, Jim McKim, Chris Jackson. Saratoga: A Delay-Tolerant Networking convergence layer with efficient link utilization, Third International Workshop on Satellite and Space Communications (IWSSC ’07), 2007

Review on the ACM Turing Award Lecture given by Cerf and Kahn

Vinton Cerf (left) and Bob Kahn (right)

In the ACM Turing Award lecture given by the awardees, Dr. Vinton Cerf and Dr. Bob Kahn, they discussed several of their opinions regarding the evolution of the Internet, as well as the challenges and future trends of the Internet architecture.

As seen in previous literature [1, 2, 3, 4], the Internet follows an end-to-end set of policies that enables communication between two hosts. To support a wide variety of services, functions should be built at the application layer rather than into the network itself [3].

According to Cerf and Kahn, the motivation behind the initial Internet architecture (to establish communication between two processes in different networks) differs from what users actually need today. For computer scientists, the Internet is a medium for computation; for a wide range of users, it is a means to share files and resources. Although the initial motivation no longer reflects today’s usage, the Internet remains stable: the basic architecture has evolved by introducing different layers over the network. They note in the lecture that if they had foreseen this set of requirements, the basic principles or architecture of the Internet might be different from what we have today.

The lecture also mentions different challenges that the Internet faces today. One such issue is network security, examples of which are spam and viruses; improvements on the security side proceed in parallel with improvements in the network. Another issue is the growing number of mobile devices connected to the Internet. It is an issue because these devices are not stationary and are not always connected. This was not foreseen in the original internetwork architecture because, at that time, hosts were connected most of the time and the topology of the network was almost fixed (aside from hosts being added). This became one of the future trends they mentioned. The problem of mobile hosts extends to the problem of the interplanetary Internet. Although TCP/IP works fine in the terrestrial Internet (allowing fast and reliable communication between processes), it is not effective in space for the following reasons. First, there is the physical distance between two communicating processes. Second, planets and objects in space are constantly moving. Third, TCP’s retransmission and flow-control mechanisms are not well suited to such long delays. The details of the Interplanetary Internet (IPN) are discussed in the IPN section above.


[1]  Marjory S. Blumenthal and David D. Clark. 2001. Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world. ACM Trans. Internet Technol. 1, 1 (August 2001), 70-109. DOI=10.1145/383034.383037 http://doi.acm.org/10.1145/383034.383037

[2] Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller and Lansing Sloan, Architectural Principles of the Internet, Internet Engineering Task Force (IETF), Request For Comments 1958, 1996

[3] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[4] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423

Review on “Rethinking the design of the Internet: The end-to-end arguments vs. the brave new world”

The increasing number of users, and their changing demands, is calling into question the capability of the end-to-end design of the Internet. These rapid changes and the growing range of Internet applications are straining the original design principles of the Internet. To understand the issue, let us first recall what the end-to-end argument is about. Previous memoranda and publications [2, 3, 4] state that end-to-end functions should be implemented in the knowledgeable end hosts of the network, not in the network itself, in order to support a variety of applications and services. The implementation should not be built into the lower levels of the Internet; rather, it should reside at the application layer.

As the number of hosts connecting to the network increases, the Internet itself is becoming harder to control. Operations on the Internet rest on the basic assumption that security measures exist only in the end hosts, which are in fact untrustworthy. With a growing number of users, the Internet itself cannot distinguish well-behaved users from those whose purpose is to annoy, spammers being one example. Spam, in fact, is the less alarming case when we think of transactions involving secured and private data. To address this, the communicating end parties should enforce security measures at the application layer. The involvement of third parties in an end-to-end communication also raises security issues, so it must be checked whether the design principle still applies in that context.
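As a concrete illustration of enforcing security at the endpoints rather than in the network, the two end hosts can authenticate messages themselves, for example with a shared-key message authentication code (a minimal sketch using Python’s standard library; the key and messages are placeholders, not a real deployment):

```python
import hmac
import hashlib

# The end hosts share a secret key; everything in between is untrusted.
key = b"shared-secret-between-endpoints"

def sign(message: bytes) -> bytes:
    """Sender attaches a MAC computed over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Receiver recomputes the MAC and compares in constant time."""
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer $100 to account 42"
tag = sign(msg)
assert verify(msg, tag)                            # intact message accepted
assert not verify(b"transfer $999 to 13", tag)     # tampered message rejected
```

Nothing in the network needs to understand or participate in this check, which is exactly the division of labor the end-to-end argument prescribes.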

Additionally, more demanding applications such as media streaming do not seem to benefit from the end-to-end design. In fact, streaming services involve multiple clients connecting to one server. To speed up communication between server and clients, the application layer provides the option of sacrificing some fidelity of the information, given that perfect transmission of data from one host to another is impossible, and that the farther the data travels, the higher the chance of corruption. Another solution is to create intermediary servers that provide the service to nearby recipients.

Different user requirements are also listed in the paper, such as trust, anonymity, involvement of other communicating parties, and multi-way communication. As a solution, the paper introduces several in-network functions motivated by the current requirements of users: firewalls, traffic filters, and Network Address Translation (NAT). This shows that the initial argument, that no functions should be implemented in the network, has been weakened. Perhaps the earlier Internet architecture did not foresee the kinds of issues presented in this paper.



[1]  Marjory S. Blumenthal and David D. Clark. 2001. Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world. ACM Trans. Internet Technol. 1, 1 (August 2001), 70-109. DOI=10.1145/383034.383037 http://doi.acm.org/10.1145/383034.383037

[2] Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller and Lansing Sloan, Architectural Principles of the Internet, Internet Engineering Task Force (IETF), Request For Comments 1958, 1996

[3] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[4] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423

Review on “Architectural Principles of the Internet”

This is a review of the memorandum (known as a Request for Comments) entitled “Architectural Principles of the Internet”, written by the Internet Engineering Task Force (IETF). Basic facts, policies, and issues about the Internet are pointed out in the memorandum.

The Internet as we know it was not created from a grand plan; instead, it evolved with the trends of technology. I agree with the statement in the paper that change really is the only thing that is permanent. If I relate this to the evolution of man: although many environmental changes have happened and several new strains of viruses have emerged, humans still survive. Likewise, technology changes over time with the needs of society, yet the Internet survives and continues to serve mankind.

Going back to the more technical side of the memorandum, it points out several things regarding the Internet architecture. Although it is said that the Internet has no fixed architecture, it has a set of traditions aimed at a specific goal: connectivity through the Internet Protocol, with the intelligence at the ends rather than hidden in the network itself. It also points out that although there is one internetworking protocol (the Internet Protocol), many networks implement more than one layer of protocols. The need for multiple protocol layers stems from the inevitable fact that new requirements will arise in the network, each needing another protocol. Additionally, there is a need to transition from one version of IP to another, mainly to keep data transmission working.

Since no centralized body owns or manages the Internet, it is necessary to check whether its main objective, communication, can still be supported. Several basic principles should be upheld despite changing technology and needs. One goal of the Internet is to support several types of network architectures; therefore, the Internet must not depend on the individual specifications of the underlying hardware. It is also reiterated that end-to-end functions should be handled by end-to-end protocols, because these functions are subject to transmission failures and security concerns. Saltzer also argued that these functions can be completely and correctly implemented only by the communicating end processes, not within the communication system itself.

The paper lists the design rules of the Internet. Since there is no body that administers the Internet, it is a good thing that several policies are published in the form of memoranda; this answers the issue that Clark also raised in his paper [2]. This memorandum presents several design issues that a user or a network administrator should know in order to work together as the Internet. It also stresses practicality and efficiency in solving network-related problems: for instance, if several solutions to a problem exist, a user or network administrator should use one of them rather than wait for a perfect one. It also mentions security, which I think is a very important aspect nowadays because of (1) the growing number of hosts on the Internet and (2) the lack of security mechanisms available in the network itself (the earlier papers [2] [3] do not give much emphasis to the issue of data security).



[1] Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller and Lansing Sloan, Architectural Principles of the Internet, Internet Engineering Task Force (IETF), Request For Comments 1958, 1996

[2] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[3] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423

Review on “The Design Philosophy of the DARPA Internet Protocols”

This paper [1] discusses the design philosophy behind the internet protocols presented in [2]. It enumerates the main and secondary goals of the internet and explains how the proposed protocols serve those goals. The paper also offers many suggestions on how to improve the original design of the protocols.

According to the paper, the main goal of the Internet architecture (the “internetwork architecture” [1]) was to develop an effective scheme for communication among independent network architectures. As historical background, the initial motivation was to connect the original ARPANET to the ARPA packet radio network, in order to give radio network users access to the large-scale machines on the ARPANET.

According to [1], in order to support an effective internet communication scheme, the following properties should hold.

  1. Survivability: The internet should continue to work even in the face of gateway or network failures. Communication between two processes should suffer no interruption (or failure) unless there is a total partition between the two communicating processes. To achieve this, state information must be protected. The paper presents a fate-sharing model for this matter, which holds that it is “acceptable to lose the state information associated with an entity if, at the same time, the entity itself is lost.” The advantages of this model are that (1) it is easy to implement and (2) it protects against failures of intermediate entities. The consequence, however, is that intermediate entities should hold no state information about ongoing connections, so packet switches must be stateless.
  2. Should support different types of services: The original TCP had been published about 15 years before this paper [1]. The basic services of the internetwork TCP were remote login and file transfer. Although the internetwork TCP supported both services, it did not adapt to their differing priorities: file transfer, for example, is less concerned with delay but very concerned with bandwidth, whereas remote login cares about low delay and less about bandwidth. Another example is audio streaming, which wants low delay but does not care whether every piece of data is received. To support different types of services, the paper describes the split into two layers, TCP and IP. It also introduces the concept of UDP, for services that need speed with little concern for perfect data delivery.
  3. Should support a variety of networks: The internet architecture supports a wide variety of networks because it makes a minimum set of assumptions. The basic assumption is only that a network is capable of transmitting a packet, or datagram. Building in many assumptions about network capabilities is undesirable because the communication protocols would then have to be re-engineered for each specific network type, limiting the kinds of networks that could be supported.
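The trade-off in item 2 can be made concrete with a small simulation: a reliable, TCP-like channel retransmits until everything arrives (adding rounds of delay), while a best-effort, UDP-like channel delivers whatever got through immediately. This is a toy model with a made-up loss pattern, purely for illustration:

```python
# Toy comparison of reliable (TCP-like) vs best-effort (UDP-like)
# delivery over a channel that drops every third packet it carries.
def lossy_channel(packets):
    return [p for i, p in enumerate(packets) if i % 3 != 2]

def udp_like(packets):
    # Deliver whatever arrives, immediately: low delay, possible gaps.
    return lossy_channel(packets), 1              # one round, done

def tcp_like(packets):
    # Retransmit missing packets until all arrive: complete, but slower.
    delivered, rounds, pending = [], 0, list(packets)
    while pending:
        rounds += 1
        got = lossy_channel(pending)
        delivered += got
        pending = [p for p in pending if p not in got]
    return sorted(delivered), rounds

data = list(range(9))
print(udp_like(data))   # some packets missing, but only one round
print(tcp_like(data))   # every packet, at the cost of extra rounds
```

A file transfer wants the `tcp_like` behavior; a live audio stream is better served by `udp_like`, since a late retransmitted sample is useless anyway.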

The internet communication protocol must also permit distributed management of its resources, be cost-effective, permit hosts to attach easily, and be accountable. Note that the ordering of the goals reflects their importance at the time, which may not apply today. The later goals were considered less important than the first three mentioned above, and therefore received less attention in the original communication protocol.

This paper highlights the concept of the datagram, which is, in essence, a packet whose delivery is not guaranteed. For a wide variety of applications, the datagram serves its purpose of providing efficient transmission of data. However, the paper does not resolve the lower-priority goals of the original design, such as accountability and resource management; the author states plainly that the datagram model offers no solution for these goals.



[1] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[2] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423

Review on “A Protocol for Packet Network Intercommunication”


Communication and resource sharing between different packet-switching networks is not possible because of individual network differences, specifically in addressing, packet sizes, error checking, and routing.


Keywords: packet, protocol, gateway, port


This paper presents a protocol for interconnecting different packet-switching networks. The protocol is designed to allow communication between networks with different addressing conventions, packet sizes, transmission characteristics, end-to-end restoration procedures, routing, and fault detection. One of the main goals of the paper is to resolve these individual differences.

  1. To solve the issue of different addressing conventions, the paper presents a uniform addressing scheme that is understood by every individual network.
  2. To solve the issue of differing maximum packet sizes, an intermediary process is proposed that fragments larger data packets into two or more smaller packets. This resolves the issue without changing the maximum packet size of any network.
  3. Time delays in accepting and delivering messages affect the transmission performance of each individual network. To address this, the paper develops internetwork timing procedures that ensure successful transmission.
  4. To solve the issue of data lost to corruption, end-to-end restoration procedures are proposed.
  5. To avoid dead destinations or inaccessible hosts, status information such as routing, fault detection, and isolation, which differs for each network, should be coordinated properly between communicating networks.

The solutions presented in the paper rest on creating an interface between independent networks. This interface, which they call a gateway, is responsible for providing a route for a data packet, fragmenting a packet if it is too large for the destination network, reformatting the packet so it is understood by the local network through which it travels, and coordinating status information between communicating networks. To show how vital the gateway is for communication between independent networks, the paper walks through an example message delivery.

Let A, B, C be independent networks, M, N be gateways, and X, Y be processes in networks A and C respectively. A network may have one or more gateways, as in network B. To deliver a packet from process X to Y, the packet must contain the necessary information about the source, the destination, and the data being transmitted. The paper presents a standard packet format for this purpose.

The local header is used by the local network, in its own specific format, to route the packet inside that network. The internetwork header is a standard set of information, readable by all gateways, that identifies the addresses of the source and destination. The usage of each field is discussed in detail in the paper.

Given this network topology and packet format, a message from process X in network A follows the procedure below.

  1. The packet reaches gateway M from network A and is reformatted/fragmented to meet the requirements of network B.
  2. The packet traverses network B and is routed toward gateway N.
  3. The packet reaches gateway N and is reformatted or fragmented to meet the requirements of network C.
  4. Finally, network C delivers the packet to the host where process Y resides. Reassembly and error checking of the message happen at the destination host.

Note that in routing, the standard addresses written in the packet, together with the network ID, play a vital role in choosing which networks and gateways the packet will traverse.

The paper also presents a set of procedures required on each host connected to a network. These procedures are implemented in the transmission control program (TCP), which manages the splitting of messages into packets for sending. It is also in charge of organizing the data received and passing it on to the appropriate processes.

The paper also introduces the concept of ports and port addressing to direct a stream of messages to a specific process. This information appears in the more detailed packet formats presented in the paper’s Figures 5 and 6.

The TCP is involved in the reconstruction of messages: information carried in the packet, such as the sequence number and the process address, is needed to assemble the segments received. Along with this information comes the checksum, which detects whether the data has been corrupted; in case of corruption, a retransmission is requested. For messages received successfully, a positive acknowledgement is sent back to the sender.
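The checksum idea can be illustrated with the 16-bit ones’-complement sum later specified for the Internet protocols in RFC 1071 (a sketch of the general technique; the exact checksum details in the original paper differ):

```python
# 16-bit ones'-complement checksum, in the style of RFC 1071.
# The sender stores the complement of the sum; the receiver sums the
# data including the checksum field and expects a result of zero.
def checksum16(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                              # pad to 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return ~total & 0xFFFF

segment = b"some segment data!"                      # even length, for alignment
cksum = checksum16(segment)
wire = segment + cksum.to_bytes(2, "big")            # checksum travels with the data
assert checksum16(wire) == 0                         # intact data verifies to zero
assert checksum16(b"sOme" + wire[4:]) != 0           # a flipped bit is detected
```

A failed check at the receiver triggers the retransmission request described above; a passing check lets the receiver send the positive acknowledgement.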

Several other mechanisms are presented in the paper, covering flow control and the creation and destruction of process-to-process associations.


I want to emphasize several points from the paper.

  1. Fragmentation is done by gateways so that larger data packets can be carried across a network with a smaller maximum packet size. The idea that reassembly is done by the hosts is brilliant because it avoids congestion at the gateways and so speeds up the transmission of data; otherwise, a solution such as adding computational power to the resources implementing the interface would be expensive.
  2. A packet may be fragmented by one or more gateways, resulting in more packets with smaller data content. A question arises for subsequent networks with larger maximum packet sizes (even larger than the original, unfragmented packet). Since the cost of transmitting a packet is proportional to its size, there should be no increase (and possibly a decrease) in the cost of transmitting the data regardless of fragmentation.
  3. The transmission control program (TCP) manages the message segments to be sent to specific processes in another network. There are two options for packaging message segments: by destination host, or by specific process at the destination host. I agree with the choice of packaging messages per destination process, because it lessens the workload of the recipient host.
  4. Although fragmentation of larger packets has no effect on transmission cost, as pointed out in item 2, additional computational cost is inevitable, since the tasks of reassembly and of delegating message segments to specific processes fall to the recipient host’s TCP.
  5. The introduction of ports helps the receiving TCP direct messages to specific processes.



Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423


Understanding TCP/IP


TCP/IP (Transmission Control Protocol/Internet Protocol)

TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language or protocol of the Internet. It can also be used as a communications protocol in a private network (either an intranet or an extranet). When you are set up with direct access to the Internet, your computer is provided with a copy of the TCP/IP program just as every other computer that you may send messages to or get information from also has a copy of TCP/IP.

TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the assembling of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the address part of each packet so that it gets to the right destination. Each gateway computer on the network checks this address to see where to forward the message. Even though some packets from the same message are routed differently than others, they’ll be reassembled at the destination.
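The reassembly behavior described above can be sketched as follows (a toy model: real TCP uses byte-level sequence numbers and sliding windows, reduced here to numbered chunks):

```python
import random

# Toy demonstration that packets arriving out of order still reassemble
# into the original message, as the TCP layer does at the destination.
def packetize(message: str, size: int):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    return "".join(chunk for _, chunk in sorted(packets))

msg = "TCP/IP reassembles packets that took different routes"
packets = packetize(msg, size=5)
random.shuffle(packets)        # differently-routed packets arrive out of order
assert reassemble(packets) == msg
```

This is why IP is free to route each packet independently: the sequence numbers carried with each packet let the receiving TCP restore the original order regardless of the paths taken.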


TCP/IP uses the client/server model of communication in which a computer user (a client) requests and is provided a service (such as sending a Web page) by another computer (a server) in the network. TCP/IP communication is primarily point-to-point, meaning each communication is from one point (or host computer) in the network to another point or host computer. TCP/IP and the higher-level applications that use it are collectively said to be “stateless” because each client request is considered a new request unrelated to any previous one (unlike ordinary phone conversations that require a dedicated connection for the call duration). Being stateless frees network paths so that everyone can use them continuously. (Note that the TCP layer itself is not stateless as far as any one message is concerned. Its connection remains in place until all packets in a message have been received.)

Many Internet users are familiar with the even higher-layer application protocols that use TCP/IP to get to the Internet. These include the World Wide Web’s Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), Telnet, which lets you log on to remote computers, and the Simple Mail Transfer Protocol (SMTP). These and other protocols are often packaged together with TCP/IP as a “suite.”

Personal computer users with an analog phone modem connection to the Internet usually get to the Internet through the Serial Line Internet Protocol (SLIP) or the Point-to-Point Protocol (PPP). These protocols encapsulate the IP packets so that they can be sent over the dial-up phone connection to an access provider’s modem.

Protocols related to TCP/IP include the User Datagram Protocol (UDP), which is used instead of TCP for special purposes. Other protocols are used by network host computers for exchanging router information. These include the Internet Control Message Protocol (ICMP), the Interior Gateway Protocol (IGP), the Exterior Gateway Protocol (EGP), and the Border Gateway Protocol (BGP).



This was last updated in October 2008
Editorial Director: Margaret Rouse
This is a repost from http://searchnetworking.techtarget.com/definition/TCP-IP