tiTorrent: File Syncing using BitTorrent

 


On Interplanetary Internet (IPN)

In their ACM Turing Award lecture, one of the future trends presented by Cerf and Kahn is the extension of the terrestrial Internet to interplanetary communication. Here’s a brief discussion of the Interplanetary Internet.

For 40 years, NASA used point-to-point radio links to send data from the surface of Mars directly to the Deep Space Network stations located at three sites around the globe. Vint Cerf, together with some of his colleagues at the Jet Propulsion Laboratory (JPL), decided to improve communication between spacecraft by using a richer networking architecture than a point-to-point radio link. Cerf reasoned that since TCP/IP works well in the terrestrial Internet, the same architecture could be extended to Mars. The plan has the further advantage that the Martian vicinity itself is a low-delay environment, where TCP/IP behaves much as it does on Earth.

The problems arise in interplanetary communication. The first problem is that the speed of light is too slow: a round trip between Mars and Earth takes 7 to 40 minutes, and TCP/IP does not work well with such long delays. Another problem is celestial motion: objects in space are constantly moving, and transmission can also be disrupted when other bodies or spacecraft pass between the communicating parties. To solve these issues, they designed a network architecture that tolerates the long delays and disruptions between communicating spacecraft, which they call the Delay/Disruption Tolerant Network (DTN), and they implemented and tested the model here on Earth.
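
To put the speed-of-light problem in concrete terms, here is a back-of-the-envelope calculation in Python. The distance figures are approximate Earth–Mars ranges assumed for illustration, not numbers from the lecture; the results land in the same ballpark as the 7-to-40-minute round trips cited above.

```python
# Back-of-the-envelope light delay between Earth and Mars.
# Distances are assumptions: ~54.6e9 m at closest approach,
# ~401e9 m when the planets are farthest apart.
C = 299_792_458  # speed of light in vacuum, m/s

for label, distance_m in [("closest", 54.6e9), ("farthest", 401e9)]:
    one_way_min = distance_m / C / 60
    print(f"{label}: one-way {one_way_min:.1f} min, "
          f"round trip {2 * one_way_min:.1f} min")

# closest:  one-way 3.0 min,  round trip 6.1 min
# farthest: one-way 22.3 min, round trip 44.6 min
```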

In January 2004, rovers landed on Mars programmed to send data directly back to Earth over a point-to-point radio link, at an expected speed of only 28 kbps. When they turned the radio on, however, the rovers overheated. The engineers then realized that X-band radios were available both on board the rovers and on the Mars orbiters. The orbiters had previously been used to map the surface of Mars, and since that project was finished, the orbiters and the overheating rovers were reprogrammed: the orbiters collected the data from the rovers, stored it, and, upon reaching the right position, transmitted it to the Deep Space Network on Earth. In other words, they used the store-and-forward technique familiar from terrestrial mobile networks.
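
The store-and-forward idea can be sketched in a few lines of Python. Everything below (class and method names, the contact event) is hypothetical, purely to illustrate holding data until a forwarding opportunity appears:

```python
from collections import deque

class RelayNode:
    """Toy store-and-forward relay: buffers data until a contact window opens."""
    def __init__(self, name):
        self.name = name
        self.buffer = deque()

    def receive(self, bundle):
        # Store: data is kept locally instead of being dropped
        # when no onward link is currently available.
        self.buffer.append(bundle)

    def contact(self, next_hop):
        # Forward: drain the buffer once a link to the next hop exists.
        while self.buffer:
            next_hop.receive(self.buffer.popleft())

# A rover hands its data to an orbiter, which later reaches Earth's station.
orbiter, ground = RelayNode("orbiter"), RelayNode("DSN ground station")
orbiter.receive("science data from rover")
orbiter.contact(ground)   # the orbiter comes into view of Earth
print(ground.buffer)      # deque(['science data from rover'])
```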

When the Phoenix lander arrived on Mars, it was not programmed to send its data directly back to Earth; instead, it used the same store-and-forward technique adopted for the earlier rovers. They then planned to standardize the store-and-forward approach for interplanetary communication.

The resulting solution is the Delay/Disruption Tolerant Network with the Bundle protocol for communicating spacecraft. The details of DTN and the Bundle protocol are presented in detail in [1, 2, 3]. Another transport protocol, named Saratoga, is proposed in [4] to run on top of the User Datagram Protocol (UDP); it is used by satellites to communicate with ground stations on Earth.
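
Saratoga itself is specified in [4]; the snippet below is not Saratoga, only a minimal reminder of what "running on top of UDP" means: a connectionless send with no handshake, which is attractive when every round trip is expensive.

```python
import socket

# Receiver binds first (it would normally run in a separate process).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

# Sender: a single datagram goes out with no connection setup,
# so no handshake round trips are spent before data starts flowing.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"file chunk 0", ("127.0.0.1", 9999))

data, addr = receiver.recvfrom(2048)
print(data)  # b'file chunk 0' -- reliability, if needed, is added above UDP
```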

References

[1] Vinton Cerf et al., Interplanetary Internet (IPN): Architectural Design, Jet Propulsion Laboratory / The MITRE Corporation, 2001

[2] Kevin Fall. A delay-tolerant network architecture for challenged internets. In Proceedings of the 2003 conference on Applications, technologies, architectures, and protocols for computer communications (SIGCOMM ’03). ACM, New York, NY, USA, 27-34. DOI=10.1145/863955.863960 http://doi.acm.org/10.1145/863955.863960, 2003

[3] Kevin Fall, Wei Hong, Samuel Madden, Custody Transfer for Reliable Delivery in Delay Tolerant Networks, 2003

[4] Lloyd Wood, Wesley M. Eddy, Will Ivancic, Jim McKim, Chris Jackson, Saratoga: A Delay-Tolerant Networking convergence layer with efficient link utilization, Third International Workshop on Satellite and Space Communications (IWSSC ’07), 2007

Review on the ACM Turing Award Lecture given by Cerf and Kahn

Vinton Cerf (left) and Robert Kahn (right)

In the ACM Turing Award lecture given by the awardees, Dr. Vinton Cerf and Dr. Robert Kahn, they shared several opinions regarding the evolution of the Internet, and discussed challenges and future trends of the Internet architecture.

As seen in previous literature [1, 2, 3, 4], the Internet follows an end-to-end set of policies that enables communication between two hosts. To support a wide variety of services, functions should be built in the application layer rather than in the network itself [3].

According to Cerf and Kahn, the motivation for the initial Internet architecture, namely to establish communication between two processes in different networks, differs from what users actually need today. For computer scientists, the Internet is a medium for computation; for the wider population of users, it is a means to share files and resources. Although the initial motivation no longer matches today's use, the Internet remains stable; the basic architecture has evolved by introducing different layers over the network. They said in the lecture that, had they foreseen this set of requirements, the basic principles or architecture of the Internet might be different from what we have today.

The lecture also mentions different challenges that the Internet is facing today. One such issue is network security, examples of which are spam and viruses; improvements in security proceed in parallel with improvements in the network. Another issue is the rapidly growing number of mobile devices connected to the Internet. It is an issue because these devices change position and are not always connected. This was not foreseen in the original internetwork architecture, because at that time hosts were connected most of the time and the topology of the network was almost fixed (aside from hosts being added); it then became one of the future trends they mentioned. The problem of mobile hosts extends to the problem of the interplanetary Internet. Although TCP/IP works fine in the terrestrial Internet, allowing fast and reliable communication between processes, it is not effective in space for the following reasons. First, the physical distance between two communicating processes is enormous. Second, planets and other objects in space are constantly moving. Third, TCP is not well suited for file transfers over such links. The details of the Interplanetary Internet (IPN) are discussed in the section above.
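
One concrete way to see the first problem: a TCP sender can keep at most one window of unacknowledged data in flight per round trip, so throughput is capped at roughly window/RTT. Assuming the classic 64 KiB receive window (an assumption for illustration) and the round-trip times quoted in the IPN section:

```python
WINDOW = 64 * 1024          # bytes in flight per RTT (classic 64 KiB window)

for rtt_min in (7, 40):     # Earth-Mars round-trip times quoted above
    rtt_s = rtt_min * 60
    print(f"RTT {rtt_min} min -> at most {WINDOW / rtt_s:.0f} bytes/s")

# RTT 7 min  -> at most 156 bytes/s
# RTT 40 min -> at most 27 bytes/s
```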

References

[1]  Marjory S. Blumenthal and David D. Clark. 2001. Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world. ACM Trans. Internet Technol. 1, 1 (August 2001), 70-109. DOI=10.1145/383034.383037 http://doi.acm.org/10.1145/383034.383037

[2] Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller and Lansing Sloan, Architectural Principles of the Internet, Internet Engineering Task Force (IETF), Request for Comments 1958, 1996

[3] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[4] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423

Review on “Rethinking the design of the Internet: The end-to-end arguments vs. the brave new world”

The increasing number of Internet users, and their changing demands, call into question the end-to-end argument behind the design of the Internet. These rapid changes and new applications are straining the original design principles. To understand the issue, let us first recall what the end-to-end argument is about. Earlier memoranda and publications [2, 3, 4] state that, to support a variety of applications and services, end-to-end functions should be implemented at the knowledgeable end entities of the network and not in the network itself: the implementation should not be built into the lower levels of the Internet, but rather into the application layer.

As the number of hosts connecting to the network increases, the Internet itself becomes harder to control. Operations on the Internet rest on the assumption that security measures exist only at the end hosts, which are in fact untrustworthy. With the growing number of Internet users, the network itself cannot filter out which users behave well and which intend to abuse it, such as spammers. Spam, moreover, is less alarming than attacks on transactions involving secured and private data. To address this, the end-to-end communicating parties should enforce security measures at the application layer. The involvement of third parties in an end-to-end communication also raises security issues, so it must be checked whether the design principle still applies in that context.
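
As a toy illustration of enforcing security at the endpoints, the two parties below share a key and authenticate each message themselves, so the network in between can remain dumb and untrusted. The key and message are invented for the example:

```python
import hashlib
import hmac

key = b"shared-secret"                    # known only to the two endpoints
msg = b"transfer 100 to account 42"

# Sender attaches an authentication tag computed end to end.
tag = hmac.new(key, msg, hashlib.sha256).digest()

# Receiver recomputes and verifies; any on-path tampering breaks the tag.
expected = hmac.new(key, msg, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
print("message authenticated end to end")
```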

Additionally, more demanding applications such as media streaming do not seem to benefit from the end-to-end design. Streaming services typically involve many clients connecting to one server. To speed up communication between server and clients, the application layer can sacrifice some fidelity of the information, given that a perfect transmission of data from one host to another is impossible, and that the farther data travels, the higher the chance of corruption. Another solution is to introduce intermediary servers that provide the service to nearby recipients.
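
A minimal sketch of the intermediary-server idea: a cache sits between the clients and the distant origin, so only the first request pays the long trip. All names here are hypothetical:

```python
class CachingIntermediary:
    """Toy intermediary: serves repeated requests without re-contacting the origin."""
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # function simulating the distant server
        self.cache = {}

    def get(self, resource):
        if resource not in self.cache:                          # only the first
            self.cache[resource] = self.origin_fetch(resource)  # request travels far
        return self.cache[resource]

fetches = []
proxy = CachingIntermediary(lambda r: fetches.append(r) or f"<bytes of {r}>")
proxy.get("video.mp4"); proxy.get("video.mp4"); proxy.get("video.mp4")
print(len(fetches))  # 1 -- the origin was contacted once for three requests
```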

The paper also lists different user requirements, such as trust, anonymity, involvement of other communicating parties, and multi-way communication. As a solution, it introduces several network functions motivated by these current user requirements: firewalls, traffic filters, and Network Address Translation (NAT). This shows that the initial argument, that no functions should be implemented in the network, no longer holds strictly. Perhaps the earlier Internet architecture simply did not foresee the kinds of issues presented in this paper.

________________________________________________________________

References

[1]  Marjory S. Blumenthal and David D. Clark. 2001. Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world. ACM Trans. Internet Technol. 1, 1 (August 2001), 70-109. DOI=10.1145/383034.383037 http://doi.acm.org/10.1145/383034.383037

[2] Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller and Lansing Sloan, Architectural Principles of the Internet, Internet Engineering Task Force (IETF), Request for Comments 1958, 1996

[3] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[4] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423


Review on “Architectural Principles of the Internet”

This is a review of the memorandum (known as a Request for Comments) entitled “Architectural Principles of the Internet”, published by the Internet Engineering Task Force (IETF). Basic facts, policies, and issues about the Internet are pointed out in the memorandum.

The Internet as we know it was not created according to a grand plan; instead, it evolved with the trends of technology. I agree with the statement in the paper that change really is the only thing that is permanent. To relate this to the evolution of man: although many environmental changes have happened and several new strains of viruses have emerged, humans still survive. Likewise, technology changes over time with the needs of society, yet the Internet survives and continues to provide service to mankind.

Going back to the more technical side of the memorandum, it points out several things regarding the Internet architecture. Although it is said that the Internet has no architecture, it has a set of traditions. These traditions aim at a specific goal: connectivity, achieved using the Internet Protocol, with intelligence placed end to end rather than hidden in the network itself. It also points out that although there is one internetworking protocol (the Internet Protocol), several networks implement more than one. The need for multiple protocols comes from the inevitable fact that new requirements will arise in the network, each calling for a protocol of its own. Additionally, there is a need to transition from one version of IP to another while data transmission continues.

Since there is no centralized body that owns or manages the Internet, it is necessary to check whether we can still support its main objective, communication. Several basic principles should be upheld despite changing technology and needs. One goal of the Internet is to support several types of network architectures; therefore, the Internet must not depend on the specifics of individual hardware. It is also reiterated that end-to-end functions should be handled by end-to-end protocols, because these functions are subject to transmission failures and security risks. As Saltzer argued, such functions can be completely and correctly implemented only by the end-to-end communicating processes, not by the communication system between them.
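
Saltzer's classic illustration of this argument is a careful file transfer: only the endpoints can confirm that the file arrived intact, since hop-by-hop checks can miss corruption introduced elsewhere along the path. A minimal version of that end-to-end check:

```python
import hashlib

def digest(data: bytes) -> str:
    """End-to-end integrity check: a hash computed only at the endpoints."""
    return hashlib.sha256(data).hexdigest()

original = b"contents of the transferred file"
sent_digest = digest(original)            # computed at the sending endpoint

received = original                       # whatever actually arrived
assert digest(received) == sent_digest    # verified at the receiving endpoint
print("end-to-end integrity check passed")
```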

The paper lists the design rules of the Internet. Since no single body administers the Internet, it is a good thing that several policies are published in the form of memoranda; this answers an issue that Clark also raised in his paper [2]. The memorandum presents several design guidelines that a user or a network administrator should know in order to work together as the Internet. It also emphasizes practicality and efficiency in solving network-related problems: for instance, if several solutions to a problem exist, a user or network administrator should use one of them rather than wait for a perfect one. It also discusses security, which I think is a very important aspect nowadays because of (1) the growing number of hosts on the Internet and (2) the lack of security mechanisms available in the network itself (papers [2] and [3] do not give much emphasis to the issue of data security).

____________________________________________________________________________________________

References

[1] Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller and Lansing Sloan, Architectural Principles of the Internet, Internet Engineering Task Force (IETF), Request for Comments 1958, 1996

[2] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[3] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423

Review on “The Design Philosophy of the DARPA Internet Protocols”

This paper [1] presents a discussion of the design philosophy behind the internet protocols presented in [2]. It enumerates the main and subsidiary goals of the internet and explains how the proposed protocols serve their purpose. The paper also provides many suggestions on how to improve the original design of the protocols.

According to the paper, the main goal of the Internet architecture (the internetwork architecture [1]) was to develop an effective communication scheme spanning independent network architectures. As background, the initial motivation was to connect the original ARPANET to the ARPA packet radio network, giving radio network users access to the large-scale machines on the ARPANET.

According to [1], in order to support an effective internet communication scheme, the following properties should hold.

  1. Survivability: The internet should continue to work even if gateways or networks fail. Communication between two processes should not be interrupted unless the network between them is totally partitioned. The solution is to protect the state information of the conversation, and the paper presents a fate-sharing model for this: “it is acceptable to lose the state information associated with an entity if, at the same time, the entity itself is lost.” The advantages of this model are that (1) it is easy to implement and (2) it protects against the failure of intermediate entities. The consequence, however, is that intermediate entities must not hold state information about ongoing connections, so packet switches are stateless (a sketch of this appears after the list).
  2. Support for different types of services: The original TCP was published 15 years before this paper [1]. The basic uses of the internetwork TCP were remote login and file transfer. Although TCP supports each service, it does not adapt to their differing priorities: file transfer, for example, is less concerned with delay but very concerned with bandwidth, while remote login is more concerned with low delay and less with bandwidth. Audio streaming is yet another case: it needs low delay but does not care whether all the data is received. To support different types of services, the paper describes the split into two layers of protocol, TCP and IP, and introduces the concept of UDP, for services that need speed and have little concern for perfect data delivery.
  3. Support for a variety of networks: The internet architecture supports a wide variety of networks because it makes a minimal set of assumptions; the basic assumption is only that a network can transmit a packet, or datagram. Building in many assumptions about network features is undesirable, because the protocols would then have to be re-engineered for each specific network type, limiting the types of networks that could be supported.
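
The fate-sharing consequence from item 1 can be made concrete. In the sketch below, all connection state (here just a sequence counter) lives in the endpoint objects, while the switch is a pure function of the packet and a route table, so losing a switch loses no conversation state. All names are invented:

```python
def switch(packet, routes):
    """Stateless forwarder: output depends only on the packet and a route table."""
    return routes[packet["dst"]]

class Endpoint:
    """All per-connection state (e.g., sequence numbers) lives at the edge."""
    def __init__(self, name):
        self.name, self.next_seq = name, 0

    def send(self, dst, payload):
        packet = {"src": self.name, "dst": dst,
                  "seq": self.next_seq, "payload": payload}
        self.next_seq += 1                 # connection state updated here only
        return packet

a = Endpoint("A")
routes = {"B": "link-2"}
pkt = a.send("B", "hello")
print(switch(pkt, routes))                # 'link-2'; the switch remembered nothing
```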

The internet communication protocol must also permit distributed management of its resources, be cost effective, permit hosts to attach easily, and be accountable. Note that the ordering of the goals reflects their importance at the time, which may not apply today. The last goals are said to be less important than the first three mentioned above, and therefore they were not given much attention in the original protocols.

This paper explains the concept of the datagram, in essence a packet whose successful transmission is not guaranteed. For a wide variety of applications, the datagram serves its purpose of providing efficient transmission of data. However, the paper does not resolve the lower-priority goals of the original design, such as accountability and resource management; the author states plainly that the datagram model offers no solution for these goals.
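
A datagram in this sense can be modeled as a self-contained packet that the network may silently drop; reliability, where wanted, is layered on top by the endpoints, as TCP does over IP. A toy model:

```python
import random

def best_effort_send(packet, loss_rate=0.2):
    """Deliver the datagram, or silently drop it: no guarantee, no notification."""
    return packet if random.random() > loss_rate else None

random.seed(1)
delivered = [best_effort_send({"seq": i, "data": b"x"}) for i in range(10)]
print(sum(p is not None for p in delivered), "of 10 datagrams arrived")
# with this seed: 7 of 10 datagrams arrived
```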

___________________________________________________________________

References

[1] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[2] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423