I recently imported my Tumblr posts into my WordPress blog. Below are the three easy steps for importing yours.

The WordPress.com Blog

We’ve recently noticed that a fair number of you have been bringing your tumblelogs over from Tumblr to WordPress.com using one of the various Tumblr-to-WXR conversion tools on the web. We thought you would appreciate an easier way to bring your content over, so here are three easy steps for importing it.

Authenticate with Tumblr

To bring your tumblelog’s content to WordPress.com, head to Tools → Import in your WordPress.com dashboard and look for the Tumblr importer. If you don’t already have an account here on WordPress.com then head over and sign up first.

Click the link to get started, then enter the email address you used to sign up for Tumblr and your Tumblr password, and click Connect to Tumblr.

Start the Import

The importer will then fetch a list of your blogs and let you pick which one to import. Click Import…


On Interplanetary Internet (IPN)

In their ACM Turing Award lecture, one of the future trends presented by Cerf and Kahn is the extension of the terrestrial Internet for interplanetary communication. Here’s a brief discussion of the Interplanetary Internet.

For 40 years NASA has used point-to-point radio links to send data from the surface of Mars directly to the deep-space stations located at three sites around the globe. Vint Cerf, together with some of his colleagues at the Jet Propulsion Laboratory (JPL), decided to improve communication between spacecraft by using a richer networking architecture than a point-to-point radio link. Cerf reasoned that since TCP/IP works well in the terrestrial Internet, the same architecture could be extended to Mars. The plan had an added advantage: the local environment on Mars is a low-delay one, so TCP/IP could work well there.

The problems arise from interplanetary communication itself. The first problem is that the speed of light is too slow: a round trip between Mars and Earth takes 7 to 40 minutes, and TCP/IP does not work well with such long delays. Another problem is celestial motion; objects in space are constantly moving, and transmissions can also be disrupted by other spacecraft. To solve these issues, they designed a network architecture that tolerates the long delays and disruptions between communicating spacecraft, which they call the Delay/Disruption Tolerant Network (DTN). They implemented and tested the model here on Earth.
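
The light-delay figures are easy to check with a back-of-the-envelope calculation. The sketch below uses approximate orbital distances (a close opposition at roughly 0.38 AU and a solar conjunction at roughly 2.67 AU), not mission data, and it brackets the 7-to-40-minute range quoted above:

```python
# Light-time round trip between Earth and Mars at closest and farthest
# approach. Distances are approximate orbital figures, not mission data.
C = 299_792_458          # speed of light, m/s
AU = 1.496e11            # one astronomical unit, metres

closest = 0.38 * AU      # Mars-Earth distance at a close opposition
farthest = 2.67 * AU     # Mars-Earth distance near solar conjunction

rtt_min = 2 * closest / C / 60    # round-trip time in minutes
rtt_max = 2 * farthest / C / 60

print(f"RTT at closest approach:  {rtt_min:.1f} minutes")
print(f"RTT at farthest approach: {rtt_max:.1f} minutes")
```

A TCP handshake alone needs one round trip before any data flows, so even at closest approach a connection would sit idle for several minutes, which is why a delay-tolerant design was needed.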

In January 2004, rovers landed on Mars programmed to send data directly back to Earth over a point-to-point radio link, at an expected (and very slow) speed of 28 kbps. When the radio was turned on, however, it overheated. The team then realized that the rovers also carried X-band radios that could reach the orbiters circling Mars. These orbiters had previously been used to map the surface of Mars, and since that project was finished, both the orbiters and the rovers were reprogrammed: the orbiters collected data from the rovers, stored it, and, when they reached the right position, transmitted it to the deep-space stations on Earth. In other words, they used the store-and-forward technique familiar from terrestrial mobile networks.
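
The store-and-forward idea can be sketched in a few lines. This is a toy model, not actual JPL software; the class and method names are purely illustrative. An orbiter buffers whatever the rover sends and only releases it when a ground station comes into view:

```python
from collections import deque

class Orbiter:
    """Toy store-and-forward relay: buffers rover data until Earth is in view."""
    def __init__(self):
        self.buffer = deque()

    def receive(self, packet):
        # Store: data arriving from the rover is held on board,
        # since Earth may not be reachable at this moment.
        self.buffer.append(packet)

    def pass_over_earth(self):
        # Forward: once in line of sight of a deep-space station,
        # transmit everything that was buffered, in arrival order.
        delivered = list(self.buffer)
        self.buffer.clear()
        return delivered

orbiter = Orbiter()
orbiter.receive("rover image 1")
orbiter.receive("soil analysis")
earth_received = orbiter.pass_over_earth()
print(earth_received)   # ['rover image 1', 'soil analysis']
```

The key design point is that no end-to-end connection ever exists: the rover-to-orbiter hop and the orbiter-to-Earth hop are independent, so each can happen whenever its own link is available.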

When the Phoenix lander arrived on Mars, it was not designed to send data directly back to Earth, so it used the same store-and-forward technique as the reprogrammed rovers. The team then planned to standardize the store-and-forward approach for interplanetary communication.

The solution is to use the Delay/Disruption Tolerant Network with the Bundle protocol for communicating spacecraft. DTN and the Bundle protocol are presented in detail in [1, 2, 3]. Another transport protocol, named Saratoga, is proposed in [4] to run on top of the User Datagram Protocol (UDP); it is used by satellites to communicate with ground stations on Earth.

References

[1] Vinton Cerf et al., Interplanetary Internet (IPN): Architectural Design, Jet Propulsion Laboratory, The MITRE Corporation, 2001

[2] Kevin Fall. A delay-tolerant network architecture for challenging internets. In Proceedings of the 2003 conference on Applications, technologies, architectures, and protocols for computer communications (SIGCOMM ’03). ACM, New York, NY, USA, 27-34. DOI=10.1145/863955.863960 http://doi.acm.org/10.1145/863955.863960, 2003

[3] Kevin Fall, Wei Hong, Samuel Madden, Custody Transfer for Reliable Delivery in Delay Tolerant Networks, RT Journal Article, ID 1966918, 2003

[4] Lloyd Wood, Wesley M. Eddy, Will Ivancic, Jim McKim, and Chris Jackson. Saratoga: A Delay-Tolerant Networking convergence layer with efficient link utilization, Third International Workshop on Satellite and Space Communications (IWSSC ’07), 2007

Review on the ACM Turing Award Lecture given by Cerf and Kahn

Vinton Cerf (left) and Bob Kahn (right)

In the ACM Turing Award lecture given by the awardees, Dr. Vinton Cerf and Dr. Bob Kahn, they discussed several of their opinions on the evolution of the Internet, as well as the challenges facing the Internet architecture and its future trends.

As seen in previous literature [1, 2, 3, 4], the Internet follows an end-to-end set of principles that enables communication between two hosts. To support a wide variety of services, functions should be built at the application layer rather than into the network itself [3].

According to Cerf and Kahn, the motivation for the initial Internet architecture (to establish communication between two processes in different networks) differs from what users actually need today. For computer scientists the Internet is a medium for computation, but for the wider range of users it is a means of sharing files and resources. Although the initial motivation no longer matches today’s use, the Internet remains stable; the basic architecture has evolved by introducing different layers over the network. They said in the lecture that, had they foreseen this set of requirements, the basic principles or architecture of the Internet might be different from what we have today.

The lecture also mentions several challenges the Internet faces today. One such issue is network security, examples of which are spam and viruses; improvements on the security side proceed in parallel with improvements in the network. Another issue is the growing number of mobile devices connected to the Internet. This is an issue because these devices are neither fixed in position nor always connected. It was not foreseen in the original internetwork architecture because, at that time, hosts were connected most of the time and the topology of the network was nearly fixed (aside from hosts being added), and it became one of the future trends they mentioned. The problem of mobile hosts extends to the problem of the interplanetary Internet. Although TCP/IP works fine in the terrestrial Internet (it allows fast and reliable communication between processes), it is not effective in space for the following reasons. First, the physical distance between two communicating processes is enormous. Second, planets and other objects in space are constantly moving. Third, TCP is not well suited for such file transfers. The details of the interplanetary Internet (IPN) are discussed in a separate post.

References

[1]  Marjory S. Blumenthal and David D. Clark. 2001. Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world. ACM Trans. Internet Technol. 1, 1 (August 2001), 70-109. DOI=10.1145/383034.383037 http://doi.acm.org/10.1145/383034.383037

[2] Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller and Lansing Sloan, Architectural Principles of the Internet, Internet Engineering Task Force (IETF), Request For Comments 1958, 1996

[3] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[4] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423

Carr party of five

 As I walked home one freezing day, I stumbled on a wallet someone had lost in the street. I picked it up and looked inside to find some identification so I could call the owner. But the wallet contained only three dollars and a crumpled letter that looked as if it had been in there for years. The envelope was worn and the only thing that was legible on it was the return address. …I started to open the letter, hoping to find some clue.

 Then I saw the dateline–1924. The letter had been written almost sixty years ago. It was written in a beautiful feminine handwriting on powder blue stationery with a little flower in the left-hand corner. It was a “Dear John” letter that told the recipient, whose name appeared to be Michael, that the writer could not see him any more because her mother forbade it. Even so, she…


Review on “Rethinking the design of the Internet: The end-to-end arguments vs. the brave new world”

The increasing number of Internet users, and their changing demands, call into question the capability of the end-to-end design of the Internet. These rapid changes and the growing range of applications are compromising the Internet’s original design principles. To understand the issue, let us first recall what the end-to-end argument is about. Previous memoranda and publications [2, 3, 4] state that end-to-end functions should be implemented at the knowledgeable endpoints of the network rather than in the network itself, so as to support a variety of applications and services. The implementation should not be built into the lower levels of the Internet; rather, it should sit at the application layer.

As the number of hosts connecting to the network increases, the Internet itself becomes harder to control. Operations on the Internet rest on the assumption that security measures exist only at the end hosts, which are in fact untrustworthy. With the growing number of Internet users, the network itself cannot distinguish well-behaved users from those who intend to annoy, such as spammers. Spam, in fact, is the less alarming case when compared with transactions involving secure and private data. To address this, the communicating end parties should enforce security measures at the application layer. The presence of third parties in an end-to-end communication also raises security issues of its own, so it must be checked whether the design principle still applies in that context.

Additionally, more demanding applications such as media streaming do not seem to benefit from the end-to-end design. Streaming services involve many clients connecting to a single server. To speed up communication between server and clients, the application layer offers the option of sacrificing some fidelity, since perfect transmission of data from one host to another is impossible, and the farther the data travels, the higher the chance of corruption. Another solution is to create intermediary servers that provide the service to nearby recipients.

The paper also lists different user requirements, such as trust, anonymity, involvement of other communicating parties, and multi-way communication. As a solution, it introduces several network functions needed to meet the current requirements of users: firewalls, traffic filters, and Network Address Translation (NAT). This shows that the initial argument, that no functions should be implemented in the network, has been weakened. Perhaps the earlier Internet architects did not foresee the kinds of issues presented in this paper.

________________________________________________________________

References

[1]  Marjory S. Blumenthal and David D. Clark. 2001. Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world. ACM Trans. Internet Technol. 1, 1 (August 2001), 70-109. DOI=10.1145/383034.383037 http://doi.acm.org/10.1145/383034.383037

[2] Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller and Lansing Sloan, Architectural Principles of the Internet, Internet Engineering Task Force (IETF), Request For Comments 1958, 1996

[3] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[4] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423


Review on “Architectural Principles of the Internet”

This is a review of the memorandum (known as a Request for Comments) entitled “Architectural Principles of the Internet”, written by the Internet Engineering Task Force (IETF). Basic facts, policies, and issues about the Internet are pointed out in the memorandum.

The Internet as we know it was not created according to a grand plan. Instead, it evolved with the trends of technology. I agree with the statement in the paper that change really is the only thing that is permanent. To relate this to the evolution of man: although many environmental changes have happened and several new strains of viruses have emerged, humans still survive. Likewise, technology changes through time with the needs of society, yet the Internet survives and continues to serve mankind.

Returning to the more technical side, the memorandum points out several things regarding the Internet architecture. Although it is said that the Internet has no architecture, it has a set of traditions, and these traditions aim at a specific goal: to connect, using the Internet Protocol, with the intelligence at the ends rather than hidden in the network itself. It also points out that although there is a single internetwork-layer protocol (the Internet Protocol), many networks implement more than one protocol. The need for multiple layers of protocols arises from the inevitable fact that new requirements will appear in the network, each potentially needing its own protocol. Additionally, there is a need to transition from one version of IP to another, mainly for the sake of continued data transmission.

Since no centralized body owns or manages the Internet, it is necessary to check whether its main objective (communication) can still be supported. Several basic principles should be upheld despite changing technology and needs. One goal of the Internet is to support several types of network architecture, so the Internet must not depend on the specifications of individual hardware. It is also reiterated that end-to-end functions should be handled by end-to-end protocols, because these functions are subject to transmission failures and security concerns. Saltzer also argues that such functions can be completely and correctly implemented only by the communicating end processes, not by the communication system alone.

The paper lists the design rules of the Internet. Since no body administers the Internet, it is a good thing that several policies are published in the form of memoranda; this answers the issue that Clark also raised in his paper [2]. The memorandum presents several design guidelines that a user or network administrator should know in order to work together as the Internet. It also emphasizes practicality and efficiency in solving network-related problems: for instance, if several solutions to a problem exist, a user or network administrator should use one of them rather than wait for a perfect one. It also discusses security, which I think is a very important aspect nowadays because of (1) the growing number of hosts on the Internet and (2) the lack of security mechanisms available in the network itself (papers [2] and [3] do not give much emphasis to data security).

____________________________________________________________________________________________

References

[1] Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller and Lansing Sloan, Architectural Principles of the Internet, Internet Engineering Task Force (IETF), Request For Comments 1958, 1996

[2] D. Clark. 1988. The design philosophy of the DARPA internet protocols. SIGCOMM Comput. Commun. Rev. 18, 4 (August 1988), 106-114. DOI=10.1145/52325.52336 http://doi.acm.org/10.1145/52325.52336

[3] Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423