The growing number of Internet users, and their changing demands, calls into question the end-to-end design of the Internet. These rapid changes and new applications of the Internet strain its original design principles. To understand the issue, let us first recall what the end-to-end argument is about. Previous memoranda and publications [2, 3, 4] hold that end-to-end functions should be implemented in the knowledgeable end hosts of the network rather than in the network itself, so that the network can support a variety of applications and services. Such functions should not be built into the lower layers of the Internet, but into the application layer.
As the number of hosts connecting to the network increases, the Internet is becoming harder to control. Operations on the Internet rest on the basic assumption that security measures exist only at the end hosts, which are in fact untrustworthy. With a growing user base, the network itself cannot distinguish well-behaved users from those who intend harm, such as spammers. Spam, however, is the less alarming case when compared with transactions involving secure and private data. To address this, the communicating end parties should enforce security measures at the application layer. Involving third parties in an end-to-end communication also raises security issues, so we need to check whether the design principle still applies in that context.
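The idea of enforcing security at the application layer rather than in the network can be illustrated with a minimal sketch: the two endpoints share a secret (established out of band, a hypothetical assumption here) and attach a message authentication code, so no router in between needs to be trusted for integrity.

```python
import hmac
import hashlib

# Hypothetical shared secret the two endpoints agreed on out of band.
SHARED_KEY = b"endpoint-shared-secret"

def seal(message: bytes) -> tuple:
    """Sender side: attach a MAC so integrity is checked end to end."""
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes) -> bool:
    """Receiver side: recompute and compare; the network in between
    can be completely untrusted without breaking this check."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = seal(b"transfer $100")
assert verify(msg, tag)                    # intact message passes
assert not verify(b"transfer $900", tag)   # tampering in transit is detected
```

This is only a sketch of the end-to-end principle as it applies to integrity; real systems would add key exchange, encryption, and replay protection, all still at the endpoints.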
Additionally, more demanding applications such as media streaming do not seem to benefit from the end-to-end design. Streaming services involve many clients connecting to a single server. To speed up communication between server and clients, the application layer can choose to sacrifice some fidelity of the information, given that perfect transmission of data from one host to another is impossible, and that the farther the data travels, the higher the chance of corruption. Another solution is to introduce an intermediary server that provides the service to nearby recipients.
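The fidelity trade-off can be sketched with a toy lossy encoder: re-quantizing 8-bit media samples to fewer bits reduces the information that must survive transit, at a bounded cost in accuracy. This is an illustrative assumption, not any particular codec.

```python
def quantize(samples, bits):
    """Coarsely re-quantize 8-bit samples (0-255) down to `bits` bits.
    The application deliberately trades fidelity for speed/robustness."""
    shift = 8 - bits
    return [(s >> shift) << shift for s in samples]

original = [17, 130, 255, 64]
lossy = quantize(original, 4)

# The error introduced is bounded by one quantization step (2**4 here),
# which a streaming application may judge acceptable.
assert all(abs(a - b) < 2 ** 4 for a, b in zip(original, lossy))
```

The design point is that only the application knows how much loss is tolerable, which is exactly why this decision sits at the application layer rather than inside the network.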
The paper also lists different user requirements, such as trust, anonymity, involvement of other communicating parties, and multi-way communication. As a solution, it introduces several network functions motivated by these requirements: firewalls, traffic filters, and Network Address Translation (NAT). This shows that the initial argument, that no functions should be implemented in the network, no longer holds. Perhaps the original Internet architecture did not foresee the kinds of issues presented in this paper.
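NAT is a good concrete example of why these in-network functions break the pure end-to-end model: the translator keeps per-connection state inside the network. A minimal sketch (toy class and port numbers are illustrative assumptions, not any real NAT implementation):

```python
class ToyNAT:
    """Minimal address translator: maps private (host, port) pairs onto
    ports of one public address. The translation table is state that
    lives *in* the network, contrary to the end-to-end argument."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000          # arbitrary starting public port
        self.outbound = {}              # (private_ip, port) -> public port
        self.inbound = {}               # public port -> (private_ip, port)

    def translate_out(self, private_ip, port):
        """Rewrite an outgoing packet's source address."""
        key = (private_ip, port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.outbound[key]

    def translate_in(self, public_port):
        """Rewrite an incoming packet's destination; unsolicited
        traffic has no mapping and raises KeyError (i.e. is dropped)."""
        return self.inbound[public_port]

nat = ToyNAT("203.0.113.7")
assert nat.translate_out("10.0.0.5", 12345) == ("203.0.113.7", 40000)
assert nat.translate_in(40000) == ("10.0.0.5", 12345)
```

Note how an outside host can no longer address "10.0.0.5" directly; reachability now depends on state held by a middlebox, which is precisely the departure from the original design principle that the paper discusses.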
 Marjory S. Blumenthal and David D. Clark. 2001. Rethinking the design of the Internet: the end-to-end arguments vs. the brave new world. ACM Trans. Internet Technol. 1, 1 (August 2001), 70-109. DOI=10.1145/383034.383037 http://doi.acm.org/10.1145/383034.383037
Fred Baker, Noel Chiappa, Donald Eastlake, Frank Kastenholz, Neal McBurnett, Masataka Ohta, Jeff Schiller, and Lansing Sloan. 1996. Architectural Principles of the Internet. Internet Engineering Task Force (IETF), Request for Comments 1958.
Vinton G. Cerf and Robert E. Kahn. 2005. A protocol for packet network intercommunication. SIGCOMM Comput. Commun. Rev. 35, 2 (April 2005), 71-82. DOI=10.1145/1064413.1064423 http://doi.acm.org/10.1145/1064413.1064423