Introduction

The internet has become one of the most important facets of people’s lives, and it has enabled some truly amazing and inspiring developments (as well as some terrible things).  It is one of the most significant inventions humankind has ever created, and its impact on our lives is rivaled only by things such as electricity, the internal combustion engine, and the lightbulb.  As the internet permeates more of everyday life with each passing day – from computers, to smartphones, to wearable devices, to embedded devices and the internet of things – it becomes increasingly important that people understand how it works.

In a series of articles (How the Internet Works), I will go into the details of the major elements and processes that you need to understand in order to really grasp how the internet works and how you are able to talk, write, and share information with others.  The truly amazing thing about the internet is the genius that went into its creation.  The internet is largely unchanged from when it was first devised, and this speaks to the brilliance, insight, and forethought of its creators.

There are three simple concepts that form the foundation of the internet and allow all of the incredible things that we can do on it.  These are the notion of packet-based routing, the idea that a best effort is good enough, and the concept of a hierarchy of protocols.  These ideas have remained the foundation of the internet and have stood the test of time.  It is truly remarkable engineering.  Let’s learn about the three core concepts of how the internet works.

How the Internet Works: Three Core Concepts

  • Packet Routing: the concept of breaking data down into smaller pieces that can be sent independently of one another and recombined at their destination
  • Best Effort: the concept that sending your packets of information off towards their destination and hoping they arrive is good enough
  • Protocol Hierarchy: the concept that a protocol should be as simple and single-minded as possible and work in conjunction with other protocols to deliver the functionality and capabilities required to share information between devices

Packet Routing

When the internet was being created, computers were connected to each other in one of two ways: either using a leased line or a switched circuit.  Both of these types of interconnection worked in essentially the same way: they established a dedicated circuit between two endpoints.

A leased line was a connection that was leased (typically from the phone company at the time); think of it as a point-to-point circuit.  Leased lines typically ran at around 64 kbps, so they were quite slow, but they were very reliable.

As computers became more prolific, people wanted to connect them to share information, but this grew expensive: the point-to-point circuits were costly, and in many cases they ran cross-country (connecting computers at MIT with those at Stanford, for example).  Some early examples of point-to-point services were AOL, CompuServe, and The Source.  These services worked by maintaining massive banks of modems; the connecting computers all had modems of their own, and people would use their phone line to dial into the service, creating a point-to-point connection between their computer and the service.

These circuits were dedicated to your computer and that particular connection.  This caused issues when someone else in the house wished to use the phone.  In fact, call waiting had to be disabled on these lines, since an incoming call would disconnect you and drop your connection.  While this approach worked, it was terribly inefficient and fragile.

That brings us to the packet-switched approach to networking.  Packet switching represented a fundamental rethinking of how computers connected to each other: instead of having a discrete point-to-point connection between parties, the connection is broken up into packets of data.  At the time, there was no notion of a packet.  You established a connection to another computer, you put data in, and whatever you sent in one end came out the other.  There was no packetization, nor any discrete packaging of the data.  Packetization was a conceptual breakthrough.

[Figure: Circuit switching – animated diagram showing a connection’s route through a circuit switch]

[Figure: Packet switching – animated diagram of packets traveling between computers over a packet-switched network]

When packetization came into play, you began to think in terms of sharing resources.  Think of it this way: a cable modem is hooked to a cable that is shared with your neighbors – this is why, if a neighbor is downloading a bunch of torrents, it can impact everyone else’s connection speed in the neighborhood (although this is less of a problem than it used to be).  Data no longer moves over a dedicated point-to-point circuit; it moves across shared links.  The routers that sit at various points throughout this inter-networked architecture, and the links between them, are shared by all of the data that moves across them.  It is important to remember that these routers and interconnections are owned and operated by different entities, yet they are still able to talk to one another.
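
To make the idea concrete, here is a minimal sketch of packetization in Python.  The chunk size and function names are just illustrative, not any real protocol’s format: chop a message into numbered pieces, let them arrive in any order, and reassemble them at the destination.

    import random

    def packetize(data: bytes, size: int = 8):
        # Split the data into (sequence_number, chunk) packets.
        return [(seq, data[i:i + size])
                for seq, i in enumerate(range(0, len(data), size))]

    def reassemble(packets):
        # Sort by sequence number and join the chunks back together.
        return b"".join(chunk for _, chunk in sorted(packets))

    message = b"Packets may take different paths through the network."
    packets = packetize(message)
    random.shuffle(packets)      # simulate packets arriving out of order
    assert reassemble(packets) == message

Each piece can cross the shared links independently; as long as every piece eventually arrives, the destination can put the original message back together.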

Best Effort

The second revelation that fundamentally changed how computers share information with one another was the concept that a “best effort” was good enough.  This was a huge paradigm shift, because during the point-to-point era it was vital that the connection was reliable (remember, at that point the data was sent as a whole, over a single circuit).  That reliability allowed you to assume – with a great degree of confidence – that what you sent from one computer would arrive at the other.  The people behind the packet-based approach decided to chop the data up into small packets, without being sure what route each packet would take to reach its intended destination (I’ll cover routing in a future article).  The idea was that instead of maintaining a continuous connection, you packetized the data and essentially hoped for the best.  Yes, that is really what happened.

To better illustrate why “hoping for the best” was the right framing, think of it this way: you have a router that is connected to a number of other routers, and those routers are connected to still more routers.  Each link between them has a certain bandwidth through which it can transfer data.  These don’t all have to be the same; the routers and the connections can differ – in fact, this is part of what makes the internet so resilient.  You could have some high-bandwidth connections and some low-bandwidth connections.  This early network was very heterogeneous.

So what we have are individual packets of data arriving over various links at routers, which inspect them and determine where they should be forwarded.  Can you spot the problem in this situation?  Consider what happens when more packets are coming in than the router is able to send out.  There are cases where a burst of packets arrives over one (or more) of the high-speed links, and all of them want to go out through a low-speed link.  The router can’t send them out at the same rate it received them.

There was a lot of contention about this among the early engineers; many said that it wouldn’t work, that the entire idea was ludicrous.  Eventually, the engineers thought about the problem for a while and decided to put buffers in the routers, so that packets could wait in an outgoing buffer until they were able to be sent on towards their destination.  As you have no doubt figured out, this only partly addressed the issue.  If packets came in bursts over a high-speed connection, the router would inspect each one, assign it to an outgoing interface, and place it in a buffer to be sent off.  That helped with the burstiness, but the buffers had finite capacity and could overflow as well.  This puts us right back where we started, so how do we solve it?  The designers said that if an outgoing buffer overfills – that is, if it is so full that it cannot accept another packet to be forwarded – the router will simply discard the packet.  As you can imagine, there was a contingent that thought this was a crazy idea.
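
Here is a toy model in Python of a router’s outgoing buffer as described above: a finite queue that silently discards (“tail-drops”) packets once it is full.  The numbers are arbitrary, and this is a simplified stand-in for the behavior, not an implementation of any particular router.

    from collections import deque

    class OutputBuffer:
        def __init__(self, capacity: int):
            self.queue = deque()
            self.capacity = capacity
            self.dropped = 0

        def enqueue(self, packet) -> bool:
            # Accept the packet if there is room; otherwise drop it silently.
            if len(self.queue) >= self.capacity:
                self.dropped += 1        # no notification is sent back
                return False
            self.queue.append(packet)
            return True

        def dequeue(self):
            # Send one packet out over the (slower) outgoing link.
            return self.queue.popleft() if self.queue else None

    # A burst of 30 packets arrives faster than the slow link can drain them.
    buffer = OutputBuffer(capacity=10)
    for i in range(30):
        buffer.enqueue(f"packet-{i}")
        if i % 3 == 2:                   # the slow link sends 1 for every 3 that arrive
            buffer.dequeue()
    print(f"dropped {buffer.dropped} of 30 packets")

In this run the slow link drains one packet for every three that arrive, so once the buffer fills, the excess is simply thrown away.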

What do you mean you’re just going to throw it away!?

The designers said: yes, we know.  The router did its best, and we’re sorry you tried to squeeze more through a link than it could take.  Remember, under the old circuit-switched model a link that couldn’t handle that much traffic would never have been asked to; you simply couldn’t send faster than the circuit allowed.  So in this new system, the router simply drops packets.  It doesn’t notify anyone; it just drops them.  This was a key point, and another point of contention, as some argued:

Wait a minute, if we’re going to drop a packet, then we need to send a message back saying that we dropped a packet and it needs to be sent again.

You can see the problem with this already, can’t you?  That would only create more problems: if the network is already congested (hence the dropped packets in the first place), the last thing we want to do is send even more packets just to notify the original sender!  The system has to work blind.

It was imperfection by design.

It is truly remarkable that they took what had been a concept based on a continuous, reliable connection running at a known speed and replaced it by chopping the same data into pieces, launching those pieces across a heterogeneous network, and hoping for the best.  A piece of data (a packet) would travel across all of these links, through interconnected routers, and maybe it would arrive at its destination.

This presents its own problems, however.  What if we have to send a file?  We can’t have pieces of the file simply missing.  If we aren’t going to get a notification that pieces were dropped, then the recipient needs some means of noticing on its own that pieces are missing (I will cover this in detail in How the Internet Works: TCP In Detail).
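
As a rough illustration of that idea (not TCP’s actual mechanism, which is richer and covered later), numbering every piece lets the receiver spot gaps by itself:

    def find_missing(received_seqs, total_packets):
        # Return the sequence numbers the receiver never saw.
        return sorted(set(range(total_packets)) - set(received_seqs))

    # The sender transmitted packets 0..9, but the network dropped two of them.
    arrived = [0, 1, 2, 4, 5, 7, 8, 9]
    print(find_missing(arrived, total_packets=10))   # -> [3, 6]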

Protocol Hierarchy

The third component that allowed for a reliable protocol to be built on top of an unreliable protocol was the notion of hierarchy.

Let’s review quickly – we currently have these fundamental concepts:

  • Packet Routing: the concept of breaking up our communications into individual pieces and hoping for the best
  • Best Effort: the concept that, in a heterogeneous network of interlinked routers whose links come and go (a router might be put into or taken out of service), all we can ask is that the data somehow gets through as best it can – that the best effort is all we can provide
  • Protocol Hierarchy: a carefully designed set of encapsulated protocols (think of Russian nested matryoshka dolls)

[Figure: block diagram illustrating the hierarchy of protocols]

At the base layer, the only thing the lowest-level protocol does is address the packet to its destination.  We are just saying that this chunk of data wants to go there, and that is all we are going to say.  There is no way to know whether it got there or not.  We are just going to give it a destination address, say where it came from so that the recipient can reply, and send it along to its target IP address (Internet Protocol address).  The packet will either get there or it won’t.

That is essentially the Internet Protocol (IP).  An IP packet, sometimes called a datagram, begins with a header: a number of bytes at the start of the packet that handle the mechanics of getting it somewhere (the minimum header is 20 bytes).  With only 20 bytes to work with, bytes were expensive and not to be wasted.  Consequently, the version number is just four bits, and versions zero, one, two, and three were used up before the internet really happened.  Version four (v4) is what finally did happen and is what we have been using (mostly) to this day.

IPv6 is here and is beginning to be used more widely.  Almost all IP datagrams today have a binary 4 in their first four bits; increasingly, there will be a 6 there instead, which tells any equipment that receives the packet what the rest of the header looks like.  That is the other key to the design: the version bits have to come first, because the first four bits to arrive tell the equipment the format of the rest of the packet – whether it is an IPv4 packet, an IPv6 packet, or something else.  Within the 20-byte header, the designers set aside four bytes (32 bits) for the destination IP address and another four bytes for the source IP address, the address the packet came from.

Now, at this level there is no notion of a port.  Ports are an abstraction added by a higher-level protocol carried inside the IP packet.  The IP protocol (the lowest one) is incredibly robust.  An IPv4 packet, whose first four bits contain a binary 4, carries only the IPv4 addresses and some housekeeping information such as a checksum, the total length of the whole packet, and the length of the header.  It is a lean protocol.  That packet can contain anything else, even a different IP version (an IPv6 packet can be carried inside an IPv4 packet, for example).
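
To show how little the header described above actually holds, here is a sketch in Python that packs a minimal 20-byte IPv4 header with the struct module.  It is a simplification for illustration: the checksum is left at zero, and values such as the TTL and protocol number are just example choices.

    import socket
    import struct

    def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
        version_ihl = (4 << 4) | 5        # version 4 in the first four bits, header length 5 * 4 = 20 bytes
        total_length = 20 + payload_len   # header plus payload
        return struct.pack(
            "!BBHHHBBH4s4s",
            version_ihl,                  # version + header length
            0,                            # type of service
            total_length,                 # total length of the whole packet
            0,                            # identification
            0,                            # flags + fragment offset
            64,                           # time to live
            17,                           # protocol carried inside (17 = UDP)
            0,                            # header checksum (omitted in this sketch)
            socket.inet_aton(src),        # 4-byte source address
            socket.inet_aton(dst),        # 4-byte destination address
        )

    header = build_ipv4_header("192.0.2.1", "198.51.100.7", payload_len=100)
    print(len(header), header[0] >> 4)    # -> 20 4

Reading the first byte back and shifting off its top four bits recovers the version number, exactly the trick receiving equipment relies on.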

This should give you a sense of the care and thought that went into the design of the internet’s fundamental technologies, technologies that have survived for decades virtually unchanged.  The first concept: drop the idea of a continuous connection in favor of chopping the same data into pieces, adding some addressing information so each piece can get where it is supposed to go, and then turning it loose and hoping for the best, trusting a huge mesh of well-intentioned, interconnected routers, each doing its best to forward the data out of the interface that takes the packet closest to its destination.  To do this, each router needs a routing table (which I’ll cover in a future article) to help it decide where a packet should be sent.  The second concept: if a router can’t forward a packet, it just drops it and doesn’t say anything.  And lastly, the notion of a very careful protocol hierarchy – a nested hierarchy in which one protocol contains the next one up the chain, with each wrapper providing only the information it needs to and nothing more.
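
As a preview of the routing-table idea (the future article will cover it properly), here is a toy forwarding decision in Python using the standard ipaddress module.  The routes and interface names are made up for the example; the rule is simply that the most specific matching route wins.

    import ipaddress

    ROUTES = [
        (ipaddress.ip_network("0.0.0.0/0"), "eth0"),     # default route
        (ipaddress.ip_network("10.0.0.0/8"), "eth1"),
        (ipaddress.ip_network("10.1.2.0/24"), "eth2"),
    ]

    def next_interface(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        matches = [(net, iface) for net, iface in ROUTES if addr in net]
        # The most specific match (largest prefix length) wins.
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(next_interface("10.1.2.9"))     # -> eth2
    print(next_interface("10.9.9.9"))     # -> eth1
    print(next_interface("8.8.8.8"))      # -> eth0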

Upcoming Topics

In the next article, I’ll be talking about these higher protocols.  I’ll cover UDP and TCP, which introduce the port abstraction (they carry port numbers in their headers but no IP addresses, because those are supplied by the IP protocol).  I’ll also talk about ICMP in a future article.  ICMP can best be thought of as the plumbing or maintenance protocol; it is not port oriented, so it has no source and destination ports, yet it is carried by the IP protocol, which knows about the IP addresses.  By carefully designing these nested layers of protocols, we end up with a lean, efficient system that is very flexible and extensible.
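
To make the nesting tangible ahead of that article, here is a small Python sketch of how a UDP-style header (which knows only about ports) rides inside an IP packet (which knows only about addresses).  The port numbers and sizes are arbitrary examples, not a full implementation of either protocol.

    import struct

    payload = b"hello"

    # Transport layer: source port, destination port, length, checksum (8 bytes).
    udp_header = struct.pack("!HHHH", 50000, 53, 8 + len(payload), 0)
    udp_segment = udp_header + payload

    # Network layer: for brevity, a placeholder for the 20-byte IPv4 header from
    # the earlier sketch; it would carry the source and destination addresses.
    ip_header = b"\x45" + b"\x00" * 19
    ip_packet = ip_header + udp_segment

    print(len(udp_segment), len(ip_packet))   # -> 13 33

Each layer wraps the one inside it and adds only its own information – the matryoshka-doll structure described earlier.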

Thank you for reading, and I can tell you that it gets even more interesting as we dive deeper into this technology.

 
