Notes on Hardware & Networking



WHAT IS A COMPUTER NETWORK?

Computer networks may be classified according to scale: Personal Area Network (PAN), Local Area Network (LAN), Campus Area Network (CAN), Metropolitan Area Network (MAN), or Wide Area Network (WAN). As Ethernet has increasingly become the standard network interface, these distinctions matter more to the network administrator than to the end user. Network administrators may have to tune the network, based on the delay that comes with distance, to achieve the desired Quality of Service (QoS). The primary difference between these networks is their scale.

Controller Area Networks are a special niche, used for example to control a vehicle's engine, a boat's electronics, or a set of factory robots.

By connection method

Computer networks can also be classified according to the hardware technology used to connect the individual devices in the network, such as optical fiber, Ethernet, wireless LAN, HomePNA, or power line communication.

Ethernets use physical wiring to connect devices, often employing hubs, switches, bridges, and routers.

Wireless LAN technology is built to connect devices without wiring; these devices use radio frequencies to communicate.

By functional relationship (Network Architectures)

Computer networks may be classified according to the functional relationships which exist between the elements of the network, e.g., Active Networking, Client-server and Peer-to-peer (workgroup) architectures.

By network topology

Computer networks may be classified according to the network topology upon which the network is based, such as Bus network, Star network, Ring network, Mesh network, Star-bus network, Tree or Hierarchical topology network, etc.

Network topology signifies the way in which intelligent devices in the network see their logical relations to one another. The use of the term "logical" here is significant: network topology is independent of the "physical" layout of the network. Even if networked computers are physically placed in a linear arrangement, if they are connected via a hub the network has a star topology rather than a bus topology. In this regard the visual and operational characteristics of a network are distinct; the logical network topology is not necessarily the same as the physical layout.


By protocol

Computer networks may be classified according to the communications protocol being used on the network. For a development of the foundations of protocol design, see Srikant 2004 [1] and Meyn 2007 [2].

Types of networks:

Below is a list of the most common types of computer networks in order of scale.

Personal Area Network (PAN)

A personal area network (PAN) is a computer network used for communication among computer devices close to one person. Examples of devices used in a PAN are printers, fax machines, telephones, PDAs, and scanners. The reach of a PAN is typically within about 20 to 30 feet (approximately 6 to 9 meters). PANs can be used for communication among the individual devices (intrapersonal communication) or for connecting to a higher-level network and the Internet (an uplink).

Personal area networks may be wired with computer buses such as USB[3] and FireWire. A wireless personal area network (WPAN) can also be made possible with network technologies such as IrDA and Bluetooth.

Local Area Network (LAN)

A LAN is a network covering a small geographic area, like a home, office, or building. Current LANs are most likely to be based on Ethernet technology. For example, a library will have a wired or wireless LAN for users to interconnect local devices (e.g., printers and servers) and to connect to the Internet. All of the PCs in the library are connected by Category 5 (Cat5) cable, running the IEEE 802.3 protocol through a system of interconnection devices, and eventually connect to the Internet. The cables to the servers are Cat5e enhanced cable, which supports IEEE 802.3 at 1 Gbit/s.

In the typical library network shown in the original figure (a branching tree topology with controlled access to resources), the staff computers can reach the color printer, the checkout records, the academic network, and the Internet. All user computers can reach the Internet and the card catalog. Each workgroup can reach its local printer; note that the printers are not accessible from outside their workgroup.

All interconnected devices must understand the network layer (layer 3), because they handle multiple subnets. The devices inside the library, which have only 10/100 Mbit/s Ethernet connections to the user devices and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they have only Ethernet interfaces yet must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and to the academic network's customer access routers.

The staff have a VoIP network that also connects to both the Internet and the academic network, and they could have paths to the central library system telephone switch via the academic network. Since voice must have the highest priority, it is carried on its own network. The supporting protocols used, such as RSVP, set up virtual circuits rather than connectionless forwarding paths.

Depending on the circumstances, the computers in the network might be connected using cables and hubs, while other networks might be connected strictly wirelessly; it depends on the number of PCs being connected, the physical layout of the workspace, and the needs of the network. Not shown in the diagram, for example, is a wireless workstation used when shelving books.

The defining characteristics of LANs, in contrast to WANs (wide area networks), include their much higher data transfer rates, smaller geographic range, and lack of need for leased telecommunication lines. Current Ethernet and other IEEE 802.3 LAN technologies operate at speeds up to 10 Gbit/s; this is the data transfer rate. IEEE has projects investigating the standardization of 100 Gbit/s, and possibly 40 Gbit/s. Inverse multiplexing is commonly used to build a faster aggregate from slower physical streams, such as bringing a 4 Gbit/s aggregate stream into a computer or network element that has four 1 Gbit/s interfaces.
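As a toy illustration of inverse multiplexing, the sketch below (Python; the interface names and packet sizes are invented for the example) distributes packets from one fast logical stream round-robin across four slower links:

```python
from itertools import cycle

# Hypothetical example: spread one fast logical stream across four slower
# physical interfaces, round-robin. Real inverse multiplexing (e.g., link
# aggregation) must also handle packet ordering and link failure.
interfaces = ["eth0", "eth1", "eth2", "eth3"]   # four 1 Gbit/s links
next_interface = cycle(interfaces)

def send(packet: bytes) -> None:
    iface = next(next_interface)                # pick the next link in turn
    print(f"sending {len(packet)} bytes via {iface}")

for _ in range(8):                              # eight packets of the stream
    send(bytes(1500))                           # typical Ethernet-sized frame
```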

Campus Area Network (CAN)

A CAN is a network that connects two or more LANs but is limited to a specific and contiguous geographical area such as a college campus, an industrial complex, or a military base. A CAN may be considered a type of MAN (metropolitan area network), but is generally limited to an area smaller than a typical MAN.


This term is most often used to discuss the implementation of networks for a contiguous area. In the past, when layer 2 switching (i.e., bridging) was cheaper than routing, Ethernet-based campuses were good candidates for layer 2 networks, until they grew to very large size. Today, a campus may use a mixture of routing and bridging. The network elements used, called "campus switches", tend to be optimized to have many Ethernet-family (i.e., IEEE 802.3) interfaces rather than an arbitrary mixture of Ethernet and WAN interfaces.

Metropolitan Area Network (MAN)

A Metropolitan Area Network is a network that connects two or more Local Area Networks or Campus Area Networks together but does not extend beyond the boundaries of the immediate town, city, or metropolitan area. Multiple routers, switches, and hubs are connected to create a MAN.

Wide Area Network (WAN)

A WAN is a data communications network that covers a relatively broad geographic area (e.g., from one city or country to another) and that often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

Global Area Network (GAN)

Global area networks (GAN) specifications are in development by several groups, and there is no common definition. In general, however, a GAN is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is "handing off" the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial Wireless local area networks (WLAN) [4]. INMARSAT has defined a satellite-based Broadband Global Area Network (BGAN).

IEEE mobility efforts focus on the data link layer and make assumptions about the media. Mobile IP is a network layer technique, developed by the IETF, which is independent of the media type and can run over different media while still keeping the connection.

Internetwork

Two or more networks or network segments connected using devices that operate at layer 3 (the 'network' layer) of the OSI Basic Reference Model, such as a router. Any interconnection among or between public, private, commercial, industrial, or governmental networks may also be defined as an internetwork.

In modern practice, the interconnected networks use the Internet Protocol. There are at least three variants of internetwork, depending on who administers and who participates in them:

Intranet
Extranet
The Internet

Intranets and extranets may or may not have connections to the Internet. If connected to the Internet, the intranet or extranet is normally protected from being accessed from the Internet without proper authorization. The Internet itself is not considered to be a part of the intranet or extranet, although the Internet may serve as a portal for access to portions of an extranet.

Intranet

An intranet is a set of interconnected networks that uses the Internet Protocol and IP-based tools such as web browsers, and that is under the control of a single administrative entity. That administrative entity closes the intranet to the rest of the world and admits only specific users. Most commonly, an intranet is the internal network of a company or other enterprise.

Extranet

An extranet is a network or internetwork that is limited in scope to a single organization or entity but which also has limited connections to the networks of one or more other, usually (but not necessarily) trusted, organizations or entities. For example, a company's customers may be given access to some part of its intranet, creating an extranet, while at the same time the customers may not be considered "trusted" from a security standpoint. Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN: it must have at least one connection with an external network.

Internet

The Internet is a specific internetwork, consisting of a worldwide interconnection of governmental, academic, public, and private networks based upon the Advanced Research Projects Agency Network (ARPANET) developed by ARPA of the U.S. Department of Defense. It is also home to the World Wide Web (WWW) and is referred to as the "Internet" with a capital "I" to distinguish it from other generic internetworks.

Participants in the Internet, or their service providers, use IP addresses obtained from address registries that control assignments. Service providers and large enterprises also exchange information on the reachability of their address ranges through the Border Gateway Protocol (BGP).

Basic Hardware Components

All networks are made up of basic hardware building blocks to interconnect network nodes, such as Network Interface Cards (NICs), Bridges, Hubs, Switches, and Routers. In addition, some method of connecting these building blocks is required, usually in the form of galvanic cable (most commonly Category 5 cable). Less common are microwave links (as in IEEE 802.11) or optical cable ("optical fiber").


Network Interface Cards

A network card, network adapter or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It provides physical access to a networking medium and often provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.

Repeaters

A repeater is an electronic device that receives a signal and retransmits it at a higher level or higher power, or onto the other side of an obstruction, so that the signal can cover longer distances without degradation.

Because repeaters work with the actual physical signal, and do not attempt to interpret the data being transmitted, they operate on the Physical layer, the first layer of the OSI model.

Hubs

A hub contains multiple ports. When a packet arrives at one port, it is copied to all the other ports of the hub; the destination address in the frame is not changed to a broadcast address. The hub works in a rudimentary way: it simply copies the data to all of the nodes connected to it. [5]

Bridges

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port with an address, it will send traffic for that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast was received.

Bridges learn the association of ports and addresses by examining the source addresses of the frames seen on various ports. When a frame arrives through a port, its source address is stored, and the bridge assumes that the MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge forwards the frame to all ports other than the one on which it arrived.
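The learning behavior just described can be sketched in a few lines of Python. This is a simplified model rather than a real bridge: the table maps a source MAC address to the port on which it was last seen, known unicast destinations are forwarded out a single port, and broadcasts or unknown destinations are flooded to every port except the ingress.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningBridge:
    def __init__(self, num_ports: int):
        self.ports = range(num_ports)
        self.mac_table = {}                  # MAC address -> port last seen on

    def receive(self, in_port: int, src: str, dst: str) -> list:
        """Return the list of ports this frame is forwarded to."""
        self.mac_table[src] = in_port        # learn: src is reachable here
        if dst != BROADCAST and dst in self.mac_table:
            out_port = self.mac_table[dst]
            return [] if out_port == in_port else [out_port]
        # Broadcast or unknown destination: flood everywhere but the ingress.
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(4)
print(bridge.receive(0, "aa:aa", "bb:bb"))   # unknown dst: flooded to 1, 2, 3
print(bridge.receive(1, "bb:bb", "aa:aa"))   # learned dst: port 0 only
```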

Bridges come in three basic types:

1. Local bridges: directly connect local area networks (LANs).

2. Remote bridges: can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, have largely been replaced by routers.

3. Wireless bridges: can be used to join LANs or to connect remote stations to LANs.


Switches

A switch is a device that performs switching: it forwards and filters OSI layer 2 datagrams (chunks of data communication) between ports (connected cables) based on the MAC addresses in the packets.[6] A switch is distinct from a hub in that it only forwards the datagrams to the ports involved in the communication rather than to all connected ports. Strictly speaking, a switch is not capable of routing traffic based on IP address (layer 3), which is necessary for communicating between network segments or within a large or complex LAN. Some switches are capable of routing based on IP addresses but are still called switches as a marketing term. A switch normally has numerous ports, with the intention that most or all of the network be connected directly to a switch, or to another switch that is in turn connected to a switch. [7]

"Switches" is a marketing term that encompasses routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier). Switches may operate at one or more OSI layers, including physical, data link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is called a multilayer switch.

Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand networking. Many experienced network designers and operators recommend starting with the logic of devices dealing with only one protocol level, not all of which are covered by OSI. Multilayer device selection is an advanced topic that may lead to selecting particular implementations, but multilayer switching is simply not a real-world design concept.

Routers

Routers are networking devices that forward data packets between networks, using headers and forwarding tables to determine the best path to forward the packets. Routers work at the network layer of the TCP/IP model, or layer 3 of the OSI model. Routers also provide interconnectivity between like and unlike media (RFC 1812). This is accomplished by examining the header of a data packet and making a decision on the next hop to which it should be sent (RFC 1812). They use preconfigured static routes, the status of their hardware interfaces, and routing protocols to select the best route between any two subnets. A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Some DSL and cable modems, for home use, have been integrated with routers to allow multiple home computers to access the Internet.
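The forwarding decision itself amounts to a longest-prefix match against the forwarding table. The sketch below uses Python's standard ipaddress module; the table entries and next hops are invented for illustration.

```python
import ipaddress

# Hypothetical forwarding table: destination prefix -> next hop.
routes = {
    ipaddress.ip_network("0.0.0.0/0"):   "203.0.113.1",     # default route
    ipaddress.ip_network("10.0.0.0/8"):  "10.255.255.254",
    ipaddress.ip_network("10.1.0.0/16"): "10.1.0.1",
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    # Of all prefixes containing the address, choose the most specific one.
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))    # "10.1.0.1": the /16 wins over /8 and /0
print(next_hop("192.0.2.7"))   # "203.0.113.1": only the default route matches
```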

Building a simple computer network

A simple computer network may be constructed from two computers by adding a network adapter (Network Interface Controller, NIC) to each computer and then connecting them together with a special cable called a crossover cable. This type of network is useful for transferring information between two computers that are not normally connected to each other by a permanent network connection, or for basic home networking applications. Alternatively, a network between two computers can be established without dedicated extra hardware by using a standard connection, such as the RS-232 serial port on both computers, connecting them to each other via a special crosslinked null modem cable.
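Once the two computers are physically connected and have IP addresses, they can exchange data with ordinary Berkeley sockets. A minimal sketch in Python, with the receiver's address and the port number chosen arbitrarily for the example:

```python
import socket

PORT = 5000  # arbitrary port chosen for this example

def receiver() -> None:
    """Run on the first computer: accept one connection, print one message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", PORT))          # listen on all interfaces
        srv.listen(1)
        conn, peer = srv.accept()
        with conn:
            print(f"{peer} says:", conn.recv(1024).decode())

def sender(peer_ip: str = "192.0.2.1") -> None:
    """Run on the second computer: connect and send one message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((peer_ip, PORT))
        s.sendall(b"hello over the crossover cable")
```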

Practical networks generally consist of more than two interconnected computers and generally require special devices in addition to the Network Interface Controller that each computer needs to be equipped with. Examples of some of these special devices are hubs, switches and routers.

Ancillary equipment used by networks

To keep a network operating, to diagnose failures or degradation, and to circumvent problems, networks may have a wide range of ancillary equipment.

Providing Electrical Power

Individual network components may have surge protectors: appliances designed to protect electrical devices from voltage spikes. Surge protectors attempt to regulate the voltage supplied to an electric device by either blocking or shorting to ground any voltage above a safe threshold.[8]

Beyond the surge protector, network elements may have uninterruptible power supplies (UPS), which can be anywhere from a line-charged battery to take the element through a brief power dropout, to an extensive network of generators and large battery banks that can protect the network for hours or days of commercial power outages.

A network as simple as two computers linked with a crossover cable has several points at which the network could fail: either network interface, and the cable. Large networks, without careful design, can have many points at which a single failure could disable the network.

When networks are critical, the general rule is that they should have no single point of failure. According to the Software Engineering Institute [9] at Carnegie Mellon University, the broad factors that can bring down networks are:

1. Attacks: these include software attacks by various miscreants (e.g., malicious hackers, computer criminals) as well as physical destruction of facilities.

2. Failures: these are in no way deliberate, and range from human error in entering commands, to bugs in network element executable code, to failures of electronic components, and other causes that do not involve deliberate human action or flaws in system design.

3. Accidents: Ranging from spilling coffee into a network element to a natural disaster or war that destroys a data center, these are largely unpredictable events. Survivability from severe accidents will require physically diverse, redundant facilities. Among the extreme protections against both accidents and attacks are airborne command posts and communications relays[10], which either are continuously in the air, or take off on warning. In like manner, systems of communications satellites may have standby spares in space, which can be activated and brought into the constellation.


Dealing with Power Failures

One obvious form of failure is the loss of electrical power. Depending on the criticality and budget of the network, protection from power failures can range from simple filters against excessive voltage spikes, to consumer-grade Uninterruptible Power Supplies (UPS) that can protect against loss of commercial power for a few minutes, to independent generators with large battery banks. Critical installations may switch from commercial to internal power in the event of a brownout, where the voltage level is below the normal minimum level specified for the system. Systems supplied with three-phase electric power also suffer brownouts if one or more phases are absent, at reduced voltage, or incorrectly phased. Such malfunctions are particularly damaging to electric motors. Some brownouts, called voltage reductions, are made intentionally to prevent a full power outage.

Some network elements operate in a manner that protects them and lets them shut down gracefully in the event of a loss of power. These might include noncritical application and network management servers, but not true network elements such as routers. A UPS may provide a signal called the "Power-Good" signal. Its purpose is to tell the computer that all is well with the power supply and that the computer can continue to operate normally. If the Power-Good signal is not present, the computer shuts down. The Power-Good signal prevents the computer from attempting to operate on improper voltages and damaging itself.

To help standardize approaches to power failures, the Advanced Configuration and Power Interface (ACPI) specification, an open industry standard first released in December 1996 and developed by HP, Intel, Microsoft, Phoenix, and Toshiba, defines common interfaces for hardware recognition, motherboard and device configuration, and power management.

Monitoring and Diagnostic Equipment

Networks, depending on their criticality and the skill set available among the operators, may have a variety of temporarily or permanently connected performance measurement and diagnostic equipment. Routers and bridges intended more for the enterprise or ISP market than home use, for example, usually record the amount of traffic and errors experienced on their interfaces.

Diagnostic equipment used to isolate failures may be nothing more complicated than a spare piece of equipment: if the problem disappears when the suspect unit is swapped for the spare, the problem has been diagnosed. More sophisticated and expensive installations have redundant (duplicate) equipment active that can automatically take over from a failed unit. Unfortunately, it is difficult to install sufficient and correct redundant equipment to prevent all predictable failures from affecting the (potentially very numerous) network users. Failures can be made transparent to user computers with techniques such as the Virtual Router Redundancy Protocol (VRRP), specified in RFC 3768.


The Internet protocol suite is the set of communications protocols that implement the protocol stack on which the Internet and most commercial networks run. It has also been referred to as the TCP/IP protocol suite, which is named after two of the most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were also the first two networking protocols defined. Today's IP networking represents a synthesis of two developments that began to evolve in the 1960s and 1970s, namely LANs (Local Area Networks) and the Internet, which, together with the invention of the World Wide Web by Tim Berners-Lee in 1989, have revolutionized computing.

The Internet protocol suite, like many protocol suites, can be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data and provides a well-defined service to the upper layer protocols based on using services from lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms that can eventually be physically transmitted. The TCP/IP reference model consists of four layers, although a five-layer variant is often shown for teaching, as below.

The five-layer TCP/IP model

5. Application layer

DHCP · DNS · FTP · Gopher · HTTP · IMAP4 · IRC · NNTP · XMPP · POP3 · RTP · SIP · SMTP · SNMP · SSH · TELNET · RPC · RTCP · RTSP · TLS · SDP · SOAP · GTP · STUN · NTP · etc.

4. Transport layer

TCP · UDP · DCCP · SCTP · RSVP · etc.

3. Network/Internet layer

IP (IPv4 · IPv6) · OSPF · IS-IS · BGP · IPsec · ARP · RARP · RIP · ICMP · ICMPv6 · IGMP · etc.

2. Data link layer

802.11 (WLAN) · 802.16 · Wi-Fi · WiMAX · ATM · DTM · Token Ring · Ethernet · FDDI · Frame Relay · GPRS · EVDO · HSPA · HDLC · PPP · PPTP · L2TP · ISDN · ARCnet · etc.

1. Physical layer

Ethernet physical layer · Modems · PLC · SONET/SDH · G.709 · Optical fiber · Coaxial cable · Twisted pair · etc.

History

The Internet protocol suite came from work done by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. After building the pioneering ARPANET in the late 1960s, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn was hired at the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks and recognized the value of being able to communicate across them. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models, with the goal of designing the next protocol for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability as in the ARPANET, the hosts became responsible. (Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design.)

With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string." There is even an implementation designed to run using homing pigeons, IP over Avian Carriers (documented in Request for Comments 1149 [2] [3]).

A computer called a router (a name changed from gateway to avoid confusion with other types of gateway) is provided with an interface to each network, and forwards packets back and forth between them. Requirements for routers are defined in (Request for Comments 1812). [4]

The idea was worked out in more detailed form by Cerf's networking research group at Stanford in the 1973-74 period, resulting in the first TCP specification (Request for Comments 675). [5] (The early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which was contemporaneous, was also a significant technical influence; people moved between the two.)

DARPA then contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability with TCP/IP v4, the standard protocol still in use on the Internet today.


In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK, and Norway. Between 1978 and 1983, several other TCP/IP prototypes were developed at multiple research centers. A full switchover to TCP/IP on the ARPANET took place on January 1, 1983.[6]

In March 1982, the US Department of Defense made TCP/IP the standard for all military computer networking.[7] In 1985, the Internet Architecture Board held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, helping to popularize the protocol and leading to its increasing commercial use.

On November 9, 2005 Kahn and Cerf were presented with the Presidential Medal of Freedom for their contribution to American culture.

Layers in the Internet Protocol suite

IP suite stack showing the physical network connection of two hosts via two routers and the corresponding layers used at each hop

Sample encapsulation of data within a UDP datagram within an IP packet

The IP suite uses encapsulation to provide abstraction of protocols and services. Generally, a protocol at a higher level uses a protocol at a lower level to help accomplish its aims. The Internet protocol stack has never been altered by the IETF from the four layers defined in RFC 1122. The IETF makes no effort to follow the seven-layer OSI model and does not refer to it in standards-track protocol specifications and other architectural documents.
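Encapsulation can be made concrete by building the byte layout by hand. The sketch below packs an application payload inside a UDP header (RFC 768) and then inside a minimal IPv4 header (RFC 791); the checksums are left at zero for brevity, whereas a real stack computes them.

```python
import struct

payload = b"hello"                                  # application-layer data

# UDP header: source port, destination port, length, checksum.
udp_header = struct.pack("!HHHH", 40000, 53, 8 + len(payload), 0)
udp_datagram = udp_header + payload                 # payload inside UDP

# Minimal 20-byte IPv4 header: version/IHL, TOS, total length, ID,
# flags/fragment offset, TTL, protocol (17 = UDP), checksum, src, dst.
src = bytes([192, 0, 2, 1])
dst = bytes([192, 0, 2, 2])
ip_header = struct.pack("!BBHHHBBH4s4s",
                        (4 << 4) | 5, 0, 20 + len(udp_datagram),
                        0, 0, 64, 17, 0, src, dst)

ip_packet = ip_header + udp_datagram                # UDP inside IP
print(ip_packet.hex())
```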

Application: DNS, TFTP, TLS/SSL, FTP, Gopher, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SNMP, SSH, TELNET, ECHO, RTP, PNRP, rlogin, ENRP. Routing protocols like BGP, which for a variety of reasons run over TCP, may also be considered part of the application or network layer.

Transport: TCP, UDP, DCCP, SCTP, IL, RUDP.

Internet: IP (IPv4, IPv6). Routing protocols like OSPF, which run over IP, are also considered part of the network layer, as they provide path selection. ICMP and IGMP run over IP and are considered part of the network layer, as they provide control information. ARP and RARP operate underneath IP but above the link layer, so they belong somewhere in between.

Network access (combines data link and physical): Ethernet, Wi-Fi, Token Ring, PPP, SLIP, FDDI, ATM, Frame Relay, SMDS.

Some textbooks have attempted to map the Internet protocol suite model onto the seven-layer OSI model. The mapping often splits the Internet protocol suite's network access layer into a data link layer on top of a physical layer, and maps the Internet layer to the OSI network layer. These textbooks are secondary sources that contravene the intent of RFC 1122 and other IETF primary sources[8]. The IETF has repeatedly stated that Internet protocol and architecture development is not intended to be OSI-compliant.

RFC 3439, on Internet architecture, contains a section entitled "Layering Considered Harmful": emphasizing layering as the key driver of architecture is a feature of OSI, not of the TCP/IP model, and much confusion comes from attempts to force OSI-like layering onto an architecture that minimizes its use.[8]


Implementations

Today, most commercial operating systems include and install the TCP/IP stack by default. For most users, there is no need to look for implementations. TCP/IP is included in all commercial Unix systems, Mac OS X, and all free-software Unix-like systems such as Linux distributions and BSD systems, as well as Microsoft Windows.

Unique implementations include Lightweight TCP/IP (lwIP), an open-source stack designed for embedded systems, and KA9Q NOS, a stack and associated protocols for amateur packet radio systems and personal computers connected via serial lines.

The Open Systems Interconnection Basic Reference Model (OSI Reference Model or OSI Model for short) is a layered, abstract description for communications and computer network protocol design. It was developed as part of the Open Systems Interconnection (OSI) initiative and is sometimes known as the OSI seven layer model. From top to bottom, the OSI Model consists of the Application, Presentation, Session, Transport, Network, Data Link, and Physical layers. A layer is a collection of related functions that provides services to the layer above it and receives service from the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that make up the contents of the path.

Even though newer IETF, IEEE, and indeed OSI protocol work subsequent to the publication of the original architectural standards has largely superseded it, the OSI model remains an excellent place to begin the study of network architecture. Not understanding that the pure seven-layer model is more historic than current, many beginners make the mistake of trying to fit every protocol they study into one of the seven basic layers. This is not always easy to do, as many of the protocols in use on the Internet today were designed as part of the TCP/IP model and may not fit cleanly into the OSI model.

History

In 1977, work on a layered model of network architecture, which was to become the OSI model, started in the American National Standards Institute (ANSI) working group on Distributed Systems (DISY).[1] With the DISY work and worldwide input, the International Organization for Standardization (ISO) began to develop its OSI networking suite. [2] According to Bachman, the term "OSI" came into use on 12 October 1979. OSI has two major components: an abstract model of networking (the Basic Reference Model, or seven-layer model) and a set of concrete protocols. The standard documents that describe OSI can be downloaded from ISO or the ITU-T.


Parts of OSI have influenced Internet protocol development, but none more than the abstract model itself, documented in ISO 7498 and its various addenda. In this model, a networking system is divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacts directly only with the layer immediately beneath it and provides facilities for use by the layer above it.

In particular, Internet protocols are deliberately not as rigorously designed as the OSI model, but a common version of the TCP/IP model splits it into four layers. The Internet Application Layer includes the OSI Application Layer, Presentation Layer, and most of the Session Layer. Its End-to-End Layer includes the graceful close function of the OSI Session Layer as well as the Transport Layer. Its Internetwork Layer is equivalent to the OSI Network Layer, while its Interface layer includes the OSI Data Link and Physical Layers. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the Internal Organization of the Network Layer document.

Protocols enable an entity in one host to interact with a corresponding entity at the same layer in a remote host. Service definitions abstractly describe the functionality provided to an (N)-layer by an (N-1) layer, where N is one of the seven layers inside the local host.

Description of OSI layers

Remembering the OSI Layers

A common mnemonic for layers 7 down to 1 is "All People Seem To Need Data Processing"; many other mnemonics exist.

Layer 7: Application layer

The application layer interfaces directly to and performs application services for the application processes; it also issues requests to the presentation layer. Note carefully that this layer provides services to user-defined application processes, and not to the end user. For example, it defines a file transfer protocol, but the end user must go through an application process to invoke file transfer. The OSI model does not include human interfaces. The common application services sublayer provides functional elements including the Remote Operations Service Element (comparable to the Internet Remote Procedure Call), Association Control, and Transaction Processing (according to the ACID requirements).

OSI Model

Layer / Data unit / Function
7. Application / Data / Network process to application
6. Presentation / Data / Data representation and encryption
5. Session / Data / Interhost communication
4. Transport / Segment / End-to-end connections and reliability (TCP)
3. Network / Packet or datagram / Path determination and logical addressing (IP)
2. Data link / Frame / Physical addressing (MAC and LLC)
1. Physical / Bit / Media, signal, and binary transmission

Layers 7 through 4 are the host layers; layers 3 through 1 are the media layers.

Layer 6: Presentation layer

The presentation layer establishes a context between application layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the Presentation Service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol Data Units, and moved down the stack.

The original presentation structure used the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serializing objects and other data structures into and out of XML. ASN.1 has a set of cryptographic encoding rules that allows end-to-end encryption between application entities.

Layer 5: Session layer

The session layer controls the dialogues/connections (sessions) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for "graceful close" of sessions, which is a property of TCP, and also for session checkpointing and recovery, which is not usually used in the Internet protocols suite. Session layers are commonly used in application environments that make use of remote procedure calls (RPCs).

iSCSI, which implements the Small Computer Systems Interface (SCSI) encapsulated into TCP/IP packets, is a session layer protocol increasingly used in Storage Area Networks and internally between processors and high-performance storage devices. iSCSI uses TCP for guaranteed delivery, and carries SCSI command descriptor blocks (CDB) as payload to create a virtual SCSI bus between iSCSI initiators and iSCSI targets.

Layer 4: Transport layer

The transport layer provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers. The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented, which means that the transport layer can keep track of the segments and retransmit those that fail.
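Retransmission can be illustrated with a toy stop-and-wait loop. This is a simplification of what a protocol like TCP does, not TCP itself; send_segment and wait_for_ack are hypothetical placeholders for the real transmit and acknowledgment machinery.

```python
import random

def send_segment(seq: int, data: bytes) -> None:
    print(f"send seq={seq} ({len(data)} bytes)")

def wait_for_ack(seq: int, timeout: float) -> bool:
    # Placeholder: pretend the segment or its ACK is lost 30% of the time.
    return random.random() > 0.3

def reliable_send(seq: int, data: bytes, max_tries: int = 5) -> bool:
    """Stop-and-wait: retransmit a segment until it is acknowledged."""
    for _ in range(max_tries):
        send_segment(seq, data)
        if wait_for_ack(seq, timeout=1.0):
            return True                      # receiver confirmed this segment
        print(f"timeout, retransmitting seq={seq}")
    return False                             # give up after max_tries attempts

reliable_send(seq=0, data=b"segment 0 payload")
```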

Although they were not developed under the OSI Reference Model and do not strictly conform to the OSI definition of the transport layer, the best-known examples of layer 4 protocols are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).

Of the actual OSI protocols, there are five classes of transport protocols, ranging from class 0 (also known as TP0, which provides the least error recovery) to class 4 (also known as TP4, which is designed for less reliable networks, similar to the Internet). Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer.

Perhaps an easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. Remember, however, that a post office manages the outer envelope of mail; higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read only by the addressee. Roughly speaking, tunneling protocols operate at the transport layer, such as carrying non-IP protocols (e.g., IBM's SNA or Novell's IPX) over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network layer protocol, if the encapsulation of the payload takes place only at the endpoints, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside transport packets.

Layer 3: Network layer

The network layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks, while maintaining the quality of service requested by the transport layer. The network layer performs network routing functions, and might also perform fragmentation and reassembly and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. This is a logical addressing scheme: values are chosen by the network engineer. The addressing scheme is hierarchical.

The best-known example of a layer 3 protocol is the Internet Protocol (IP). It manages the connectionless transfer of data one hop at a time: from end system to ingress router, router to router, and from egress router to destination end system. It is not responsible for reliable delivery to a next hop, but only for the detection of errored packets so that they may be discarded. When the medium of the next hop cannot accept a packet at its current length, IP is responsible for fragmenting the packet into pieces sufficiently small for the medium to accept.
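Fragmentation can be sketched directly. IPv4 expresses fragment offsets in 8-byte units, so every fragment except the last must carry a multiple of 8 payload bytes; the simplified model below ignores header copying and flag encoding details.

```python
def fragment(payload: bytes, mtu_payload: int) -> list:
    """Split an IP payload for a link accepting mtu_payload bytes per packet.

    Returns (offset_in_8_byte_units, more_fragments, chunk) triples.
    """
    step = (mtu_payload // 8) * 8    # all but the last fragment: multiple of 8
    frags = []
    for start in range(0, len(payload), step):
        chunk = payload[start:start + step]
        more = start + step < len(payload)   # more-fragments flag
        frags.append((start // 8, more, chunk))
    return frags

# A 3000-byte payload over a 1500-byte MTU (1480 bytes of IP payload)
# yields fragments of 1480, 1480, and 40 bytes.
for offset, more, chunk in fragment(bytes(3000), mtu_payload=1480):
    print(f"offset={offset} more_fragments={more} bytes={len(chunk)}")
```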

A number of layer management protocols, a function defined in the Management Annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network layer information and error, and network layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.

Layer 2: Data Link layer

The data link layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the physical layer. Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic of wide area media in the telephone system. Local area network architecture, which includes broadcast-capable multiaccess media, was developed independently of the ISO work, in IEEE Project 802. IEEE work assumed sublayering and management functions not required for WAN use. In modern practice, only error detection, not sliding-window flow control, is present in data link protocols such as Point-to-Point Protocol (PPP). On local area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and, on other local area networks, its flow control and acknowledgment mechanisms are rarely used. Sliding-window flow control and acknowledgment are instead used at the transport layer by protocols such as TCP, but are still used at the data link layer in niches where X.25 offers performance advantages.

Both WAN and LAN services arrange bits, from the physical layer, into logical sequences called frames. Not all physical layer bits necessarily go into frames, as some of these bits are purely intended for physical layer functions. For example, every fifth bit of the FDDI bit stream is not used by the data link layer.

WAN Protocol Architecture

Connection-oriented WAN data link protocols, in addition to framing, detect and may correct errors. They also are capable of controlling the rate of transmission. A WAN data link layer might implement a sliding window flow control and acknowledgment mechanism to provide reliable delivery of frames; that is the case for SDLC and HDLC, and derivatives of HDLC such as LAPB and LAPD.

IEEE 802 LAN Architecture

Practical, connectionless LANs began with the pre-IEEE Ethernet specification, which is the ancestor of IEEE 802.3. This layer manages the interaction of devices with a shared medium, which is the function of a Media Access Control sublayer. Above this MAC sublayer is the media-independent IEEE 802.2 Logical Link Control (LLC) sublayer, which deals with addressing and multiplexing on multiaccess media.

While IEEE 802.3 is the dominant wired LAN protocol and IEEE 802.11 the wireless LAN protocol, obsolescent MAC layers include Token Ring and FDDI. The MAC sublayer detects but does not correct errors.

Layer 1: Physical layer

The physical layer defines all the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium. This includes the layout of pins, voltages, cable specifications, Hubs, repeaters, network adapters, Host Bus Adapters (HBAs used in Storage Area Networks) and more.

To understand the function of the physical layer in contrast to the functions of the data link layer, think of the physical layer as concerned primarily with the interaction of a single device with a medium, where the data link layer is concerned more with the interactions of multiple devices (i.e., at least two) with a shared medium. The physical layer will tell one device how to transmit to the medium, and another device how to receive from it (in most cases it does not tell the device how to connect to the medium). Obsolescent physical layer standards such as RS-232 do use physical wires to control access to the medium.

The major functions and services performed by the physical layer are:


Establishment and termination of a connection to a communications medium.

Participation in the process whereby the communication resources are effectively shared among multiple users. For example, contention resolution and flow control.

Modulation, or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These are signals operating over the physical cabling (such as copper and optical fiber) or over a radio link.

Parallel SCSI buses operate in this layer, although it must be remembered that the logical SCSI protocol is a transport-layer protocol that runs over this bus. Various physical-layer Ethernet standards are also in this layer; Ethernet incorporates both this layer and the data-link layer. The same applies to other local-area networks, such as Token ring, FDDI, and IEEE 802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.

Interfaces

Neither the OSI Reference Model nor OSI protocols specify any programming interfaces, other than as deliberately abstract service specifications. Protocol specifications precisely define the interfaces between different computers, but the software interfaces inside computers are implementation-specific.

For example, Microsoft Windows' Winsock, and Unix's Berkeley sockets and System V Transport Layer Interface, are interfaces between applications (layers 5 and above) and the transport (layer 4). NDIS and ODI are interfaces between the media (layer 2) and the network protocol (layer 3).

Interface standards, except for the physical layer to media, are approximate implementations of OSI Service Specifications.

Example protocols at each OSI layer, across several protocol suites:

Layer 7 (Application). Misc. examples: HL7, Modbus, CDP. TCP/IP suite: NNTP, SIP, SSI, DNS, FTP, Gopher, HTTP, NFS, NTP, DHCP, SMPP, SMTP, SNMP, Telnet. SS7: ISUP, INAP, MAP, TUP, TCAP. AppleTalk suite: AFP, ZIP, RTMP, NBP. OSI suite: FTAM, X.400, X.500, DAP. IPX suite: RIP, SAP. SNA: APPC.

Layer 6 (Presentation). Misc. examples: TDI, ASCII, EBCDIC, MIDI, MPEG. TCP/IP suite: MIME, XDR, SSL, TLS (not a separate layer). AppleTalk suite: AFP. OSI suite: ISO 8823, X.226.

Layer 5 (Session). Misc. examples: Named Pipes, NetBIOS, SAP, SDP. TCP/IP suite: sockets; session establishment in TCP; SIP (not a separate layer with a standardized API). AppleTalk suite: ASP, ADSP, PAP. OSI suite: ISO 8327, X.225. IPX suite: NWLink. SNA: DLC?

Layer 4 (Transport). Misc. examples: NBF, nanoTCP, nanoUDP. TCP/IP suite: TCP, UDP, IPsec, PPTP, L2TP, SCTP, RTP. SS7: SCCP. AppleTalk suite: DDP. OSI suite: TP0, TP1, TP2, TP3, TP4. IPX suite: SPX.

Layer 3 (Network). Misc. examples: NBF, Q.931. TCP/IP suite: IP, ARP, ICMP, RIP, OSPF, BGP, IGMP, IS-IS. SS7: MTP-3. AppleTalk suite: ATP (TokenTalk or EtherTalk). OSI suite: X.25 (PLP), CLNP. IPX suite: IPX. UMTS: RRC (Radio Resource Control), PDCP (Packet Data Convergence Protocol), and BMC (Broadcast/Multicast Control).

Layer 2 (Data link). Misc. examples: 802.3 (Ethernet), 802.11a/b/g/n MAC/LLC, 802.1Q (VLAN), ATM, HDP, FDDI, Fibre Channel, Frame Relay, HDLC, ISL, PPP, Q.921, Token Ring. TCP/IP suite: PPP, SLIP. SS7: MTP-2. AppleTalk suite: LocalTalk, AppleTalk Remote Access, PPP. OSI suite: X.25 (LAPB), Token Bus. IPX suite: IEEE 802.3 framing, Ethernet II framing. SNA: SDLC. UMTS: LLC (Logical Link Control), MAC (Media Access Control).

Layer 1 (Physical). Misc. examples: RS-232, V.35, V.34, I.430, I.431, T1, E1, 10BASE-T, 100BASE-TX, POTS, SONET, DSL, 802.11a/b/g/n PHY. SS7: MTP-1. AppleTalk suite: RS-232, RS-422, STP, PhoneNet. OSI suite: X.25 (X.21bis, EIA/TIA-232, EIA/TIA-449, EIA-530, G.703). SNA: Twinax. UMTS: UMTS L1 (UMTS physical layer).

The TCP/IP model or Internet reference model, sometimes called the DoD (Department of Defense) model or the ARPANET reference model, is a layered abstract description for communications and computer network protocol design. It was created in the 1970s by DARPA for use in developing the Internet's protocols. The structure of the Internet is still closely reflected by the TCP/IP model.

The original TCP/IP reference model consists of four layers. No Internet Engineering Task Force (IETF) standards-track document has accepted a five-layer model, probably because physical layer and data link layer protocols are not standardized by the IETF. IETF documents deprecate strict layering of all sorts. Given the lack of acceptance of the five-layer model by the body with technical responsibility for the protocol suite, it is not unreasonable to regard the occasional five-layer presentation as a teaching aid, making it possible to talk about non-IETF protocols at the physical layer.

This model was developed before the OSI Reference Model, and the IETF, which is responsible for the model and the protocols developed under it, has never felt obligated to be compliant with OSI. While the basic seven-layer OSI model is widely used in teaching, it does not reflect the real-world protocol architecture (RFC 1122) used in the dominant Internet environment.


Key Architectural Principles

An early architectural document, RFC 1122, emphasizes architectural principles over layering[2].

End-to-End Principle: this principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches, and the like have forced changes to this principle. [3]

Robustness Principle: "Be liberal in what you accept, and conservative in what you send. Software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features".

Even when layering is examined, the assorted architectural documents (there is no single architectural model such as ISO 7498, the OSI Reference Model) have fewer and less rigidly defined layers than the commonly referenced OSI model, and thus provide an easier fit for real-world protocols. The lack of emphasis on layering is a strong difference between the IETF and OSI approaches. Indeed, one frequently referenced document does not contain a stack of layers at all; it refers only to the existence of an "internetworking layer" and generally to "upper layers". That document was intended as a 1996 "snapshot" of the architecture: "The Internet and its architecture have grown in evolutionary fashion from modest beginnings, rather than from a Grand Plan. While this process of evolution is one of the main reasons for the technology's success, it nevertheless seems useful to record a snapshot of the current principles of the Internet architecture."

No document officially specifies the model, which is another reason not to overemphasize layering. Different documents give the layers different names, and different documents show different numbers of layers.

There are versions of this model with four layers and versions with five layers. RFC 1122, on host requirements, makes general reference to layering but refers to many other architectural principles without emphasizing layering. It loosely defines a four-layer version, with the layers having names, not numbers, as follows:

Process Layer or Application Layer: this is where the "higher level" protocols such as SMTP, FTP, SSH, HTTP, etc. operate.

Host-To-Host (Transport) Layer: this is where flow-control and connection protocols exist, such as TCP. This layer deals with opening and maintaining connections, ensuring that packets are in fact received.

Internet or Internetworking Layer: this layer defines IP addresses, with many routing schemes for navigating packets from one IP address to another.

Network Access Layer: this layer describes both the protocols (i.e., the OSI Data Link Layer) used to mediate access to shared media, and the physical protocols and technologies necessary for communications from individual hosts to a medium.


The Internet protocol suite (and corresponding protocol stack), and its layering model, were in use before the OSI model was established. Since then, the TCP/IP model has been compared with the OSI model numerous times in books and classrooms, which often results in confusion because the two models use different assumptions, including about the relative importance of strict layering.

Layers in the TCP/IP model

IP suite stack showing the physical network connection of two hosts via two routers and the corresponding layers used at each hop. The dotted line represents a virtual connection.


The layers near the top are logically closer to the user application (as opposed to the human user), while those near the bottom are logically closer to the physical transmission of the data. Viewing layers as providing or consuming a service is a method of abstraction to isolate upper layer protocols from the nitty gritty detail of transmitting bits over, say, Ethernet and collision detection, while the lower layers avoid having to know the details of each and every application and its protocol.

This abstraction also allows upper layers to provide services that the lower layers cannot, or choose not to, provide. Again, the original OSI Reference Model was extended to include connectionless services (OSIRM CL).[4] For example, IP is not designed to be reliable and is a best effort delivery protocol. This means that all transport layers must choose whether or not to provide reliability and to what degree. UDP provides data integrity (via a checksum) but does not guarantee delivery; TCP provides both data integrity and delivery guarantee (by retransmitting until the receiver receives the packet).
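The "data integrity via a checksum" that UDP and the IPv4 header provide is the 16-bit ones' complement Internet checksum of RFC 1071, sketched below over arbitrary example bytes:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071: ones' complement of the ones' complement 16-bit sum."""
    if len(data) % 2:
        data += b"\x00"                          # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]    # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16) # fold any carry back in
    return ~total & 0xFFFF

packet = b"\x45\x00\x00\x1c"                     # arbitrary example bytes
checksum = internet_checksum(packet)
# A receiver that sums the data together with its checksum gets 0xFFFF,
# whose complement is 0, so a zero result means "no error detected".
assert internet_checksum(packet + checksum.to_bytes(2, "big")) == 0
```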

This model lacks the formalism of the OSI Reference Model and associated documents, but the IETF does not use a formal model and does not consider this a limitation, as in the comment by David D. Clark, "We don't believe in kings, presidents, or voting. We believe in rough consensus and running code." Criticisms of this model, which have been made with respect to the OSI Reference Model, often do not consider ISO's later extensions to that model.

For multiaccess links with their own addressing systems (e.g., Ethernet), an address mapping protocol is needed. Such protocols can be considered to be below IP but above the existing link system. While the IETF does not use the terminology, this is a subnetwork-dependent convergence facility according to an extension to the OSI model, the Internal Organization of the Network Layer (IONL) [5].

ICMP & IGMP operate on top of IP but do not transport data the way UDP or TCP do. Again, this functionality exists as layer management extensions to the OSI model, in its Management Framework (OSIRM MF).[6]

[Figure: sample encapsulation of data within a UDP datagram within an IP packet.]
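
To make the nesting concrete, here is a minimal Python sketch that packs application data into a UDP datagram and then into a simplified IPv4 packet. The port numbers and the private addresses 10.0.0.1 and 10.0.0.2 are purely illustrative, and checksums and options are omitted for brevity, so this illustrates the layering rather than building a production packet.

    import struct

    payload = b"hello"                          # application-layer data

    # UDP header: source port, destination port, length, checksum (0 = unused here)
    src_port, dst_port = 40000, 53
    udp_length = 8 + len(payload)               # 8-byte UDP header plus payload
    udp_header = struct.pack("!HHHH", src_port, dst_port, udp_length, 0)
    udp_datagram = udp_header + payload         # the UDP datagram wraps the payload

    # Simplified IPv4 header (checksum left as 0; protocol 17 means UDP)
    version_ihl = (4 << 4) | 5                  # IPv4, 5 x 32-bit header words
    total_length = 20 + len(udp_datagram)       # 20-byte IP header plus UDP datagram
    ip_header = struct.pack("!BBHHHBBH4s4s",
                            version_ihl, 0, total_length,
                            0, 0,               # identification, flags/fragment offset
                            64, 17, 0,          # TTL, protocol, header checksum
                            bytes([10, 0, 0, 1]),   # source 10.0.0.1
                            bytes([10, 0, 0, 2]))   # destination 10.0.0.2
    ip_packet = ip_header + udp_datagram        # the IP packet wraps the UDP datagram
    print(len(ip_packet))                       # 20 + 8 + 5 = 33 bytes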


The SSL/TLS library operates above the transport layer (utilizes TCP) but below application protocols. Again, there was no intention, on the part of the designers of these protocols, to comply with OSI architecture.
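
As a small illustration of this layering, Python's standard ssl module wraps a TCP socket in TLS, below the application protocol (here a bare HTTP request; example.com and working network access are assumed for the sake of the sketch):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as tcp:            # transport layer
        with ctx.wrap_socket(tcp, server_hostname="example.com") as tls:   # TLS on top of TCP
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(64))                                            # application data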

The link is treated as a black box here, which is fine when discussing IP, since the whole point of IP is that it will run over virtually anything. The IETF explicitly declines to specify transmission systems, a less academic but more practical approach than that of the OSI Reference Model.

OSI and TCP/IP Layering Differences

The three top layers in the OSI model - the application layer, the presentation layer and the session layer - usually are lumped into one layer in the TCP/IP model. While some pure OSI protocol applications, such as X.400, also lumped them together, there is no requirement that a TCP/IP protocol stack needs to be monolithic above the transport layer. For example, the Network File System (NFS) application protocol runs over the eXternal Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol with session layer functionality, Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can run safely over the best-effort User Datagram Protocol (UDP) transport.

The session layer roughly corresponds to the Telnet virtual terminal functionality, which is part of text-based TCP/IP application layer protocols such as HTTP and SMTP. It also corresponds to TCP and UDP port numbering, which is considered part of the transport layer in the TCP/IP model. The presentation layer has similarities to the MIME standard, which is also used in HTTP and SMTP.

Since the IETF protocol development effort is not concerned with strict layering, some of its protocols may not appear to fit cleanly into the OSI model. These conflicts, however, are more frequent when one only looks at the original OSI model, ISO 7498, without looking at the annexes to this model (e.g., ISO 7498/4 Management Framework), or the ISO 8648 Internal Organization of the Network Layer (IONL). When the IONL and Management Framework documents are considered, the ICMP and IGMP are neatly defined as layer management protocols for the network layer. In like manner, the IONL provides a structure for "subnetwork dependent convergence facilities" such as ARP and RARP.

IETF protocols can be applied recursively, as demonstrated by tunneling protocols such as Generic Routing Encapsulation (GRE). While basic OSI documents do not consider tunneling, there is some concept of tunneling in yet another extension to the OSI architecture, specifically the transport layer gateways within the International Standardized Profile framework [7]. The associated OSI development effort, however, has been abandoned given the real-world adoption of TCP/IP protocols.

7 Application ECHO, ENRP, FTP, Gopher, HTTP, NFS, RTSP, SIP, SMTP, SNMP, SSH, Telnet, Whois, XMPP

6 Presentation XDR, ASN.1, SMB, AFP, NCP


5 Session ASAP, TLS, SSL, ISO 8327 / CCITT X.225, RPC, NetBIOS, ASP

4 Transport TCP, UDP, RTP, SCTP, SPX, ATP, IL

3 Network IP, ICMP, IGMP, IPX, OSPF, RIP, IGRP, EIGRP, ARP, RARP, X.25

2 Data Link Ethernet, Token ring, HDLC, Frame relay, ISDN, ATM, 802.11 WiFi, FDDI, PPP

1 Physical 10BASE-T, 100BASE-T, 1000BASE-T, SONET/SDH, G.709, T-carrier/E-carrier, various 802.11 physical layers

The layers

The following is a description of each layer in the IP suite stack.

Application layer

The application layer is used by most programs for network communication. Data is passed from the program in an application-specific format, then encapsulated into a transport layer protocol.

Since the IP stack has no layers between the application and transport layers, the application layer must include any protocols that act like the OSI's presentation and session layer protocols. This is usually done through libraries.

Data sent over the network is passed into the application layer where it is encapsulated into the application layer protocol. From there, the data is passed down into the lower layer protocol of the transport layer.

The two most common end-to-end protocols are TCP and UDP. Common servers have specific ports assigned to them (HTTP uses port 80, Telnet uses port 23, etc.), while clients use ephemeral ports. Some protocols, such as the File Transfer Protocol and Telnet, may set up a session using a well-known port but then redirect the actual user session to ephemeral ports.
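
The division of labor between well-known and ephemeral ports shows up directly in the socket API. A minimal Python sketch (example.com is only illustrative, and network access is assumed): the client names the server's well-known port, and the operating system picks the ephemeral local port.

    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as s:
        print("server endpoint:", s.getpeername())   # the well-known port 80
        print("client endpoint:", s.getsockname())   # an OS-chosen ephemeral port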

Routers and switches do not utilize this layer but bandwidth throttling applications do, as with the Resource Reservation Protocol (RSVP).

Transport layer

The transport layer's responsibilities include end-to-end message transfer capabilities independent of the underlying network, along with error control, fragmentation and flow control. End to end message transmission or connecting applications at the transport layer can be categorized as either:

connection-oriented, e.g., TCP

connectionless, e.g., UDP


The transport layer can be thought of literally as a transport mechanism, e.g., a vehicle whose responsibility is to make sure that its contents (passengers/goods) reach their destination safely and soundly, unless a higher or lower layer is responsible for safe delivery.

The transport layer provides this service of connecting applications together through the use of ports. Since IP provides only a best effort delivery, the transport layer is the first layer of the TCP/IP stack to offer reliability. Note that IP can run over a reliable data link protocol such as the High-Level Data Link Control (HDLC). Protocols above transport, such as RPC, also can provide reliability.

For example, TCP is a connection-oriented protocol that addresses numerous reliability issues to provide a reliable byte stream:

data arrives in-order

data has minimal error (i.e., correctness)

duplicate data is discarded

lost/discarded packets are resent

includes traffic congestion control

The newer SCTP is also a "reliable", connection-oriented, transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multi-homing support, in which a connection endpoint can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP), but can also be used for other applications.

UDP is a connectionless datagram protocol. Like IP, it is a best-effort or "unreliable" protocol. Reliability is addressed only through error detection, using a weak checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.), where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. RTP is a datagram protocol that is designed for real-time data such as streaming audio and video.
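
The connectionless, fire-and-forget character of UDP is visible in a minimal Python sketch (the destination below is taken from the 192.0.2.0/24 documentation range, so no real host is assumed):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"ping", ("192.0.2.10", 9999))   # no handshake; delivery not guaranteed
    sock.close()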

TCP and UDP are used to carry an assortment of higher-level applications. The appropriate transport protocol is chosen based on the higher-layer protocol application. For example, the File Transfer Protocol expects a reliable connection, but the Network File System assumes that the subordinate Remote Procedure Call protocol, not transport, will guarantee reliable transfer. Other applications, such as VoIP, can tolerate some loss of packets, but not the reordering or delay that could be caused by retransmission.

The applications at any given network address are distinguished by their TCP or UDP port. By convention certain well known ports are associated with specific applications. (See List of TCP and UDP port numbers.)


Network layer

As originally defined, the Network layer solves the problem of getting packets across a single network. Examples of such protocols are X.25 and the ARPANET's Host/IMP Protocol.

With the advent of the concept of internetworking, additional functionality was added to this layer, namely getting data from the source network to the destination network. This generally involves routing the packet across a network of networks, known as an internetwork or (lower-case) internet.[8]

In the Internet protocol suite, IP performs the basic task of getting packets of data from source to destination. IP can carry data for a number of different upper layer protocols; these protocols are each identified by a unique protocol number: ICMP and IGMP are protocols 1 and 2, respectively.
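
On most systems, the mapping from protocol names to these IP protocol numbers can be read back through the standard library. A minimal Python sketch (the values come from the host's protocol table, so a typically configured system is assumed):

    import socket

    for name in ("icmp", "igmp", "tcp", "udp"):
        print(name, socket.getprotobyname(name))   # icmp 1, igmp 2, tcp 6, udp 17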

Some of the protocols carried by IP, such as ICMP (used to transmit diagnostic information about IP transmission) and IGMP (used to manage IP multicast data), are layered on top of IP but perform internetwork-layer functions. This illustrates an incompatibility between the Internet protocol stack and the OSI model. All routing protocols, such as OSPF and RIP, are also part of the network layer; what makes them part of the network layer is that their payload is entirely concerned with management of the network layer. The particular encapsulation of that payload is irrelevant for layering purposes.

Data link layer

The link layer, which provides the means to move packets between the network layers of two different hosts, is not really part of the Internet protocol suite, because IP can run over a variety of different link layers. The processes of transmitting packets on a given link layer and receiving packets from a given link layer can be controlled in the software device driver for the network card, as well as in firmware or specialist chipsets. These perform data link functions such as adding a packet header to prepare it for transmission, then actually transmitting the frame over a physical medium.

For Internet access over a dial-up modem, IP packets are usually transmitted using PPP. For broadband Internet access such as ADSL or cable modems, PPPoE is often used. On a local wired network, Ethernet is usually used, and on local wireless networks, IEEE 802.11 is usually used. For wide-area networks, either PPP over T-carrier or E-carrier lines, Frame relay, ATM, or packet over SONET/SDH (POS) are often used.

The link layer can also be the layer where packets are intercepted to be sent over a virtual private network. When this is done, the link layer data is considered the application data and proceeds back down the IP stack for actual transmission. On the receiving end, the data goes up the IP stack twice (once for routing and the second time for the VPN).

The link layer can also be considered to include the physical layer, which is made up of the actual physical network components (hubs, repeaters, fiber optic cable, coaxial cable, network cards, Host Bus Adapter cards and the associated network connectors: RJ-45, BNC, etc.), and the low-level specifications for the signals (voltage levels, frequencies, etc.).


Physical layer

The Physical layer is responsible for encoding and transmission of data over network communications media. It operates with data in the form of bits that are sent from the Physical layer of the sending (source) device and received at the Physical layer of the destination device.

Ethernet, Token Ring, SCSI, hubs, repeaters, cables and connectors are standard network devices that function at the Physical layer. The Physical layer is also considered the domain of many hardware-related network design issues, such as LAN and WAN topology and wireless technology.

Hardware and software implementation

Normally, application programmers are in charge of layer 5 protocols (the application layer), while the layer 3 and 4 protocols are services provided by the TCP/IP stack in the operating system. Microcontroller firmware in the network adapter typically handles layer 2 issues, supported by driver software in the operating system. Non-programmable analog and digital electronics are normally in charge of the physical layer, typically using an application-specific integrated circuit (ASIC) chipset for each radio interface or other physical standard.

However, hardware or software implementation is not mandated by the protocols or the layered reference model. High-performance routers are to a large extent based on fast non-programmable digital electronics carrying out layer 3 switching. In modern modems and wireless equipment, the physical layer may be partly implemented using programmable DSP processors or software-radio programmable chipsets, allowing one chip to be reused across several alternative standards and radio interfaces instead of requiring separate circuits for each standard. The Apple GeoPort concept was an example of a CPU software implementation of the physical layer, making it possible to emulate some modem standards.


Computer hardware is the physical part of a computer, including the digital circuitry, as distinguished from the computer software that executes within the hardware. The hardware of a computer is infrequently changed, in comparison with software and data, which are "soft" in the sense that they are readily created, modified or erased on the computer. Firmware is a special type of software that rarely, if ever, needs to be changed and so is stored on hardware devices such as read-only memory (ROM) where it is not readily changed (and is, therefore, "firm" rather than just "soft").

Most computer hardware is not seen by normal users. It is in embedded systems in automobiles, microwave ovens, electrocardiograph machines, compact disc players, and other devices. Personal computers, the computer hardware familiar to most people, form only a small minority of computers (about 0.2% of all new computers produced in 2003).

Typical PC hardware

A typical personal computer consists of a case or chassis (in a tower or desktop form factor) and the following parts:

[Figures: internals of a typical personal computer; a typical motherboard (ASRock K7VT4A Pro); inside a custom computer.]

Motherboard

The motherboard is the "heart" of the computer, through which all other components interface.

Central processing unit (CPU) - Performs most of the calculations which enable a computer to function, sometimes referred to as the "brain" of the computer.

Computer fan - Used to lower the temperature of the computer; a fan is almost always attached to the CPU, and the computer case will generally have several fans to maintain a constant airflow. Liquid cooling can also be used to cool a computer, though it focuses more on individual parts rather than the overall temperature inside the chassis.

Random Access Memory (RAM) - Fast-access memory that is cleared when the computer is powered-down. RAM attaches directly to the motherboard, and is used to store programs that are currently running.

Firmware - loaded from read-only memory (ROM); it provides the Basic Input-Output System (BIOS) or, in newer systems, the Extensible Firmware Interface (EFI).

External Bus Controllers - used to connect to external peripherals, such as printers and input devices. These ports may also be based upon expansion cards, attached to the internal buses.

parallel port

serial port

USB

FireWire

SCSI (on servers and older machines)

PS/2 (For mice and keyboards, being phased out and replaced by USB.)

ISA (outdated)

EISA (outdated)

MCA (outdated)


Power supply

A case that holds a transformer, voltage regulation circuitry, and (usually) a cooling fan, and supplies power to run the rest of the computer. The most common legacy types of power supplies are AT and Baby AT, but the current standards for PCs are ATX and microATX.

Storage controllers

Controllers for hard disk, CD-ROM and other drives (such as internal Zip and Jaz drives); for a PC these are conventionally IDE/ATA. The controllers sit directly on the motherboard (on-board) or on expansion cards, such as a disk array controller. IDE is usually integrated, unlike SCSI, which is found in most servers. The floppy drive interface is a legacy MFM interface that is now slowly disappearing. All of these interfaces are gradually being phased out in favor of SATA and SAS.

Video display controller

Produces the output for the computer display. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E, PCI-E 2.0, or AGP), in the form of a Graphics Card.

Removable media devices

CD - the most common type of removable media; inexpensive but with a short life-span.

CD-ROM Drive - a device used for reading data from a CD.

CD Writer - a device used for both reading and writing data to and from a CD.

DVD - a popular type of removable media with the same dimensions as a CD but storing roughly six times as much information (4.7 GB single-layer versus 700 MB). It is the most common way of transferring digital video.

DVD-ROM Drive - a device used for reading data from a DVD.

DVD Writer - a device used for both reading and writing data to and from a DVD.

DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.

Blu-ray - a high-density optical disc format for the storage of digital information, including high-definition video.

BD-ROM Drive - a device used for reading data from a Blu-ray disc.

BD Writer - a device used for both reading and writing data to and from a Blu-ray disc.

HD DVD - a high-density optical disc format and successor to the standard DVD; it competed with Blu-ray in a format war and was eventually discontinued.


Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium.

Zip drive - an outdated medium capacity removable disk storage system, first introduced by Iomega in 1994.

USB flash drive - a flash memory data storage device integrated with a USB interface, typically small, lightweight, removable and rewritable.

Tape drive - a device that reads and writes data on a magnetic tape, usually used for long term storage.

Internal storage

Hardware that keeps data inside the computer for later use and remains persistent even when the computer has no power.

Hard disk - for medium-term storage of data.

Solid-state drive - a device emulating a hard disk, but containing no moving parts.

Disk array controller - a device to manage several hard disks, to achieve performance or reliability improvement.

Sound card

Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built into the motherboard, though it is common for a user to install a separate sound card as an upgrade.

Networking

Connects the computer to the Internet and/or other computers.

Modem - for dial-up connections

Network card - for DSL/Cable internet, and/or connecting to other computers.

Direct Cable Connection - use of a null modem cable to connect two computers via their serial ports, or a LapLink cable to connect them via their parallel ports.


Other peripherals

In addition, hardware devices can include external components of a computer system. The following are either standard or very common.

[Figure: a wheel mouse.]

Includes various input and output devices, usually external to the computer system

Input

Text input devices

Keyboard - a device, to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.

Pointing devices

Mouse - a pointing device that detects two dimensional motion relative to its supporting surface.

Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.

Xbox 360 controller - a controller used for the Xbox 360 which, with the use of the application Switchblade(tm), can be used as an additional pointing device with the left or right thumbstick.

Gaming devices

Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.

Gamepad - a general game controller held in the hand that relies on the digits (especially thumbs) to provide input.

Game controller - a specific type of controller specialized for certain gaming purposes.

Image, Video input devices


Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.

Webcam - a low-resolution video camera used to provide visual input that can easily be transferred over the Internet.

Audio input devices

Microphone - an acoustic sensor that provides input by converting sound into an electrical signal.

Output

Image, Video output devices

Printer - a peripheral device that produces a hard (usually paper) copy of a document.

Monitor - a device that displays a video signal, similar to a television, to provide the user with information and an interface with which to interact.

Audio output devices

Speakers - a device that converts analog audio signals into the equivalent air vibrations in order to make audible sound.

Headset - a device similar in functionality to computer speakers, worn on the head, used mainly to avoid disturbing others nearby.

In computer engineering, computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a blueprint and functional description of requirements (especially speeds and interconnections) and design implementations for the various parts of a computer, focusing largely on the way the central processing unit (CPU) performs internally and accesses addresses in memory.

It may also be defined as the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals.

Computer architecture comprises at least three main subcategories:[1]

Instruction set architecture, or ISA, is the abstract image of a computing system that is seen by a machine language (or assembly language) programmer, including the instruction set, memory address modes, processor registers, and address and data formats.

Microarchitecture, also known as computer organization, is a lower-level, more concrete description of the system that involves how the constituent parts of the system are interconnected and how they interoperate in order to implement the ISA.[2] The size of a computer's cache, for instance, is an organizational issue that generally has nothing to do with the ISA. (A toy decoding sketch illustrating this distinction follows this list.)


System Design which includes all of the other hardware components within a computing system such as:

system interconnects such as computer buses and switches

memory controllers and hierarchies

CPU off-load mechanisms such as direct memory access

issues like multi-processing.
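
As promised above, a toy sketch of the ISA/microarchitecture distinction: the ISA fixes the meaning of the bits in an instruction word, while everything about how a chip executes the decoded fields (pipelining, caches, circuit style) is microarchitecture. The 16-bit format below is entirely hypothetical, invented for illustration.

    # Hypothetical 16-bit ISA: 4-bit opcode, two 4-bit register fields, 4-bit immediate.
    def decode(word):
        return {
            "opcode": (word >> 12) & 0xF,
            "rd":     (word >> 8) & 0xF,
            "rs":     (word >> 4) & 0xF,
            "imm":    word & 0xF,
        }

    print(decode(0x12A3))   # {'opcode': 1, 'rd': 2, 'rs': 10, 'imm': 3}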

Once both the ISA and microarchitecture have been specified, the actual device needs to be designed into hardware. This design process is often called implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering.

Implementation can be further broken down into three pieces:

Logic Implementation/Design - where the blocks that were defined in the microarchitecture are implemented as logic equations.

Circuit Implementation/Design - where speed critical blocks or logic equations or logic gates are implemented at the transistor level.

Physical Implementation/Design - where the circuits are drawn out, the different circuit components are placed in a chip floor-plan or on a board and the wires connecting them are routed.

For CPUs, the entire implementation process is often called CPU design.

More specific usages of the term include more general wider-scale hardware architectures, such as cluster computing and Non-Uniform Memory Access (NUMA) architectures.

Overview

CPU design focuses on these areas:

datapaths (such as ALUs and pipelines)

control unit: logic which controls the datapaths

Memory components such as register files, caches

Clock circuitry such as clock drivers, PLLs, clock distribution networks


Pad transceiver circuitry

Logic gate cell library which is used to implement the logic

CPUs designed for high performance markets might require custom designs for each of these items to achieve frequency, power-dissipation, and chip-area goals.

CPUs designed for lower performance markets might lessen the implementation burden by:

acquiring some of these items by purchasing them as intellectual property

using logic implementation techniques (logic synthesis with CAD tools) to implement the other components (datapaths, register files, clocks)

Common logic styles used in CPU design include:

unstructured random logic

finite state machines

microprogramming (common from 1965 to 1985, no longer common except for CISC CPUs)

programmable logic array (common in the 1980s, no longer common)

Device types used to implement the logic include:

Transistor-transistor logic (TTL) small-scale integration "jelly-bean" logic chips - no longer used for CPUs

Programmable Array Logic and Programmable logic devices - no longer used for CPUs

Emitter Coupled Logic gate arrays - no longer common

CMOS gate arrays - no longer used for CPUs

CMOS ASICs - the common choice today; they are so ubiquitous that the term ASIC is generally not used for CPUs

Field Programmable Gate Arrays - common for soft microprocessors, and more or less required for reconfigurable computing

A CPU design project generally has these major tasks:

architectural study and performance modeling

RTL (register-transfer-level) logic design and verification


circuit design of speed critical components (caches, registers, ALUs)

logic synthesis or logic-gate-level design

timing analysis to confirm that all logic and circuits will run at the specified operating frequency

physical design including floorplanning, place and route of logic gates

checking that the RTL, gate-level, transistor-level and physical-level representations are equivalent

checks for signal integrity, chip manufacturability

As with most complex electronic designs, the logic verification effort (proving that the design does not have bugs) now dominates the project schedule of a CPU.

Key CPU architectural innovations include cache, virtual memory, instruction pipelining, superscalar, CISC, RISC, virtual machine, emulators, microprogram, and stack.

Goals

The first CPUs were designed to do mathematical calculations faster and more reliably than human computers.

Each successive generation of CPU might be designed to achieve some of these goals:

higher performance levels of a single program or thread

higher throughput levels of multiple programs/threads

less power consumption for the same performance level

lower cost for the same performance level

greater connectivity to build larger, more parallel systems

more specialization to aid in specific targeted markets

Re-designing a CPU core to a smaller die-area helps achieve several of these goals.

Shrinking everything (a "photomask shrink"), resulting in the same number of transistors on a smaller die, improves performance (smaller transistors switch faster), reduces power (smaller wires have less parasitic capacitance) and reduces cost (more CPUs fit on the same wafer of silicon).

Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within one VLSI chip (additional cache, multiple CPUs, or other components), improving performance and reducing overall system cost.


Performance analysis and benchmarking

Because there are too many programs to test a CPU's speed on all of them, benchmarks were developed. The most famous benchmarks are the SPECint and SPECfp benchmarks developed by Standard Performance Evaluation Corporation and the ConsumerMark benchmark developed by the Embedded Microprocessor Benchmark Consortium EEMBC.

Some important measurements include:

Most consumers pick a computer architecture (normally the Intel IA-32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed about computer benchmarks, most of them pick a particular CPU based on operating frequency (see Megahertz Myth).

System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself. [1][2]

Some system designers building parallel computers pick CPUs based on the speed per dollar.

System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has a deterministic response, as in a DSP.

Computer programmers who program directly in assembly language want a CPU to support a full featured instruction set.

Some of these measures conflict. In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa.

Markets

There are several different markets in which CPUs are used. Since each of these markets differs in its requirements for CPUs, the devices designed for one market are in most cases inappropriate for the others.

General purpose computing

The vast majority of revenues generated from CPU sales is for general purpose computing. That is, desktop, laptop and server computers commonly used in businesses and homes. In this market, the Intel IA-32 architecture dominates, with its rivals PowerPC and SPARC maintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market.

Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of being able to run a wide range of programs efficiently have made these CPU designs among the more advanced technically, along with the disadvantages of being relatively costly and having high power consumption.

High-end processor economics

Developing new, high-end CPUs is a very costly proposition. Both the logical complexity (needing very large logic design and logic verification teams and simulation farms with perhaps thousands of computers) and the high operating frequencies (needing large circuit design teams and access to the state-of-the-art fabrication process) account for the high cost of design for this type of chip. The design cost of a high-end CPU will be on the order of US $100 million. Since the design of such high-end chips nominally takes about five years to complete, to stay competitive a company has to fund at least two of these large design teams to release products at the rate of 2.5 years per product generation.

As an example, the typical loaded cost for one computer engineer is often quoted as US$250,000 per year, including salary, benefits, CAD tools, computers, office space rent, etc. Assuming that 100 engineers are needed to design a CPU and the project takes four years, the total cost is:

Total cost = $250,000 per engineer-year x 100 engineers x 4 years = $100,000,000.

The above amount is just an example. The design teams for modern day general purpose CPUs have several hundred team members.

Only the personal computer mass market (with production rates in the hundreds of millions, producing billions of dollars in revenue) can support such large design and implementation teams.[citation needed] As of 2004, only four companies were actively designing and fabricating state-of-the-art general purpose computing CPU chips: Intel, AMD, IBM and Fujitsu.[citation needed] Motorola has spun off its semiconductor division as Freescale, as that division was dragging down profit margins for the rest of the company. Texas Instruments, TSMC and Toshiba are a few examples of companies that manufacture another company's CPU chip designs.

Scientific computing

A much smaller niche market (in revenue and units shipped) is scientific computing, used in government research labs and universities. Previously much CPU design was done for this market, but the cost-effectiveness of using mass-market CPUs has curtailed almost all specialized designs for it. The main remaining area of active hardware design and research for scientific computing is high-speed system interconnects.

Embedded design

As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. Embedded processors sell in volumes of many billions of units per year, though mostly at much lower price points than general purpose processors.


These single-function devices differ from the more familiar general-purpose CPUs in several ways:

Low cost is of utmost importance.

Power dissipation is highly important as most embedded systems do not allow for fans.

To give lower system cost, peripherals are integrated with the processor on the same silicon chip.

The program and data memories are often integrated on the same chip. When the only allowed program memory is ROM, the device is known as a microcontroller.

Interrupt latency is more important to these embedded devices and their associated functions than to more general-purpose processors.

Embedded devices must be in production (or have stockpiles that can last) for long periods of time, perhaps a decade. Any particular version of a desktop computing CPU rarely stays in production for more than two years, due to the rapid pace of progress.

Soft microprocessor cores

For embedded systems, the highest performance levels are often not needed or desired due to the power consumption requirements. This allows for the use of processors which can be totally implemented by logic synthesis techniques. These synthesized processors can be implemented in a much shorter amount of time, giving quicker time-to-market.

Optical communication

One interesting possibility would be to eliminate the front side bus. Modern vertical laser diodes enable this change. In theory, an optical computer's components could directly connect through a holographic or phased open-air switching system. This would provide a large increase in effective speed and design flexibility, and a large reduction in cost. Since a computer's connectors are also its most likely failure point, a busless system might be more reliable, as well.

Optical processors

Another farther-term possibility is to use light instead of electricity for the digital logic itself. In theory, this could run about 30% faster and use less power, as well as permit a direct interface with quantum computational devices. The chief problem with this approach is that, for the foreseeable future, electronic devices are faster, smaller (i.e., cheaper) and more reliable. An important theoretical problem is that electronic computational elements are already smaller than some wavelengths of light, so even wave-guide based optical logic may be uneconomic compared to electronic logic. The majority of development effort, as of 2006, is focused on electronic circuitry. See also optical computing.


Clockless CPUs

Yet another possibility is the "clockless CPU" (asynchronous CPU). Unlike conventional processors, clockless processors have no central clock to coordinate the progress of data through the pipeline. Instead, stages of the CPU are coordinated using logic devices called "pipe line controls" or "FIFO sequencers." Basically, the pipeline controller clocks the next stage of logic when the existing stage is complete. In this way, a central clock is unnecessary.

It might be easier to implement high-performance devices in asynchronous logic as opposed to clocked logic:

Components can run at different speeds in a clockless CPU, whereas in a clocked CPU no component can run faster than the clock rate.

In a clocked CPU, the clock can go no faster than the worst-case performance of the slowest stage. In a clockless CPU, when a stage finishes faster than normal, the next stage can immediately take the results rather than waiting for the next clock tick. A stage might finish faster than normal because of the particular data inputs (multiplication can be very fast if it is multiplying by 0 or 1), or because it is running at a higher voltage or lower temperature than normal.

Asynchronous logic proponents believe these capabilities would have these benefits:

lower power dissipation for a given performance level

highest possible execution speeds

Two examples of asynchronous CPUs are the ARM-implementing AMULET and the asynchronous implementation of MIPS R3000, dubbed MiniMIPS.

The biggest disadvantage of the clockless CPU is that most CPU design tools assume a clocked CPU (a synchronous circuit), so making a clockless CPU (designing an asynchronous circuit) involves modifying the design tools to handle clockless logic and doing extra testing to ensure the design avoids metastability problems. For example, the group that designed the aforementioned AMULET developed a tool called LARD to cope with the complex design of AMULET3.

A smaller disadvantage is that these devices are harder to exercise with automated test equipment (ATE) chip testers, which are geared toward synchronous behavior.


Classful networking is the name given to the first round of changes to the structure of the IP address in IPv4.

Classful networking is obsolete on the modern Internet. There is no longer any such thing as a class A/B/C network. The correct modern representation for what would have been referred to as a "Class B" prior to 1993 would be "a set of /16 addresses", under the Classless Inter-Domain Routing (CIDR) system.

Before Classes

[Figure: the prototype Internet in 1982. All the networks (the ovals) have addresses that are single integers; the rectangles are switches.]

Originally, the 32-bit IPv4 address consisted simply of an 8-bit network number field (which specified the particular network a host was attached to), and a rest field, which gave the address of the host within that network. This format was picked before the advent of local area networks (LANs), when there were only a few, large, networks such as the ARPANET.


This resulted in a very low count (256) of available network numbers, and very early on, as LANs started to appear, it became obvious that this would not be enough.

Classes

As a kludge, the definition of IP addresses was changed in 1981 by RFC 791 to allow three different sizes of the network number field (and the associated rest field), as specified in the table below:

Class                Leading Bit String   Size of Network Number Bit Field   Size of Rest Bit Field
Class A              0                    7                                  24
Class B              10                   14                                 16
Class C              110                  21                                 8
Class D (multicast)  1110                 not defined                        not defined
Class E (reserved)   1111                 not defined                        not defined

This allowed the following population of network numbers (excluding addresses consisting of all zeros or all ones, which are not allowed):

Class     Leading Bit String   Number of Networks   Addresses Per Network
Class A   0                    126                  16,777,214
Class B   10                   16,382               65,534
Class C   110                  2,097,150            254


The number of valid networks and hosts available is always 2^N - 2 (where N is the number of bits used, and the subtraction of 2 adjusts for the invalidity of the first and last addresses). Thus, for a class C address with 8 bits available for hosts, the number of hosts is 254.
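
A quick Python check of the 2^N - 2 rule, using the host-bit counts per class from the table above:

    def usable_hosts(host_bits):
        return 2 ** host_bits - 2       # exclude the all-zeros and all-ones addresses

    print(usable_hosts(24))   # class A: 16,777,214
    print(usable_hosts(16))   # class B: 65,534
    print(usable_hosts(8))    # class C: 254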

The larger network number field allowed a larger number of networks, thereby accommodating the continued growth of the Internet.

The IP address netmask (which is so commonly associated with an IP address today) was not required because the mask length was part of the IP address itself. Any network device could inspect the first few bits of a 32-bit IP address to see which class the address belonged to.

The method of comparing the networks of two IP addresses did not change, however (see subnet). For each address, the size of the network number field and its value were determined (the rest field was ignored); the network numbers were then compared. If they matched, the two addresses were on the same network.
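
A minimal Python sketch of that procedure, reading the class from the leading bits of the first octet and comparing network numbers (a simplification that ignores special cases such as networks 0 and 127):

    def classful_network(ip):
        a, b, c, d = (int(x) for x in ip.split("."))
        if a < 128: return ("A", (a,))        # leading bit 0
        if a < 192: return ("B", (a, b))      # leading bits 10
        if a < 224: return ("C", (a, b, c))   # leading bits 110
        if a < 240: return ("D", None)        # multicast
        return ("E", None)                    # reserved

    def same_network(ip1, ip2):
        return classful_network(ip1) == classful_network(ip2)

    print(classful_network("10.1.2.3"))                   # ('A', (10,))
    print(same_network("192.168.1.5", "192.168.1.200"))   # True: same class C network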

The replacement of classes

This first round of changes was enough to work in the short run, but an IP address shortage still developed. The principal problem was that most sites were too big for a "class C" network number, and received a "class B" number instead. With the rapid growth of the Internet, the available pool of class B addresses (basically 2^14, or about 16,000 in total) was rapidly being depleted. Classful networking was replaced by Classless Inter-Domain Routing (CIDR), starting in about 1993, to solve this problem (and others).

Early allocations of IP addresses by IANA were in some cases not made very efficiently, which contributed to the problem. (However, the commonly held notion that some American organizations unfairly or unnecessarily received class A networks is a canard; most such allocations date to the period before the introduction of address classes, when the only thing available was what later became known as a "class A" network number.)

Useful tables

Class ranges

The address ranges used for each class are given in the following table, in the standard dotted decimal notation.

Class                Leading bits   Start       End              CIDR equivalent   Default subnet mask
Class A              0              0.0.0.0     127.255.255.255  /8                255.0.0.0
Class B              10             128.0.0.0   191.255.255.255  /16               255.255.0.0
Class C              110            192.0.0.0   223.255.255.255  /24               255.255.255.0
Class D (multicast)  1110           224.0.0.0   239.255.255.255  /4                not defined
Class E (reserved)   1111           240.0.0.0   255.255.255.255  /4                not defined

Special ranges

Some addresses are reserved for special uses (RFC 3330).

Addresses                      CIDR Equivalent   Purpose                      RFC                                     Class   Total # of addresses
0.0.0.0 - 0.255.255.255        0.0.0.0/8         Zero Addresses               RFC 1700                                A       16,777,216
10.0.0.0 - 10.255.255.255      10.0.0.0/8        Private IP addresses         RFC 1918                                A       16,777,216
127.0.0.0 - 127.255.255.255    127.0.0.0/8       Localhost Loopback Address   RFC 1700                                A       16,777,216
169.254.0.0 - 169.254.255.255  169.254.0.0/16    Zeroconf / APIPA             RFC 3330                                B       65,536
172.16.0.0 - 172.31.255.255    172.16.0.0/12     Private IP addresses         RFC 1918                                B       1,048,576
192.0.2.0 - 192.0.2.255        192.0.2.0/24      Documentation and Examples   RFC 3330                                C       256
192.88.99.0 - 192.88.99.255    192.88.99.0/24    IPv6 to IPv4 relay Anycast   RFC 3068                                C       256
192.168.0.0 - 192.168.255.255  192.168.0.0/16    Private IP addresses         RFC 1918                                C       65,536
198.18.0.0 - 198.19.255.255    198.18.0.0/15     Network Device Benchmark     RFC 2544                                C       131,072
224.0.0.0 - 239.255.255.255    224.0.0.0/4       Multicast                    RFC 3171                                D       268,435,456
240.0.0.0 - 255.255.255.255    240.0.0.0/4       Reserved                     RFC 1700, Fuller 240/4 space draft[1]   E       268,435,456

Bit-wise representation

In the following table:

n indicates a binary slot used for network ID.

H indicates a binary slot used for host ID.

X indicates a binary slot without a specified purpose.

Class A
  0.0.0.0         = 00000000.00000000.00000000.00000000
  127.255.255.255 = 01111111.11111111.11111111.11111111
                    0nnnnnnn.HHHHHHHH.HHHHHHHH.HHHHHHHH

Class B
  128.0.0.0       = 10000000.00000000.00000000.00000000
  191.255.255.255 = 10111111.11111111.11111111.11111111
                    10nnnnnn.nnnnnnnn.HHHHHHHH.HHHHHHHH

Class C
  192.0.0.0       = 11000000.00000000.00000000.00000000
  223.255.255.255 = 11011111.11111111.11111111.11111111
                    110nnnnn.nnnnnnnn.nnnnnnnn.HHHHHHHH

Class D
  224.0.0.0       = 11100000.00000000.00000000.00000000
  239.255.255.255 = 11101111.11111111.11111111.11111111
                    1110XXXX.XXXXXXXX.XXXXXXXX.XXXXXXXX

Class E
  240.0.0.0       = 11110000.00000000.00000000.00000000
  255.255.255.255 = 11111111.11111111.11111111.11111111
                    1111XXXX.XXXXXXXX.XXXXXXXX.XXXXXXXX

Classless Inter-Domain Routing

Classless Inter-Domain Routing (CIDR, pronounced "cider") was introduced in 1993 and is the latest refinement to the way IP addresses are interpreted. It replaced the previous generation of IP address syntax, classful networks. Specifically, rather than allocating address blocks on eight-bit (i.e., octet) boundaries, which forced 8-, 16-, or 24-bit prefixes, it uses the technique of variable-length subnet masking (VLSM) to allow allocation on arbitrary-length prefixes. CIDR encompasses:

The VLSM technique of specifying arbitrary length prefix boundaries. A CIDR-compliant address is written with a suffix indicating the number of bits in the prefix length, such as 192.168.0.0/16. This permits more efficient use of increasingly scarce IPv4 addresses.

The aggregation of multiple contiguous prefixes into supernets, and, wherever possible in the Internet, advertising aggregates, thus reducing the number of entries in the global routing table. Aggregation hides multiple levels of subnetting from the Internet routing table, and reverses the process of "subnetting a subnet" with VLSM.

The administrative process of allocating address blocks to organizations based on their actual and short-term projected need, rather than the very large or very small blocks required by classful addressing schemes.

IPv6 utilizes the CIDR convention of indicating prefix length with a suffix, but the longer address field of IPv6 made it unnecessary to practice great economy in allocating the minimum amount of address space an organization could justify. The concept of class was never used in IPv6.


CIDR blocks

CIDR is principally a bitwise, prefix-based standard for the interpretation of IP addresses. It facilitates routing by allowing blocks of addresses to be grouped together into single routing table entries. These groups, commonly called CIDR blocks, share an initial sequence of bits in the binary representation of their IP addresses. IPv4 CIDR blocks are identified using a syntax similar to that of IPv4 addresses: a four-part dotted-decimal address, followed by a slash, then a number from 0 to 32: A.B.C.D/N. The dotted decimal portion is interpreted, like an IPv4 address, as a 32-bit binary number that has been broken into four octets. The number following the slash is the prefix length, the number of shared initial bits, counting from the left-hand side of the address. When speaking in abstract terms, the dotted-decimal portion is sometimes omitted, thus a /20 is a CIDR block with an unspecified 20-bit prefix.

An IP address is part of a CIDR block, and is said to match the CIDR prefix, if the initial N bits of the address and the CIDR prefix are the same. Thus, understanding CIDR requires that IP addresses be visualized in binary. Since the length of an IPv4 address is fixed at 32 bits, an N-bit CIDR prefix leaves 32 - N bits unmatched, and there are 2^(32 - N) possible combinations of these bits, meaning that 2^(32 - N) IPv4 addresses match a given N-bit CIDR prefix. Shorter CIDR prefixes match more addresses, while longer CIDR prefixes match fewer. An address can match multiple CIDR prefixes of different lengths.


CIDR is also used with IPv6 addresses, where the prefix length can range from 0 to 128, due to the larger number of bits in the address. A similar syntax is used: the prefix is written as an IPv6 address, followed by a slash and the number of significant bits.

Assignment of CIDR blocks

The Internet Assigned Numbers Authority (IANA) issues to Regional Internet Registries (RIRs) large, short-prefix CIDR blocks. For example, 62.0.0.0/8, with over sixteen million addresses, is administered by RIPE NCC, the European RIR. The RIRs, each responsible for a single, large, geographic area (such as Europe or North America), then subdivide these blocks into smaller blocks and issue them publicly. This subdividing process can be repeated several times at different levels of delegation. Large Internet service providers (ISPs) typically obtain CIDR blocks from an RIR, then subdivide them into smaller CIDR blocks for their subscribers, sized according to the size of the subscriber's network. Networks served by a single ISP are encouraged by IETF to obtain IP address space directly from their ISP. Networks served by multiple ISPs, on the other hand, will often obtain independent CIDR blocks directly from the appropriate RIR.

For example, in the late 1990s, the IP address 208.130.29.33 (since reassigned) was used by www.freesoft.org. An analysis of this address identified three CIDR prefixes. 208.128.0.0/11, a large CIDR block containing over 2 million addresses, had been assigned by ARIN (the North American RIR) to MCI. Automation Research Systems, a Virginia VAR, leased an Internet connection from MCI and was assigned the 208.130.28.0/22 block, capable of addressing just over 1000 devices. ARS used a /24 block for its publicly accessible servers, of which 208.130.29.33 was one.
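
Python's standard ipaddress module can verify this nesting; the sketch below checks that the address matches all three prefixes and shows that an N-bit prefix covers 2^(32 - N) addresses:

    import ipaddress

    addr = ipaddress.ip_address("208.130.29.33")
    for prefix in ("208.128.0.0/11", "208.130.28.0/22", "208.130.29.0/24"):
        net = ipaddress.ip_network(prefix)
        print(prefix, addr in net, net.num_addresses)
    # 208.128.0.0/11   True 2097152
    # 208.130.28.0/22  True 1024
    # 208.130.29.0/24  True 256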

All of these CIDR prefixes would be used, at different locations in the network. Outside of MCI's network, the 208.128.0.0/11 prefix would be used to direct to MCI traffic bound not only for 208.130.29.33, but also for any of the roughly two million IP addresses with the same initial 11 bits. Within MCI's network, 208.130.28.0/22 would become visible, directing traffic to the leased line serving ARS. Only within the ARS corporate network would the 208.130.29.0/24 prefix have been used.


CIDR and masks

A subnet mask is a bitmask that encodes the prefix length in a form similar to an IP address: 32 bits, starting with a number of 1 bits equal to the prefix length, ending with 0 bits, and encoded in four-part dotted-decimal format. A subnet mask encodes the same information as a prefix length, but predates the advent of CIDR.
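
The equivalence between the two notations is easy to demonstrate in Python, either through the standard ipaddress module or by constructing the bitmask by hand:

    import ipaddress

    net = ipaddress.ip_network("192.168.0.0/16")
    print(net.netmask, net.prefixlen)     # 255.255.0.0 16

    # By hand: sixteen 1-bits followed by sixteen 0-bits.
    mask = (0xFFFFFFFF << (32 - 16)) & 0xFFFFFFFF
    print(".".join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0)))   # 255.255.0.0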

CIDR uses variable length subnet masks (VLSM) to allocate IP addresses to subnets according to individual need, rather than some general network-wide rule. Thus the network/host division can occur at any bit boundary in the address. The process can be recursive, with a portion of the address space being further divided into even smaller portions, through the use of masks which cover more bits.

CIDR/VLSM network addresses are now used throughout the public Internet, although they are also used elsewhere, particularly in large private networks. An average desktop LAN user generally does not see them in practice, as their LAN is usually numbered using special private network addresses.

Prefix aggregation

Another benefit of CIDR is the possibility of routing prefix aggregation (also known as "supernetting" or "route summarization"). For example, sixteen contiguous /24 networks (formerly Class C) can be aggregated together and advertised to the outside world as a single /20 route, if the first 20 bits of their network addresses match. Two aligned contiguous /20s can then be aggregated to a /19, and so forth. This allows a significant reduction in the number of routes that have to be advertised over the Internet, preventing "routing table explosions" that could overwhelm routers and halt the further expansion of the Internet.
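
The sixteen-into-one aggregation can be checked with the ipaddress module (the 198.51.x.x addresses below are purely illustrative):

    import ipaddress

    nets = [ipaddress.ip_network("198.51.%d.0/24" % i) for i in range(16)]
    print(list(ipaddress.collapse_addresses(nets)))   # [IPv4Network('198.51.0.0/20')]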

CIDR

IP/CIDR        Δ to last IP addr   Mask              Hosts (*)   Class   Notes

a.b.c.d/32 +0.0.0.0 255.255.255.255 1 1/256 C

a.b.c.d/31 +0.0.0.1 255.255.255.254 2 1/128 C d = 0 ... (2n) ... 254

a.b.c.d/30 +0.0.0.3 255.255.255.252 4 1/64 C d = 0 ... (4n) ... 252

a.b.c.d/29 +0.0.0.7 255.255.255.248 8 1/32 C d = 0 ... (8n) ... 248


a.b.c.d/28 +0.0.0.15 255.255.255.240 16 1/16 C d = 0 ... (16n) ... 240

a.b.c.d/27 +0.0.0.31 255.255.255.224 32 1/8 C d = 0 ... (32n) ... 224

a.b.c.d/26 +0.0.0.63 255.255.255.192 64 1/4 C d = 0, 64, 128, 192

a.b.c.d/25 +0.0.0.127 255.255.255.128 128 1/2 C d = 0, 128

a.b.c.0/24 +0.0.0.255 255.255.255.000 256 1 C

a.b.c.0/23 +0.0.1.255 255.255.254.000 512 2 C c = 0 ... (2n) ... 254

a.b.c.0/22 +0.0.3.255 255.255.252.000 1,024 4 C c = 0 ... (4n) ... 252

a.b.c.0/21 +0.0.7.255 255.255.248.000 2,048 8 C c = 0 ... (8n) ... 248

a.b.c.0/20 +0.0.15.255 255.255.240.000 4,096 16 C c = 0 ... (16n) ... 240

a.b.c.0/19 +0.0.31.255 255.255.224.000 8,192 32 C c = 0 ... (32n) ... 224

a.b.c.0/18 +0.0.63.255 255.255.192.000 16,384 64 C c = 0, 64, 128, 192

a.b.c.0/17 +0.0.127.255 255.255.128.000 32,768 128 C c = 0, 128

a.b.0.0/16 +0.0.255.255 255.255.000.000 65,536 256 C = 1 B

a.b.0.0/15 +0.1.255.255 255.254.000.000 131,072 2 B b = 0 ... (2n) ... 254


a.b.0.0/14 +0.3.255.255 255.252.000.000 262,144 4 B b = 0 ... (4n) ... 252

a.b.0.0/13 +0.7.255.255 255.248.000.000 524,288 8 B b = 0 ... (8n) ... 248

a.b.0.0/12 +0.15.255.255 255.240.000.000 1,048,576 16 B b = 0 ... (16n) ... 240

a.b.0.0/11 +0.31.255.255 255.224.000.000 2,097,152 32 B b = 0 ... (32n) ... 224

a.b.0.0/10 +0.63.255.255 255.192.000.000 4,194,304 64 B b = 0, 64, 128, 192

a.b.0.0/9 +0.127.255.255 255.128.000.000 8,388,608 128 B b = 0, 128

a.0.0.0/8 +0.255.255.255 255.000.000.000 16,777,216 256 B = 1 A

a.0.0.0/7 +1.255.255.255 254.000.000.000 33,554,432 2 A a = 0 ... (2n) ... 254

a.0.0.0/6 +3.255.255.255 252.000.000.000 67,108,864 4 A a = 0 ... (4n) ... 252

a.0.0.0/5 +7.255.255.255 248.000.000.000 134,217,728 8 A a = 0 ... (8n) ... 248

a.0.0.0/4 +15.255.255.255 240.000.000.000 268,435,456 16 A a = 0 ... (16n) ... 240

a.0.0.0/3 +31.255.255.255 224.000.000.000 536,870,912 32 A a = 0 ... (32n) ... 224

a.0.0.0/2 +63.255.255.255 192.000.000.000 1,073,741,824 64 A a = 0, 64, 128, 192

a.0.0.0/1 +127.255.255.255 128.000.000.000 2,147,483,648 128 A a = 0, 128


0.0.0.0/0 +255.255.255.255 000.000.000.000 4,294,967,296 256 A

(*) Note that for routed subnets bigger than a /31 or /32, 2 must be subtracted from the number of available addresses: the largest address is used as the broadcast address, and typically the smallest address is used to identify the network itself. See RFC 1812 for more detail. It is also common for the subnet's gateway to occupy an address, in which case you would subtract 3 to obtain the number of usable host addresses.
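
The same arithmetic via the ipaddress module, which excludes the network and broadcast addresses when enumerating hosts:

    import ipaddress

    net = ipaddress.ip_network("192.0.2.0/24")
    print(net.num_addresses)        # 256 total addresses
    print(len(list(net.hosts())))   # 254 usable (network and broadcast excluded)
    # Reserve one more for the gateway and 253 remain for ordinary hosts.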

Historical background

IP addresses were originally separated into two parts: the network address (which identified a whole network or subnet), and the host address (which identified a particular machine's connection or interface to that network). This division was used to control how traffic was routed in and among IP networks.

Historically, the IP address space was divided into three main "classes of network", where each class had a fixed-size network address. The class, and hence the length of the network address and the number of hosts on the network, could always be determined from the most significant bits of the IP address. Without any way of specifying a prefix length or a subnet mask, routing protocols such as RIP-1 and IGRP necessarily used the class of the IP address specified in route advertisements to determine the size of the routing prefixes to be set up in the routing tables.

As the experimental TCP/IP network expanded into the Internet during the 1980s, the need for more flexible addressing schemes became increasingly apparent. This led to the successive development of subnetting and CIDR. Because the old class distinctions are ignored, the new system was called classless routing. It is supported by modern routing protocols, such as RIP-2, EIGRP, IS-IS and OSPF. This led to the original system being called, by back-formation, classful routing.

Variable-Length Subnet Masking (VLSM) refers to the same concept as CIDR; the term is now mostly of historical usage.

Internet RFC 1338 represented a major paradigm shift, establishing provider-based addressing and a routing hierarchy. With the new RFC 1338-style provider-based supernetting, it was possible to create multiple hierarchical tiers, and most tiers were envisioned to be Internet service providers. Provider-based address space allocation was the new model, and BGP would evolve to BGP-4, incorporating the RFC 1338 paradigm. For this shift to occur, the technique for supernetting and subnetting the IP address space required a modification; this new feature was called Classless Inter-Domain Routing (CIDR). (Note that RFC 1338 was later replaced by RFC 1519.)